http://physics.stackexchange.com/questions/32281/on-the-naturalness-problem
# On the naturalness problem
I know that there are several questions about the naturalness (or hierarchy, or fine-tuning) problem of scalar masses on physics.stackexchange.com, but I have not found answers to any of the following questions. Suppose that we add to the SM Lagrangian the following piece:
$(\partial b)^2-M^2 \, b^2-g\, b^2 \, h^2+\ldots$
where $b$ is a real scalar field (that is not contained in the SM) and $h$ is the Higgs real field. Then the physical mass $m_P$ of the Higgs is given by the pole of its propagator (I am omitting numerical factors):
$m^2_P=m^2_R (\mu)+I_{SM}(\mu)-g\, M^2\, \ln(M/\mu)$
where $m_R(\mu)$ is the renormalized Higgs mass, $I_{SM}(\mu)$ (which also depends on the SM couplings and masses) is the radiative contribution of the SM fields (with the Higgs included) to the two-point function of the Higgs field (note that it is cut-off independent because we have subtracted an unphysical "divergent" part), and the last term is the one-loop contribution of the new field $b$ (where we have also subtracted the divergent part).
I have two independent questions:
1. The contribution of the $b$ particle (the last term) is cut-off independent (as it has to be), so the correction to the Higgs mass is independent of the limit of validity of the theory, contrary to what is usually claimed. However, it does depend on the mass of the new particle. Therefore, if there were no new particles with masses much higher than the Higgs mass, the naturalness problem would not arise. There could be new physics at higher energies (say, beyond 126 GeV) as long as the new particles were not much heavier than the Higgs (note that I'm not discussing the plausibility of this scenario). Since this is not what people usually claim, I must be wrong. Can you tell me why?
2. Let's set aside the previous point. The naturalness problem is usually stated as the fine-tuning required to have a Higgs mass much lighter than the highest energy scale of the theory $\Lambda$, which is often taken to be the GUT scale or the Planck scale. People write formulas like $\delta m^2 \sim \Lambda^2$, which I would write like this: $m^2_P=m^2 (\Lambda) + g\, \Lambda^2$. People think it is a problem to have to fine-tune $m^2 (\Lambda)$ against $\Lambda^2$ in order to get a value for $m^2_P$ much lower than $\Lambda^2$. I would also think it a problem if $m^2 (\Lambda)$ were an observable quantity. But it is not: the observable quantity is $m^2_P$ (the pole of the propagator). I think the misunderstanding may come from the fact that "interaction couplings" (coefficients of interaction terms rather than quadratic terms) are observables at different energies, but in my opinion this is not the case for masses. For example, one talks about the value of the fine structure constant at different energies, but the mass of the electron is energy independent. In other words, the renormalized mass is only observable at the energy at which it coincides with the physical mass (the specific value of that energy depends on the renormalization procedure, but it is usually of the order of the physical mass itself), while one can measure (i.e. observe) interaction couplings at different energies, and thus many different renormalized couplings (one for every energy) are observables. Do you agree?
(Footnote: since free quarks cannot be observed, the definition of their masses is different, and one has to give the value of their renormalized mass at some energy in some renormalization scheme.)
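To put a number on the tuning described in question 2, here is a small back-of-the-envelope sketch (my own illustration, not part of the question; the 125 GeV pole mass, Planck-scale cutoff, and $g \sim 1$ are assumed inputs):

```python
# Illustrative fine-tuning estimate for m_P^2 = m^2(Lambda) + g * Lambda^2.
# The numbers (125 GeV Higgs, Planck-scale cutoff, g ~ 1) are assumptions
# chosen for illustration, not values taken from the question.
m_P = 125.0          # physical Higgs mass in GeV
Lambda = 1.22e19     # Planck scale in GeV
g = 1.0

m2_bare = m_P**2 - g * Lambda**2   # required value of m^2(Lambda)
# relative precision to which m^2(Lambda) must track -g*Lambda^2
tuning = m_P**2 / (g * Lambda**2)
print(f"m^2(Lambda) ~ {m2_bare:.3e} GeV^2, tuning ~ 1 part in {1/tuning:.1e}")
```

With these inputs, the bare mass-squared must cancel $g\Lambda^2$ to roughly one part in $10^{34}$, which is the size of the tuning people object to.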
Thank you in advance.
-
All you've noticed is yet another runaway instability in the Higgs mass, from adding heavy particles. But you don't need heavy particles. This has been asked many times here already: the bare couplings and bare masses are observable; you extract them from very high energy scattering experiments. They have to be tuned just so to make the Higgs nearly massless, and this is ridiculous. That's hierarchy. – Ron Maimon Jul 18 '12 at 1:39
Thank you, Ron. This is the key point: "bare masses are observable; you extract them from very high energy scattering experiments". I thought that physical (observable) masses were the poles of propagators... I'm very surprised to hear that. Can you give me a reference? – drake Jul 18 '12 at 1:47
I don't know the precise reference, I can give you the argument--- the parameters you measure for scattering of Higgses at high energy $\mu$ are those which are roughly the bare parameters with the cutoff of order $\mu$. The reason is that the renormalization scheme fixing the scale at $\mu$ makes the corrections at $\mu$ vanish, which corresponds to removing the degrees of freedom at scales higher than $\mu$. Modern measurements are made at high energies, and the parameters are therefore at a subtraction scale which varies. – Ron Maimon Jul 18 '12 at 2:11
I have a general problem with your question -- I think that I misunderstand something. You have stuff that depends on $\mu$, yet you keep saying that it is "cutoff independent". Can you clarify what $\mu$ is, then? – Kostya Jul 18 '12 at 9:05
Sure, Kostya. That expression has been regulated with dimensional regularization. The meaning of $\mu$ depends on the subtraction scheme one chooses. In Minimal Subtraction (or its sister MS bar), $\mu$ is the parameter with mass dimension one has to introduce to keep the couplings dimensionless. In non-minimal subtraction schemes, $\mu$ is the energy scale at which one subtracts the cut-off ($1/\epsilon$) dependent part. Of course, the argument does not change if one uses another regularization procedure like Pauli-Villars or a sharp cut-off. Thank you. – drake Jul 18 '12 at 15:47
http://physics.stackexchange.com/questions/54736/about-the-1d-singularity-of-black-hole
# about the 1D singularity of black hole
I saw some responses here saying that the singularity inside a black hole is a one-dimensional object, so my question is: is it possible that the singularity is simply a merger of the 4 dimensions of spacetime? Is it possible, physically and mathematically, that many dimensions can merge into one?
-
Could you post a link to the questions where the one-dimensional nature of the singularity is discussed, just so we have some context? – twistor59 Feb 22 at 16:56
Points are 0D, and rings are 1D. – alexarvanitakis Feb 24 at 15:22
The topology/geometry of the singularity itself is actually quite a tricky issue, since the singularity isn't really part of spacetime. If no one else comes up with an answer along these lines, I'll see if I can put one together. – twistor59 Feb 24 at 18:54
## 2 Answers
I'll attempt to address your question in the context of classical general relativity since the dimensionality of the relevant manifold is more involved in some of the more recent holographic pictures, or may not even be well defined at all in the fundamental theory (whatever that is) until some sort of low energy limit is taken.
When people try to conceptualize a singularity, they often consider the case of a point charge in classical electromagnetism, in which the electric field is given by $${\bf{E}}({\mathbf{r}}) = \frac{e}{4\pi\epsilon_0}\frac{({\mathbf{r}}-{\mathbf{r_0}})}{|{\mathbf{r}}-{\mathbf{r_0}}|^3}$$ Here the electric field diverges as $\mathbf{r}\rightarrow \mathbf{r_0}$. In this case, it's perfectly adequate to define something like "a singularity is a location where one or more components of the electric field diverges".
The natural approach is to attempt to define a GR singularity as a location where some measure of the spacetime curvature diverges. The first example that comes to mind might be $r=0$ in a Schwarzschild metric. Unfortunately, $r=0$ isn't a location where the manifold has a smooth Lorentz signature metric, so it is not part of spacetime, i.e. the problem is with the word "location" in that attempted definition. This is more than just a mathematical technicality: if the metric isn't well behaved, spacetime just doesn't have the properties we expect it (classically) to have.
One approach that has been tried to partially resurrect the idea of a singular location is to provide a prescription for attaching extra points to the spacetime manifold. These are thought of as representing some sort of "boundary" to spacetime, which represents the singular points. This works for straightforward cases like the Schwarzschild solution, but doesn't work in general.
Another approach, and perhaps the most successful one, to the definition of singularity in GR has been to examine the behaviour of curves in the spacetime. A physical pointlike object moving through spacetime traces out a curve: as you wind the object's proper time forward, it traces out a path, which is a map from the real numbers, representing this proper time (or some other parameter), into the manifold. Now normally, this parameter can be extended into the infinite future (we're considering classical objects, not particles in QFT, which can disappear in the sense that they turn into something else at a given time). If there are curves in the spacetime for which the parameter cannot be extended into the infinite future, this is a sign that the spacetime is singular.
You can produce a singularity like this by just removing points from spacetime. For example, if you remove some points from Minkowski space, then curves which would otherwise have run into those points now can't have their parameters extended indefinitely, so the spacetime is flagged as singular. However, in this case where we removed some points, the manifold can be extended by putting the points back, and the singularity goes away! To circumvent this difficulty, you only apply the "inextendible curves" criterion to maximally extended spacetimes - i.e. ones which use analytic extension of the metric to "put back" any points that could possibly be put back.
Singular spacetimes can exhibit some extremely bizarre behaviour. For example, one observer freely falling through a compact region of spacetime can experience unbounded curvature forces. This observer's worldline has a limit point $p$; another observer can travel through $p$ with no such problems (Hawking and Ellis, Prop. 8.5.2)!
Returning to the specific question: whereas the loss of a spatial dimension in a manifold would certainly be singular behaviour in a differential-geometric sense, it doesn't capture all the peculiar behaviours which are part of singular spacetimes in GR. Even without looking at these exotic singular behaviours, it is clear that the "simple" case of the Schwarzschild singularity doesn't involve a straightforward dimensional reduction. If you take a look at the conformal diagram of the maximally extended Schwarzschild solution, you can see that the spacetime has topology $\mathbb{R}^2\times S^2$. The future singularity is represented as part of an attached boundary, and timelike curves in that region approaching from any spatial direction will encounter it. There are no directions in which the curves can "slip through", as there would be if the singularity consisted of a set of points where the dimensionality of spacetime was reduced.
-
Thank you for the response :) – user21251 Feb 27 at 14:12
Is it possible physically and mathematically that many dimensions can merge into one ?
There is compactification of dimensions.
Here's a short explanation from Extra Dimensions - Why String Theory,
So perhaps there could be extra dimensions, so small that we don’t perceive them. The process of curling up space to produce these tiny invisible dimensions is known as compactification.
(Rest of explanation in the link)
As far as I know (and I don't know much, really), we don't expect extra dimensions to just get deleted; rather, they're so small that we can ignore them (compactified), or we don't consider them at all (as when solving 2D kinematics problems).
-
http://math.stackexchange.com/questions/229297/transcendence-of-e-proof?answertab=oldest
|
# Transcendence of $e$ (proof)
I'm trying to get through the proof of the transcendence of $e$ (the base of the natural logarithm), and have been for a couple of days, but now I'm seriously stuck.
The proof is roughly the same in most sources. I followed this version.
We prove the transcendence of $e$ by contradiction - suppose that $e$ is an algebraic number. That means there is a nonzero polynomial $P \in \mathbb{Z}[x]$ such that $P(e) = 0$.
Let $P(x) = \sum_{i=0}^{n}w_{i}x^{i}.$
Lemma
Let $f\in \mathbb{R}[x]$ be a polynomial, $t\in \mathbb{R}^{+}$. Then the following equality holds: $$\int_{0}^{t} e^{-x}f(x)\text{d}x = \sum_{i=0}^{\infty}f^{(i)}(0) - e^{-t}\sum_{i=0}^{\infty}f^{(i)}(t)$$
The lemma can be proved by integration by parts. For clarity, let's define $F(x) = \sum_{i=0}^{\infty}f^{(i)}(x)$ (the sum is finite, since all high enough derivatives of a polynomial vanish).
Now the lemma looks like this:
$$\int_{0}^{t} e^{-x}f(x)\text{d}x = F(0) - e^{-t}F(t)$$
$$\int_{0}^{t} e^{t-x}f(x)\text{d}x = e^{t}F(0) - F(t)$$
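As a numerical sanity check (my own addition, with an arbitrarily chosen test polynomial), the first of the two forms above can be verified directly:

```python
import math

def F(f_derivs, x):
    # F(x) = sum of f and all its derivatives at x (finite for polynomials)
    return sum(d(x) for d in f_derivs)

# test polynomial f(x) = x^2 - 3x + 1 (an arbitrary choice for the check)
f  = lambda x: x*x - 3*x + 1
f1 = lambda x: 2*x - 3
f2 = lambda x: 2.0
derivs = [f, f1, f2]

t = 2.0
# left side: numerical integral of e^{-x} f(x) over [0, t] via Simpson's rule
n = 1000
h = t / n
xs = [i * h for i in range(n + 1)]
ys = [math.exp(-x) * f(x) for x in xs]
lhs = h / 3 * (ys[0] + ys[-1] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]))

# right side: F(0) - e^{-t} F(t)
rhs = F(derivs, 0.0) - math.exp(-t) * F(derivs, t)
print(lhs, rhs)   # both equal -2 e^{-2} ≈ -0.2707 for this choice of f and t
```

Here $F(x) = x^2 - x$, so the right side is $0 - 2e^{-2}$, and the quadrature reproduces it.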
Because $e$ is (by contradiction) an algebraic number, we have $\sum_{i=0}^{n}w_{i}e^{i} = 0$ for some $w_{i}\in \mathbb{Z}$. Now a rather complicated step. Write the last integral identity for $t = 0$, $1$, ..., $n$, multiply each of them by $w_{t}$ and add the results.
You should obtain: $$\sum_{i=0}^{n}w_{i}\int_{0}^{i} e^{i-x}f(x)\text{d}x = F(0)\sum_{i=0}^{n}w_{i}e^{i} - \sum_{i=0}^{n}w_{i}F(i)$$ See why we did this? Using the property of an algebraic number mentioned above we get: $$\sum_{i=0}^{n}w_{i}F(i) = - \sum_{i=0}^{n}w_{i}\int_{0}^{i} e^{i-x}f(x)\text{d}x$$ Now the idea of the proof is to choose the polynomial $f$ so wisely that the left side is a non-zero integer and the right side is small (say, less than $\frac{1}{10}$ in absolute value), which gives the contradiction. The wisely chosen polynomial is $f(x) = \frac{1}{(p-1)!}x^{p-1}\prod_{i=1}^{n}(x-i)^p$, where $p\in \mathbb{P}$ is some prime that will be specified later.
And here comes my digging. I am brave; I don't need such a giant polynomial to conclude the contradiction. My chosen polynomial will be $f(x)=x$. Now my equality looks like this: $\sum_{i=0}^{n}w_{i}(i+1) = - \sum_{i=0}^{n}w_{i}\int_{0}^{i} e^{i-x}x\,\text{d}x$. Uhh, no contradiction. :( Maybe the wrong polynomial. Okay, let's take a general polynomial $f(x)$. Now let's compute that integral and find out whether there's a contradiction or not: $$\sum_{i=0}^{n}w_{i}F(i) = - \sum_{i=0}^{n}w_{i}\int_{0}^{i} e^{i-x}f(x)\,\text{d}x$$ $$\sum_{i=0}^{n}w_{i}F(i) = - \sum_{i=0}^{n}w_{i}(e^{i}F(0) - F(i))$$ $$0 = 0$$ Damn! Neither for $f(x)=x$ nor for $f(x) = x^{2}$ is there a contradiction. Okay, that's understandable: the wisely chosen polynomial wouldn't be so giant if it didn't have to be. But now I have shown that when I substitute any polynomial whatsoever and compute the integral, it doesn't lead to a contradiction! Can you help me? What am I missing here?
-
You haven't. Note that when passing to $0=0$, you used the assumption $\sum_i w_ie^i=0$ that you want to demonstrate false, so you just effectively cancelled your initial step (suppose that...) when doing so. You need to keep it and not try to use it again (or, at least, use it more wisely than that). A simple example of what you've done would be "Assume 1=2". Take any $n$. Then $n=1+(n-1)=2+(n-1)=n+1$. Then, since $1=2$, we have $n-1=(n+1)-2$, i.e., $n-1=n-1$. Now I see nothing wrong with that! – fedja Nov 5 '12 at 0:12
@fedja "you just effectively cancelled your initial step" Ah, now I see that. It all started with those easy polynomials. I guess that the trick is to do whatever but not to evaluate that integral, which always leads back. That might be the reason why the proof continues with establishing the upper bound for the integral instead of evaluating it. – Jeyekomon Nov 5 '12 at 0:41
http://mathoverflow.net/questions/115149/is-there-a-composite-number-that-satisfies-these-conditions/115954
|
## Is there a composite number that satisfies these conditions?
We know that if $q=4k+3$ ($q$ is a prime), then $(a+bi)^q=a-bi \pmod q$ for every Gaussian integer $a+bi$. Now consider a composite number $N=4k+3$ that satisfies this condition for the case $a+bi=3+2i$. I use Mathematica 8 and find no solution less than $5\cdot 10^7$. Can someone find a larger number for the condition, and can this be used for a primality test?
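For what it's worth, the congruence is easy to test by machine. The following Python sketch (my own addition; it redoes a small part of the search described above) checks the prime case and scans small composites $N \equiv 3 \pmod 4$:

```python
def gauss_pow_mod(a, b, e, n):
    # compute (a + b*i)^e modulo n in Z[i], by repeated squaring;
    # Gaussian integers are represented as pairs (real, imag)
    ra, rb = 1, 0
    while e:
        if e & 1:
            ra, rb = (ra * a - rb * b) % n, (ra * b + rb * a) % n
        a, b = (a * a - b * b) % n, (2 * a * b) % n
        e >>= 1
    return ra, rb

# for primes q = 4k+3, (3+2i)^q should be 3-2i (mod q)
for q in [7, 11, 19, 23, 31, 43]:
    assert gauss_pow_mod(3, 2, q, q) == (3 % q, (-2) % q)

# a composite N = 4k+3 satisfying the same condition would be a pseudoprime;
# a quick scan of small composites finds none (the question reports none
# below 5*10^7)
hits = [N for N in range(3, 20000, 4)
        if any(N % p == 0 for p in range(2, int(N**0.5) + 1))
        and gauss_pow_mod(3, 2, N, N) == (3, N - 2)]
print(hits)   # [] for this range
```

The prime case follows from the Frobenius argument in the comments; the scan merely illustrates the question's empirical observation on a small range.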
-
How do you prove the statement in the first sentence? – Igor Rivin Dec 2 at 5:45
@Igor: it looks like wsc810 was answering your question, not the OP's. You could also say that for $q \equiv 3 \pmod 4$, both the Frobenius $x \mapsto x^q$ and the map $i \mapsto -i$ induce the unique nontrivial field automorphism on $\mathbb{Z}[i]/q \cong \mathbb{F}_{q^2}$, hence coincide. – Todd Trimble Dec 2 at 12:39
Search for Frobenius pseudoprimes. – Felipe Voloch Dec 2 at 16:10
Possibly helpful reference on Frobenius pseudoprimes : MR1680879 (2001g:11191) Grantham, Jon . Frobenius pseudoprimes. Math. Comp. 70 (2001), no. 234, 873--891 ams.org/journals/mcom/2001-70-234/… – François Brunault Dec 3 at 9:14
## 4 Answers
If $q=4k+1$ is prime, then $(a+bi)^q$ is $(a+bi)$ and NOT $(a-bi)$; the "$-$" is only for primes $q=4k+3$. Your proof is wrong from the start; please recheck it.
-
You should therefore unaccept this answer, wanglei. – Todd Trimble Dec 20 at 15:23
This looks sort of interesting. I don't have an answer, but just a few observations. Instead of restricting to $3 + 2i$, we might consider the same condition holding for every $\alpha = a + bi$ prime to $N$, i.e., such that
$$\alpha^N \equiv \bar{\alpha} \pmod N.$$
We are then led to consider a Gaussian analogue of Carmichael numbers, i.e., Carmichael ideals for the Gaussian integers, generated by numbers $N$ of the particular form $N = q_1 q_2\ldots q_k$ where $q_i \equiv 3\pmod 4$ for $i = 1, \ldots, k$.
These will be ordinary Carmichael numbers $N$ but with the extra condition that
$$(q_i^2 - 1)\; |\; N^2-1$$
for $i = 1, \ldots, k$. For example, the ordinary Carmichael number $7 \cdot 19 \cdot 67 = 8911$ fails to meet this stronger condition.
I expect these stronger Carmichael numbers exist, but as I say I don't have an example. If I were researching this problem myself, I would try to get hold of a table of Carmichael numbers, see which ones have all of its prime factors $q_i$ congruent to 3 modulo 4, and then test the condition $q_i^2 - 1 \; |\; N^2-1$ on those.
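Following the suggestion in the last paragraph, here is a small Python sketch (my own addition) of the test on a single candidate; it confirms that the Carmichael number $8911 = 7 \cdot 19 \cdot 67$, whose prime factors are all $\equiv 3 \pmod 4$, fails the divisibility condition:

```python
def check_strong_condition(N):
    # factor N by trial division, then test the stronger Carmichael-type
    # condition: every prime factor q satisfies q = 3 (mod 4) and
    # (q^2 - 1) | (N^2 - 1)
    n, factors = N, []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return all(q % 4 == 3 and (N * N - 1) % (q * q - 1) == 0 for q in factors)

# 8911 = 7 * 19 * 67 is Carmichael with all prime factors 3 (mod 4),
# but 67^2 - 1 does not divide 8911^2 - 1
print(check_strong_condition(8911))   # False
```

For 8911 the factors 7 and 19 pass the divisibility test; 67 is the one that fails.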
-
Hm, is it possible that your condition implies the number is Carmichael? – joro Dec 3 at 6:24
I thought it did imply that. For any $\alpha \in \mathbb{Z}$, we have $\alpha^N \equiv \bar{\alpha} = \alpha \; \pmod N$. – Todd Trimble Dec 3 at 12:11
I vaguely remember seeing existence of your condition as an open problem, can't find the paper. Maybe wrong and have seen something similar though. – joro Dec 3 at 15:39
As Mr. R. Gerbicz pointed out in the Mersenne forum, a potential counterexample for the base $3+2i$ must be a 13-PRP (just multiply the equation by its conjugate). The first thing to do is to make a list of pseudoprimes to base 13 which are 3 (mod 4). I checked them to $10^{10}$ and there is no counterexample which passes this test (a couple of them which are 1 (mod 4) pass the complex-base test, but none of the 3 (mod 4)). However, the general opinion is that this test is a "hidden" multi-base PRP test, or a combined $(n-1)(n+1)$ test, and as Mr. Tom Womack pointed out in that thread, if a counterexample exists, it must be large (somewhere around $10^{30}$ or so).
-
Now we see that under the condition $(a+bI)^N=a-bI\pmod N$ we can conclude that $N$ doesn't have $4k+1$ factors. Suppose there is a factor $q=4k+1$ of $N$, with $N=qd$; then $(a+bI)^N=((a+bI)^q)^d=(a-bI)^d=a^d+b^d(-I)^{4k+3}=a^d+b^d I\pmod q$. So $N$ doesn't have $4k+1$ factors, and there are 3, or 5, or 7, etc., factors of the form $4k+3$ for $N$. Now to the 3-factor case. Suppose $N$ exists. $(a+bI)^{q_1q_2q_3}=a-bI\pmod {q_1q_2q_3}$
$((a+bI)^{q_1})^{q_2q_3}=(a-bI)^{q_2q_3}=a-bI\pmod {q_1}$
$((a+bI)^{q_2})^{q_1q_3}=(a-bI)^{q_1q_3}=a-bI\pmod {q_2}$
$((a+bI)^{q_3})^{q_1q_2}=(a-bI)^{q_1q_2}=a-bI\pmod {q_3}$
As $((a-bI)^{q_1+1})^{(q_1-1)k_1}=(a^2+b^2)^{(q_1-1)k_1}=1\pmod {q_1}$,
multiply both sides by $(a-bI)$:
$(a-bI)((a-bI)^{q_1+1})^{(q_1-1)k_1}=(a-bI)^{(q_1^2-1)k_1+1}\pmod {q_1}$
$(a-bI)((a-bI)^{q_2+1})^{(q_2-1)k_2}=(a-bI)^{(q_2^2-1)k_2+1}\pmod {q_2}$
$(a-bI)((a-bI)^{q_3+1})^{(q_3-1)k_3}=(a-bI)^{(q_3^2-1)k_3+1}\pmod {q_3}$
so we have
$q_2q_3=(q_1^2-1)k_1+1$ (1)
$q_1q_3=(q_2^2-1)k_2+1$ (2)
$q_1q_2=(q_3^2-1)k_3+1$ (3)
Dividing (1) by (2):
$\frac{q_2}{q_1}=\frac{(q_1^2-1)*k_1+1}{(q_2^2-1)*k_2+1}$
Introduce a variable $t$; it must be an integer.
$(q_1^2-1)k_1+1=q_2t$ (4)
$(q_2^2-1)k_2+1=q_1t$ (5)
Transposing:
$(q_1^2-1)k_1=q_2t-1$ (6)
$(q_2^2-1)k_2=q_1t-1$ (7)
Multiplying (6) by (7):
$q_1q_2t^2-(q_1+q_2)t+1-(q_1^2-1)(q_2^2-1)k_1k_2=0$
$\Delta=(q_1+q_2)^2-4q_1q_2(1-(q_1^2-1)(q_2^2-1)k_1k_2)$
$=(q_1-q_2)^2+4q_1q_2(q_1^2-1)(q_2^2-1)k_1k_2$
$t=\frac{(q_1+q_2)\pm\sqrt{(q_1-q_2)^2+4q_1q_2(q_1^2-1)(q_2^2-1)k_1k_2}}{2q_1q_2}$
$(q_1+q_2)\pm\sqrt{\Delta}=(q_1+q_2)\pm(q_1-q_2)\not\equiv 0\pmod {q_1q_2}$
So $t$ does not exist; therefore a composite number $N=4k+3$ satisfying the conditions does not exist.
-
We can still deal with five factors in the same way, so there isn't any composite number that satisfies the conditions. – wanglei Dec 3 at 9:48
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F01/f01ecc.html
# NAG Library Function Document: nag_real_gen_matrix_exp (f01ecc)
## 1 Purpose
nag_real_gen_matrix_exp (f01ecc) computes the matrix exponential, ${e}^{A}$, of a real $n$ by $n$ matrix $A$.
## 2 Specification
#include <nag.h>
#include <nagf01.h>
void nag_real_gen_matrix_exp (Nag_OrderType order, Integer n, double a[], Integer pda, NagError *fail)
## 3 Description
${e}^{A}$ is computed using a Padé approximant and the scaling and squaring method described in Higham (2005) and Higham (2008).
If $A$ has a full set of eigenvectors $V$ then $A$ can be factorized as
$A = V D V^{-1} ,$
where $D$ is the diagonal matrix whose diagonal elements, ${d}_{i}$, are the eigenvalues of $A$. ${e}^{A}$ is then given by
$e^A = V e^D V^{-1} ,$
where ${e}^{D}$ is the diagonal matrix whose $i$th diagonal element is ${e}^{{d}_{i}}$.
Note that ${e}^{A}$ is not computed this way as to do so would, in general, be unstable.
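The two formulas above can be compared numerically. The following Python sketch is not part of the NAG document; it substitutes scipy.linalg.expm (which also uses a Padé-based scaling-and-squaring algorithm) for f01ecc, and evaluates $e^A$ both ways for a small well-conditioned matrix:

```python
import numpy as np
from scipy.linalg import expm

# a small diagonalizable matrix, chosen arbitrarily for illustration
A = np.array([[1.0, 2.0],
              [3.0, 2.0]])

# e^A via the eigendecomposition A = V D V^{-1}, so e^A = V e^D V^{-1}
d, V = np.linalg.eig(A)
expA_eig = V @ np.diag(np.exp(d)) @ np.linalg.inv(V)

# e^A via scaling and squaring with Pade approximation (scipy's expm)
expA_pade = expm(A)

print(np.allclose(expA_eig, expA_pade))   # True for this well-conditioned A
```

When $V$ is ill conditioned or $A$ is nearly defective, the eigendecomposition route loses accuracy, which is why the routine does not compute $e^A$ that way.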
## 4 References
Higham N J (2005) The scaling and squaring method for the matrix exponential revisited SIAM J. Matrix Anal. Appl. 26(4) 1179–1193
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
Moler C B and Van Loan C F (2003) Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later SIAM Rev. 45 3–49
## 5 Arguments
1: order – Nag_OrderType (Input)
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor.
2: n – Integer (Input)
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
3: a[${\mathbf{pda}}×{\mathbf{n}}$] – double (Input/Output)
Note: the $\left(i,j\right)$th element of the matrix $A$ is stored in
• ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{pda}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{a}}\left[\left(i-1\right)×{\mathbf{pda}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On entry: the $n$ by $n$ matrix $A$.
On exit: the $n$ by $n$ matrix exponential ${e}^{A}$.
4: pda – Integer (Input)
On entry: the stride separating row or column elements (depending on the value of order) in the array a.
Constraints:
• if ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$;
• if ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, ${\mathbf{pda}}\ge {\mathbf{n}}$.
5: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
NE_INT_2
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}\ge {\mathbf{n}}$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_SINGULAR
The linear equations to be solved are nearly singular and the Padé approximant probably has no correct figures; it is likely that this function has been called incorrectly.
The linear equations to be solved for the Padé approximant are singular; it is likely that this function has been called incorrectly.
NW_SOME_PRECISION_LOSS
The arithmetic precision is higher than that used for the Padé approximant computed matrix exponential.
## 7 Accuracy
For a normal matrix $A$ (for which ${A}^{\mathrm{T}}A=A{A}^{\mathrm{T}}$) the computed matrix, ${e}^{A}$, is guaranteed to be close to the exact matrix, that is, the method is forward stable. No such guarantee can be given for non-normal matrices. See Section 10.3 of Higham (2008) for details and further discussion.
For discussion of the condition of the matrix exponential see Section 10.2 of Higham (2008).
## 8 Further Comments
The cost of the algorithm is $O\left({n}^{3}\right)$; see Algorithm 10.20 of Higham (2008).
As well as the excellent book cited above, the classic reference for the computation of the matrix exponential is Moler and Van Loan (2003).
## 9 Example
This example finds the matrix exponential of the matrix
$$A = \begin{pmatrix} 1 & 2 & 2 & 2 \\ 3 & 1 & 1 & 2 \\ 3 & 2 & 1 & 2 \\ 3 & 3 & 3 & 1 \end{pmatrix} .$$
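As an independent cross-check of this example (my own addition, using scipy.linalg.expm rather than the NAG routine, whose printed results are in the linked files), one can verify the computed exponential against the identity $\det(e^A) = e^{\operatorname{tr} A}$:

```python
import numpy as np
from scipy.linalg import expm

# the 4-by-4 matrix from the example above
A = np.array([[1.0, 2.0, 2.0, 2.0],
              [3.0, 1.0, 1.0, 2.0],
              [3.0, 2.0, 1.0, 2.0],
              [3.0, 3.0, 3.0, 1.0]])

E = expm(A)
# cross-check: det(e^A) must equal e^{trace(A)} = e^4 for this matrix
print(np.linalg.det(E), np.exp(np.trace(A)))
```

Both values agree (to rounding) at $e^4 \approx 54.598$, which is a quick consistency test for any matrix-exponential routine.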
### 9.1 Program Text
Program Text (f01ecce.c)
### 9.2 Program Data
Program Data (f01ecce.d)
### 9.3 Program Results
Program Results (f01ecce.r)
http://mendicantbug.com/tag/similarity/
# The Mendicant Bug
Wanderings into computational linguistics, science, social media and life…
## The limits of collaborative filtering?
Posted: 25 June 2008 in Uncategorized
Tags: attributes, collaborative filtering, logic, machine learning, netflix prize, proportional analogies, recommender systems, relations, similarity
Peter Turney posted recently on the logic of attributional and relational similarity. Attributes are features or characteristics of a single entity. Relations describe some connection between two entities, such as a comparison. We'll denote a relation between two entities A and B as A:B. A relational similarity between two pairs A,B and C,D will be denoted as A:B::C:D. This is the standard SAT-style proportional analogy: A is to B as C is to D. An attributional similarity indicates that two entities share the same attribute (this could be to varying degrees, but in the boolean case, it's either shared or it isn't). An attributional similarity between A and B will be denoted as A~B. This is like saying $\forall$ Z, A:Z::B:Z. I'm just giving a brief introduction here; this is all covered in Peter's post in greater detail, so I recommend reading that for more information.
This got me thinking about collaborative filtering (because, well, I’ve been thinking about it all the time for the past two years). Collaborative filtering exploits similarities between users to predict preferences for items the user has not seen. In the case of movie recommendations, like with Netflix, this means that users can recommend movies they have seen to similar users who have not seen those movies. There are many ways of doing this. At the heart of it, however, is this notion of relational and attributional similarity.
A: base user
B: movies rated by A
C: some set of other users
D: movies rated by C
We can’t just say that A:B::C:D, since A and C may be nothing like each other. If we constrain it to users with attributional similarity, then we arrive at the definition of collaborative filtering: A~C & A:B::C:D. Logically, it follows that B~D also holds. See Peter’s post for some properties of proportional analogies that make this more clear.
In the non-binary case, we can choose C to be a set of users whose similarity to A varies. Also, our measure of what exactly constitutes similarity can be any number of different metrics. From here, it seems pretty clear that the limit of collaborative filtering is bounded by the attributional similarity A~C. If that similarity is complete (A = C), then it follows that B = D. If A $\neq$ C, does it logically follow that B $\neq$ D? I guess it depends on the similarity metric and on how we define the differences between the sets of movies and between the sets of users.
I wonder if there has been any work done in this area? I wasn’t able to find anything, but maybe I’m just not searching for the right thing. Is it even worth pursuing?
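To make the setup concrete, here is a minimal sketch of user-based collaborative filtering in Python. Everything in it is hypothetical: the toy ratings, the user and movie names, and the choice of cosine similarity as the measure of the attributional similarity A~C.

```python
import math

# Toy ratings: user -> {movie: rating}.  All names here are hypothetical.
ratings = {
    "A": {"m1": 5, "m2": 3, "m3": 4},
    "C": {"m1": 4, "m2": 3, "m3": 5, "m4": 4},
    "E": {"m1": 1, "m2": 5, "m4": 2},
}

def cosine_sim(u, v):
    """Attributional similarity A~C as cosine over co-rated movies."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    num = sum(u[m] * v[m] for m in shared)
    den = math.sqrt(sum(u[m] ** 2 for m in shared)) * \
          math.sqrt(sum(v[m] ** 2 for m in shared))
    return num / den

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for `movie`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or movie not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[movie]
        den += abs(s)
    return num / den if den else None

print(predict("A", "m4"))
```

The prediction for A on m4 is dominated by C (who rates much like A), illustrating the A~C & A:B::C:D pattern: the more attributionally similar the other user, the more weight their relation to the unseen movie gets.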
|
http://gowers.wordpress.com/2010/02/02/edp5-another-very-brief-summary/
# Gowers's Weblog
Mathematics related discussions
## EDP5 — another very brief summary
I wasn’t expecting to have to write another post quite this soon, so this is another one where I don’t have much to say. Here are a few bits and bobs from the last lot of comments, but there’s plenty more in the comments themselves, and actually I think that it’s not that hard to browse through them now that we have depth-1 threading and quite a lot of the comments are very short.
Johan de Jong came up with a very interesting variant of the problem, in which $\mathbb{N}$ is replaced by the space of polynomials over $\mathbb{F}_2$. I confess to being a little sad when the problem was solved negatively soon afterwards, as it had looked as though it might be a rather good model problem. However, the solution was nice.
One of the general aims at the moment is to try to show that a multiplicative function of bounded discrepancy must have some kind of character-like behaviour. Terence Tao has come up with an intriguing argument that shows not that but at least something in the right ball park.
Work has continued on a human proof that completely multiplicative sequences must have discrepancy greater than 2. It looks as though the proof will be complete before too long, and not too painful. Some nice tricks of Jason Dyer have come in helpful in reducing the amount of case analysis that is needed.
This entry was posted on February 2, 2010 at 12:54 am and is filed under polymath5.
### 106 Responses to “EDP5 — another very brief summary”
1. gowers Says:
February 2, 2010 at 1:01 pm
Here is a possibly rather pointless-looking observation. I’ll explain in a later comment why I stumbled on it.
Suppose we build a $\pm 1$ sequence as follows. It is somewhat like the way one builds the character-like sequence $\mu_3$, but there is a twist.
The first step is to write $1 * -1$. That is, we are saying that $f(1)=1$, $f(3)=-1$ and we’re not yet sure about $f(2)$. Having done that, we say that this decides the values whenever $n$ is congruent to 1 or 3 mod 3. That is, our sequence will be 1 * -1 1 * -1 1 * -1 … whatever happens.
Now we fill in two thirds of the gaps by taking minus the sequence we have so far. That is, we take the sequence
1 -1 -1 1 * -1 1 1 -1 1 -1 -1 1 * -1 1 1 -1 …
Next, we fill in two thirds of the remaining gaps by using the original sequence again, getting
1 -1 -1 1 1 -1 1 1 -1 1 -1 -1 1 * -1 1 1 -1 …
and so on.
This sequence is not multiplicative and it does not have small discrepancy (for example it is always -1 at multiples of 3). However, it does have partial sums that grow only logarithmically. One proof of that is an obvious inductive argument. Another is to “observe” that this sequence is precisely the sequence $\mu_3(1),\mu_3(3),\mu_3(5),\dots$, and we know that the partial sums of $\mu_3$ grow logarithmically along the integers and also along the even integers. (The word “observe” was in inverted commas because in fact I took the odd terms of $\mu_3$ and worked out this alternative way of generating the sequence.)
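For anyone who wants to experiment, here is a short sketch of mine (not part of the original argument) that generates the same sequence directly as the odd terms of $\mu_3$ — completely multiplicative, $\mu_3(3)=-1$, and the quadratic character mod 3 away from 3 — and measures the growth of its partial sums.

```python
def mu3(n):
    """Character-like mu_3: completely multiplicative, mu_3(3) = -1,
    and the quadratic character mod 3 at integers coprime to 3."""
    s = 1
    while n % 3 == 0:
        s, n = -s, n // 3
    return s if n % 3 == 1 else -s

N = 1 << 16
# The odd terms mu_3(1), mu_3(3), mu_3(5), ...
seq = [mu3(2 * k + 1) for k in range(N)]

# The partial sums should grow only logarithmically.
best, total = 0, 0
for x in seq:
    total += x
    best = max(best, abs(total))
print(best)   # stays O(log N)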
Since these are not very difficult properties to obtain, I’d better explain in a new comment why I looked at this sequence.
2. gowers Says:
February 2, 2010 at 1:30 pm
I had an idea in this comment here and it is not letting go of me. It’s the usual story: it doesn’t work in a completely obvious way, but neither have I found some counterexample that would crush a lemma I needed, or something like that. In this comment I’ll describe the basic thought, and in subsequent comments I’ll try to explain where I’m at (which is not far) with the details.
The thought is this. Let $f$ be a completely multiplicative function, and for simplicity suppose that $f$ takes values in $\{-1,1\}$. We want to get a handle on the mean-square partial sums of $f$. That is, we want to bound from below the $L_2$ norms of vectors of the form $(F(0),F(1),F(2),F(3),\dots,F(N))$, where each $F(n)$ is defined to be the partial sum $f(1)+f(2)+\dots+f(n)$. (I take $F(0)$ to be 0. Putting it in makes certain expressions tidier later.)
Now I want to write the vector $(F(1),F(2),\dots,F(N))$ as a sum of other vectors. Let us define $G(n)$ to be $\sum_{2m-1\leq n}f(2m-1)$. In other words, $G(n)$ is the partial sum of the odd terms of the sequence up to $n$. Then the vector $(G(0),G(1),G(2),\dots,G(N))$ is equal to
$\displaystyle (0,f(1),f(1),f(1)+f(3),f(1)+f(3),f(1)+f(3)+f(5),\dots)$
and $F-G$ is equal to
$\displaystyle (0,0,f(2),f(2),f(2)+f(4),f(2)+f(4),f(2)+f(4)+f(6),\dots),$
which in turn equals
$\displaystyle f(2)(0,0,f(1),f(1),f(1)+f(2),f(1)+f(2),f(1)+f(2)+f(3),\dots),$
which is $f(2)$ times a “horizontal stretch” of $F$ by a factor of 2.
We can therefore repeat the trick with the stretched version of $F$, decomposing it into a stretched version of $G$ (multiplied by $f(2)$) and an even more stretched version of $F$. If we allowed ourselves to extend the sequences to infinity, and if we defined $\sigma_2$ to be the map that takes a sequence $(x_1,x_2,x_3,\dots)$ to the sequence $(x_1,x_1,x_2,x_2,x_3,x_3,\dots)$, then we would be able to say that $F=G+f(2)\sigma_2G+(f(2)\sigma_2)^2G+\dots$. And indeed, this is true pointwise since the sequence on the right hand side is eventually zero at every $n$ (because of the stretching of the initial 0).
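Both the one-step identity $F=G+f(2)\sigma_2F$ and the unrolled geometric series are easy to verify numerically. Here is a quick sketch of mine, using the character-like function $\mu_3$ (completely multiplicative and $\pm 1$-valued) as the test function; pointwise, the stretch $\sigma_2$ just replaces the index $n$ by $\lfloor n/2\rfloor$.

```python
def mu3(n):
    """Completely multiplicative: mu_3(3) = -1, quadratic character mod 3 elsewhere."""
    s = 1
    while n % 3 == 0:
        s, n = -s, n // 3
    return s if n % 3 == 1 else -s

N = 10_000
F = [0] * (N + 1)   # F[n] = f(1) + ... + f(n), with F[0] = 0
G = [0] * (N + 1)   # odd-term partial sums
for n in range(1, N + 1):
    F[n] = F[n - 1] + mu3(n)
    G[n] = G[n - 1] + (mu3(n) if n % 2 else 0)

f2 = mu3(2)   # = -1

# One-step identity F = G + f(2)*sigma_2(F), pointwise: F(n) = G(n) + f(2) F(n//2).
assert all(F[n] == G[n] + f2 * F[n // 2] for n in range(1, N + 1))

# Unrolled geometric series: F(n) = sum_r f(2)^r G(floor(n / 2^r)).
def unrolled(n):
    total, sign = 0, 1
    while n:
        total, sign, n = total + sign * G[n], sign * f2, n // 2
    return total

assert all(unrolled(n) == F[n] for n in range(1, N + 1))
print("decomposition verified up to", N)
```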
The thought at the back of my mind is that the more you stretch $G$, the more you kill off high frequencies, so if $r$ is quite a lot less than $s$ then $\sigma_2^rG$ shouldn’t correlate much with $\sigma_2^sG$. (There are all sorts of problems with this idea, which I’ll come to, but let me just give the idea.)
Now if there were no correlation at all between the functions $\sigma_2^rG$ then we would be done. That is because for parity reasons at least half the values of $G$ have absolute value at least 1, so the $L_2$ norm of $G$ is at least $1/\sqrt{2}$, and therefore the $L_2$ norm of the sum of $k$ of the stretches $\sigma_2^rG$ is at least $\sqrt{k/2}$.
While writing this, I did finally think of a “crushing” example. I’ll explain it in my next comment. I don’t yet know how damaging it is.
3. gowers Says:
February 2, 2010 at 1:57 pm
The key to making an approach like the above work would of course be proving that the functions $(f(2)\sigma_2)^rG$ did not correlate sufficiently negatively for the sum to be small in $L_2$. However, while I was writing the previous comment, it occurred to me to wonder where I was using the condition that the sequence takes values of modulus at least 1. Indeed, suppose we try out the argument on the “proto-Walters sequence” 1 -1 0 1 -1 0 …, which has bounded discrepancy. We know that the argument must fail somehow, but what actually happens?
Well, in this case,
$\displaystyle G=(0,1,1,1,1,0,0,1,1,1,1,0,0,1,1,1,1,0,0,\dots),$
so it is clear that the functions $\sigma_2^rG$ correlate with each other a lot. Therefore, once we multiply them by $f(2)^r=(-1)^r$, we divide them into two parts, one very positive and the other very negative. So there is no particular reason to stop the cancelling that happens.
That’s not quite the same as explaining why the cancellation does happen, which I don’t fully understand yet.
At this stage, I hope it’s clear what my motivation was for looking at odd values of $\mu_3$. There is a much nicer decomposition of the partial-sums vector of $\mu_3$ based on the number 3 rather than 2, but I wanted to see what happened if one split it up the “wrong” way, because if we have to decide what the “right” splitting is, then we’re back to the difficult problem of trying to classify multiplicative functions.
One final remark. If we know that the odd-partial-sums vector $G$ averages almost zero on all long intervals, then we should be able to get small correlation just because the inner product of a function that averages zero on long intervals with a function that is constant on long intervals (apart from the very occasional step) will be small. So there seems to be a chance of proving a curious result of the following kind. We are given a multiplicative function $f$. Then either its partial sums are unbounded, or the odd partial sums must have significant bias, in the sense that they do things like staying positive (on average) for long chunks of time. I haven’t worked out the argument in enough detail to know what the precise statement should be, but it seems to be saying something like this: either you have unbounded discrepancy or you have unbounded “double discrepancy”. (By this I mean that discrepancy is what you get when you integrate once, and double discrepancy is what you get when you integrate twice — I’m glossing over the fact that you take just odd values.)
4. obryant Says:
February 2, 2010 at 2:51 pm
An automatic sequence is defined like this: let A={a_1,…,a_r} be a finite alphabet, and extend any function “from letters” to “from words” by $f(w_1w_2\cdots w_n) = f(w_1)f(w_2)\cdots f(w_n)$. Now if the length of f(a_i) is k (the image of each letter has the same length), and the first letter of f(a_1) is a_1, then the sequence of words
f(a_1), f(f(a_1)), f(f(f(a_1))), …
converges in the sense that each word in the sequence is a prefix of the next. Let W be the infinite limit word (with index starting at 0, say). A proper coding is a map from A to the complex unit circle. An automatic sequence is a proper coding of W.
As far as I can tell, all of our examples of logarithmic growth are automatic (up to a small number of changes). Borwein, Choi, and Coons ask at the end of their paper whether automatic sequences with $\sum_{n<x} f(n) = o(x)$ have logarithmic growth.
Two things of note: any HAP of an automatic sequence is an automatic sequence. A sequence x_0,x_1,… is k-automatic (k is the length of f(a_i)) if and only if the set of infinite sequences
$\{ (x_{k^\ell n + r})_{n\ge 0} \colon \ell\geq 0, 0 \leq r < k^\ell\}$
is finite.
Showing EDP for automatic sequences may be a nice toy problem.
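To make the definition concrete, here is a small illustration of my own, using the Thue–Morse morphism 0→01, 1→10 with the coding 0→+1, 1→−1: each iterate is a prefix of the next, and the coded sequence has partial sums bounded by 1, so it certainly satisfies the $\sum_{n<x} f(n) = o(x)$ condition.

```python
def substitute(word, rules):
    return "".join(rules[c] for c in word)

rules = {"0": "01", "1": "10"}          # the Thue-Morse morphism, k = 2
w = "0"
for _ in range(16):
    nxt = substitute(w, rules)
    assert nxt.startswith(w)            # each iterate is a prefix of the next
    w = nxt

seq = [1 if c == "0" else -1 for c in w]   # proper coding into {+1, -1}

partial, worst = 0, 0
for x in seq:
    partial += x
    worst = max(worst, abs(partial))
print(len(seq), worst)   # -> 65536 1: the partial sums never leave [-1, 1]
```

The boundedness is immediate from the morphism: positions 2k and 2k+1 always carry opposite signs, so every even-length prefix sums to zero. (Of course, by EDP-type considerations the HAP discrepancy of this sequence is another matter entirely.)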
• gowers Says:
February 2, 2010 at 4:15 pm
Something I don’t quite understand about the question as Borwein, Choi and Coons ask it is that they seem to suggest that the growth should be logarithmic rather than at most logarithmic. Or does their approximate equality symbol mean that the ratio of the two sides tends to a limit that is allowed to be zero? I suppose that must be what they mean, given that the partial sums of the Morse sequence are bounded. Anyhow, that’s a nice question I agree.
I’d have thought showing EDP for automatic sequences should be a reasonably straightforward generalization of the proof for the Morse sequence, but that remark is made without having tried it so perhaps there is a difficulty that would take me by surprise if I did.
• Sune Kristian Jakobsen Says:
February 2, 2010 at 7:09 pm
Why isn’t the constant sequence 1,1,1,… automatic?
• gowers Says:
February 2, 2010 at 8:20 pm
The point is that one adds the condition that the partial sum up to $n$ is $o(n)$.
5. gowers Says:
February 2, 2010 at 4:50 pm
Here’s a question that might be worth investigating. I think it is probably too simple to lead to a serious advance, but giving a good answer to it might clarify one or two things.
Recall from this comment that if $F$ is the partial-sums sequence and $G$ is the odd-partial-sums sequence, then $F=G+\tau_2G+\tau_2^2G+\dots$, where $\tau_2=f(2)\sigma_2$. This implies (and was derived from the fact) that $F=G+\tau_2F$. This gives us an easy way of deriving $F$ from $G$.
For example, if we take the troublesome sequence 1 -1 0 1 -1 0 …, then $G$ works out to be 011110011110011110…, as I mentioned above. So now we are in a position to write out the first term in $F$, which is 0 (because of our convention that the partial-sums sequence always starts at 0). Therefore, $\tau_2F$ starts $00$. Adding this to $G$ tells us that $F$ starts 01. Since $f(2)=-1$, that tells us that $\tau_2F$ starts 0 0 -1 -1. Adding this to $G$ tells us that $F$ starts $0 1 0 0$, so $\tau_2F$ starts 0 0 -1 -1 0 0 0 0, so $F$ starts 0 1 0 0 1 0 0 1.
We can continue this process and prove quite easily by induction that $F$ is $0 1 0 0 1 0 0 1 0 0 1 0 \dots$ (which, reassuringly, is the correct answer).
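Written pointwise, the recursion $F=G+\tau_2F$ is just $F(n)=G(n)-F(\lfloor n/2\rfloor)$ (since $f(2)=-1$ here), so the whole hand computation above can be mechanized in a few lines. A sketch of mine, feeding in the periodic $G$:

```python
# G for the proto-Walters sequence: 0, then the block 1 1 1 1 0 0 repeating.
def G(n):
    return 0 if n == 0 else (1, 1, 1, 1, 0, 0)[(n - 1) % 6]

N = 30
F = [0] * (N + 1)
for n in range(1, N + 1):
    F[n] = G(n) - F[n // 2]   # F = G + f(2)*sigma_2(F) with f(2) = -1

print(F[:12])   # -> [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

This reproduces the partial sums of 1 −1 0 1 −1 0 …, as expected.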
Note that what we are doing here is very like calculating a substitution sequence, except that at each stage we add $G$. It’s a bit like an “inhomogeneous” substitution sequence.
The question that interests me is this. We want to prove that $F$ is unbounded. Is there some nice property of $G$ that is necessary and sufficient for this? And is it conceivable that we could prove that this property cannot occur for multiplicative functions with values of modulus 1?
Here’s a sequence $G$ of some potential interest: we start with 0, then have a 1, then two -1s, then four 1s, then eight -1s, and so on. What is the resulting $F$ if $F=G+\sigma_2F$? I get $0 1 0 0 1 1 1 1 0 0 0 0 0 0 0 0 1 1 \dots$. In other words, I get $(G+1)/2$. This shows that $G$ can have quite long stretches of bias and $F$ will remain bounded.
For comparison, if $G$ is itself the sequence $0 1 0 0 1 1 1 1 0 0 0 0 0 0 0 0 1 1 \dots$, then $F$ works out as $0 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 \dots$ and looks to me as though it grows logarithmically.
Putting this together, it seems that $G$ can afford to have oscillations of logarithmic speed as long as they average zero. But so far I’m saying that just on the basis of a quick look at these examples.
• Terence Tao Says:
February 2, 2010 at 6:01 pm
Tim, can you remind me what the definition of the sequences $\mu_d, \sigma_d, \lambda_d$, etc. are? Or is this on the wiki?
• gowers Says:
February 2, 2010 at 6:21 pm
Terry, $\lambda_p$ (for a prime $p$) is defined by the conditions that $\lambda(n)=\left(\frac np\right)$ if $n$ is not a multiple of $p$, $\lambda(p)=1$, and $\lambda$ is multiplicative. $\mu_p$ is the same but $\mu(p)=-1$. I defined $\sigma_d$ to be the map that takes a sequence and repeats each term $d$ times, so it’s an entirely different sort of object. Perhaps $S_d$ would be a better choice of notation, and it’s probably not too late to change it.
Going back to my comment above, I now see that there’s an easy expression for $F(n)$ in terms of $G$ if one goes back to the original equation. If I’m not much mistaken, it is $G(n)+G(\lfloor n/2\rfloor)+G(\lfloor n/4\rfloor)+\dots$. Turning that around, it says that $F$ will be bounded if and only if $G(a_1)+G(a_2)+G(a_3)+\dots+G(a_k)$ is (uniformly) bounded for all sequences such that $a_1=1$ and $a_{i+1}\in\{2a_i,2a_i+1\}$. This, it seems to me, is a rather strong requirement, since one can imagine an “adversary” always attempting to choose $a_{i+1}$ in a way that will make the sum go away from zero, and unless $G$ is pretty structured the adversary would seem to be able to win.
• gowers Says:
February 2, 2010 at 6:31 pm
Sorry, that expression I gave was valid only when $f(2)=1$. If $f(2)=-1$ then we get
$\displaystyle F(n)=G(n)-G(\lfloor n/2\rfloor)+G(\lfloor n/4\rfloor)-\dots.$
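As a sanity check on this alternating form, a small sketch of mine applies it to the proto-Walters sequence (which has $f(2)=-1$) and confirms both the formula and the boundedness of the partial sums:

```python
def f(n):
    # proto-Walters sequence 1 -1 0 1 -1 0 ...; completely multiplicative, f(2) = -1
    return (0, 1, -1)[n % 3]

N = 10_000
F = [0] * (N + 1)
G = [0] * (N + 1)
for n in range(1, N + 1):
    F[n] = F[n - 1] + f(n)
    G[n] = G[n - 1] + (f(n) if n % 2 else 0)

def alternating(n):
    """F(n) = G(n) - G(n//2) + G(n//4) - ..."""
    total, sign = 0, 1
    while n:
        total, sign, n = total + sign * G[n], -sign, n // 2
    return total

assert all(alternating(n) == F[n] for n in range(1, N + 1))
assert G[1:13] == [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]   # the 011110... pattern
assert max(abs(x) for x in F) == 1   # the partial sums stay in {0, 1}
print("ok")
```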
One other small (but encouraging) observation: when $f$ takes values in $\pm 1$, the values of $G(2x)$ and $G(2x+1)$ are always distinct. This makes life easier for the adversary (who is trying to make $F$ unbounded, so is actually on our side).
• gowers Says:
February 2, 2010 at 6:49 pm
Back on the proof-crushing side, here’s a thought that seems rather problematic. So far, all I’ve really used is that $f(2x)=f(2)f(x)$, and we know that it’s easy to find sequences with that property and bounded partial sums (such as the Morse sequence).
6. Terence Tao Says:
February 2, 2010 at 6:00 pm
I updated the code to make it take command line input, which should make it easier to use. For instance
[program name] 2 -3
would set f(2)=1 and f(3)=-1 and see what one can deduce from this. (I’m adopting the convention that f is odd, so that f(-n)=-f(n).)
It occurs to me that we should look at the long multiplicative sequences that have already been found and see whether they have a fixed value at given primes (e.g. it already seems that they need to be -1 at 2 and 5). If we see such a fixed value, one can presumably eliminate the complementary possibility easily enough.
• gowers Says:
February 2, 2010 at 6:36 pm
Alec Edgington worked out all the sequences of length 246 (there are 500 of them) on his computer, and says that they agree on all primes up to 67. See this wiki page. His algorithm was (I think) a pure backtracking algorithm, so whether it’s safe to conclude that it should be easy to work out by hand what the function is up to 67 is not completely clear to me.
• Alec Edgington Says:
February 2, 2010 at 6:40 pm
The tendency observed so far is for long low-discrepancy multiplicative sequences to send primes congruent to 2 or 3 mod 5 (and the prime 5) to -1, and others to +1. See for example this one on the wiki. So the contrary choices may be worth trying to eliminate first, though it may not be straightforward to do so (witness some of the numbers on this page showing how far one sometimes needs to search to eliminate large drifts).
7. New thread for Polymath5 « Euclidean Ramsey Theory Says:
February 2, 2010 at 8:31 pm
[...] New thread for Polymath5 By kristalcantwell There is a new thread for polymath5. One goal that is close to being reached that of deriving a counterexample that is completely multiplicative from any counterexample. Let me update this there is another thread here. [...]
8. Kristal Cantwell Says:
February 2, 2010 at 10:54 pm
If a completely multiplicative sequence with discrepancy 2 or less has length more than 246 then I think there is a human proof that f(2) and f(5) are -1. Given this I can show that if f(3) is 1 f(7) is -1.
Assume that f(3) is 1 and f(7) is 1. We have by the above that f(2) and f(5) are -1. Then this will give the partial sum at 10 the value 2 so f(11) must be -1.
Since f(12) is 1 the sum at 12 must be 2 and f(13) must be -1.
Then f(25), f(26), f(27), f(28) and f(30) must be 1; this forces f(31) to be -1.
Then f(62), f(63),f(64),f(65) and f(66) must be 1 and we have a contradiction.
So if a completely multiplicative sequence with discrepancy 2 or less has length more than 246 and f(3)=1 then f(7)=-1.
• Uwe Stroinski Says:
February 3, 2010 at 5:38 pm
Being new to the project I first say hello to everybody and then deal with the next case.
In the context of Kristal’s post I assume f(2)=-1, f(5)=-1. For the sake of contradiction I assume furthermore f(3) = 1. Then Kristal showed that f(7) has to be -1.
With these assumptions, the partial sum f[18,21]:=f(18)+f(19)+f(20)+f(21) equals -3+f(19). Since f[1,odd] is in {-1,0,1} it follows that f[18,21]>=-2 and thus f(19)=1.
Similarly we get:
f[168,171]=3+f(17) implies f(17)=-1,
f[34,37]=3+f(37) implies f(37)=-1 and
f[74,77]=3-f(11) implies f(11)=1.
Since f(9)=f(10)=f(11)=f(12)=1 it follows that f(13)=-1.
f[50,55]=-5+f(53)>=-2 which cannot be satisfied for f(53) in {-1,1}.
Hence f(7) cannot be -1 and thus f(3)=-1.
It remains to show (by hand) that there is no completely multiplicative sequence with discrepancy 2 and f(2)=f(3)=f(5)=-1.
• Jason Dyer Says:
February 3, 2010 at 6:48 pm
I updated the wiki with the arguments so far (Terry’s, Kristal’s, and Uwe’s), but it needs a lot of polish.
• Uwe Stroinski Says:
February 5, 2010 at 1:23 pm
This is a proof that there is no completely multiplicative sequence with discrepancy 2, f(2)=f(3)=f(5)=-1 and f(7)=1.
I assume f(2)=-1, f(3)=-1 and f(5)=-1. Then
f[242,245]=-3+f(61) implies f(61)=1.
For the sake of contradiction I assume furthermore f(7) = 1. Then f[1,10]=2 and thus f(11)=-1.
We have:
f[18,21]=-3+f(19) implies f(19)=1,
f[168,171]=3+f(17) implies f(17)=-1,
f[22,25]=3+f(23) implies f(23)=-1,
f[60,63]=3-f(31) implies f(31)=1,
f[60,65]=3-f(13) implies f(13)=1,
f[114,117]=3+f(29) implies f(29)=-1,
f[184,187]=3-f(37) implies f(37)=1,
f[558,561]=-3+f(43) implies f(43)=1,
f[40,43]=3+f(41) implies f(41)=-1,
f[50,55]=3+f(53) implies f(53)=-1,
f[58,61]=3+f(59) implies f(59)=-1,
f[92,95]=-3-f(47) implies f(47)=-1 and
f[132,135]=3-f(67) implies f(67)=1.
0: 1 -1 -1 1 -1 1 1 -1 1 1|2
10: -1 -1 1 -1 1 1 -1 -1 1 -1|0
20: -1 1 -1 1 1 -1 -1 1 -1 -1|-2
30: 1 -1 1 1 -1 1 1 -1 -1 1|0
40: -1 1 1 -1 -1 1 -1 -1 1 -1|-2
50: 1 1 -1 1 1 -1 -1 1 -1 1|0
60: 1 -1 1 1 -1 -1 1 -1 1 1|2
70: f(71) -1 f(73) -1 -1 1 -1 1 f(79) -1|-1+f(71)+f(73)+f(79)
Now f[1,70]=2 and thus f(71)=-1. We have
f[72,75]=-3+f(73) implies f(73)=1,
f[88,91]=3+f(89) implies f(89)=-1 and
f[868,871]=3-f(79) implies f(79)=1.
70: -1 -1 1 -1 -1 1 -1 1 1 -1|0
80: 1 1 f(83) -1 1 -1 1 1 -1 1|3+f(83)
90: 1 -1 -1 1 -1 1 f(97) -1 -1 1|2+f(83)+f(97)
Now f[1,91]=4+f(83) is a contradiction and thus f(7)=-1.
It remains to show (by hand) that there is no completely multiplicative sequence with discrepancy 2 and f(2)=f(3)=f(5)=f(7)=-1. If my computer is right, this is much easier.
• Uwe Stroinski Says:
February 5, 2010 at 4:10 pm
This is a proof that there is no completely multiplicative sequence with discrepancy 2, f(2)=f(3)=f(5)=-1 and f(7)=-1 (which is the last case).
I assume f(2)=-1, f(3)=-1 and f(5)=-1. Then
f[242,245]=-3+f(61) implies f(61)=1.
For the sake of contradiction I assume furthermore f(7) = -1.
We have:
f[14,17]=3+f(17) implies f(17)=-1 and
f[34,37]=3+f(37) implies f(37)=-1.
Sub-case 1: f(11)=-1.
0: 1 -1 -1 1 -1 1 -1 -1 1 1|0
10: -1 -1 f(13) 1 1 1 -1 -1 f(19) -1|-2+f(13)+f(19)
Then f[1,12]=-2 and thus f(13)=1. Now
f[54,57]=3-f(19) implies f(19)=1.
10: -1 -1 1 1 1 1 -1 -1 1 -1|0
20: 1 1 f(23) 1 1 -1 -1 -1 f(29) -1|f(23)+f(29)
f[1,22]=2 implies f(23)=-1 and then f[1,25]=3 is a contradiction. Thus f(11)=1.
Sub-case 2: f(11)=1.
0: 1 -1 -1 1 -1 1 -1 -1 1 1|0
10: 1 -1 f(13) 1 1 1 -1 -1 f(19) -1|f(13)+f(19)
We have f(9)=f(10)=f(11)=f(14)=f(15)=f(16)=1 and f(12)=-1. This implies f(13)=-1.
f[34,39]=3-f(19) implies f(19)=1,
f[30,33]=-3+f(31) implies f(31)=1,
f[28,33]=-3+f(29) implies f(29)=1 and
f[242,247]=-3+f(41) implies f(41)=1.
10: 1 -1 -1 1 1 1 -1 -1 1 -1|0
20: 1 -1 f(23) 1 1 1 -1 -1 1 -1|1+f(23)
30: 1 -1 -1 1 1 1 -1 -1 1 1|3+f(23)
40: 1 -1 f(43) 1 -1 -f(23) f(47) -1 1 -1|2+f(43)+f(47)
Since f[1,41]=4 +f(23) we get a contradiction and thus f(11) is not 1.
Therefore we get a contradiction in the main case (the case f(7)=-1).
Since f(7)=1 is treated in an earlier post, that completes the proof (by hand) that there is no completely multiplicative sequence with discrepancy 2.
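As a mechanical cross-check on case analyses like these: for a completely multiplicative $f$, every HAP sum factors as $\sum_{k\le m}f(kd)=f(d)F(m)$, so discrepancy at most 2 is the same as $|F(n)|\le 2$ for all $n$. A small backtracking sketch of mine over the values at primes then reproduces the bound of 246 quoted elsewhere in the thread:

```python
def longest_multiplicative(maxdisc=2, cap=300):
    """Longest completely multiplicative +-1 sequence whose partial sums all
    lie in [-maxdisc, maxdisc].  For such f this is equivalent to discrepancy
    at most maxdisc, since every HAP sum is f(d) times a partial sum."""
    spf = list(range(cap + 1))            # smallest-prime-factor sieve
    for p in range(2, int(cap ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, cap + 1, p):
                if spf[m] == m:
                    spf[m] = p

    f = [0] * (cap + 1)
    f[1] = 1
    best = 0

    def extend(n, s):                     # s = F(n-1)
        nonlocal best
        best = max(best, n - 1)
        if n > cap:
            return
        p = spf[n]
        # Composites are forced by multiplicativity; only primes branch.
        choices = [f[p] * f[n // p]] if p < n else [1, -1]
        for v in choices:
            if abs(s + v) <= maxdisc:
                f[n] = v
                extend(n + 1, s + v)

    extend(2, 1)                          # f(1) = 1, so F(1) = 1
    return best

print(longest_multiplicative())   # -> 246, matching the computer searches
```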
• Kristal Cantwell Says:
February 5, 2010 at 10:05 pm
This step, f[558,561]=-3+f(43) implies f(43)=1, goes beyond the range that proves no sequence of discrepancy 2 has length more than 246. So what we actually get is that no sequence of discrepancy 2 has length more than 560. We know the length is at most 246 from computer proofs. I think this can be fixed so that we have a human proof of this as well.
• gowers Says:
February 5, 2010 at 10:13 pm
I think Uwe is just trying to show that there is no infinite sequence of discrepancy 2. However, I am interested in the distinction between the maximum length you can get and the maximum length you can get before there is a fairly easy proof (with no backtracking) that you are doomed. To that end, I’d be interested in the most economical proof that there is no infinite multiplicative sequence of discrepancy 2, even if it doesn’t prove that there is no such sequence of length 247. I even think this is somehow a more fundamental question, which is not to say that they aren’t both interesting.
• Kristal Cantwell Says:
February 5, 2010 at 11:18 pm
I have managed to fix the proof I mentioned earlier, so that it does not go beyond the bounds that prove 246.
Let me recap the proof up to this point:
“I assume f(2)=-1, f(3)=-1 and f(5)=-1. Then
f[242,245]=-3+f(61) implies f(61)=1.
For the sake of contradiction I assume furthermore f(7) = 1. Then f[1,10]=2 and
thus f(11)=-1.
We have:
f[18,21]=-3+f(19) implies f(19)=1,
f[168,171]=3+f(17) implies f(17)=-1,
f[22,25]=3+f(23) implies f(23)=-1,
f[60,63]=3-f(31) implies f(31)=1,
f[60,65]=3-f(13) implies f(13)=1,
f[114,117]=3+f(29) implies f(29)=-1,
f[184,187]=3-f(37) implies f(37)=1,”
from this we can get f[85,91] = 5 + f(89) - f(43), which implies that f(43)=1.
So there remain things to do to reach 246 as the maximal length.
• Uwe Stroinski Says:
February 6, 2010 at 9:27 am
When I started I tried to maintain the 246 bound. As the proof got longer and longer I got more humble and set the bound to 1000.
Of course, with enough patience this can be fixed by adding more cases or, as Kristal did when fixing my last post, by considering longer partial sums. I tried to get away with partial sums of length 3 to keep the checking manageable, since the proof was essentially generated by computer.
9. Steven Heilman Says:
February 3, 2010 at 2:25 am
Suppose (as suggested before) that we simplify to real valued multiplicative functions $f$. (The following comment comes mainly from a remark in [Balog, Granville, Soundararajan]). Then [Wirsing 1967] seems to say: first, assume a decay condition on $f$. That is, suppose $\sum_{p\leq x}f(p)\frac{\log p}{p}\sim\tau\log x$ for some $\tau>0$. Then the quantity $\lim_{x\to\infty} \frac{1}{x} \sum_{n\leq x}f(n)$ exists, and is given by his Equation (1.6) (by letting $\lambda=1$ identically, and then setting $\lambda^{*}=f$).
Without looking at the (rather technical) proofs, I would be skeptical about whether the decay condition could be removed while still retaining the same argument.
10. gowers Says:
February 3, 2010 at 9:40 am
Here’s an attempt to prove that what we are trying to prove is very hard. What I would really like is for somebody to shoot it down.
Over at mathoverflow, I’ve been trying to see whether there is some intuitive reason for the fact that the partial sums of the Liouville function $\lambda$ grow at least as fast as $\sqrt{n}$ (give or take $n^{o(1)}$). I get the impression from the responses that there is not an elementary proof of this fact: the proofs all use properties of the Riemann zeta function and a bit of complex analysis to show that there must be a zero with real part at least 1/2.
It seems pretty clear that this approach is not going to generalize to all multiplicative functions that are not character-like. For instance, it seems that the functional equation is important in the proof alluded to above, and that property of the Dirichlet series associated with a multiplicative function is a rather special one.
But in that case, it looks as though showing that non-character-like multiplicative functions have partial sums that grow much faster than logarithmically is going to involve solving a known hard problem (that of finding an elementary proof of this fact for the Liouville function).
Does anyone see a way out of this difficulty? The results of Granville and Soundararajan are an obvious place to look, given that they are generalizing various results to all multiplicative functions. But nobody has yet pointed out some killer result or technique that does what we want.
If it does turn out to be hopeless to prove fast growth for non-character-like functions, then either the whole problem is hopeless or we must find some unified proof that gives at least logarithmic growth for all functions. I’d be very interested to know other people’s opinions on this.
• Alec Edgington Says:
February 3, 2010 at 12:36 pm
Can one prove, without recourse to the functional equation, that the partial sums of the Liouville function are unbounded?
• gowers Says:
February 3, 2010 at 12:40 pm
I too would be very interested in an answer to that. The feedback from mathoverflow is a little discouraging, but maybe this weaker question has not been thought about so much.
• Terence Tao Says:
February 4, 2010 at 7:56 am
Here’s an elementary way to show unbounded discrepancy of Liouville.
Lemma: let g be a {-1,1}-valued completely multiplicative function of bounded discrepancy. Then the (conditionally convergent) sum $L(1,g) := \sum_n g(n)/n$ is non-zero.
Proof: Observe that the Dirichlet convolution g*1 is non-negative, and at least 1 on square numbers, so
$\sum_{n \leq x} g*1(n) / \sqrt{n} \geq \frac{1}{2} \log x + O(1).$
On the other hand, expanding the LHS using the Dirichlet hyperbola method and the bounded discrepancy hypothesis (which in particular implies that $\sum_{n \leq x} g(n)/n = L(1,g) + O(1/x)$ by summation by parts), one can eventually write the left-hand side as $L(1,g) \sqrt{x} + O(1)$, which leads to a contradiction for x large enough if L(1,g) vanishes. QED
(This lemma is modeled on the standard elementary proof that $L(1,\chi) \neq 0$ for quadratic characters $\chi$.)
In the case of Liouville, lambda*1 is exactly equal to 1 on the squares and nowhere else. So the above argument gives the asymptotic
$\frac{1}{2} \log x + O(1) = L(1,\lambda) x^{1/2}+O(1)$
which is impossible regardless of whether $L(1,\lambda)$ is non-zero or not.
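Both ingredients of the lemma can be checked numerically; the sketch below (mine, not Tao’s actual computation) verifies that $\lambda*1$ is the indicator of the squares, and that the resulting sum $\sum_{n\le x}(\lambda*1)(n)/\sqrt n=\sum_{k\le\sqrt x}1/k$ is $\frac12\log x+O(1)$, with the $O(1)$ tending to Euler’s $\gamma$.

```python
import math

def liouville_sieve(N):
    """lambda(n) = (-1)^Omega(n), via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

N = 1_000_000
lam = liouville_sieve(N)

# (lambda * 1)(n) = sum_{d|n} lambda(d) equals 1 on perfect squares, else 0.
def conv_with_one(n):
    return sum(lam[d] for d in range(1, n + 1) if n % d == 0)

assert all(conv_with_one(n) == (1 if math.isqrt(n) ** 2 == n else 0)
           for n in range(1, 1000))

# Hence sum_{n<=x} (lambda*1)(n)/sqrt(n) = sum_{k<=sqrt(x)} 1/k
# = (1/2) log x + gamma + o(1).
conv_sum = sum(1 / k for k in range(1, math.isqrt(N) + 1))
print(conv_sum - 0.5 * math.log(N))   # close to Euler's gamma = 0.5772...
```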
• Terence Tao Says:
February 4, 2010 at 8:03 am
To put it another way: the textbook elementary proof of Dirichlet’s theorem tells us that if g is {-1,+1}-valued, completely multiplicative, and has bounded discrepancy, then g(p)=+1 for infinitely many primes p (indeed $\sum_p g(p)/p$ is conditionally convergent, and by working harder (and using complex analysis of course) one can eventually obtain the prime number theorem for g, $\sum_{p \leq x} g(p) = o( x/\log x)$ and maybe even the classical error term). Of course, the Liouville function does not have this property, hence has unbounded discrepancy.
• gowers Says:
February 4, 2010 at 9:08 am
Many thanks for that, and I’m relieved that such a proof exists: as requested, you have destroyed my informal argument that what we are trying to prove might be out of reach. Here, if anyone is interested, is a link to an online article about Dirichlet’s hyperbola method, which not only tells you what the method is, but also gives an example and describes the circumstances where it is particularly useful. (In fact, it wouldn’t take much work to turn it into a Tricki article …)
11. gowers Says:
February 3, 2010 at 11:41 am | Reply
I’ve just had to take my car to be looked at and then walk back home, which was a perfect opportunity to think about multiplicative functions, but it has left me puzzled. I decided to think about how the partial sums of a random multiplicative function behave. (By the way, I am still using “multiplicative” as a lazy way of saying “completely multiplicative”, since I have no use for the weaker notion at the moment.) To that end, let $f$ be a completely multiplicative function taking values in $\{-1,1\}$, where the values at primes are chosen randomly (=independently and uniformly).
What is the expected partial sum up to $N$? Well, the expected value at any $n$ is 1 if $n$ is a perfect square and 0 otherwise, so we get roughly $\sqrt{N}$ as the expected partial sum. So for rather trivial reasons we have that the expectation of the square of the partial sum is at least $N$ (or rather the largest square less than $N$), and therefore, by linearity of expectation, that the expected mean-square partial sum up to $N$ is at least $N/2$ or something like that.
So far this looks quite reasonable: we seem to have that the mean-square partial sum of a random multiplicative function is rather like the mean-square partial sum of the Liouville function $\lambda$, confirming our heuristic that $\lambda$ behaves randomly.
But that is not the sense in which $\lambda$ is random. Indeed, we choose the values of $\lambda(p)$ in as unrandom a way as we can, and the randomness is in its additive properties. I’ll come back to this point.
Another point is that all I have shown so far is a lower bound for the typical size of a partial sum of a random multiplicative function. What if we try to be a bit more careful and actually get an estimate? What, then, is the expected value of $(f(1)+\dots+f(n))^2$? To answer this we need to think about the expectation of $f(r)f(s)$ for each $r$ and $s$. And the answer is that it is 1 if $rs$ is a perfect square and 0 otherwise, since $f(r)f(s)=f(rs)$.
Now the relation “$rs$ is a perfect square” is an equivalence relation. To form an equivalence class, you take a square-free number $t$ and take all numbers of the form $m^2t$. The size of this equivalence class will be about $\sqrt{n/t}$. Also, the square-free numbers are dense, so the number of equivalence classes is $cn$.
We therefore know that $f(r)f(s)$ has expected value 1 if $r$ and $s$ belong to the same equivalence class and 0 otherwise. It follows that the expectation of $(f(1)+\dots+f(n))^2$ is the sum of the squares of the sizes of the equivalence classes, which is (to within a constant) $\sum_{t=1}^n n/t=n\log n$.
My puzzlement disappeared as I wrote that — I had made the mistake of thinking that a typical equivalence class had size $c\sqrt{n}$, which gave a bound of $n^{3/4}$.
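To make the count concrete: the equivalence class of a squarefree $t$ consists of the numbers $m^2t\leq n$, so it has $\lfloor\sqrt{n/t}\rfloor$ elements, and the expected mean-square partial sum is the sum of the squares of these class sizes. A quick Python sketch (function names mine):

```python
from math import isqrt

def is_squarefree(t):
    # t is squarefree iff no prime square divides it
    d = 2
    while d * d <= t:
        if t % (d * d) == 0:
            return False
        d += 1
    return True

def mean_square(n):
    # E[(f(1)+...+f(n))^2] = sum over squarefree t <= n of (class size)^2,
    # where the class of t is {m^2 t : m^2 t <= n} and has floor(sqrt(n/t)) elements
    return sum(isqrt(n // t) ** 2 for t in range(1, n + 1) if is_squarefree(t))

# e.g. mean_square(4) = 6: the class {1,4} contributes 4, and {2},{3} contribute 1 each
```

One can check that the growth is of order $n\log n$, with a constant reflecting the density $6/\pi^2$ of the squarefree numbers.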
Let me now try to work out the fourth moment. Given an integer $x$ I’ll define $A(x)$ to be the set of primes that occur an odd number of times in the prime factorization of $x$. Note that $A(xy)=A(x) \oplus A(y)$ (where I’m writing $\oplus$ for symmetric difference, or sum mod 2, or exclusive or, or whatever you want to call it).
We’re trying to work out the expectation of $(f(1)+\dots+f(n))^4$, so we need to think about the expectation of $f(a)f(b)f(c)f(d)$. This will be 1 if $A(a)\oplus A(b)\oplus A(c)\oplus A(d)=\emptyset$, or equivalently if $A(a)\oplus A(b)=A(c)\oplus A(d)$, and 0 otherwise. So we end up with the expression
$\displaystyle \sum_{A(a)\oplus A(b)=A(c)\oplus A(d)}|A(a)||A(b)||A(c)||A(d)|.$
(Here I am summing over the equivalence classes rather than over the elements.)
This seems to be a little tricky to estimate, since if you choose $A(a)$, $A(b)$ and $A(c)$, then the size of $A(d)$ depends a lot on how much they overlap. I think I’d better save this for another comment.
The punchline of this comment was supposed to be the deliberately silly remark that we now had a proof of EDP for almost all multiplicative functions, and all we had to do was deal with the few that remained. So far, I’ve at least shown that it is true for a significant percentage of them (not that I would wish to claim that this was a new result).
One thing I find slightly mysterious is the way a low-discrepancy multiplicative function manages to deal with the fact that all perfect squares must have value 1. For a random multiplicative function that trivially gave us a lower bound of $\sqrt{N}$ for a typical partial sum, so if we are to avoid that then we need negative correlations between $f(r)$ and $f(s)$ when $rs$ is not a perfect square.
• Klas Markström Says:
February 3, 2010 at 3:01 pm
I guess one way to try to balance the contribution from the fact that perfect squares always give 1 could be to introduce a small bias in the values of the primes, making them slightly more likely to give -1. But then this bias would have to decrease for larger p in order to not give a large negative total sum.
• obryant Says:
February 4, 2010 at 4:23 am
For the usual simple random walk, we get $c\sqrt N$ for the expected partial sum, but almost all paths get substantially larger than that infinitely often. In particular, the $N$-th partial sum gets as large as $\sqrt{2N\log\log N}$ (asymptotically) for infinitely many $N$.
Is there a law of the iterated logarithm for random multiplicative functions?
12. Sune Kristian Jakobsen Says:
February 3, 2010 at 12:02 pm | Reply
If we have a multiplicative sequence with bounded discrepancy, we can change the values at multiples of some prime p to 0, and the sequence will still have bounded discrepancy. We could also switch the sign at all multiples of p, but then the sequence wouldn’t be completely multiplicative (it would be – at p^2). But what if we only change the value at terms whose prime factorization contain p to an odd power? This sequence would be multiplicative, but is it possible to show that it can’t have bounded discrepancy?
13. gowers Says:
February 3, 2010 at 12:07 pm | Reply
Here’s a thought that probably goes nowhere. Let’s think of our multiplicative function $f$ as defined not on $\mathbb{N}$ but as an $\mathbb{F}_2$-linear function defined on sets of primes. That is, given a set $A$ of primes, we think of $\prod_{p\in A}f(p)$ as $f(A)$ rather than as $f(\prod_{p\in A}p)$. And another difference is that we don’t even think about numbers that are not square free. Instead, we associate with each set $A$ a number $S(A,n)$, which is the number of positive integers less than $n$ that can be written in the form $m^2\prod_{p\in A}p$. That is, $S(A,n)$ is the size of the equivalence class associated with $A$.
With this notation, we can rewrite $(f(1)+\dots+f(n))^2$ as $(\sum_A f(A)S(A,n))^2$.
How might that help us produce a lower bound? I’m not sure it will, but the basic idea is this. For any particular $n$, we cannot say what will happen (since we know that some partial sums are likely to be zero, and definitely can be zero). But we will be averaging over $n\leq N$, and this changes things a bit, because the sizes of the sets $S(A,n)$ will be changing. What’s more, we are now forced to choose the function $f$ once and for all.
What I find just a little bit encouraging about this is that the squares of the sizes of the equivalence classes are distributed in a $1/x$-ish way: it seems to provide a tiny little opening through which logarithmic growth might squeeze. What I find discouraging is there are a lot of singleton $A$s with $S(A,n)=1$ (namely all large primes). That makes me think of another question, which I’ll put in my next comment.
• Gil Kalai Says:
February 3, 2010 at 2:25 pm
A related question: I find the “square-free” analog of EDP rather appealing. Suppose you consider only square free numbers and associate 0 to all other numbers. How small can the discrepancy be? Are there still logarithmic examples? Or perhaps the lower bound becomes power-like. (I must admit I do not know what happens for the basic mod 3 based example and related Matryoshka Sequences.)
• obryant Says:
February 3, 2010 at 8:05 pm
The “squarefree $\lambda_3$” appears to hit discrepancy k around $k^{2.7}$ pretty steadily for $30<k<64$.
The "squarefree $\mu_3$" appears to hit discrepancy k around $k^{2.8}$ pretty steadily for $10<k<59$.
• gowers Says:
February 4, 2010 at 12:45 am
Another small remark to add to what I wrote above, but it still probably doesn’t do much. With the rewriting above, we can expand out the bracket and interchange summation to obtain the expression
$\displaystyle \sum_{A,B}f(A)f(B)\mathbb{E}_{n\leq N}S(A,n)S(B,n)$
for the mean-square partial sum up to $N$. So it’s just conceivable that one could get somewhere by understanding the (positive definite) matrix $T_{A,B}=\mathbb{E}_{n\leq N}S(A,n)S(B,n)$. We’d be wanting $\langle f,Tf\rangle$ to be unbounded. The information we would have (but might not be obliged to use) is that $f$ depends $\mathbb{F}_2$-linearly on $A$ (in the sense that it takes values in $\{-1,1\}$ and $f(A \oplus B)=f(A)f(B)$).
• gowers Says:
February 4, 2010 at 11:30 am
I should add something to the comment just above, because I haven’t fully explained what the (unlikely to work) idea is. Suppose we make a guess at the value of $\mathbb{E}_{n\leq N} S(A,n)S(B,n)$ as follows. Let $x$ be the product of the primes in $A$ and let $y$ be the product of the primes in $B$. Then $S(A,n)S(B,n)$ is approximately $n/\sqrt{xy}$, so its expected value is approximately $N/(2\sqrt{xy})$. Therefore, if we write $G(x)=\sqrt{N/(2x)}$, it is approximately $G(x)G(y)$.
If this were exact, it would be bad news because it would tell us that the matrix $T_{A,B}=G(A)G(B)$ had rank 1, and $\langle f,Tf\rangle$ would be nothing other than $\langle f,G\rangle^2$. So all we’d have to do is find a $\pm 1$ combination of the $G(A)$ that was bounded, which doesn’t look hard. However, because our estimates were only estimates, what we actually have is an approximation to this very low-rank matrix. Could it be that the real matrix has a much higher rank?
As usual, there is the problem of understanding why the proof would work for functions with values in $\{-1,1\}$ and not for functions in $\{-1,0,1\}$. Since I don’t have a good answer to that, I don’t have much faith in this approach, but I thought I’d mention it anyway.
• Gil Kalai Says:
February 4, 2010 at 5:56 pm
a question and a comment: 1) why does the $1/x$-ish way $S(A,n)$ distributes give hope for $\log n$ behaviour? 2) maybe you should allow functions with values in $-1$, $0$ and $1$ and just hope for a claim that unless the function is nonzero on a bounded number of variables (primes) the discrepancy is unbounded. is it so?
• gowers Says:
February 4, 2010 at 6:20 pm
My original answer to your question was rather pathetic: the function 1/x sums to log. Let me try to give a better one, but the calculations may well not work out.
If the product of primes in $A$ is $x$, then $S(A,n)$ is approximately $\sqrt{n/x}$ and the error will have order of magnitude 1. So the error when we approximate $S(A,n)S(B,n)$ has order $\sqrt{n/x}+\sqrt{n/y}$, whereas the value itself is close to $n/\sqrt{xy}$. If we think of the errors as fairly random, then when we add them we should get something like the root mean square of the individual errors. That seems to introduce a term like $\sqrt{n\log n}$ into the picture, which is vaguely promising except that I don’t actually know where I’m going with this line of thought.
Basically, I don’t even have a heuristic argument at this stage. But you could summarize what I’m trying to do as this: exploit the fact that equivalent integers (that is, ones whose product is a perfect square) take the same value, by exploiting the fact that the sizes of the equivalence classes look ever so slightly random (in that we know roughly how big they are, but if $n$ is random then the errors are distributed in a fairly random way).
I’m not sure I completely understand your second suggestion. What worries me is the function 1 -1 0 1 -1 0 1 -1 0 … . I don’t want to find myself trying to develop an argument that would prove that that function had unbounded discrepancy.
• Gil Says:
February 4, 2010 at 9:01 pm
I would like to regard the function 1 -1 0 1 -1 0 1 -1 0 as “depending” on a single prime (p=3), hoping that when a function to {0, 1, -1} or even to the unit disk does not depend on a single or a bounded number of primes then it has unbounded discrepancy. But I do not know how precisely to define the influence of a prime on a sequence, and if this hope is realistic.
14. gowers Says:
February 3, 2010 at 12:22 pm | Reply
I have an idea for a way that one might try to build a multiplicative function with slow-growing partial sums. If it can be made to work, then it would suggest that the kind of classification we are looking for would be difficult to achieve.
I’ll start with a simple observation. Suppose you were given a random walk and told that you could change it in every $k$th place, if you wanted to, and that your aim was to keep it as close to the origin as you could. Here’s how you might go about it. Every $k^2$ steps you would see what had happened to the partial sums. Typically, they would have drifted by about $k$, and there would be $k$ places you could change, so you could bring the walk back to the origin.
OK that’s not quite right, since on average only half the possible changes could help, and there’s a small chance that the walk would have drifted further, and so on and so forth, but some kind of argument like this ought to work, with a slightly worse bound, with high probability at each stage — and one could compensate for the unlucky stages by taking longer chunks of walk from time to time. But the point is that it ought to be possible to get the walk to stay much closer to the origin — I think logarithmic growth should be achievable (with a constant that depends on $k$).
Now let us do something a bit similar. This time we choose a random multiplicative function. It ought to behave a bit like a random walk. We then allow ourselves to change it at some, but not all, primes. We could even randomize the changes we made, by taking an interval with several primes in it and randomly choosing the ones we change.
It may be hard to prove anything rigorously — I don’t know — but heuristically it looks to me as though by this sort of procedure we could create a multiplicative function with a lot of randomness in it (so it would be most unlikely to exhibit character-like behaviour, I would have thought) but with growth less than $n^\epsilon$.
• Alec Edgington Says:
February 3, 2010 at 1:02 pm
Interesting. It would be good to understand this stochastic process better, and how similar it is (or isn’t) to a simple random walk.
• gowers Says:
February 3, 2010 at 1:07 pm
One way to do that would be to try to run it as an algorithm and see if one can produce lots of unstructured multiplicative functions all with pretty small partial sums. I’m imagining much longer sequences than the ones we’ve looked at so far — perhaps of length more like a million rather than 1000 — so we wouldn’t be displaying them on the wiki. A random such function would have partial sums of order of magnitude 1000, so if one could keep them down to 25 or something like that, then it would be a fairly convincing back-up to the heuristic argument.
• gowers Says:
February 3, 2010 at 8:02 pm
I’ve tried running it as a by-hand algorithm, and the results are interesting, and slightly surprising. My algorithm was this: if the sum up to p-1 is positive then f(p)=-1, if it is negative then f(p)=1, and if it is zero then f(p)=-1. I made the last choice to give the sequence a little help in the fight against perfect squares.
So far I’ve got up to 155. I got all the way to 45 before I first hit $\pm 3$ (and in fact I hit -3). I hit 3 at 57. I hit -3 again at 99, 105 and 119. I was feeling pretty good by this point, but then suddenly the sequence went a bit wild and the records started tumbling: 4, 5, 6 at 122, 123, 124, then 7 at 133 and 8 at 136. But fairly soon there were enough -1s to get things back under control again.
Thomas Sauvaget seems to have programmed something a bit like this (see the end of this page on the wiki) and gets a disappointingly large drift that seems to show that my heuristic arguments are wrong. But I don’t see what would be wrong with them, so I wonder whether I understand properly what he has done. Anyhow, I’d be very interested if someone was prepared to program this very simple greedy algorithm and tell me roughly how fast the partial sums grow. The problem at the moment is that if F(136)=8, then F(n) could just as well be $\sqrt{n/2}$ as $\log_2n$, so I need more data.
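In case it helps anyone who wants to program it, here is one way to code the greedy rule (a sketch; a smallest-prime-factor sieve fills in the composite values multiplicatively, and the names are my own):

```python
def greedy_multiplicative(N):
    # Smallest-prime-factor sieve, used to fill in f completely multiplicatively.
    spf = list(range(N + 1))
    for i in range(2, int(N**0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, N + 1, i):
                if spf[j] == j:
                    spf[j] = i
    f = [0] * (N + 1)
    f[1] = 1
    partial = 1
    sums = [0, 1]  # sums[n] = f(1) + ... + f(n)
    for n in range(2, N + 1):
        if spf[n] == n:
            # Greedy choice at a prime: -1 if the partial sum up to n-1 is
            # positive or zero (ties go to -1), +1 if it is negative.
            f[n] = -1 if partial >= 0 else 1
        else:
            p = spf[n]
            f[n] = f[p] * f[n // p]  # complete multiplicativity
        partial += f[n]
        sums.append(partial)
    return f, sums
```

This reproduces the by-hand run described above: the partial sums first reach $-3$ at 45 and $3$ at 57.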
• Alec Edgington Says:
February 3, 2010 at 8:22 pm
Tim, here you go: partial sums plotted up to $10\,000$.
http://www.obtext.com/erdos/greed.png
• Alec Edgington Says:
February 3, 2010 at 8:33 pm
Perhaps I got the wrong end of the stick, but I thought your heuristic argument required a certain amount of ‘backtracking’: you’d look at the sum up to some value and then change a relatively small number of earlier primes to improve it. This does strike me as interesting, but the difficult question is how to decide which primes to change…
• Thomas Sauvaget Says:
February 3, 2010 at 8:34 pm
Tim, what I had done there was to construct a sequence by filling in multiplicatively and choosing at each undetermined prime $p$ either a +1 or -1 depending on which one minimized a kind of “global sum with maximal forward information”, namely the sum of all the HAP sums up to $q$ (the next prime after $p$). I had not computed the drift, but the plot shows that indeed there were rather long sequences of minuses in a row.
I have some time now to quickly code your algorithm, but I apologize I don’t understand what you mean by “the sum up to p-1”: is that just the sum along the sequence, or something else?
• Alec Edgington Says:
February 3, 2010 at 8:39 pm
On second thoughts, perhaps it isn’t so difficult to choose the primes: you only expect to have to change $\sqrt{N}$, but you’ve got something like $N / \log N$ in the second half of your sequence, so you can easily choose enough without messing about with values at multiples. Hmm…
• Johan de Jong Says:
February 3, 2010 at 8:41 pm
Running your suggestion up to n = 10000 gives:
Smallest value is -97 happens at 7049
Biggest value is 140 happens at 8668
Running your suggestion up to n = 100000 gives:
Smallest value is -1067 happens at 86255
Biggest value is 1121 happens at 99977
Running your suggestion up to n = 1000000 gives:
Smallest value is -10507 happens at 999991
Biggest value is 12261 happens at 863579
I could have made a mistake but the values you mention agree with what I get.
• Klas Markström Says:
February 3, 2010 at 8:51 pm
Maybe one of you could try making a plot of $\frac{\log(1+|S_n|)}{\log(n)}$, where $S_n$ is the sum at n. This would give a rough idea about the rate of growth.
• Alec Edgington Says:
February 3, 2010 at 9:09 pm
It does look curiously regular, like $S_n \approx \Re (z n^{1+it})$ for some $z \in \mathbb{C}$ and $t \in \mathbb{R}$…
• Johan de Jong Says:
February 3, 2010 at 9:24 pm
You’re going to laugh at me, but I modified my code to put in a +1 at p only if the sum up to p-1 is <-10, and it behaves _much_ better. I just ran this up to 1000000 and the smallest value I get is -62 and the biggest value I get is 96. Coding error? I'll check, but you can try it too…
• gowers Says:
February 3, 2010 at 9:50 pm
I must say I find these results very interesting. First, I do not begin to have an explanation for the rapid growth of the partial sums — it’s as though there is some accidental resonance going on. I’d actually be quite interested to know whether the resulting multiplicative function is approximating some known function with a nice formula.
As for Johan’s modification, something makes me believe it. There appears to be a message (for which, again, I have not even a heuristic explanation) that if you are trying to control a multiplicative function you shouldn’t panic, particularly when it is negative.
There are some other variants one could perhaps try. One would be to choose the value of the prime at p to be 1 with probability 2/3 if the partial sum up to p-1 is negative and with probability 1/3 if the partial sum up to p-1 is positive. (Or one could have different probabilities for the two cases.) Another would be to choose random values at say three quarters of the primes and greedy for the remaining quarter. And I’m sure there are several other variants. But I’m excited about Johan’s (which isn’t randomized but is probably not too character-like either), since it suggests that multiplicative functions with slow-growing partial sums don’t have to be all that special.
Indeed, I’d be interested if Johan could tell us whether there is any sign that his function correlates with arithmetic progressions (other than homogeneous ones of course, where we know that they don’t correlate). If partial sums such as $f(5)+f(11)+f(17)+f(23)+\dots$ are also reasonably small (I don’t mean logarithmic, but I do mean that their growth should be definitely slower than linear), then it would be convincing evidence against all sorts of conjectures one might otherwise be tempted to make. A quick way of judging this might be just to take the Fourier transform and see whether it gets big anywhere.
• gowers Says:
February 3, 2010 at 9:55 pm
In answer to Alec’s question, it’s true that my original suggestion was to be slightly less greedy and modify primes in groups (and that may still be quite a good idea — one possibility is to have groups of size about $C(\log n)^2$ and change as many as it takes to get the partial sum by the end of the group to be zero, unless you can’t, in which case you get as close to zero as you can).
However, it then occurred to me that it ought to be the case that adding a few greedy steps to a random walk would dramatically reduce its drift, so a slightly different heuristic argument prompted me to ask about much greedier algorithms, with these unexpected, and to me still mysterious, results.
• Alec Edgington Says:
February 3, 2010 at 10:28 pm
This plot shows the results of three runs up to 10000 with $f(p)$ chosen to be minus the sign of the partial sum so far with probability two-thirds; and $\pm 1$ with equal probabilities if the partial sum is zero:
http://www.obtext.com/erdos/random_greed.png
The growth looks a lot more square-root-like.
• gowers Says:
February 4, 2010 at 12:51 am
Another thought. If the original greedy algorithm gives something that looks suspiciously regular, might it be possible to multiply the function by one of the form $n^{it}$ in order to introduce cancellation into the long stretches of pluses and long stretches of minuses? If we got it right, we might be able to get some very good multiplicative sequences taking values in $\mathbb{C}$.
• gowers Says:
February 4, 2010 at 11:43 am
In an effort to understand slightly better these very widely oscillating partial sums, I tried defining a multiplicative function by taking $f(p)$ to be $(-1)^k$ whenever $2^{k-1}<p\leq 2^k$. The thinking behind this is that if $p$ is a little below $2^k$ and $q$ is a little below $2^l$, then $pq$ will be a little below $2^{k+l}$, and so on, so composite numbers will have a tendency (but only a tendency) to behave similarly to the primes of about the same size.
I’ve calculated it up to 64 and the results have borne out my expectations. The record partial sums (big and small) are 1 at 1, 0 at 2, 2 at 4, -2 at 8, 4 at 14 (and at 16), -4 at 24 (and at 32), 8 at 44, 10 at 48, 12 at 62 (and at 64), where I have not mentioned records that are instantly broken.
However, I still can’t come up with a decent explanation for the phenomenon. The problem is that there are edge effects: it is not the case that the function $f(x)=\lfloor\log_2x\rfloor$ satisfies $f(xy)=f(x)+f(y)$. And it would seem that the more factors a number has, the more the edge effects should matter, eventually swamping everything and making the function look random. Perhaps that does indeed happen once we reach the point where almost all numbers have many prime factors that occur an odd number of times.
Actually, that’s given me an idea for how to explain the phenomenon after all. I’ll do a couple of calculations and then write a comment to say how I get on.
• gowers Says:
February 4, 2010 at 12:13 pm
OK, let’s suppose that we have a completely multiplicative function and we know that up to a certain point $N$ it correlates very well with the function $(1+\cos(A\log n))/2$. In other words, it behaves as if one were choosing to set $f(n)=1$ with that probability.
Now choose a typical number $m$ that’s a product of two numbers $x$ and $y$ that are less than $N$. In the absence of any other information, it seems reasonable to assume that the logarithms of $x$ and $y$ are uniformly distributed between $0$ and $\log m$ (since the probability that $m$ is divisible by $x$ is $x^{-1}$).
On this model, $f(m)$ will be 1 with probability
$\displaystyle (1+\cos(A\log x))(1+\cos(A\log y))/4+(1-\cos(A\log x))(1-\cos(A\log y))/4$
since this is the probability that $f(x)=f(y)=1$ or $f(x)=f(y)=-1$.
But that cancels down to $(1+\cos(A\log x)\cos(A\log y))/2$. We also know that
$\displaystyle \cos(A\log x)\cos(A\log y)=(\cos A(\log x+\log y)+\cos A(\log x-\log y))/2.$
Since $\log x+\log y=\log m$ and $\log x-\log y$ averages zero (in our model where we’ve chosen two random positive numbers that add up to $\log m$), the probability that $f(m)=1$ (randomizing over $m$ as well) is $(1+\cos(A\log m)/2)/2$.
That’s not exactly what we wanted, because of the factor 2 inside the bracket. However, it does at least show that there’s a tendency for the biased behaviour that we have up to $N$ to continue, in a slightly weaker form. But I still seem to have the problem that my calculations suggest that the randomness ought to damp down the oscillations, whereas Alec’s plot of what happens with the greedy algorithm shows no sign of damping at all.
• gowers Says:
February 4, 2010 at 12:21 pm
In a nutshell, my difficulty is this. How do we get a $\pm 1$-valued multiplicative function to mimic the behaviour of a complex-valued multiplicative function such as $f(n)=n^{it}=\exp(it\log n)$? Alec’s plot seems to suggest that this should be possible somehow, but the obvious method of taking real parts and converting them into probabilities (which is what I was doing above) doesn’t seem to work, which is not altogether surprising given that $\cos$ is not multiplicative. We do have a nice expression for $\cos(XY)$, but it doesn’t give us quite what we want: it gives us a hint of multiplicativity, but it also seems to want to disperse that multiplicativity.
I suppose it is just about conceivable that Alec’s plot would look different if there were more of it, but at the moment it really does look as though the bias is not dispersing and the peaks of the partial sums are growing linearly.
• Alec Edgington Says:
February 4, 2010 at 12:45 pm
A small observation concerning the behaviour of a random multiplicative function. We know that a standard random walk up to $N$ has a variance of about $N$. With a multiplicatively-defined random walk, the variance seems to be significantly bigger: it’s about 3500 for sums up to 1000. It looks roughly linear, with a few kinks.
• Alec Edgington Says:
February 4, 2010 at 1:34 pm
I tried mapping $p$ to $-1$ with probability $\frac{1}{2} (1 - \cos(10 \log p))$. The resulting sums, for four runs up to 10000, are shown here:
http://www.obtext.com/erdos/random_harmonic.png
They look quite similar to the results of the greedy algorithm.
• gowers Says:
February 4, 2010 at 2:10 pm
If my calculation half way through this comment is correct, then the variance for a random multiplicative function is more like $N\log N$, which could be exactly what you are observing.
Those plots are interesting. I feel I must be making a mistake in my heuristic arguments, because they don’t seem to predict this linear growth of the record partial sums. I’m looking for a reason to get rid of that factor of 2 …
• Alec Edgington Says:
February 4, 2010 at 2:28 pm
Ah yes, hadn’t spotted your calculation. That seems quite plausible (with a factor of a half or so). There was indeed some upward curving near the origin which I’d dismissed.
• gowers Says:
February 4, 2010 at 2:42 pm
Here’s another experiment that would interest me. I wonder whether the problem with the greedy algorithm could be thought of like this: when the walk gets close to zero one is pulling it towards zero as fast as one can, but what one should actually be doing is easing off by that stage so that it won’t go zooming off in the other direction.
With that vague thought in mind (and partly informed by Johan’s experience), one could try a more delicate damping that worked like this. You set yourself a target, such as 50, and aim to keep the partial sums between -50 and 50. But you also don’t want to lose the benefits of randomness, so you use the non-randomness sparingly — only if there seems to be some genuine danger. More precisely, if the partial sum up to $p-1$ is $t$ and $|t|\leq 50$, then you let $f(p)=-1$ with probability $(t+50)/100$ (so it’s definitely -1 if $t=50$ and it’s definitely 1 if $t=-50$ and the probability varies linearly in between). If the partial sum is outside these bounds then you revert to the pure greedy strategy and try to get it back inside.
I’d be interested to know whether this easing off as you get near zero produces a completely different result. How far does it have to go before you hit $\pm 100$, for example?
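A sketch of that rule, in case someone wants to run it (the linear probability and the greedy fallback outside the band are exactly as described above; the function and parameter names are mine):

```python
import random

def controlled_multiplicative(N, target=50, seed=0):
    rng = random.Random(seed)
    spf = list(range(N + 1))  # smallest-prime-factor sieve
    for i in range(2, int(N**0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, N + 1, i):
                if spf[j] == j:
                    spf[j] = i
    f = [0] * (N + 1)
    f[1] = 1
    partial = 1
    sums = [0, 1]  # sums[n] = f(1) + ... + f(n)
    for n in range(2, N + 1):
        if spf[n] == n:  # prime
            t = partial  # partial sum up to n-1
            if abs(t) <= target:
                # Prob(f(p) = -1) varies linearly from 0 at t = -target to 1 at t = +target
                f[n] = -1 if rng.random() < (t + target) / (2 * target) else 1
            else:
                # outside the band: revert to the pure greedy pull back towards zero
                f[n] = -1 if t > 0 else 1
        else:
            p = spf[n]
            f[n] = f[p] * f[n // p]  # complete multiplicativity
        partial += f[n]
        sums.append(partial)
    return f, sums
```

With `target=50` the probability at a prime is exactly $(t+50)/100$, as in the description above.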
• Alec Edgington Says:
February 4, 2010 at 4:41 pm
Here are four runs of that process taken up to 30000:
http://www.obtext.com/erdos/random_control_50.png
It does seem to be much better controlled. I wonder if one could even come up with an ‘optimal’ function $\alpha$ for generating the sequence by
$\mathrm{Prob} (f(p)=-1) = \alpha (\sum_{m<p} f(m))$
• gowers Says:
February 4, 2010 at 5:50 pm
That is very interesting, and I wonder how long this better control would continue, and whether it would work with 50 replaced by 20, etc. etc. It would indeed be interesting to optimize that function, but at the moment I feel as though I’m making stabs in the dark.
Talking of which, here’s another one. The thought behind it is that the original greedy algorithm didn’t work because the additional kick it gave was accidentally in phase with the partial sums themselves (not that I quite understand why that should be, when it was designed to be precisely the opposite). One way of changing the phase would be to make it depend not on the function itself (that is, the partial sum) but on the derivative (that is, the multiplicative function). A reasonably sensible way of doing that might be this. If you’ve chosen the values of $f$ up to $p$, then randomly pick an integer $r$ between $p-C\log p$ and $p-1$ and set $f(p)$ to be $-f(r)$. The idea here is that if the function is growing quite a bit as you approach $p$ then the value at $p$ will have a high probability of cancelling out a little bit of that growth. I’d be curious to know if this had the effect of hugely smoothing out the “random walk”. Part of me says that it probably will and another part of me says that it probably won’t.
If we define $g(p)$ to be the average value of $f$ near $p$, then one could generalize your question and ask for the optimal function $\beta(F(p),g(p))$, where $F(p)=\sum_{m<p}f(m)$.
• Alec Edgington Says:
February 4, 2010 at 9:34 pm
For what it’s worth, setting
$\mathrm{Prob}(f(p) = -1) = \frac{1}{2} (1 + \tanh \frac{1}{2} \sum_{m<p} f(m))$
gives good results most of the time. Here are ten runs up to 30000:
http://www.obtext.com/erdos/tanh_30000.png
Two of them get out of hand, but eight stay under control and close to zero. I wonder if they'll all 'go linear' eventually.
I tried the suggestion of setting $f(p) = -f(r)$ for a random recent $r$; maybe I just didn't get the parameter right but the behaviour looked fairly random. It ought to be possible to do something along these lines, though.
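For reference, the tanh rule above as a plain function (a sketch; the name `tanh_prob` is mine):

```python
import math

def tanh_prob(s):
    """Prob(f(p) = -1) = (1 + tanh(s/2)) / 2, where s = sum_{m<p} f(m)."""
    return 0.5 * (1.0 + math.tanh(0.5 * s))
```

The rule is symmetric (the probabilities for $s$ and $-s$ sum to 1), stays at 1/2 when the partial sum is zero, and saturates quickly: already at $s=10$ the probability of choosing $-1$ exceeds 0.99, which is perhaps related to the instability seen in the two runs that got out of hand.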
• Mark Bennet Says:
February 4, 2010 at 10:09 pm
The oscillations of the ‘out of control’ examples look quite regular – are there natural periods for these oscillations – ie predictable values?
• gowers Says:
February 5, 2010 at 10:47 am
Alec, thanks for running that $f(p)=-f(r)$ idea. It gives me one more idea (after which I may stop — but I can’t quite guarantee it). I think I can understand why $f(p)=-f(r)$ for a random recent $r$ might give fairly random behaviour. The rough reason is this. If the function starts growing in a linear kind of way, then the choices at primes will cease to be random and will try to kill the gradient. So this method should stop linear growth. However, if the behaviour is fairly random, then the choice of $f(p)$ is also pretty random. (Yes, there is some dependence, but we have a lot of dependences anyway.) So this method won’t do anything to defeat randomness, so to speak.
Now the other method (making $f(p)$ -1 when partial sums are positive and +1 when they’re negative) defeats randomness, but seems to give rise to linear behaviour. So what if we combine the two? That is, what if we just alternate the two methods of choosing $f(p)$? The idea is that one method would keep things at worst random and the other would then kill off the randomness. In short, one method would produce the randomness that the other one needs for the heuristic argument that lay behind it to work.
My heuristic arguments here are obviously pretty vague, and I can well imagine that this won’t work at all, but it definitely seems to be worth a try, and I’d have thought you could patch together some bits of existing code pretty quickly.
On another topic, I am very intrigued by your tanh sequences, and especially by the idea that they may be extremely well controlled but in an unstable way. There are all sorts of phenomena that I’d like to understand better. It feels as though there ought to be differential equations that would predict the behaviour well, but my attempts to do that have so far given obviously wrong predictions.
• Alec Edgington Says:
February 5, 2010 at 11:17 am
Sure. Here are ten runs of that method up to 30000, where at every other assignment $r$ is chosen randomly from the last $10 \log p$ numbers:
http://www.obtext.com/erdos/alternate_30000.png
As expected the graphs display a mixture of characteristics. Apart from the green one they do seem to be just about under control.
• gowers Says:
February 5, 2010 at 11:35 am
I have a partial explanation for this linear behaviour that keeps popping up, but need to think about it more. The idea is that the heuristic argument that predicts that choosing $f(p)$ to have the opposite sign to the partial sum up to $p-1$ should work well depends on the assumption that the partial sums look like a random walk. But if they ever don’t look like a random walk and start moving at a linear rate, then choosing the $f(p)$s will not have enough effect to defeat that linear behaviour. So instead what happens is that you just hopelessly attempt to slow it down, putting in a long sequence of values of the same sign that set you up for trouble a bit later on.
Actually, have you tried the very simple adjustment, designed to avoid this rather extreme behaviour, of setting … wait, I’ve just looked back and seen that you have. I’m talking about having probabilities of 2/3 and 1/3 instead of 1 and 0. You gave a link to some pictures that you claimed were much more square-root-like. I’d be interested to know whether that really is the case — that is, if you go a lot further than 10,000, do the deviations get quite a bit bigger?
Also, one could imagine varying your tanh experiment by sticking a factor of 1/2 in front of the tanh (so that the probabilities vary between 1/4 and 3/4 rather than between 0 and 1). I think that might have the effect of stopping the linear behaviour from ever arising, while maintaining the good control. And if it doesn’t, then perhaps some other constant would.
• gowers Says:
February 5, 2010 at 1:10 pm
Here’s yet another idea for producing a good sequence (my promise to try not to do this was fairly meaningless). If one wants to introduce a phase shift in order to try to stop any resonance, then a simple way of doing it is to define $f(p)$ to be 1 if the partial sum up to $\alpha p$ is negative and $-1$ otherwise. And of course that too can be made more probabilistic. It would be interesting to know which (if any) of the following three likely possibilities holds.
(i) Whatever $\alpha$ you choose, you get big growth, but $\alpha$ affects its frequency and amplitude to some extent.
(ii) Most $\alpha$ have very little effect, but if you get the right $\alpha$ you almost completely kill the growth (with subcases that you make it square-root-like or you make it power-of-log-like).
(iii) There are a few values for $\alpha$, notably 1, that are particularly bad, but in general pretty well any $\alpha$ is a big help.
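The deterministic version of this phase-shifted rule is tiny; a sketch (names mine; `partial_sums[k]` is assumed to hold $\sum_{m\le k} f(m)$):

```python
def phase_shift_sign(partial_sums, p, alpha):
    """Choose f(p): +1 if the partial sum up to floor(alpha*p) is
    negative, -1 otherwise.  alpha = 1 recovers the original greedy
    rule; alpha < 1 looks at an earlier point of the walk."""
    k = max(1, int(alpha * p))
    return 1 if partial_sums[k] < 0 else -1
```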
• Alec Edgington Says:
February 5, 2010 at 1:29 pm
Right, an initial play with this suggests that something like (ii) may be the case:
http://www.obtext.com/erdos/some_alpha.png
These are plots up to 5000 for 20 values of $\alpha$ between 0.5 and 1. I’ll see if I can home in on the ‘good’ values…
• gowers Says:
February 5, 2010 at 1:36 pm
I wish I understood what was going on well enough to be able to guess what the best $\alpha$ should be. But the next best thing would be to see what it is experimentally and then try to retrodict (if that’s the right word) that value.
• Alec Edgington Says:
February 5, 2010 at 1:50 pm
Hmm, actually it appears that the best growth arises as $\alpha \rightarrow 1$, though there is some fairly chaotic variation at other values.
Maximum absolute value attained by partial sums up to 10000 terms for various $\alpha$:
```
0.1 731
0.2 1111
0.3 844
0.4 639
0.5 1009
0.6 1271
0.7 1324
0.8 371
0.9 480
0.99 181
0.999 138
0.9999 140
```
15. Ben Green Says:
February 3, 2010 at 4:16 pm | Reply
Hi,
Upper and lower bounds of $N^{1/2 + o(1)}$ for random multiplicative functions have been obtained: this is a paper of Halász. It's not easy to obtain:
Halász, G. On random multiplicative functions. Hubert Delange colloquium (Orsay, 1982), 74–96, Publ. Math. Orsay, 83-4, Univ. Paris XI, Orsay, 1983.
Best
Ben
• gowers Says:
February 3, 2010 at 5:21 pm
I presume what you mean there is that although it’s easy to get the average behaviour it’s not so easy to show that almost all functions behave roughly like average ones. Is that the point?
• Ben Green Says:
February 3, 2010 at 7:02 pm
Tim,
That’s right.
Ben
16. Klas Markström Says:
February 3, 2010 at 9:24 pm | Reply
As I mentioned in one of the earlier posts, I have kept my program for constructing sequences, not necessarily multiplicative, running after a restart. The program is now working on sequences of length 1120 and has found a number of them, 63 so far. I have created a page from which you can download the sequences. It is here
http://abel.math.umu.se/~klasm/Data/EDP/
As before these sequences are not necessarily maximal so running one of the backtracking programs to try to extend them would be good.
Many of these sequences differ already in the first few values, but there are not that many different stubs of length 25 which the program has managed to extend to length 1120 so far.
• Alec Edgington Says:
February 3, 2010 at 9:33 pm
Klas, I’m afraid I can’t get at the sequences from that page: I get error 403 ‘access forbidden’…
• Klas Markström Says:
February 3, 2010 at 9:38 pm
I had forgotten to update the file permissions. It should work now.
I see that I also happened to put the raw output files there instead of a cleaned-up version. I will fix that too. The current files are fine if you delete the first line and the first two words of the second line.
• Klas Markström Says:
February 3, 2010 at 9:43 pm
Now the files should be readable and only contain the sequences.
• Alec Edgington Says:
February 3, 2010 at 10:03 pm
Thanks for these, Klas.
The first two that I’ve tried to optimize have stuck at 1124 (and backtracked to the 400s already). I’m starting to suspect that this is the limit! I’ll try some others.
• gowers Says:
February 3, 2010 at 10:20 pm
I was really hoping we’d break the 1124 barrier …
• Klas Markström Says:
February 3, 2010 at 10:26 pm
There are still a number of undecided cases which the computer is working on so I might add more sequences in a few days time.
Alec, make sure to try at least one from each file number, since they differ quite early on, but the best would of course be to at least see what the maximum extension of each sequence is.
Comparing them with the analysis done for the first 1124-sequences would also be interesting. I have no idea how different these sequences are from the first few.
• Alec Edgington Says:
February 3, 2010 at 10:54 pm
OK, I’m going through them all now, stopping each run when it’s backtracked below 500 without passing 1124…
• Alec Edgington Says:
February 3, 2010 at 11:24 pm
None got further than 1124. I’ll leave the last one running overnight just in case! In any case there’s a lot to explore in these sequences.
• Alec Edgington Says:
February 4, 2010 at 10:40 pm
There is quite a bit of common structure to these sequences. One can partition $[1120]$ into sets $A \subseteq [1120]$ with ‘primary’ functions $\xi_A : A \rightarrow \{ 1, -1 \}$ such that all the sequences in the list satisfy $x \vert_A = \pm \xi_A$. One such set is $\{ 2, 3, 14, 21, 94, 98, 141, 147 \}$. Most impressive is the set
$A_0 = \{ 4, 5, 6, 8, 9, 10, 12, 15, \ldots \}$
which has 729 elements! (Thus all Klas’s sequences satisfy $x_4 = -x_5 = -x_6 = -x_8 = x_9 = \ldots = x_{1120}$.)
• Alec Edgington Says:
February 4, 2010 at 11:09 pm
The next largest set in the partition is
$A_1 = \{ 148, 152, 164, 166, 172, 173, \ldots \}$
which has 66 elements.
• Alec Edgington Says:
February 5, 2010 at 8:42 am
I’ve posted the partition, along with prime factorizations of the members of each set in it, on the wiki.
• Thomas Sauvaget Says:
February 5, 2010 at 9:46 am
Klas, is your program exhaustive, i.e. when it finishes will it mean that it has reached the maximal possible length of C=2 sequences for sure? Also, do you have an estimate of when it might finish under the assumption that 1124 is the maximal length (i.e. how long did it take from 1119 to 1120?)
• Klas Markström Says:
February 5, 2010 at 10:39 am
Thomas, the program is exhaustive, so given enough time it will determine the maximum possible length of a sequence with discrepancy 2.
The program does not move forward in simple steps, as from 1119 to 1120, so it is difficult to estimate the remaining running time.
• Klas Markström Says:
February 5, 2010 at 2:14 pm
During the night one more solution of length 1120 was found. I have added it to the list, the name of the file is sol.6119-1-1-1-0-1-0-1-0
17. A Discrepency Problem for Planar Configurations « Combinatorics and more Says:
February 4, 2010 at 10:53 am | Reply
[...] in spirit and in methodology to discrepency problems in combinatorics and number theory such as the Erdos Discrepency problem, now under polymath5 attack. The Kupitz-Perles conjecture seems of a different nature, but I find [...]
18. Sune Kristian Jakobsen Says:
February 4, 2010 at 6:54 pm | Reply
“Showing EDP for automatic sequences may be a nice toy problem.” – O’Bryant
I must admit that I don’t have much intuition about automatic sequences, but here is my idea:
I think that if a sequence $x_i$ is $k$-automatic and $d$ is an integer with gcd(d,k)=1, then every (non-homogeneous) arithmetic progression with difference $d$ looks like the $d$-HAP in the following sense: if $a$ and $n$ are natural numbers, there exists a number $N$ such that $x_{a},x_{a+d},x_{a+2d},\dots, x_{a+nd}=x_{Nd},x_{(N+1)d},x_{(N+2)d},\dots, x_{(N+n)d}$. This would imply that if the $d$-HAP has bounded discrepancy, then any $d$-AP has bounded discrepancy.
We know that the AP-discrepancy is unbounded for any sequence, so if we could prove the above for any d (and not only d relative prime to k) we would be done. But perhaps we could generalize the result about AP-discrepancy to:
For any sequence and any natural numbers C and k we can find natural numbers d and a such that gcd(d,k)=1 and the sequence $x_{a},x_{a+d},\dots$ has discrepancy >C.
However, I have no idea about how the AP-discrepancy result was proved, so I don’t know how difficult this generalization would be.
• Sune Kristian Jakobsen Says:
February 4, 2010 at 7:28 pm
“For any sequence and any natural numbers C and k we can find natural numbers d and a such that gcd(d,k)=1 and the sequence $x_{a},x_{a+d},\dots$ has discrepancy >C.”
D’oh, here is a counterexample: +,-,+,-,… and k=2. Actually I was thinking about that example while I wrote the comment, but I didn’t take it too seriously, because the k-HAP had unbounded discrepancy. What I should have written was: is it possible to prove that for any C we can find a HAP or a d-AP with gcd(d,k)=1 that has discrepancy >C?
19. Guy Srinivasan Says:
February 5, 2010 at 6:54 am | Reply
Suppose $x$ is a sequence of length $N$ which is completely multiplicative, has discrepancy at most $D$, has $x_N=1$, and has $\sum_{i=1}^{N-1}x_i=0$.
Define $y$ by $y_{N k + a}=y_k$ if $a=0$, and $y_{N k + a}=x_a$ otherwise.
Then $y$ is completely multiplicative and up to length $N^k-1$ has discrepancy at most $kD$.
Verifying using the sample multiplicative sequence of length 246 with discrepancy 2 (http://michaelnielsen.org/polymath1/index.php?title=Multiplicative_sequences), I checked to find the largest index where the sequence has value +1 and the partial sum just before that is 0, which is at $x_{241}$. Extending that gives a completely multiplicative sequence of length 1935 with discrepancy 3, and it goes to $241^3 < 112446751 < 241^4$ with discrepancy 7.
It's still multiplicative if $y_{N k + 0}=x_k$, which helped improve the original Matryoshka sequence. When I extend that way I get to length 1943 while still at discrepancy 3.
What I'm really hopeful about is iterating this procedure – finding a good constructive way to choose how much to increase the discrepancy and where to pick the length of the next sequence, then constructing "better and better" versions. I thought I had sqrt(log) for a moment, but it didn't work…
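The lifted sequence $y$ has a neat closed form: $y_i = x_d$, where $d$ is the last nonzero base-$N$ digit of $i$. A sketch (the function name is mine; $x$ is indexed $x[1..N-1]$):

```python
def lift_value(x, N, i):
    """y_i for the lifted sequence: writing i = N*k + a with
    0 <= a < N, we have y_i = x_a if a != 0, and y_i = y_k if a == 0.
    Equivalently, y_i = x_d with d the last nonzero base-N digit of i.
    x is a dict or list holding x[1..N-1]."""
    while i % N == 0:
        i //= N
    return x[i % N]
```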
20. Guy Srinivasan Says:
February 5, 2010 at 7:44 am | Reply
($x$ needs to be completely multiplicative mod $N$ for this to work.)
(which means I don’t know how to iterate it. unless y is magically completely multiplicative mod M for a nice M)
• Guy Srinivasan Says:
February 5, 2010 at 8:07 am
Yes, the length 1935 “completely multiplicative” sequence is not, because the length 240 sequence is not completely multiplicative mod 241. Is there an enumeration of the 500 length 246 discrepancy 2 sequences somewhere?
• gowers Says:
February 5, 2010 at 10:39 am
I think Kevin O’Bryant has been down more or less this road, but perhaps you are doing something he wasn’t. In any case, if you have a sequence that is completely multiplicative mod M, then a fairly easy Fourier argument gives you a partial sum of size proportional to the square root of M, so I think there isn’t an easy way of improving Matryoshka sequences. But it’s possible that I am not fully understanding what you are hoping to do here.
21. gowers Says:
February 5, 2010 at 1:03 pm | Reply
The thread on simple algorithms for producing (or accidentally failing to produce) reasonably low-discrepancy multiplicative sequences has got so long now that I’m starting a new comment about what the explanations might be for what we are observing.
It occurs to me now that the apparent linear behaviour may be an illusion, and that the truth may be growth that is very slightly sublinear. My reasoning is as follows. If you define a multiplicative function by defining $f(p)$ to be $-1$ raised to the power $\lfloor\log_2p\rfloor$, then you get some very nice behaviour that points in the direction of linear discrepancy. For instance, if $p$ and $q$ are primes and it so happens that $pq$ is only just smaller than $2^k$, then it is very likely that $\lfloor\log_2p\rfloor+\lfloor\log_2q\rfloor=\lfloor\log_2(pq)\rfloor$. So there is a tendency for the $pq$s to behave very like the primes, at least in the run-up to a power of 2.
This kind of argument works for products of more than two primes as well, but the more primes you take, the worse it gets. Now a typical integer near $n$ has about $\log\log n$ prime factors (despite anything I might have written to the contrary), and if $n$ is less than a million, then $\log\log n$ looks pretty much like a constant. So I think that the apparent linearity that we see in the examples that blow up may well be a function more like $n/\log n$ that we are just not fully seeing yet. Looking back at some of the plots, there are some signs, though far from conclusive ones, that the gradient is levelling off just a little.
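This test function is easy to play with numerically; a sketch (the function name is mine, and $\lfloor\log_2 d\rfloor$ is computed exactly as `d.bit_length() - 1`):

```python
def f_value(n):
    """Completely multiplicative f with f(p) = (-1)^{floor(log2 p)},
    evaluated by trial-division factoring (fine for small n)."""
    v = 1
    d = 2
    while d * d <= n:
        while n % d == 0:
            v *= -1 if (d.bit_length() - 1) % 2 else 1
            n //= d
        d += 1
    if n > 1:                     # leftover prime factor
        v *= -1 if (n.bit_length() - 1) % 2 else 1
    return v
```

Plotting `itertools.accumulate(f_value(i) for i in range(1, 300000))` would then show whether the apparent linearity really flattens toward something like $n/\log n$.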
22. EDP and some first steps in number theory « Random Thoughts Says:
February 12, 2011 at 3:05 pm | Reply
[...] My interest was sparked by Tim Gower’s Polymath project and two comments of Terence Tao (1, 2) on the special case of completely multiplicative functions. Let me state this version of [...]
http://math.stackexchange.com/questions/169721/trying-to-prove-frac2n-frac12-leq-int-1-n11-n-sqrt1-sin/169899
# Trying to prove $\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+(\sin(\frac{\pi}{t}) -\frac{\pi}{t}\cos(\frac{\pi}{t}))^2}dt$
I posted this incorrectly several hours ago, and now I'm back! So this time it's correct. I'm trying to show that for $n\geq 1$:
$$\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}dt$$
I checked this numerically for several values of $n$ up through $n=500$ and the bounds are extremely tight.
I've been banging my head against this integral for a while now and I really can see no way to simplify it as is or to shave off a tiny amount to make it more palatable. Hopefully someone can help me. Thanks.
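For anyone who wants to reproduce the numerical check, here is a midpoint-rule sketch (helper names mine; it is only the experiment described above, not a proof):

```python
import math

def integrand(t):
    """The arc-length integrand sqrt(1 + (sin(pi/t) - (pi/t)cos(pi/t))^2)."""
    s = math.sin(math.pi / t) - (math.pi / t) * math.cos(math.pi / t)
    return math.sqrt(1.0 + s * s)

def arc_length(n, steps=20000):
    """Midpoint-rule approximation of the integral over [1/(n+1), 1/n]."""
    a, b = 1.0 / (n + 1), 1.0 / n
    h = (b - a) / steps
    return h * sum(integrand(a + (k + 0.5) * h) for k in range(steps))

# The claimed lower bound 2/(n + 1/2) holds in every case tried:
for n in (1, 2, 5, 20):
    assert arc_length(n) >= 2.0 / (n + 0.5)
```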
Does anyone have any ideas? I can't believe this thing is so difficult. – lithium barbie doll Jul 12 '12 at 2:12
What is the source? – Potato Jul 12 '12 at 2:14
Well Kokteil: unless there exists some witty trick, this seems to be an unbelievably hard inequality. I know you already said it is from do Carmo's book, but what page/number of example or exercise? And is it from "Differential Geometry of Curves and Surfaces"? – DonAntonio Jul 12 '12 at 2:15
Do Carmo's "Differential Geometry of Curves and Surfaces", Section 1-3, problem 9b; the integral is the arc length of a portion of the curve $(t, t\sin(\frac{\pi}{t}))$. – lithium barbie doll Jul 12 '12 at 2:17
What it looks like is an elliptic integral (I know nothing about these, beyond apparently what they look like). – lithium barbie doll Jul 12 '12 at 2:20
## 4 Answers
It is not hard to show (for example, in the problem following the one you posted, exercise 10 in Section 1-3 of Differential Geometry of Curves and Surfaces by Do Carmo) that the arc length of an arc with endpoints $x$ and $y$ is at least the length of the straight line segment connecting them. In any case, the problem only asks for a "geometrical" proof.
That integral is the arc length of the curve $f(t) =(t, t\sin (\pi/t))$ between the points $t=1/(n+1)$ and $t=1/n$. These points are $(1/(n+1), \sin((n+1)\pi)/(n+1))$ and $(1/n, \sin(n\pi)/n)$ (so the $y$ coordinates are $0$). Call them $A$ and $B$, respectively. The arc passes through the point $(1/(n+\frac12),\sin((n+\frac12)\pi)/(n+\frac12))=(1/(n+\frac12),\pm 1/(n+\frac12))$. Call this $C$. We see the arc length is at least the sum of the lengths of the segments $AC$ and $CB$. These each have length at least $\frac{1}{n+\frac{1}{2}}$ (draw the picture: this is the length of the perpendicular to the $x$-axis).
Thank you for this! – lithium barbie doll Jul 12 '12 at 3:23
Potato's answer is what's going on geometrically. If you want it analytically:$$\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2} \geq \sqrt{\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}$$ $$= \bigg|\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\bigg|$$ The above expression is the absolute value of the derivative of $t\sin(\pi/t)$. So your integral is greater than $$\int_{1 \over n + 1}^{1 \over n + {1 \over 2}}|(t\sin({\pi \over t}))'|\,dt + \int_{1 \over n + {1 \over 2}}^{1 \over n}|(t\sin({\pi \over t}))'|\,dt$$ This is at least what you get when you put the absolute values on the outside, or $$\bigg|\int_{1 \over n + 1}^{1 \over n + {1 \over 2}}(t\sin({\pi \over t}))'\,dt\bigg| + \bigg|\int_{1 \over n + {1 \over 2}}^{1 \over n}(t\sin({\pi \over t}))'\,dt\bigg|$$ Then the fundamental theorem of calculus says this is equal to the following, for $f(t) = t \sin(\pi/t)$: $$\bigg|f({1 \over n + {1 \over 2}}) - f(0)\bigg| + \bigg|f({1 \over n}) - f({1 \over n + {1 \over 2}})\bigg|$$ $$= \bigg|{1 \over n + {1 \over 2}} - 0\bigg| + \bigg|0 -{1 \over n + {1 \over 2}}\bigg|$$ $$= {2 \over n + {1 \over 2}}$$
I checked numerically that removing the 1 makes the inequality untrue. I don't think you can in general integrate the function $|\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)|$ without first pulling the absolute value bars outside the integral; consider $|x|$ in an interval around zero. – lithium barbie doll Jul 12 '12 at 3:25
What I wrote was correct ($\int|f| \geq |\int f|$). I suggest you recheck your numerical work. – Zarrax Jul 12 '12 at 3:40
You're right, my mistake, apparently I should be more wary of Wolfram's margin of error. – lithium barbie doll Jul 12 '12 at 18:15
Now that I have finished this, I see that it is similar to the approach that Zarrax used, but it looks a bit simpler, so I will post it in addition.
Using the following facts $$\sqrt{x^2+1}\ge|x|\tag{1}$$ $$|x-y|\ge\mathrm{sgn}(x)(x-y)\tag{2}$$ with a change of variables $t\mapsto 1/t$, we get $$\begin{align} &\int_{\frac{1}{n+1}}^{\frac{1}{n}}\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}\mathrm{d}t\\ &=\int_n^{n+1}\sqrt{1+(\sin(\pi t)-\pi t\cos(\pi t))^2}\frac{\mathrm{d}t}{t^2}\\ &\ge\int_0^1\left|\frac{\pi \cos(\pi t)}{n+t}-\frac{\sin(\pi t)}{(n+t)^2}\right|\mathrm{d}t\\ &\ge\int_0^1\mathrm{sgn}(\cos(\pi t))\left(\frac{\pi \cos(\pi t)}{n+t}-\frac{\sin(\pi t)}{(n+t)^2}\right)\mathrm{d}t\\ &=\int_0^1\mathrm{sgn}(\cos(\pi t))\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\sin(\pi t)}{n+t}\right)\mathrm{d}t\\ &=\left(\frac{\sin(\pi/2)}{n+\frac12}-\frac{\sin(0)}{n}\right)+\left(\frac{\sin(\pi/2)}{n+\frac12}-\frac{\sin(\pi)}{n+1}\right)\\ &=\frac{2}{n+\frac12}\tag{3} \end{align}$$
Hint: notice that the integral may be written as: $$\int_{\frac{1}{n+1}}^{\frac{1}{n}}\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}dt= \int_{\frac{1}{n+1}}^{\frac{1}{n}}\sqrt{1+{\left[\left(t\sin\left(\frac{\pi}{t}\right)\right)'\right]}^2}dt=\int_{\frac{1}{n+1}}^{\frac{1}{n}}\sqrt{1 + [f'(t)]^2}dt$$ where $f(t)= t \sin\frac{\pi}{t}.$ That means you need to prove that the length of the graph of the function $f(t)$ from $\frac{1}{n+1}$ to $\frac{1}{n}$ is greater than or equal to $\frac{2}{n+\frac{1}{2}}$. What happens to the function $f(t)= t \sin\frac{\pi}{t}$ at the points $\frac{1}{n+1}$ and $\frac{1}{n}$? Geometrically, the result is evident.
http://math.stackexchange.com/questions/90071/argument-principle-and-rouches-theorem-for-fz
# Argument principle and Rouché's theorem for $f(z)$
Say I have a complex-valued polynomial $f(z)= z^4 -6z +3$. I'm trying to use the quarter disk $0 \leq \theta \leq \pi/2$, but on the real axis it seems $f$ is positive only when $z \leq 0$. Does that mean that the change of argument is $- \pi$ for this particular part of the quarter-disk region? How would one then apply Rouché's theorem to find how many roots of the equation have modulus between $1$ and $2$? Just to clarify, I'm using the quarter disk to find how many roots lie in the first quadrant.
Actually $f$ is also positive when $z \geq 2$. – Libertron Dec 9 '11 at 23:51
## 1 Answer
Let $f(z)=z^4 -6z +3$. Then when $|z|=1+\epsilon$, where $\epsilon>0$ is sufficiently small, we have $$|f(z)-(-6z)|=|z^4+3|\leq |z|^4+3=(1+\epsilon)^4+3<6(1+\epsilon)=|-6z|.$$ By Rouché's theorem, $f(z)$ and $-6z$ have the same number of zeros in $|z|<1+\epsilon$. This implies that $f(z)$ has one zero inside $|z|<1+\epsilon$ for $\epsilon>0$ sufficiently small. On the other hand, when $|z|=2$, $$|f(z)-z^4|=|-6z+3|\leq 6|z|+3=15<16=|z^4|.$$ By Rouché's theorem again, $f(z)$ and $z^4$ have the same number of zeros in $|z|<2$. This implies that $f(z)$ has four zeros inside $|z|<2$. Combining all these, we can conclude that $f(z)$ has three zeros in $1 < |z| < 2$.
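As a quick numerical sanity check on these counts (not part of the proof), one can compute the number of zeros inside $|z|=R$ as a winding number, per the argument principle; a sketch:

```python
import cmath

def zeros_inside(R, steps=20000):
    """Count zeros of f(z) = z^4 - 6z + 3 in |z| < R by tracking the
    total change of arg f(z) around the circle |z| = R and dividing
    by 2*pi (the argument principle).  Assumes no zeros on the circle."""
    f = lambda z: z**4 - 6*z + 3
    total = 0.0
    prev = cmath.phase(f(complex(R, 0.0)))
    for k in range(1, steps + 1):
        z = R * cmath.exp(2j * cmath.pi * k / steps)
        cur = cmath.phase(f(z))
        d = cur - prev
        if d > cmath.pi:          # unwrap phase jumps across +/- pi
            d -= 2 * cmath.pi
        elif d < -cmath.pi:
            d += 2 * cmath.pi
        total += d
        prev = cur
    return round(total / (2 * cmath.pi))
```

Here `zeros_inside(1.1)` gives 1 and `zeros_inside(2.0)` gives 4, matching the Rouché counts, so three roots have modulus between 1 and 2.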
Was my method for trying to find how many roots were in the first quadrant correct? I think it turns out that there is only one as I believe the total change in argument should be $2 \pi$. – Libertron Dec 10 '11 at 2:22
http://physics.stackexchange.com/questions/33709/epr-paradox-and-uncertainty-principle
# EPR paradox and uncertainty principle
In Wikipedia article EPR paradox,
The original paper purports to describe what must happen to "two systems I and II, which we permit to interact ...", and, after some time, "we suppose that there is no longer any interaction between the two parts." In the words of Kumar (2009), the EPR description involves "two particles, A and B, [which] interact briefly and then move off in opposite directions."[9] According to Heisenberg's uncertainty principle, it is impossible to measure both the momentum and the position of particle B exactly. However, according to Kumar, it is possible to measure the exact position of particle A. By calculation, therefore, with the exact position of particle A known, the exact position of particle B can be known. Also, the exact momentum of particle B can be measured, so the exact momentum of particle A can be worked out. Kumar writes: "EPR argued that they had proved that ... [particle] B can have simultaneously exact values of position and momentum. ... Particle B has a position that is real and a momentum that is real."
But isn't measurement in quantum mechanics unrelated to the Heisenberg uncertainty principle? As I understand it, measurement collapses the wavefunction into one basis state, and has nothing to do with the uncertainty principle.
I am a bit confused by the paper.
## 1 Answer
The idea is as follows:
Suppose we create (for example) a pair of electrons that fly off in opposite directions. Because of conservation of momentum we know the momenta must be equal and opposite so if we measure the momentum of one of the particles we know the momentum of the other particle. Likewise if we measure the position of one of the particles we can calculate the position of the other.
So we wait until the particles are a long way apart, say a light-year (remember this is just a thought experiment!), and we measure the momentum of particle A to perfect accuracy (so its position is unknown) and the position of particle B perfectly (so its momentum is unknown). So we know the momentum of particle A perfectly, but from our measurement of B's position we can also calculate the position of particle A perfectly. The end result is that we know both the momentum of particle A and its position, i.e.
$$\Delta x_A \Delta p_A = 0$$
and this contradicts the Heisenberg uncertainty principle.
Note that I said the two particles are a light-year apart. We specified this so that any signal from particle A to B, or vice versa, would take at least a year to travel. This means that as long as we do our measurements of A and B within a year, the two measurements can't affect each other (because information can't travel faster than light).
The resolution of the paradox is that anything you do to an entangled system affects the whole system simultaneously i.e. there is no limitation due to the speed of light. You can't just do a measurement of particle A without affecting particle B even when they are many light years apart. Any measurement you do is necessarily a measurement on the whole entangled system and you will always find that Heisenberg's uncertainty principle applies i.e.
$$\Delta x_{AB} \Delta p_{AB} \ge \frac{\hbar}{2}$$
This is unpalatable to lots of people because it suggests that quantum entanglement is necessarily non-local. Lots of effort has been expended in trying to get round this e.g. hidden variable theories. However the evidence to date is that entanglement really is non-local.
http://physics.stackexchange.com/questions/46048/tachyon-vertex-operator-polchinskis-book?answertab=votes
# Tachyon vertex operator (Polchinski's book)
• I would like to know how Polchinski, in his book, "derives" what the "tachyon vertex operator" is (..as stated, say, in equations 3.6.25, 6.2.11..). I can't locate a "derivation" of the fact that $:e^{ikX}:$ is the tachyon vertex operator.
(..I understand that it follows from some application of the state-operator map but I can't put it together..)
• And then what is the meaning of the "higher vertex operators" - which are of the form of an arbitrary number of either operators of the above kind or derivatives of $X$ w.r.t. either $z$ or $\bar{z}$. (..like in equation 6.2.18..)
I wonder if someone could make the question (v1) and answers more self-contained, preferably so one doesn't have to open Polchinski's book? – Qmechanic♦ Dec 6 '12 at 10:30
Yes please @user6818, put the equations in, I don't have a Polchinski book ... :-/ – Dilaton Apr 2 at 17:28
## 1 Answer
Polchinski explains the state-operator correspondence in section 2.8, in particular equations 2.8.3, 2.8.4, and 2.8.9.
What you call "higher vertex operators" create multiple particles (if there are multiple exponential vertex operators) with higher spin (if there are extra derivatives multiplying the exponentials).
That was kind of my question :) Equation 2.8.3 and 2.8.4 are just definitions and I guess that the LHS of 2.8.9 is the same state as defined in open strings through 1.3.27. Now from that how does the equality of 2.8.9 follow? What is the derivation of that and why is it tachyonic? (..I presume that the tachyonic nature follows if the notation of $|0;k>$ follows the same state as described as the lightest bosonic open strings as in equation 1.3.38..) Though there does seem to be a need to derive 2.8.9 - but unlike these 3.6.1 and 3.6.25 are talking of "closed" strings.. – user6818 Dec 7 '12 at 23:39
@user6818: Ugh. We need to start latexing equations into the posts, to make them self-contained. It's painful to have a discussion while referring to a book that I keep in a different window. Can you go back through the post and latex the equations in? – user1504 Dec 9 '12 at 14:13
http://unapologetic.wordpress.com/2007/10/15/a-new-definition-of-spans/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician
## A New Definition of Spans
I was trying to add in duals to the theory, and I ran into some trouble. The fix seems to be something I’d considered a little earlier for a possible generalization, but it seems that duals force the issue. We need to tweak our definition of spans.
Now, most of the definition is fine as it stands. Our objects are just the same as in $\mathcal{C}$, and our 1-morphisms are spans $A\stackrel{f_l}{\leftarrow}X\stackrel{f_r}{\rightarrow}B$. When I first (re)invented these things, I stopped at this level. I handled the associativity not with 2-morphisms, but with defining two spans $A\stackrel{f_l}{\leftarrow}X\stackrel{f_r}{\rightarrow}B$ and $A\stackrel{g_l}{\leftarrow}Y\stackrel{g_r}{\rightarrow}B$ to be equivalent if there was an isomorphism $\alpha:X\rightarrow Y$ so that $f_l=g_l\circ\alpha$ and $f_r=g_r\circ\alpha$. Then I said that the 1-morphisms were equivalence classes of spans, giving me a 1-category.
Now $\mathcal{C}$ still sits inside this category I’ll call $\mathbf{Span}_1(\mathcal{C})$. Indeed, if we use the same inclusion as before, the only way two arrows of $\mathcal{C}$ can be isomorphic in $\mathbf{Span}(\mathcal{C})$ (and thus equal in $\mathbf{Span}_1(\mathcal{C})$) is for them to be equal already in $\mathcal{C}$.
But we want to pay attention to those 2-morphisms, and that’s where things start to get interesting. See, those arrows $\alpha:X\rightarrow Y$ are nice as 2-morphisms, but they’re not very.. “spanny”. Instead, let’s define a 2-morphism from $A\stackrel{f_l}{\leftarrow}X\stackrel{f_r}{\rightarrow}B$ to $A\stackrel{g_l}{\leftarrow}Y\stackrel{g_r}{\rightarrow}B$ to be itself a span $X\stackrel{\alpha_l}{\leftarrow}M\stackrel{\alpha_r}{\rightarrow}Y$ satisfying $f_l\circ\alpha_l=g_l\circ\alpha_r$ and $f_r\circ\alpha_l=g_r\circ\alpha_r$. Here’s the picture:
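(A rough plain-TeX rendering of that picture, reconstructed from the defining conditions above:)

```latex
\[
\begin{array}{ccccc}
  & & X & & \\
  & {\scriptstyle f_l}\swarrow & \big\uparrow{\scriptstyle\alpha_l} & \searrow{\scriptstyle f_r} & \\
A & & M & & B \\
  & {\scriptstyle g_l}\nwarrow & \big\downarrow{\scriptstyle\alpha_r} & \nearrow{\scriptstyle g_r} & \\
  & & Y & &
\end{array}
\qquad
f_l\circ\alpha_l = g_l\circ\alpha_r,
\qquad
f_r\circ\alpha_l = g_r\circ\alpha_r
\]
```

Both triangles on each side commute, which is exactly the compatibility condition stated above.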
Now, we handle the associativity at the 2-morphism level by cutting off the same way we did in $\mathbf{Span}_1(\mathcal{C})$ and say that a 2-morphism is really an equivalence class of spans. This makes the vertical composition of 2-morphisms just the same pullback construction as for the composition of 1-morphisms.
The horizontal composition of these 2-morphisms gets tricky. Here’s another picture:
Here we have our two 2-morphisms and we’ve already pulled back to get the 1-morphisms for the source and target of the composition. We write $\alpha:M\rightarrow A'$ for the composite $f_r\circ\alpha_l=g_r\circ\alpha_r$, and similarly $\alpha'$ on the other side.
Now we can pull back the diagram $M\stackrel{\alpha}{\rightarrow}A'\stackrel{\alpha'}{\leftarrow}M'$ to get an object $M''$ with arrows to $M$ and $M'$. If we follow these arrows, then up to $X$ and $X'$ (respectively) the universal property of $X''$ gives us a unique arrow from $M''$ to $X''$. Similarly, we have a unique arrow from $M''$ to $Y''$. These arrows make the required squares commute, and so define a span from $X''$ to $Y''$ which is our composite 2-morphism.
When we compose two spans, again we only have associativity up to isomorphism. In $\mathbf{Span}_1(\mathcal{C})$, this becomes an equality, so we’re fine. In $\mathbf{Span}(\mathcal{C})$ we made this isomorphism a 2-morphism between the two composite spans. Now in $\mathbf{Span}_2(\mathcal{C})$ we can make this isomorphism into one leg of a span 2-morphism, and everything works out as before. The exchange identity for the two compositions of 2-morphisms also works out, but it’s even more complicated than the definition of horizontal composition.
Seriously, does anyone know of a tool that will render commutative diagrams in 3-D, like with Java or something? This is getting ridiculous.
Anyhow, I think now I can throw away the request that the monoidal structures on $\mathcal{C}$ play nice with the pullbacks. Unfortunately, it’s getting a lot more complicated now and I have other real-world obligations I’ve got to attend to. So I think I’ll back-burner this discussion and move back to something old rather than spend too much time working this stuff out live as I have been doing.
Posted by John Armstrong | Category theory
## 6 Comments »
1. It might be worth mentioning that a number of people appear to have been playing with this idea. Joshua Nichols-Barrer mentioned to me that he and Mike Schulman thought about this way of dealing with spans and spans-between-spans, and so on, to give an infinity-category of spans. I use something similar based on a cubical arrangement involving diagrams that look like products of two span diagrams (i.e. the source and target of the spans needn’t be the same, so you have spans at both ends as well as in the middle) – which itself is related to a similar construction in n dimensions which Marco Grandis has some papers about (actually cospans in that case). It seemed necessary to do this to get these things to represent cobordisms adequately. There are probably other examples I’m not aware of.
The kind of arrangement you have here, with a common source and target for the two spans here, seems related to localization, which is used in derived categories (when we’re internal to a category of chain complexes). In that case, there is a special class of “quasi-isomorphisms” which have to appear on the left leg of the span – introducing spans as morphisms is like a way of introducing formal inverses for them, a-la localization in a ring.
This business is on my back-burner also. Anyway, it’s interesting stuff.
Comment by | October 17, 2007 | Reply
2. I suppose I didn’t explicitly mention that in my earlier installments. As I said, $\mathcal{C}$ sits in $\mathbf{Span}(\mathcal{C})$, as does $\mathcal{C}^\mathrm{op}$, and the two inclusions give each other 1-sided inverses. If things are nice, you get actual inverses.
And even better, the dual for a 1- or 2-morphism that comes from one of these inclusions is exactly the adjoined inverse. I think I’ll be writing this up as soon as I can, because it would be nice to have it out before my paper using things like spans in studying tangles.
One more thing: it seems an n-tangle is (in a sense) a cospan of (n-1)-tangles, and this is where it gets all its nice structure. Specifically, if you take the braided monoidal category with duals $\mathcal{T}ang$ and hit it with this construction, you get the category from HDA4.
Comment by | October 17, 2007 | Reply
3. The infinity-category of spans-between-spans-between-spans is closely analogous to the infinity-category of cobordisms-between-cobordisms-between-cobordisms. Cutting it off at some level and using isomorphism classes of spans is analogous to using diffeomorphism classes of cobordisms.
This infinity-category always reminds Jim and me of the Monty Python routine: “Span, span, span, span…”
Comment by | October 19, 2007 | Reply
4. That’s exactly what I’m thinking of. Cobordisms are cospans, and so cobordisms between cobordisms between… are spans between spans between…
Basically, I think the reason that everything about 2-tangles listed in HDA4 works out is that it’s secretly a special case of (the dual of) this construction.
So here’s a far-reaching conjecture, since a man’s reach should exceed his grasp: If we start at the point (n,k) with both n and k at least 1 in your famous table, then “spanning” moves us to either (n+1,k) or (n+1,k+1), and duals come along for the ride.
Comment by | October 19, 2007 | Reply
5. Sounds like a great conjecture! I hope you prove it for some nice medium-sized n and k.
Comment by | October 21, 2007 | Reply
6. [...] Span 2-categories I’ve just had a breakthrough today on my project to add structures to 2-categories of spans. I was hoping to generalize from the case of a monoidal structure on the base category that [...]
Pingback by | November 7, 2007 | Reply
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
http://math.stackexchange.com/questions/37680/algorithm-to-tell-if-a-partial-recursive-function-is-0-everywhere?answertab=active
Algorithm to tell if a partial recursive function is 0 everywhere
Is there a (partial) recursive function that tells me, if a partial recursive function encoded by the number $c$ is the constant zero function ?
When writing questions, it's better practice to explain why you are interested in the question and what you have already tried. In this case, this question is a standard sort of result/exercise for a computability textbook. Once you learn the general method you will be able to answer these yourself. – Carl Mummert May 8 '11 at 10:46
ok, I will do that next time – temo May 8 '11 at 20:40
3 Answers
The wording of the question is ambiguous. It is unclear whether you want to show that the set of constant zero partial recursive functions is decidable or semi-decidable. I will show that the answer for the latter is negative, and thus a fortiori so is the answer for the former. Also, it is unclear what you mean by partial recursive constant zero functions. There are two cases: (1) in which a function is considered constant zero only if it is total and constant zero, and (2) in which a function is considered constant zero if for every input in its domain (which in general is a subset of the natural numbers) the output is zero. I will argue for both cases.
(1) Here I assume that constant zero means total and constant zero.
In the first case I will show that if the set of constant zero functions is semi-decidable then the set of total functions is semi-decidable. This would yield that you can recursively enumerate the total recursive functions. That is impossible: if $f_n$ is an enumeration of the total recursive functions, let $g(n)=f_n(n)+1$. This $g$ is a total recursive function that differs from every $f_n$, which is impossible.
So let's show that if the set of constant zero functions is semi-decidable then the set of total functions is semi-decidable. I'll argue using Turing machines: Assume that there is a machine $Zero$ that witnesses the semi-decidability of the set of constant zero functions. I will create a witness $Total$ for the semi-decidability of the set of total functions. The algorithm of $Total$ will be as follows:
For input a TM $M$, create a new machine $M^*$ that for every input $x$ runs $M$ with input $x$ and if $M$ halts $M^*$ gives $0$ as output. Then run $Zero$ with input $M^*$ and give as output the output of $Zero$.
$M$ is total if and only if $M^*$ is constant zero and thus $Total$ will witness the semi-decidability of the set of total functions.
(2) Here I assume that constant zero means constant zero in its domain that may be a subset of the natural numbers.
The second case requires a bit more. First observe that the complement of the set of constant zero partial recursive functions is semi-decidable. Use dovetailing to do this: given a partial recursive function, "run" it with input $1$ for $1$ step, then with inputs $1$ and $2$ for $2$ steps, etc. If there is an element of the function's domain for which the function doesn't give $0$ as output, this process will find it.
The above shows that if the set of constant zero partial recursive functions is semi-decidable then it is decidable (since both the set and its complement are semi-decidable).
This leads to a contradiction: Assume there is a Turing machine that decides if a partial recursive function is constant zero or not and let's call it $ZERO$. I will use this $ZERO$ to create a Turing machine that solves the halting problem. Let $M$ be a Turing machine and $x$ an input for it. We create the following Turing machine $M^x$: For any input other than $x$ print as output $0$, for input $x$ run $M$ with input $x$ and if the machine halts then print as output $1$. Now using $ZERO$ with input $M^x$ we decide if $M$ halts with input $x$.
The answer is no by Rice's theorem.
The only properties of computable languages/graphs of functions that can be decided computably are the trivial ones (i.e. the property that is satisfied by all, and the one that is satisfied by none).
Being the constant function 0 is not a trivial property so it cannot be decided.
This is the theorem to look up; it is one of the more general undecidability theorems. Unfortunately, the conclusion of Rice's theorem is only that the problem is undecidable; it doesn't give any more precise information about the amount of undecidability. – Carl Mummert May 8 '11 at 10:50
We can find a (total) recursive (indeed quite simple) function $f(c,n)$ with the following properties:
(i) If $c$ is the index of a Turing machine, then so is $f(c,n)$
(ii) The Turing machine with index $f(c,n)$, on input anything other than $n$, halts and gives result $0$.
(iii) On input $n$, the Turing machine with index $f(c,n)$ halts and gives result $0$ if the machine with index $c$ halts. Otherwise, the machine with index $f(c,n)$ does not halt.
Then the machine with index $c$ halts on input $n$ iff the machine with index $f(c,n)$ computes the identically $0$ function.
It is well-known that the Halting Problem for Turing machines is not Turing machine solvable. But if there were an algorithm (Turing machine) for determining whether a machine output is identically $0$, then by applying the algorithm to the machine $f(c,n)$ we would have an algorithmic solution to the Halting Problem.
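To see the reduction concretely, here is a rough Python sketch in which "machines" are modeled as Python callables and `make_f` plays the role of $f(c,n)$ (a toy model of the construction, not actual Turing-machine indices):

```python
def make_f(program, n):
    """Given a program and an input n, build a new program g.

    On any input other than n, g halts immediately with output 0.
    On input n, g first runs program(n); if that run halts, g outputs 0,
    and if it diverges, g diverges too.
    """
    def g(x):
        if x != n:
            return 0
        program(n)  # may run forever
        return 0
    return g

# If program halts on n, then g is identically 0; otherwise g is
# undefined at n. So a decider for "identically 0" would decide
# the halting problem, which is impossible.
halting = make_f(lambda k: k + 1, 3)
print(halting(0), halting(3))  # 0 0
```

Deciding whether `make_f(program, n)` computes the identically $0$ function is exactly deciding whether `program` halts on `n`.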
http://unapologetic.wordpress.com/2010/10/13/hom-space-duals/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician
## Hom Space Duals
Again, sorry for the delay but I was eager to polish something up for my real job this morning.
There’s something interesting to notice in our formulæ for the dimensions of spaces of intertwinors: they’re symmetric between the two representations involved. Indeed, let’s take two $G$-modules:
$\displaystyle\begin{aligned}V&\cong\bigoplus\limits_{i=1}^km_iV^{(i)}\\W&\cong\bigoplus\limits_{j=1}^kn_jV^{(j)}\end{aligned}$
where the $V^{(i)}$ are pairwise-inequivalent irreducible $G$-modules with degrees $d_i$. We calculate the dimensions of the $\hom$-spaces going each way:
$\displaystyle\begin{aligned}\dim\hom_G(V,W)&=\sum\limits_{i=1}^km_in_i\\\dim\hom_G(W,V)&=\sum\limits_{i=1}^kn_im_i\end{aligned}$
but these are equal! So does this mean these spaces are isomorphic?
Well, yes. Any two vector spaces having the same dimension are isomorphic, but they’re not “naturally” isomorphic. Roughly, there’s no universal method of giving an explicit isomorphism, and so it’s regarded as sort of coincidental. But there’s something else around that’s not coincidental.
It turns out that these spaces are naturally isomorphic to each other’s dual spaces. That is, for any $G$-modules $V$ and $W$ we have an isomorphism
$\displaystyle\hom_G(W,V)\cong\hom_G(V,W)^*$
Luckily, we already know that their dimensions are equal, so the rank-nullity theorem tells us all we need is to find an injective linear map from one to the other.
So, let’s take an intertwinor $h:W\to V$ and use it to build a linear functional $\lambda_h$ on $\hom_G(V,W)$. For any intertwinor $f:V\to W$ we define
$\displaystyle\lambda_h(f)=\mathrm{Tr}_V(h\circ f)$
Where $\mathrm{Tr}$ is the trace of an endomorphism. Given a matrix, it’s the sum of the diagonal entries. Since the composition of linear maps is linear in each variable, and the trace is a linear function, this is a linear functional as desired. It should also be clear that the construction $h\mapsto\lambda_h$ is itself a linear map.
Now, we must show that this map is injective. That is, for no nonzero $h$ do we find $\lambda_h=0$. This will follow if we can find, for every nonzero $h:W\to V$, at least one $f:V\to W$ so that $\mathrm{Tr}_V(h\circ f)\neq0$. To do so, we pick a basis for each irreducible representation that shows up in either $V$ or $W$ so we can replace $V$ and $W$ with matrix representations. Now we can write
$\displaystyle h=\bigoplus\limits_{i=1}^kM_i\boxtimes I_{d_i}$
where $M_i\in\mathrm{Mat}_{n_i,m_i}(\mathbb{C})$ is an $n_i\times m_i$ complex matrix. To construct our $f$, we simply take the conjugate transpose of each of these matrices:
$\displaystyle f=\bigoplus\limits_{i=1}^kM_i^\dagger\boxtimes I_{d_i}$
where now $M_i^\dagger\in\mathrm{Mat}_{m_i,n_i}(\mathbb{C})$ is an $m_i\times n_i$ complex matrix, as desired. We multiply the two matrices:
$\displaystyle hf=\bigoplus\limits_{i=1}^k(M_iM_i^\dagger)\boxtimes I_{d_i}$
and find that each $M_iM_i^\dagger\in\mathrm{Mat}_{n_i,n_i}(\mathbb{C})$ is an $n_i\times n_i$ square matrix. Thus the trace of this composition is the sum of their traces.
We’ve already seen that the composition of a linear transformation and its adjoint is self-adjoint and positive-semidefinite. In terms of complex matrices, this tells us that the product of a matrix and its conjugate transpose is conjugate-symmetric and positive-semidefinite. This means that it’s diagonalizable with all nonnegative real eigenvalues down the diagonal. And thus its trace is a nonnegative real number, and it can only be zero if the original matrix was zero.
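This positivity is easy to check numerically; a small Python sketch (illustrative helper name, not from the post) uses the fact that the $i$-th diagonal entry of $MM^\dagger$ is $\sum_k M_{ik}\overline{M_{ik}}$, so the trace is the sum of the squared absolute values of all entries:

```python
def trace_of_M_Mdagger(M):
    """Trace of M M† for M given as a list of rows of complex numbers."""
    return sum(
        sum(m * m.conjugate() for m in row)  # i-th diagonal entry of M M†
        for row in M
    )

M = [[1 + 2j, -0.5j], [3, 0]]
t = trace_of_M_Mdagger(M)
entry_norms = sum(abs(m) ** 2 for row in M for m in row)
print(t.real, entry_norms)  # 14.25 14.25
```

The trace is real and nonnegative, and it vanishes only when every entry of $M$ is zero.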
The upshot, if you didn’t follow that, is that if $h\neq0$ we have an $f$ so that $\lambda_h(f)=\mathrm{Tr}(h\circ f)\neq0$. And thus the map $h\mapsto\lambda_h$ is injective, as we asserted. Proving naturality is similarly easy to proving it for additivity of $\hom$-spaces, and you can work it out if you’re interested.
## 2 Comments »
1. [...] product? We get the complex conjugate: . What happens when we swap the arguments to the functor? We get the dual space: . Complex conjugation corresponds to passing to the dual [...]
Pingback by | October 25, 2010 | Reply
2. [...] we can use the duality on hom spaces and apply it to yesterday’s Frobenius [...]
Pingback by | December 3, 2010 | Reply
http://mathoverflow.net/questions/243/compact-kaehler-manifolds-that-are-isomorphic-as-symplectic-manifolds-but-not-as/81475
## Compact Kaehler manifolds that are isomorphic as symplectic manifolds but not as complex manifolds (and vice-versa)
1. What are some examples of compact Kaehler manifolds (or smooth complex projective varieties) that are not isomorphic as complex manifolds (or as varieties), but are isomorphic as symplectic manifolds (with the symplectic structure induced from the Kaehler structure)? Elliptic curves should be an example, but I can't think of any others. I'm sure there should be lots...
2. In the other direction, if I have two compact Kaehler manifolds (or smooth complex projective varieties) that are isomorphic as complex manifolds (or as varieties), then are they necessarily isomorphic as symplectic manifolds?
3. And one last question that just came to mind: If two smooth complex (projective, if need be) varieties are isomorphic as complex manifolds, then they are isomorphic as varieties?
## 5 Answers
1. Well, there are stupid examples like the fact that $\mathbb{P}^n$ has Kähler structures where any rational multiple of the hyperplane class is the Kähler class which are compatible with the standard complex structure (you just rescale the symplectic structure and metric). I think you should get similar examples with multi-parameter families on things like toric varieties with higher dimensional $H^2$.
2. I know some non-compact examples where you can deform the complex structure without changing the symplectic one. I don't know any compact examples, but they probably exist. The thing is, the only thing you can deform about a symplectic structure on a compact thing is its cohomology class (by the Moser trick), so anything with a big enough family of Kähler metrics will work.
3. This probably follows from GAGA, but you'd have to ask someone more expert than me to be sure. Edit: David's answer made me realize I forgot to say projective here. That's important.
Re 3: If you say projective, then yes. GAGA tells you that an analytic isomorphism is also an algebraic one.
If you don't say projective, then no. See the appendix to Hartshorne for a family of nonisomorphic algebraic structures on C^2/Z^2.
So here are some examples: when $X$ has no continuous families of automorphisms ($H^0(X, T_X)=0$), complex deformations of $X$ to first order are given by $H^1(X, T_X)$. For compact Calabi-Yaus this is $H^{(n-1,1)}$ and moreover by Bogomolov-Tian-Todorov the deformations are unobstructed.
Symplectic deformations, as Ben noted, are controlled by $H^2(X, \mathbb{R})$ by Moser's trick. If we want to deform while staying Kähler, then in $H^{(1,1)}(X, \mathbb{R})$. In mirror symmetry (where this discussion is stolen from) one allows a B-field and correspondingly a complexified space of deformations $H^{(1,1)}$. Then for mirror manifolds these two spaces of deformations are switched.
This is discussed in Denis Auroux's notes on mirror symmetry (http://math.mit.edu/~auroux/18.969/, any misinterpretation is my fault).
Mirror symmetry is cool and all, but if we just stay on the same Calabi-Yau the deformation spaces for symplectic and complex structures can have different dimensions - with either one bigger, giving examples for both 1 and 2.
That's a very nice observation! – Kevin Lin Oct 16 2009 at 15:24
In case anybody is curious, there are still examples of (1) even if one replaces the requirement that the complex manifolds be nonisomorphic with the requirement that they be not even deformation equivalent. In fact in arXiv:0608110 Catanese showed that Manetti's examples of general type surfaces which are diffeomorphic but not deformation equivalent are symplectomorphic (with respect to their canonical Kahler forms).
If $M \to X$ is smooth and proper, and $M$ is Kähler, then the fibers are all symplectomorphic. (Proof: the Levi-Civita connection generates symplectomorphisms.) The family of elliptic curves was already mentioned, but another interesting one has every general fiber being $F_0$ and the special fiber $F_2$ (Hirzebruch surfaces).
A curious example is the family $\{ xy = t \}$ of hypersurfaces in ${\mathbb C}^2$ as $t$ varies (away from $0$). There, the fibers are all biholomorphic, and symplectomorphic, but not by the same diffeomorphism (their unique closed geodesics are of varying length).
http://mathhelpforum.com/calculus/174445-alternating-series-estimation-theorem.html
# Thread:
1. ## Alternating Series Estimation Theorem
Use the alternating series estimation theorem to approximate the sum of $(-1)^n/(n!)$ from n=0 to infinity with an error of $.000005$.
I know that after summing a number of terms, the remainder(error) will be less than the first omitted term, so
I've set up the problem like this:
$.000005 < 1/(n+1)!$
Is this right, and if so, what's next?
2. Originally Posted by JewelsofHearts
Use the alternating series estimation theorem to approximate the sum of $(-1)^n/(n!)$ from n=0 to infinity with an error of $.000005$.
I know that after summing a number of terms, the remainder(error) will be less than the first omitted term, so
I've set up the problem like this:
$.000005 < 1/(n+1)!$
Is this right, and if so, what's next?
So you want the least integer $n$ such that:
$(n+1)!>20000$
To proceed you need either to use trial and error or some suitable approximation for the factorial (and then a numerical solution of the resulting equation).
(look up $7!$ and $8!$)
CB
3. Shouldn't it be (n+1)!<200000?
4. Originally Posted by JewelsofHearts
Shouldn't it be (n+1)!<200000?
Probably counting zeros is not something I am particularly careful about
5. Not only the zeros, but I think the direction of the inequality sign is supposed to be different. I really don't know, though. I don't understand the theorem.
6. error, $E < \dfrac{1}{(n+1)!} < 5 \times 10^{-6}$
$(n+1)! > \dfrac{1}{5 \times 10^{-6}} = 2 \times 10^5$
note the following factorial values ...
$8! = 40320$
$9! = 362880$
so, how many terms of the series would you need to add to get an error less than $5 \times 10^{-6}$ ?
fyi, to check your work, note ...
$\displaystyle \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} = \frac{1}{e}$
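To double-check the conclusion numerically, here is a quick Python sketch (not part of the thread): keep adding terms until the first omitted term drops below $5\times 10^{-6}$, then compare the partial sum against $1/e$.

```python
import math

partial, n = 0.0, 0
while True:
    partial += (-1) ** n / math.factorial(n)
    n += 1
    # for an alternating series, the error is bounded by the first omitted term
    if 1 / math.factorial(n) < 5e-6:
        break

error = abs(partial - 1 / math.e)
print(n, error < 5e-6)  # 9 True
```

So $9$ terms ($n=0$ through $8$) suffice, matching $(n+1)! > 2\times 10^5$ with $9! = 362880$.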
http://math.stackexchange.com/questions/222044/what-is-the-distribution-of-a-data-set
|
# What is the distribution of a data set
I understand what the probability distribution is.
I also have a personal understanding/interpretation of the concept of the distribution of a dataset. Whenever I see this expression I imagine a graph with frequency on the y-axis and the members of the data set on the x-axis; for each member of the data set, the graph contains a point at the corresponding frequency level.
1. Is this the correct interpretation? Is "distribution of a dataset" = "probability distribution"? To me it doesn't look like the two concepts are the same thing (they are probably subtly related, but not the same).
2. I was unable to find a standard definition of this concept. Can you provide me with a pointer to a resource defining it?
3. When authors say "two data sets drawn from the same underlying distribution", what exactly do they mean by "underlying distribution"? Do they mean the same thing as I mentioned above, i.e. a graph of frequency vs. each member of the data set?
-
I'm still waiting for answers. In case there are unclarities in the way the question is phrased, please let me know. – Razvan Nov 16 '12 at 17:25
## 1 Answer
I think what you call the distribution of a dataset actually refers to the distribution of data instances. For a general dataset you do not know the length, so if you wanted to define a probability density (assume the continuous case) for all possible datasets, you would have to work in infinitely many dimensions. In practice, however, every instance in the dataset has a fixed-length representation, which corresponds to a $d$-dimensional vector, and it is easy to assign a probability density to every such $d$-dimensional point.
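The picture described in the question, frequency against each distinct value, is essentially the empirical distribution of the data, which estimates the underlying probability distribution the data were sampled from. A minimal Python sketch (my own illustration, with made-up data):

```python
from collections import Counter

# Empirical distribution of a data set: for each distinct value, the
# fraction of observations equal to it.  "Two data sets drawn from the
# same underlying distribution" means both were sampled from one fixed
# probability distribution, of which each empirical distribution is an
# estimate.
data = [2, 3, 3, 5, 2, 3, 7]
counts = Counter(data)
empirical = {value: count / len(data) for value, count in counts.items()}

print(empirical[3])  # 3 occurs 3 times out of 7 observations
```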
|
|
http://mathoverflow.net/revisions/23540/list
|
## Return to Answer
2 added 22 characters in body
As you note, if we choose two different embeddings $k^s \to k_v^s$, say $\imath_1$ and $\imath_2$, then we get two different $G_{k_v}$-module structures on $M$, call them $M_1$ and $M_2$, and two different restriction maps $r_1:H^n(k,M) \to H^n(k_v,M_1)$ and $r_2:H^n(k,M) \to H^n(k_v,M_2)$.
The point is that there will also be a canonical isomorphism $i:H^n(k_v,M_1) \cong H^n(k_v,M_2)$ such that $i\circ r_1 = r_2,$ given by a formula analogous to the one you gave in the abelian variety context.
Namely, if $\imath_2 = \imath_1\circ g,$ then the isomorphism $i$ will be induced by $m \mapsto g\cdot m$ (if I have things straight; you can easily check if this is correct, of if I should have $g^{-1}$ instead). The fact that $i\circ r_1 = r_2$ will then depend on the fact that the automorphism of $H^n(k,M)$ induced by conjugation by $g^{-1}$ and the map $m\mapsto g\cdot m$ is the identity. So one does not have a literal independence of the embedding, but rather, the restriction is defined up to a canonical isomorphism independent of the embedding, and this latter fact does depend upon conjugation inducing the identity on cohomology (which is why people often summarize it in that way).
Note also that your abelian variety example is actually a special case of this, because the $M$ is $A(k^s)$, and the $G_{k_v}$-action on $M$ does depend on the embedding of $k^s$ in $k_v^s$. But the natural map $A(k^s) \to A(k_v^s)$ also depends on this embedding, in such a way that, when you compose the restriction from $G_k$ to $G_{k_v}$ (with coefficients in $A(k^s)$) with the map on $G_{k_v}$-cohomology induced by the embedding $A(k^s) \hookrightarrow A(k_v^s)$, you do obtain a map on cohomology that is independent of the embedding.
But it is not that in this case $M$ has a well-defined action of $G_{k_v}$ independent of the choice of embedding $k^s \hookrightarrow k_v^s$. It is rather that $M$ embeds into a bigger module $M_v$ (in a way that also depends on the embedding) so that the composite $H^n(k,M)\to H^n(k_v,M) \to H^n(k_v,M_v)$ is independent of the embedding. It is the embeddings $M \hookrightarrow M_v$ that are missing in the more general context (i.e. when $M$ is not of the form $A(k^s)$).
1
As you note, if we choose two different embeddings $k^s \to k_v^s$, say $\imath_1$ and $\imath_2$, then we get two different $G_{k_v}$-module structures on $M$, call them $M_1$ and $M_2$, and two different restriction maps $r_1:H^n(k,M) \to H^n(k_v,M_1)$ and $r_2:H^n(k,M) \to H^n(k_v,M_2)$.
The point is that there will also be a canonical isomorphism $i:H^n(k_v,M_1) \cong H^n(k_v,M_2)$ such that $i\circ r_1 = r_2,$ given by a formula analogous to the one you gave in the abelian variety context.
Namely, if $\imath_2 = \imath_1\circ g,$ then the isomorphism $i$ will be induced by $m \mapsto g\cdot m$ (if I have things straight; you can easily check if this is correct, of if I should have $g^{-1}$ instead). The fact that $i\circ r_1 = r_2$ will then depend on the fact that the automorphism of $H^n(k,M)$ induced by conjugation by $g^{-1}$ and the map $m\mapsto g\cdot m$ is the identity. So one does not have a literal independence of the embedding, but rather, the restriction is defined up to a canonical isomorphism independent of the embedding, and this latter fact does depend upon conjugation inducing the identity on cohomology (which is why people often summarize it in that way).
Note also that your abelian variety example is actually a special case of this, because the $M$ is $A(k)$, and the $G_{k_v}$-action on $M$ does depend on the embedding of $k$ in $k_v$. But the natural map $A(k) \to A(k_v)$ also depends on this embedding, in such a way that, when you compose the restriction from $G_k$ to $G_{k_v}$ (with coefficients in $A(k)$) with the map on $G_{k_v}$-cohomology induced by the embedding $A(k) \hookrightarrow A(k_v)$, you do obtain a map on cohomology that is independent of the embedding.
But it is not that in this case $M$ has a well-defined action of $G_{k_v}$ independent of the choice of embedding $k \hookrightarrow k_v$. It is rather that $M$ embeds into a bigger module $M_v$ (in a way that also depends on the embedding) so that the composite $H^n(k,M)\to H^n(k_v,M) \to H^n(k_v,M_v)$ is independent of the embedding. It is the embeddings $M \hookrightarrow M_v$ that are missing in the more general context (i.e. when $M$ is not of the form $A(k)$).
|
|
http://mathematica.stackexchange.com/questions/tagged/equation-solving
|
# Tagged Questions
Questions on the analytic and numeric equation solving functions of Mathematica (Solve, Reduce, NSolve, FindRoot, DSolve, RSolve, etc.).
0answers
91 views
### Find all the integer numbers $a$, $b$, $c$, $d$, $e$, $f$, $k$ such that this equation has three distinct integer solutions?
How to choose integer numbers $a$, $b$, $c$, $d$, $e$, $f$, $k$ in the interval $[-20,20]$ such that the equation $$\sqrt[3]{a x + b} +\sqrt[3]{c x + d} + \sqrt[3]{e x + f} =k$$ has three integer ...
1answer
152 views
### Efficient way to solve equal sums $x_1^k+x_2^k+\dots+x_5^k=y_1^k+y_2^k+\dots+y_5^k$ with Mathematica?
I need to solve the system of equations, call it $S_1$, in the integers $$x_1x_2x_3x_4x_5 = y_1y_2y_3y_4y_5$$ $$x_1^k+x_2^k+\dots+x_5^k=y_1^k+y_2^k+\dots+y_5^k,\;\; k= 2,4,6$$ I used a very ...
0answers
51 views
### Bounds on random number
I have a Markov Chain Monte Carlo code that uses a chi squared function to fit some data. I am running into an issue now with getting unrealistic values for some of the free parameters in the code. ...
1answer
149 views
### The Orbit and Perigee of the Flamsteed comet
Historical context This year we have the 330-th anniversary of the Battle of Vienna - one of the great formative events of European history, it took place on September 12, 1683. Kara Mustafa, Grand ...
0answers
34 views
### Evaluating the different calculations
I am a new user. I wonder that either there is possible to evalute different calculations at the same time in Mathematica or not? Can i share the kernels for different calculation at same time?
0answers
60 views
### Reducing exponential inequalities fails
I am quite stumped by this problem : Reduce[ N^(x-y) <1 && N > 0 , x, Reals] gives the expected result ...
1answer
88 views
### Solving for variables in a series of nonlinear equations [closed]
I have a nonlinear expression in five variables. I want to use Mathematica to solve for three of the variables. I have a series of points giving values of x an y and I am trying to solve for a,b, and ...
0answers
66 views
### draw a graph of a polynomial function with variables in the denominator [closed]
I have an equation f(u,z1,z2)=0, which is a long and complicated polynomial function with u in denominator. ...
0answers
57 views
### Rearrange this equation so x is the subject [closed]
I have this equation: y = 0.1x^3-0.2x^2 + 0.4x + 100 Can someone rearrange this equation so x is the subject. Thanks.
0answers
42 views
### Collect and factor equations with certain terms
Collect user defined terms from a symbolic algebraic expression ...
0answers
61 views
### Maximum root by using FindRoot [closed]
Assume f[x_, y_,z_] := Block[{A}, FindRoot[Abs[r[x, y, z]] - w[z] + 1 == 0, {A, 100}]] where ...
2answers
118 views
### find where 3 inequalities are simultaneously greater than zero
I have 3 functions, f11[l], f12[l] and f13[l], and I would like to know for which values of l these 3 functions are greater than zero. How do I do it analytically or numerically or graphically in ...
1answer
79 views
### Prandtl-Meyer function
I'm trying to solve for Mach number based on the given Prandtl-Meyer function, however I keep getting odd error. Here is my code: ...
0answers
80 views
### I can't solve this set of equations
I am having trouble solving a set of equations in mathematica. I do not get any errors, but i don't get any answers as well. I have some restrictions on my parameters and variables in the equations ...
3answers
192 views
### Solve for the intersection point of 3 surfaces
Basically, I have three 2-torus (tilted) in 3-dimensional Euclidean space. They are expressed by parametric equations: ...
2answers
97 views
### Count and plot the number of solutions in an interval
I have equations depending on one or more parameters and I want to find and plot the regions in the parameters space in which there are a specific number of solutions. For definiteness let's ...
1answer
167 views
### solving a cubic equation
I need to find the minimum $r$ and the maximum $k$ of the following cubic equation for which there does not exist three distinct real roots. $rx^3-rkx^2+(r+k)x-rk=0$. Is it possible to find such $r$ ...
0answers
6 views
### Identify the curve from the sample points [migrated]
I am writing a script which will identify the patterns drawn on the touch-pad of the laptop. I have generated all the points where user moved his finger over the touch-pad using synclient. Now i ...
1answer
43 views
### How can I solve this system with Mathematica? [closed]
Systems with 3 equations and 3 unknowns ... The unknowns are {p,q,j} ...
1answer
119 views
### How can I solve this nonlinear system with 3 equations and 3 unknown? [closed]
I have this system with 3 equations and 3 unknowns: ...
0answers
55 views
### Is there an error in this simultaneous equation? [closed]
I am a novice to Mathematica. Just curious to know why this simultaneous equations isn't working: ...
0answers
76 views
### Solve equations real and imaginary part separately
For my system of equations, the procedure described in Solving complex equations of using Reduce works no more. How can I separate the real and imaginary part of ...
1answer
59 views
### Error/warning when using NSolve for simple equation
I am using NSolve to solve an equation, as shown here: ...
1answer
56 views
### Finding the root of a nested function with small values
I have reduced an error in my program to this line of code: FindRoot[Nest[# (1 - #) k &, 1/2, 2^4] - 1/2, {k, 3.5}] It works for $2^1, 2^2, 2^3, 2^4,$ but ...
1answer
128 views
### Symbolic Simulations
I have 9 nonlinear equations and 10 unknowns. It is not possible to obtain a numeric solution but I do get a parametric solution. It is useful in my case because I want to observe how parameters (p1, ...
0answers
112 views
### FindRoot gives a wrong solution which obviously should not be there
I got stuck on FindRoot and I didn't see any similar problem posted, so let me explain what I am trying to do and what problem I meet here. I try to find roots of a particular function, which in the ...
0answers
55 views
### Exposing a variable [closed]
This is the basic equation \[Sigma][T]=E^((-20.7302 + 0.0147986 T) x) (625.646 - 3.50008 T) x^( 1.03651 - 0.000739932 T) Then there are commands: ...
0answers
108 views
### Nest for large value of n
I am working on a problem to find the Feigenbaum constant. In this problem I have to get $f^{2^n}(1/2)$ with $f(x) = kx(1-x)$. I use Nest to get $f^{2^n}(1/2)$ but ...
1answer
81 views
### Solving a system of equations
Mathematica doesn't want to solve my exact system with 3 equations and 3 variables. ...
0answers
140 views
### Creating and using an explicit piecewise function in a convenient way
I have a set of data points that define a function in the form curvePts = {data1,data2,...} where ...
1answer
128 views
### How to find the smallest positive root [closed]
My problem is : ...
1answer
87 views
### A basic problem with Solve
I have this equation: 4b*Cos[2t]-4a*Sin[2t]==4Cos[2t]+8Sin[2t] Which I would like to solve. Without using mathematica, you can pretty easily see that a = -2 and ...
2answers
202 views
### Mathematica does not understand (R^3)^(1/3) is the same as R [closed]
In the output from a calculation in mathematica stands a/((R^3*c)^(1/3)), with c and a ...
0answers
73 views
### Conditional statements in intial conditions?
This is potentially a daft question, but I thought I'd ask it; I have some material free to diffuse in a boundary between rn and ro; I've been able to get it working nicely for neumann type boundary ...
0answers
72 views
### Simultaneous Equation Solving Under Nonnegative and Bounding Conditions
I have nonlinear equations to solve parametrically; ...
1answer
121 views
### solution of differential equation
I have the following code that gives you a phase portrait of a 2d system and I can't understand what means the 3rd and 4th line (sol1 and sol2). ...
1answer
47 views
### Define variable relationships and dynamically update variables
I would like to define a relationship between 3 variables first and then later when I have filled two of the variables with a number, I would like to extract the value of the last value. For example: ...
1answer
36 views
### How to delete duplicate solutions of this system of equations?
I want to find vertices (has integer coordinates) of the triangle $ABC$ with the centorid is $G(1,1)$ and orthocenter is $H(3,3)$. I tried ...
1answer
114 views
### Solving Intervals
In a recent question I posed, it was noted that by design, Sqrt[ blah] only returns the positive branch, even though we might want to obtain all possible symbolic solutions. So, taking care to avoid ...
1answer
87 views
### Why is FindInstance failing when I relax a set of constraints?
I'm attempting to use FindInstance to generate coordinate sets for plausible triangles with edge length distance constraints. E.g.: ...
1answer
139 views
### How to solve this equation with integers as a solution?
I want to solve the equation $$x^y + y = y^x + x$$ with $x$, $y$ are integer numbers. I tried Solve[x^y + y == y^x + x, {x, y}, Integers] How to solve the ...
0answers
47 views
### DSolve::overdet for system of linear PDEs
I would like to resolve symbolically the following equation: ...
2answers
128 views
### Using RSolve correctly
I am having problems persuading Mathematica to solve even the simplest recurrence relations. As an example, how would you do the following? ...
0answers
72 views
### Defining and Solving Systems of Equations Using Matrix Tables
I've defined a system of equations, but I been unable to get Mathematica to solve for the individual variables created by matrix tables. ...
2answers
108 views
### Using FindRoot with an interpolating function
This question bears resemblance to a few other questions on mathematica.SE about finding points of intersection of crossing curves. I know that the guidebook of numerics has an entry about the whole ...
3answers
199 views
### Solving a system of equations with conditions related to the number of solutions
The equation below describes a conic with oblique axis: $$9 + 22 x + 9 x^2 + 46 y + 24 x y + 16 y^2=0$$ It is a parabola, as the coefficients in $x^2$, $y^2$, $xy$ form a perfect square. To find the ...
1answer
73 views
### Find point at which equation stops having roots (if it exists)
I am interested in the roots of this function: f[M_, b_] := 1 - (2 M Gamma[2, 0, (1/M + b M)/Sqrt[b]])/(1/M + b M) for fixed values of b. In particular I want ...
0answers
50 views
### Finding all roots of the system of two complex equations
I need to find all roots for the system of two complex equations. Obviously I can rewrite them as 4 real however problem remains. I have looked through routines connected with using ContourPlot ...
1answer
63 views
### Plotting query - same values on Y-axis / viewing data
I'm new to Mathematica, and have numerically solved two similar functions with the following code; ...
|
|
http://math.stackexchange.com/questions/223510/eigenvalues-and-determinant-of-conjugate-transpose-and-hermitian-of-a-complex-m
|
Eigenvalues and determinant of conjugate, transpose and hermitian of a complex matrix.
For a strictly complex matrix $A$,
1) Can we comment on the determinants of $A^{*}$ (the entrywise conjugate of $A$), $A^{T}$ (the transpose of $A$) and $A^{H}$ (the Hermitian transpose of $A$)? I know that for real matrices $\det(A)=\det(A^{T})$. Does this carry over to complex matrices, i.e. does $\det(A)=\det(A^{T})$ hold in general? I understand $\det(A)=\det(A^{H})$ (from Schur triangularization).
2) The same questions as in 1), now about the eigenvalues of $A$. I would also like to know about special cases, for instance when $A$ is Hermitian or positive definite, and so on.
-
1 Answer
Since complex conjugation satisfies $\overline{xy} = \overline{x} \cdot \overline{y}$ and $\overline{x+y} = \overline{x} + \overline{y}$, you can see with the Leibniz formula quickly that $\det[A^*] = \overline{\det[A]}$.
For complex matrices $\det[A] = \det[A^T]$ still holds and doesn't require any changes to the proof for real matrices.
Together this means that $\det[A] = \overline{\det[A^H]}$.
This applies to the eigenvalues as well: the characteristic polynomial of $A^*$ is given by $\det[tI - A^*] = \det[(tI - A)^*] = \overline{\det[tI - A]}$ and the eigenvalues of $A^*$ are exactly the complex conjugates of those of $A$.
In particular, if $A$ is hermitian then $A = A^H$, and since the eigenvalues of $A^H$ are the complex conjugates of those of $A$ (combine the two facts above), all eigenvalues are equal to their own complex conjugates; in other words, they're real.
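These identities are easy to spot-check numerically; here is a small self-contained example (my own addition, using a hand-picked $2\times 2$ matrix):

```python
import cmath

# Spot-check the determinant identities on a 2x2 complex matrix
# A = [[a, b], [c, d]], whose determinant is ad - bc.
a, b, c, d = 1 + 2j, 3 - 1j, 2j, 4 + 1j
det_A = a * d - b * c
det_AT = a * d - c * b  # transpose swaps the off-diagonal entries
det_conj = (a.conjugate() * d.conjugate()
            - b.conjugate() * c.conjugate())
assert det_AT == det_A                 # det[A^T] = det[A]
assert det_conj == det_A.conjugate()   # det[A^*] = conj(det[A])

# A Hermitian 2x2 matrix H = [[2, 1-1j], [1+1j, 3]] has real eigenvalues:
# they are the roots of t^2 - tr(H) t + det(H).
tr = 2 + 3
det_H = 2 * 3 - (1 - 1j) * (1 + 1j)
roots = [(tr + s * cmath.sqrt(tr**2 - 4 * det_H)) / 2 for s in (1, -1)]
assert all(abs(r.imag) < 1e-12 for r in roots)
```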
-
1
A few more useful facts. Skew-Hermitian matrices have purely imaginary eigenvalues. Unitary matrices have eigenvalues which lie on the unit circle. Matrices with all real entries will always have eigenvalues occurring as conjugate pairs, this follows from the conjugate root theorem for real polynomials. – EuYu Oct 29 '12 at 14:24
@EuYu Thanks a lot, both of you. I would like to explain the question which actually inspired all of these questions: I have a rank-one positive definite matrix $A=xx'$. I consider $A^{T}$. Now define $b=x^{*}$ and we have $A^{T}=bb^{H}$; by the above arguments, they should have the same eigenvalues. – dineshdileep Oct 29 '12 at 14:39
@dineshdileep You will have $A^T = b^* b^H$ – Cocopuffs Oct 29 '12 at 14:51
@Cocopuffs $A^{T}=b^{*}b^{H}=xx^{T}$, is it so? i feel something wrong with it. – dineshdileep Oct 29 '12 at 15:31
|
|
http://planetmath.org/admissibleidealsboundquiveranditsalgebra
|
# admissible ideals, bound quiver and its algebra
Assume that $Q$ is a quiver and $k$ is a field. Let $kQ$ be the associated path algebra. Denote by $R_{Q}$ the two-sided ideal in $kQ$ generated by all paths of length $1$, i.e. by all arrows. This ideal is known as the arrow ideal.
It is easy to see that for any $m\geqslant 1$, $R_{Q}^{m}$ is the two-sided ideal generated by all paths of length $m$. Note that we have the following chain of ideals:
$R_{Q}^{2}\supseteq R_{Q}^{3}\supseteq R_{Q}^{4}\supseteq\cdots$
Definition. A two-sided ideal $I$ in $kQ$ is said to be admissible if there exists $m\geqslant 2$ such that
$R_{Q}^{m}\subseteq I\subseteq R_{Q}^{2}.$
If $I$ is an admissible ideal in $kQ$, then the pair $(Q,I)$ is said to be a bound quiver and the quotient algebra $kQ/I$ is called the bound quiver algebra.
The idea behind this is to treat some paths in a quiver as equivalent. For example, consider the following quiver:
$\xymatrix{&2\ar[dr]^{{b}}&\\ 1\ar[rr]^{{c}}\ar[dr]_{{e}}\ar[ur]^{{a}}&&3\\ &4\ar[ur]_{{f}}}$
Then the ideal generated by $ab-c$ is not admissible ($ab-c\not\in R^{2}_{Q}$), but the ideal generated by $ab-ef$ is. This means that "walking" from $1$ to $3$ directly and through $2$ is not the same, but walking in the same number of steps is.
Note that in our case there is no path of length greater than $2$; in particular, for any $m>2$ we have $R_{Q}^{m}=0$.
More generally, it can easily be checked that if $Q$ is a finite quiver without oriented cycles, then there exists $m\in\mathbb{N}$ such that $R_{Q}^{m}=0$.
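As a small illustration (not part of the original entry), one can enumerate the paths of the example quiver programmatically and confirm that $R_{Q}^{m}=0$ for $m>2$; the arrow names below match the diagram:

```python
from itertools import product

# The example quiver: each arrow maps to its (source, target) vertices.
arrows = {"a": (1, 2), "b": (2, 3), "c": (1, 3), "e": (1, 4), "f": (4, 3)}

def paths_of_length(m):
    """All composable sequences of m arrows, i.e. paths of length m."""
    result = []
    for seq in product(arrows, repeat=m):
        # Consecutive arrows compose when target of one is source of next.
        if all(arrows[seq[i]][1] == arrows[seq[i + 1]][0]
               for i in range(m - 1)):
            result.append("".join(seq))
    return result

print(paths_of_length(2))  # ['ab', 'ef'] -- the two length-2 paths
print(paths_of_length(3))  # []           -- so R_Q^m = 0 for m > 2
```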
Major Section:
Reference
Type of Math Object:
Definition
Parent:
## Mathematics Subject Classification
14L24 Geometric invariant theory
## Info
Owner: joking
Added: 2011-02-18 - 22:58
Author(s): joking
## Versions
(v6) by joking 2013-03-22
|
|
http://mathhelpforum.com/algebra/31892-lost-equation.html
|
Thread:
1. Lost on this equation
I am trying to figure out the steps to solve this equation:
(t-5)^2=2t^2-7t-3
I don't even know how to start this.
And I can't seem to find an example in the book.
Thanks,
2. Originally Posted by GeinoD
I am trying to figure out the steps to solve this equation:
(t-5)^2=2t^2-7t-3
I don't even know how to start this.
And I can't seem to find an example in the book.
Thanks,
1. Expand the LHS
2. Collect all terms at the LHS
3. Solve the quadratic equation for t using the quadratic formula.
4. I've got: $t = -\frac32 \pm \frac12 \cdot \sqrt{121}$
EDIT: Found my mistake!
3. Originally Posted by GeinoD
I am trying to figure out the steps to solve this equation:
(t-5)^2=2t^2-7t-3
I don't even know how to start this.
And I can't seem to find an example in the book.
Thanks,
Try expanding the left hand side
$(t-5)^2=2t^2-7t-3 \iff t^2-10t+25=2t^2-7t-3$
Now collect all of your like terms.
$0=t^2+3t-28$
You should be able to finish from here
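To finish (a quick check, in case it helps): $t^2+3t-28$ factors as $(t+7)(t-4)$, and both roots satisfy the original equation, as this small script confirms:

```python
# t^2 + 3t - 28 = (t + 7)(t - 4), so t = 4 or t = -7.
# Substitute both back into the original equation to verify.
roots = [4, -7]
for t in roots:
    assert t**2 + 3 * t - 28 == 0            # root of the collected form
    assert (t - 5) ** 2 == 2 * t**2 - 7 * t - 3  # and of the original
print("t = 4 and t = -7 both check out")
```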
|
|
http://stats.stackexchange.com/questions/35472/is-multicollinearity-implicit-in-categorical-variables
|
# Is multicollinearity implicit in categorical variables?
I noticed while tinkering with a multivariate regression model there was a small but noticeable multicollinearity effect, as measured by variance inflation factors, within the categories of a categorical variable (after excluding the reference category, of course).
For example, say we have a dataset with a continuous variable $y$ and one nominal categorical variable $x$ which has $k$ possible mutually exclusive values. We code those $k$ possible values as 0/1 dummy variables $x_1, x_2,\dots ,x_k$. Then we run a regression model $y = b_0 + b_1x_1 + b_2x_2 + \dots + b_{k-1}x_{k-1}$. The VIF scores for the $k-1$ dummy variables turn out to be non-zero. In fact, as the number of categories increases, the VIFs increase. Centering the dummy variables doesn't appear to change the VIFs.
The intuitive explanation seems to be that the mutually exclusive condition of the categories within the categorical variable causes this slight multicollinearity. Is this a trivial finding or is it an issue to consider when building regression models with categorical variables?
-
## 2 Answers
I cannot reproduce exactly this phenomenon, but I can demonstrate that VIF does not necessarily increase as the number of categories increases.
The intuition is simple: categorical variables can be made orthogonal by suitable experimental designs. Therefore, there should in general be no relationship between numbers of categories and multicollinearity.
Here is an `R` function to create categorical datasets with specifiable numbers of categories (for two independent variables) and specifiable amount of replication for each category. It represents a balanced study in which every combination of category is observed an equal number of times, $n$:
````
library(car)  # vif() comes from the car package (not loaded in the original excerpt)

trial <- function(n, k1=2, k2=2) {
  # Balanced design: every combination of the two factors appears n times
  df <- expand.grid(1:k1, 1:k2)
  df <- do.call(rbind, lapply(1:n, function(i) df))
  df$y <- rnorm(k1*k2*n)
  fit <- lm(y ~ Var1+Var2, data=df)
  vif(fit)
}
````
Applying it, I find the VIFs are always at their lowest possible values, $1$, reflecting the balancing (which translates to orthogonal columns in the design matrix). Some examples:
````sapply(1:5, trial) # Two binary categories, 1-5 replicates per combination
sapply(1:5, function(i) trial(i, 10, 3)) # 30 categories, 1-5 replicates
````
This suggests the multicollinearity may be growing due to a growing imbalance in the design. To test this, insert the line
```` df <- subset(df, subset=(y < 0))
````
before the `fit` line in `trial`. This removes half the data at random. Re-running
````sapply(1:5, function(i) trial(i, 10, 3))
````
shows that the VIFs are no longer equal to $1$ (but they remain close to it, randomly). They still do not increase with more categories: `sapply(1:5, function(i) trial(i, 10, 10))` produces comparable values.
-
You have the constraint that, as you can see, is inherent in multinomial distributions, namely that one and only one of the $x_i$s will be $1$ and all the rest will be $0$. So you have the linear constraint $\sum_i X_i = 1$. That means, say, $X_1 = 1 - \sum_{i \neq 1} X_i$. This is the collinearity effect that you are noticing. There is nothing unusual or disturbing about it.
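A quick numerical illustration of this constraint (my own sketch, assuming a balanced design with $k$ equally frequent categories): each pair of 0/1 dummies then has correlation $-1/(k-1)$, so the retained $k-1$ dummies are never exactly orthogonal, which is the mild multicollinearity the question observes.

```python
# Balanced k-category factor: each pair of 0/1 dummy columns has
# correlation -1/(k-1), computed here from first principles.
k, n = 4, 5  # 4 categories, 5 observations each
labels = [c for c in range(k) for _ in range(n)]

def dummy(c):
    """0/1 indicator column for category c."""
    return [1.0 if lab == c else 0.0 for lab in labels]

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    N = len(x)
    mx, my = sum(x) / N, sum(y) / N
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(dummy(0), dummy(1)))  # -1/3 for k = 4
```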
-
I do not understand what multinomial distributions have to do with this situation. Could you explain? – whuber♦ Aug 31 '12 at 23:05
|
|
http://physics.stackexchange.com/questions/30592/warp-drive-with-gravitational-waves-in-the-nonlinear-regime
|
# warp drive with gravitational waves in the nonlinear regime
Gravitational waves are strictly transversal (at least in the linear regime), and their amplitudes are tiny even for cosmic-scale events like supernovae or binary black holes (at least far away; maybe we should ask some physicists located a bit closer to the center of the galaxy). But let's put those facts aside for a second and consider a gravitational source big enough to generate gravitational waves with amplitudes on the order of the galaxy. For instance, consider a planar wave like in my mediocre drawing:
$$h_{\alpha \beta} e^{i (k_{y} y - \omega t)}$$
where
$$h_{\alpha \beta} \approx 1$$
so the perturbation is in the nonlinear regime
I drew two far-away objects in three different time slices (this is why they are repeated three times): the topmost shows the objects without the gravitational wave, the middle one shows the objects at the crest of the gravitational wave, and the bottom one shows the objects in the valley of the wave.
So, my point is that people would only have to travel an arbitrarily small distance when the wave is in the valley (assuming circular polarization), even if the "normal" distance (i.e. with $h_{\mu \nu} = 0$) is several light-years.
Besides being slightly impractical to set up such a mammoth gravitational source, is this kind of warp drive valid from a physical standpoint? Are there any physical limits to gravitational-wave amplitudes in such a nonlinear regime?
-
## 1 Answer
I don't think you could use this as a warp drive unless you could collimate the gravity waves. If you consider a spaceship moving at constant velocity through a gravity wave, the ship will be accelerated and then decelerated again as the wave passes through, but its average velocity would be unchanged. The only way you could get a net effect from the wave is if you could move from a region of high amplitude to low amplitude within half a cycle of the wave. I can't think of any (plausible) geometry that would allow this. Possibly you could do it very close to a black hole binary, where the gravity wave generation doesn't look like a point source.
-
The gravity wave does not produce any "acceleration" of test particles in the traditional sense of changing net momentum; it is just an oscillation in the metric, so the distance between the far-away objects grows and shrinks within a single period by an amount proportional to the wave amplitude. So all objects at every point in the gravitational oscillation are always in free fall. – lurscher Jun 22 '12 at 13:06
Regarding physical plausibility, I agree: sources of planar waves are usually hard to come by, and the fact that it's gravitational radiation we are talking about does not make it any more realistic. – lurscher Jun 22 '12 at 13:20
http://mathoverflow.net/questions/89142?sort=oldest
## Are context-free languages with context-free complements necessarily deterministic context-free?
Let $L \subseteq A^\star$ be a formal language over $A$ generated by a context-free grammar, and $L' = A^\star - L$ be the relative complement in $A^\star$.
If $L$ and $L'$ are both context-free, are they necessarily deterministic context-free?
-
This is essentially an exact duplicate of mathoverflow.net/questions/51657/… which has an answer. – Benjamin Steinberg Feb 21 2012 at 22:27
Perhaps you missed my last edit - I removed that part of the question. To clarify, my original question contained the above, as well as a "dual" question concerning the closure of the set of context-free languages w.r.t. finitary Boolean operations. In this question, I'm not looking for the closure of the class of context-free languages with respect to complements, but merely to know if the proper sub-class of CF languages which have CF complements is in fact the class DCF of deterministic CF languages. – Nick Loughlin Feb 21 2012 at 22:50
Ok no longer a duplicate. – Benjamin Steinberg Feb 21 2012 at 22:53
In other words, you are asking if $CFL \cap coCFL \subseteq DCFL$ or not. Interesting question. Couldn't find the answer in Hopcroft and Ullman. – Kaveh Feb 22 2012 at 1:55
@Nick, I rewrote the question to match your comment above as djlewis2 makes a good point. Please re-edit if this was not your intent. – Benjamin Steinberg Feb 22 2012 at 21:57
## 2 Answers
It seems that the answer to your question is no. See here.
-
Your question is a bit unclear, and when we clarify it, it becomes true.
If by "deterministic context-free grammar" you mean, as usual, an LR(k) grammar for some k, then Knuth proved in his seminal paper ("On the translation of languages from left to right", 1965) that the languages they define are the same as those defined by deterministic PDAs. These are the DCFLs, and the DCFLs are closed under complement. So both your L and L' are DCFLs and hence CFLs, and your last premise is redundant.
Your question really comes down to: are the DCFLs closed under complement -- and they are.
-
You are right, but it seems from the discussion that the OP didn’t actually intend to demand $L$ to be DCFL a priori, it may be a typo. – Emil Jeřábek Feb 22 2012 at 16:55
Ah, perhaps. But if so, that's quite a bit more than a typo. He'd simply say "If L is a CFL and L' its complement is also a CFL..." Also, if the question is as you say, then it ~is~ a duplicate of the referenced question, and the answer is "no". Perhaps Nick needs to chime in here. – David Lewis Feb 22 2012 at 17:53
Oh, he accepted that answer -- I guess you are right. – David Lewis Feb 22 2012 at 17:54
http://math.stackexchange.com/questions/94441/what-is-the-kolmogorov-extension-theorem-good-for?answertab=active
# What is the Kolmogorov Extension Theorem good for?
The Kolmogorov Extension Theorem says, essentially, that one can get a process on $\mathbb{R}^T$ for $T$ being an arbitrary, non-empty index set, by specifying all finite dimensional distributions in a "consistent" way. My favorite formulation of the consistency condition can be found here. Now for the case in which $T$ is countable, this had already been shown by P. J. Daniell (see for example here or here). So I would like to know what the extension to uncountable index sets brings. Events like "sample paths are continuous" are not in the $\sigma$-algebra. In a rather critical paper on Kolmogorov's work on the foundations of probability, Shafer and Vovk write about the extension to uncountable index sets: "This greater generality is merely formal, in two senses: it involves no additional mathematical complications and it has no practical use." My impression is that this sentiment is not universally shared, so I would like to know:
How is the Kolmogorov Extension Theorem applied in the construction of stochastic processes in continuous time? Especially, how are the constructed probabilities transferred to richer measurable spaces?
-
– t.b. Dec 27 '11 at 14:43
– Ilya Dec 27 '11 at 17:35
Thank you, but Oksendal constructs a process on $R^{[0,\infty)}$ with the finite dimensional distributions of BM and then mentions that one can assume the process to be supported on the space of continuous functions. He does not prove this. I have to check what Fremlin writes on the issue. – Michael Greinecker Dec 27 '11 at 18:33
1. First part of your question is: How is the Kolmogorov Extension Theorem applied in the construction of stochastic processes in continuous time? Did I answer it with my comment? If no, your question is still unclear to me, so would you elaborate on its formulation? 2. What are the richer measurable spaces? 3. Oksendal uses Kolmogorov Continuity Theorem to show that there is a continuous version of a Brownian motion. – Ilya Dec 28 '11 at 13:12
Sorry, I looked in the wrong place in the book by Oksendal. I've seen now that he uses the continuity theorem. So this gives an answer to both questions. I'm still courious about other applications, since it is not that hard to construct BM "by hand" using weak convergence theory. – Michael Greinecker Dec 29 '11 at 19:14
show 2 more comments
## 1 Answer
Assume that you have a set of finite-dimensional distributions $(\mu_S)_{S\in A}$, where $A$ is the set of finite subsets of, say, $\mathbb{R}$, and assume that you would like to argue for the existence of a stochastic process $X$ with càdlàg (right-continuous with left limits) paths such that the family of finite-dimensional distributions of $X$ is $(\mu_S)_{S\in A}$. Kolmogorov's extension theorem allows you to split this problem into two parts:
1. Establishing the existence of a measure on $(\mathbb{R}^{\mathbb{R}_+},\mathbb{B}^{\mathbb{R}_+})$ with the appropriate finite-dimensional distributions (here, the extension theorem is invoked).
2. Using the measure constructed above, argue for the existence of a measure on $D[0,\infty)$ - the space of functions from $\mathbb{R}_+$ to $\mathbb{R}$ which are right-continuous with left limits - with the same finite-dimensional distributions.
One example of where this is a viable proof technique is in the theory of continuous-time Markov processes on general state spaces. For convenience, consider a complete, separable metric space (E,d) endowed with its Borel $\sigma$-algebra $\mathbb{E}$. Assume given a family of probability measures $(P(x,\cdot))_{x\in E}$, where $P(x,\cdot)$ is a probability measure on $(E,\mathbb{E})$. We wish to argue for the existence of a càdlàg continuous-time Markov process with values in $E$ and $(P(x,\cdot))_{x\in E}$ as its transition probabilities.
The following argument is given in Rogers & Williams: "Diffusions, Martingales and Markov Processes", Volume 1, Section III.7, and uses the two steps outlined above. First, the Kolmogorov extension theorem is invoked to obtain a measure $P$ on $(E^{\mathbb{R}_+},\mathbb{E}^{\mathbb{R}_+})$ with the desired finite-dimensional distributions. Letting $X$ be the identity on $E^{\mathbb{R}_+}$, $X$ is then a "non-regularized" process with the desired finite-dimensional distributions. Afterwards, in the case where the family of transition probabilities satisfy certain regularity criteria, a supermartingale argument is applied to obtain a càdlàg version of $X$. This supermartingale argument could not have been immediately applied without the existence of the measure $P$ on $(E^{\mathbb{R}_+},\mathbb{E}^{\mathbb{R}_+})$: Without this measure, there would be no candidate for a common probability space on which to define the supermartingales applied in the regularity proof. Thus, it is not obvious how to obtain the same existence result without the Kolmogorov extension theorem.
-
1
But is it clear that the uncountable version of Kolmogorov is essential? Another typical approach would be to construct a measure $P$ on the countable product $(E^\mathbb{Q}, \mathbb{E}^\mathbb{Q})$, and then show that $P$-almost every element of $E^\mathbb{Q}$ extends to a cadlag path $\mathbb{R} \to E$, which one calls $X$. This has the benefit that one is working with standard Borel spaces throughout. I haven't looked at Rogers and Williams, but is there some reason why this wouldn't work here? – Nate Eldredge Apr 12 '12 at 13:04
@NateEldredge have you found and answer for this question of yours? – Ilya Jan 23 at 20:37
@Ilya: No, I haven't, but I haven't looked very hard either. – Nate Eldredge Jan 23 at 21:47
@NateEldredge: I agree with you that seemingly in this case, the uncountable version is not essential, merely convenient. I personally don't know of any applications where the uncountable version is essential. – Alexander Sokol Feb 5 at 16:04
http://crypto.stackexchange.com/questions/3417/relative-security-of-a-vigenere-cipher?answertab=votes
# Relative security of a Vigenère cipher
Within a closed computer network, I am ciphering some plaintext data as an added security measure. This is below several other layers of protection. For various technical reasons, I am restricted to plain-text, UTF-8 data. To that end, I have been using a Vigenère cipher with pre-shared key database. The keys range from 30 to 100 characters, and are not dictionary words.
My question is regarding the Vigenère cipher: It is my understanding that the security of this cipher is directly related to the length and security of the keys. Long and tightly secured keys bring this cipher on par with many more complex techniques. Is that true, or is this a "kiddie" cipher that could be cracked while you sleep?
Edit It may bear mentioning that the text being ciphered consists of JSON-encoded data arrays, which are human readable but not "natural language" -- they contain many symbols interspersed with the actual data.
EDIT 2 Plaintext length varies unpredictably, from 100 characters to 5,000 plus. The data is being transported via SSL, so this is, as I mentioned, intended as yet another added layer, not the whole and sum of security used.
-
You say "the keys range from 30 to 100 characters and are not dictionary words." How are the keys derived? Randomly? How long are the plaintexts? – mikeazo♦ Aug 1 '12 at 12:03
Plaintext is wildly varied in length, the keys were derived by gluing fragments of random long dictionary words together with fragments of random characters – Chris Aug 1 '12 at 14:23
## 3 Answers
The real security of Vigenere is difficult to quantify. A million character plaintext with a 10 character password is easy to break. But a 10 character plaintext with a 10 character randomly chosen password is essentially a one-time-pad and theoretically unbreakable.
Given the data you've told us (plaintext: 100 to 5000 characters; password: 30 to 100 characters), it would seem that Vigenere adds very little. A 5000 character plaintext with 30 character password yields $5000/30\approx 167$ characters per plaintext alphabet. That is easily breakable using statistical methods. Other configurations given the numbers you've specified might be slightly harder (especially if the password were completely random).
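To make that arithmetic concrete, here is a small sketch (my own illustration, not from the answer; it assumes a classic A-Z Vigenère with non-letters dropped, and the function names are mine):

```python
# Illustration (mine, not from the thread): a classic A-Z Vigenère, dropping
# non-letters for simplicity, plus the coset-size arithmetic from the answer.
def vigenere_encrypt(plaintext, key):
    pt = [c for c in plaintext.upper() if c.isalpha()]
    return "".join(
        chr((ord(c) - 65 + ord(key[i % len(key)].upper()) - 65) % 26 + 65)
        for i, c in enumerate(pt)
    )

def coset_sizes(ciphertext_len, key_len):
    # How many characters each key position ("coset") encrypts; with 5000
    # characters and a 30-character key, each single-shift alphabet covers
    # roughly 167 characters -- plenty for frequency analysis.
    base, extra = divmod(ciphertext_len, key_len)
    return [base + (1 if i < extra else 0) for i in range(key_len)]

print(coset_sizes(5000, 30)[0])  # → 167
```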
That said, if you are already using SSL and adding Vigenere "as yet another added layer" of security, the increase in security by using Vigenere is 0 at worst and negligible at best. An attacker who can break SSL will have no problems breaking Vigenere.
-
Thanks! The phrase "An attacker who can break SSL will have no problems breaking Vigenere." really settles it for me -- you're absolutely right about that. – Chris Aug 1 '12 at 17:56
The Vigenère cipher has many weaknesses, but perhaps the most obvious ones are:
• An attacker, who knows (or can guess) as many consecutive characters of any plaintext message as there are in the key, can trivially recover the key and thus decrypt all messages. (In fact, the characters need not even be consecutive, they just need to cover the entire key, or at least most of it.)
• For most natural messages, it's fairly easy to guess the key length, for example by looking at correlations between characters $n$ positions apart.
• An attacker who knows (or can guess) the key length can divide the ciphertext into blocks of this length and decrypt one block with the other as the key to obtain a linear combination of the two messages. This will often have enough structure to allow the original blocks to be (at least partially) reconstructed, which in turn allows recovery of the key. (Also, if there are multiple messages encrypted with the same key, or if the key itself is not completely random, other similar attacks may be possible even without guessing the key length.)
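The first bullet takes only a few lines to demonstrate (my own sketch, assuming an A-Z alphabet and uppercase input; the helper names are hypothetical): subtracting known plaintext from ciphertext letter-by-letter, mod 26, exposes the repeating key directly.

```python
# Sketch (mine, assuming an A-Z alphabet and uppercase input; the helper
# names are hypothetical): ct[i] = pt[i] + key[i mod k] (mod 26), so the
# difference ct - pt reads off the key.
def encrypt(pt, key):
    return "".join(
        chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, c in enumerate(pt)
    )

def recover_key(known_pt, ct, key_len):
    return "".join(
        chr((ord(c) - ord(p)) % 26 + 65) for p, c in zip(known_pt, ct)
    )[:key_len]

ct = encrypt("SENDMOREMONEYNOW", "LEMON")
print(recover_key("SENDMOREMONEYNOW", ct, 5))  # → LEMON
```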
In short, I would strongly recommend not using the Vigenère cipher for any purpose, except perhaps for puzzles that are meant to be broken. At the very least, use a stream cipher like RC4, which is simple enough to implement off the top of your head. Also, if you need to encrypt multiple files, include a unique initialization vector with each (or derive one from some unique value, like the name of the file) and hash it together with your master key.
(Using a stream cipher in such a way that the output remains valid Unicode text is a somewhat non-trivial exercise in format-preserving encryption, but it should be no harder than doing the same with a Vigenère cipher. Personally, I'd suggest encrypting the input as a binary octet stream and then Base64-encoding it into printable ASCII as the simplest secure method.)
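A minimal sketch of that last suggestion (my own illustration; the XOR with a random keystream is only a stand-in for a real cipher, not something to deploy): encrypt the UTF-8 bytes, then Base64-encode so the result stays printable ASCII.

```python
import base64
import os

# Sketch only (mine): the XOR keystream below is a placeholder for a real
# cipher -- the point is the Base64 step, which keeps the wire format as
# printable ASCII (and hence valid UTF-8).
def to_printable(ciphertext_bytes):
    return base64.b64encode(ciphertext_bytes).decode("ascii")

def from_printable(text):
    return base64.b64decode(text.encode("ascii"))

payload = '{"user": "chris", "n": 42}'.encode("utf-8")   # hypothetical JSON
keystream = os.urandom(len(payload))                      # placeholder only
ct = bytes(a ^ b for a, b in zip(payload, keystream))
wire = to_printable(ct)
assert from_printable(wire) == ct and wire.isascii()
```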
-
Thanks for the info! It was hard to decide which answer to accept, since they basically all say "don't!" (with varying degrees of exclamation). I went with @mikeazo's answer because I appreciate that phrase "An attacker who can break SSL will have no problems breaking Vigenere.", and I guess that is the bottom line. Thanks again, +1 – Chris Aug 1 '12 at 17:55
With sufficient ciphertext, statistical analysis can also reveal the key length. Further analysis on each block can potentially reveal each letter in the key regardless of whether or not they are random. – Stephen Harris Aug 9 '12 at 10:38
Eek! The Vigenere cipher is completely and totally insecure. You should never use it. Instead, use a modern authenticated encryption scheme.
If you are protecting data in transit, I recommend using TLS (or SSL). If you are protecting data in storage, I recommend encrypting it with GPG (or PGP). This is the simplest, easiest way to get well-vetted cryptography that is unlikely to have a catastrophic flaw.
-
http://mathhelpforum.com/differential-geometry/148953-sub-sequential-limit-point.html
# Thread:
1. ## Sub-sequential limit point
1) Is it possible to find a sequence for which the set of sub-sequential limit points is [0,1]?
2) Is every closed set in R the set of sub-sequential limit points of some sequence?
2. Originally Posted by math.dj
1) Is it Possible to find a sequence for which the set of sub-sequential limit points is [0,1]?
2)Is any closed set in R,the set of sub-sequential limit points of a sequence?
Yes (to both questions). Use the fact that R is separable. A closed subset of R has a countable dense subset. Construct a sequence which visits each element of that subset infinitely often.
3. In other words, for (a) construct a sequence that converges to 0 ({1/n} will do), a sequence that converges to 1 ({(n-1)/n} will do) and "blend" the two: $a_n= \frac{1}{n}$ if n is odd, $a_n= \frac{n-1}{n}$ if n is even.
4. a) Consider the sequence {x_n} such that there is a one to one correspondence between the elements of the sequence and the rational numbers in [0,1]. Then the sequence has sub-sequential limits [0,1].
b) Yes. Any finite set is closed. So try to find a sequence with finite no. of limit points.
5. Are you trying to say there is a bijection between N and Q ∩ [0,1]?
6. Yes. Since the set of rational numbers Q is countable, we can arrange the rationals in Q∩[0,1] as a sequence {x1,x2,x3,.....}. Then for any number α in [0,1] there is a subsequence of {xn} converging to α.
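The argument above is easy to illustrate numerically (my own sketch, not from the thread): enumerate Q ∩ [0,1] by increasing denominator, then greedily keep the terms that get strictly closer to the irrational 1/√2 — a subsequence converging to it.

```python
from fractions import Fraction
from math import sqrt

# Illustration (mine, not from the thread): list Q ∩ [0,1] by increasing
# denominator, then greedily extract a subsequence converging to 1/sqrt(2).
def rationals_01(max_den):
    seen, seq = set(), []
    for q in range(1, max_den + 1):
        for p in range(q + 1):
            f = Fraction(p, q)
            if f not in seen:
                seen.add(f)
                seq.append(f)
    return seq

target = 1 / sqrt(2)
best, sub = 1.0, []
for x in rationals_01(200):
    err = abs(float(x) - target)
    if err < best:               # keep terms strictly closer to the target
        best, sub = err, sub + [x]

print(sub[-1], float(sub[-1]) - target)  # final term within ~1e-4 of target
```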
7. There exists a bijection between any two countable sets:
If A is countable then, by definition, there exists a bijection f:N-> A.
If B is countable then, by definition, there exists a bijection g:N-> B.
The function $g\circ f^{-1}: A \to B$ is a bijection from A to B.
http://math.stackexchange.com/questions/201209/fredholm-operators
# Fredholm operators
How can I get the (Volterra) operator from an equation of the type
$$u''(x)+xu'(x)+u(x)=0\text{ ?}$$
I know that there is a general way of doing it, if you could point me at the proper book I'd be thankful!
-
It is simpler to solve this equation than to understand what you are asking about – Norbert Sep 23 '12 at 16:51
Ok maybe I explained myself a little badly. Fredholm operators are helpful in order to solve differential equations, because you can reduce the problem to one of the type $u(x)=T(u(x))$ where $T$ is the operator, that is, the answer will be the eigenvalues of $T$. The question is addressed to people who know a little about Fredholm and Volterra operators. – Miguel Sep 23 '12 at 17:40
## 1 Answer
I understand that you want to rewrite the differential equation in terms of an integral (Volterra-type) operator. The resulting operator $T$ will be Hilbert-Schmidt, hence compact, hence $I-T$ is Fredholm.
Introducing $v=u'$, we get the system of 1st order equations $u'=v$, $v'=-u-xv$. Using the initial values $(u_0,v_0)$, we rewrite the IVP as a system $$u(t)=u_0+\int_0^t v(s)\,ds, \qquad v(t)=v_0+\int_0^t [-u(s)-xv(s)]\,ds$$ The desired operator $T$ takes the vector-valued function $(u,v)$ and produces $$t\mapsto \left(u_0+\int_0^t v(s)\,ds, v_0+\int_0^t [-u(s)-xv(s)]\,ds\right)$$
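A quick numerical sketch of this construction (my own illustration, not part of the answer): iterate the operator $T$ (Picard iteration) with trapezoid-rule integrals, starting from $u_0 = 1$, $v_0 = 0$. For these initial values the exact solution of $u''+xu'+u=0$ is $u(x) = e^{-x^2/2}$, which the iterates approach.

```python
import math

# Numerical sketch (mine): Picard iteration of the Volterra operator T above,
# with trapezoid-rule integrals.  With u0 = 1, v0 = 0 the exact solution of
# u'' + x u' + u = 0 is u(x) = exp(-x^2/2).
def picard(x_max=1.0, steps=200, iterations=40):
    h = x_max / steps
    xs = [i * h for i in range(steps + 1)]
    u = [1.0] * (steps + 1)                     # initial guess
    v = [0.0] * (steps + 1)
    for _ in range(iterations):
        new_u, new_v, iu, iv = [1.0], [0.0], 0.0, 0.0
        for i in range(1, steps + 1):
            iu += 0.5 * h * (v[i - 1] + v[i])
            iv += 0.5 * h * (-u[i - 1] - xs[i - 1] * v[i - 1]
                             - u[i] - xs[i] * v[i])
            new_u.append(1.0 + iu)              # u0 + integral of v
            new_v.append(iv)                    # v0 + integral of (-u - x v)
        u, v = new_u, new_v
    return xs, u

xs, u = picard()
print(abs(u[-1] - math.exp(-0.5)))  # small discretization error
```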
-
http://mathoverflow.net/questions/36936/what-does-the-probabilistic-model-suggest-the-error-term-in-the-pnt-should-be/38272
What does the probabilistic model suggest the error term in the PNT should be?
Let $\Lambda(n)$ be the von Mangoldt function. The prime number theorem is equivalent to the statement that $\sum_{n \leq N} \Lambda(n) \approx N$. Defining $\lambda_{*}(n)= \Lambda(n)-1$ we may rewrite this as $S(N) = \sum_{n \leq N} \lambda_{*}(n) =o(N)$. Now it is known that $|S(N)| \gg N^{1/2}$ infinitely often. Moreover, on the RH we have that $|S(N)| \ll N^{1/2}\ln^2(N)$. Note that these estimates differ by a factor of $\ln^2(N)$.
My question is the following: What do probabilistic considerations suggest the correct error term to be?
Let me suggest a model: Let $X_n$ be a sequence of independent random variables such that $X_n = \ln(n)-1$ with probability $1/\ln(n)$ and $X_n = -1$ with probability $1-1/\ln(n)$, and form the sum $T(N)= \sum_{n=1}^{N} X_n$. Is there an elementary function $E(N)$ such that $\limsup_N |T(N)|/E(N) = 1$ holds almost surely?
Notice that if the primes had positive density in the integers and we adjusted our model accordingly, the law of the iterated logarithm would allow us to take $E(N)$ to be a multiple of $(N\ln\ln N)^{1/2}$.
(More generally, I'm interested in understanding sums of the above form (that is independent random variables with slowly increases variance) if you know of an appropriate reference.)
-
2 Answers
Let $P_n$ be independent variables which are 1 with probability $1/\log n$ and $0$ with probability $1-1/\log n$ and let $$\Pi(x) = \sum_{n\leq x} P_n.$$
Then Cramér showed that, almost surely,
$$\limsup_{x\rightarrow \infty} \frac{|\Pi(x)-\ell i(x)|}{\sqrt{2x}\sqrt{\frac{\log\log x}{\log x}}} = 1$$
where
$$\ell i (x) = \int_2^x \frac{dt}{\log t}.$$
See page 20 here: http://www.dms.umontreal.ca/~andrew/PDF/cramer.pdf
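Cramér's normalization can be sanity-checked by simulation; here is a rough Monte Carlo sketch (my own, with a crude Riemann-sum approximation to $\ell i(x)$):

```python
import math
import random

# Rough Monte Carlo sketch (mine): simulate Pi(x) = sum of independent
# Bernoulli(1/log n) indicators and compare |Pi(x) - li(x)| against
# Cramér's normalizer sqrt(2x) * sqrt(log log x / log x).  li(x) is
# approximated by a crude Riemann sum; for n = 2 the "probability"
# 1/log 2 > 1, so that term is effectively deterministic.
def simulate(x, seed=0):
    rng = random.Random(seed)
    pi = sum(1 for n in range(2, x + 1) if rng.random() < 1 / math.log(n))
    li = sum(1 / math.log(t) for t in range(2, x + 1))
    norm = math.sqrt(2 * x) * math.sqrt(math.log(math.log(x)) / math.log(x))
    return abs(pi - li) / norm

print(simulate(10**5))  # typically well below 1 for a single x
```

The limsup in Cramér's theorem is over all $x$, so a single sample point is expected to land well inside the envelope.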
Edit: H. L. Montgomery has given an unpublished probabilistic argument that suggests
$$\limsup_{x\rightarrow \infty} \frac{|\psi(x)-x|}{\sqrt{x} (\log\log\log x)^2} = \frac{1}{2\pi}.$$
This is announced in: H.L. Montgomery, "The zeta function and prime numbers," Proceedings of the Queen's Number Theory Conference, 1979, Queen's Univ., Kingston, Ont., 1980, 1-31.
-
Cramer's model is well-known to be (provably) a little bit off from the truth (see Maier's work on short gaps between primes). There is a different probabilistic model that you can use. It is motivated by the following equivalent form of the Riemann Hypothesis: $$\text{RH true} \iff \sum_{n \leq X} \mu(n) \ll_{\varepsilon} X^{1/2 + \varepsilon}$$ for every fixed $\varepsilon > 0$, where $\mu$ is the Moebius function. Motivated by this equivalence, we consider the following problem: Let $X_p$ be a sequence of independent random variables with $\mathbb{P}(X_p = 1) = \mathbb{P}(X_p = -1) = 1/2$ and for squarefree $n = p_1 \ldots p_k$ define $$X_n = X_{p_1} \ldots X_{p_k}$$ What almost-sure bounds for $$\sum_{n \leq X} X_n$$ can we obtain? It turns out that this is a difficult problem in its own right. But a few results are known. It has been known for a long time (since Wintner) that $$\sum_{n \leq X} X_n \ll_{\varepsilon} X^{1/2 + \varepsilon}$$ almost surely, and Halasz improved this bound to $\ll X^{1/2} \exp(C \sqrt{\log\log X})$ (Halasz proved a slightly weaker bound, but his method can be used to obtain the bound stated here). So you could say that this model suggests a bound of $X^{1/2} \exp(C \sqrt{\log\log X})$ for partial sums of the Moebius function, but the difference between $X_n$ and $\mu(n)$ turns out to be too striking to believe that this is the truth. However, the model itself certainly deserves more study!
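This random multiplicative model is easy to experiment with; a toy sketch (my own, standard library only: a smallest-prime-factor sieve supplies the factorization, and non-squarefree $n$ contribute 0):

```python
import math
import random

# Toy sketch (mine) of the random multiplicative model: choose X_p = ±1
# independently for each prime p, extend multiplicatively to squarefree n
# (non-squarefree n contribute 0), and sum up to X.
def random_multiplicative_sum(X, seed=0):
    rng = random.Random(seed)
    spf = list(range(X + 1))                    # smallest-prime-factor sieve
    for p in range(2, math.isqrt(X) + 1):
        if spf[p] == p:
            for m in range(p * p, X + 1, p):
                if spf[m] == m:
                    spf[m] = p
    sign = {p: rng.choice((-1, 1)) for p in range(2, X + 1) if spf[p] == p}
    total = 1                                   # n = 1 contributes +1
    for n in range(2, X + 1):
        m, val, last, squarefree = n, 1, 0, True
        while m > 1:
            p = spf[m]
            if p == last:                       # repeated prime factor
                squarefree = False
                break
            val *= sign[p]
            last, m = p, m // p
        if squarefree:
            total += val
    return total

print(abs(random_multiplicative_sum(10**4)) / math.sqrt(10**4))
```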
As a post-scriptum, let me mention that the best known bound (assuming RH) for the partial sums of $\mu(n)$ is $C \cdot X^{1/2} \exp(\sqrt{\log X} (\log\log X)^{14})$ (see http://arxiv.org/abs/0705.0723)
-
Thank you for the comments. I don't think Maier's work should have much impact on the question. Maier located (a very sparse collection) of small intervals with more/less primes than expected. However, since these intervals are small and sparse the contribution from all of these primes is minor to the (best possible) error term. – Mark Lewko Sep 11 2010 at 22:34
So, I thought that Maier's work "contradicted" Cramer's model, because (if I recall correctly) Cramer's model predict that all small intervals (by small, I mean some power of log x) should have just the right number of primes, that is ~ h / log x, where h is the size of the interval. Also, I think that the error term for the difference between pi(x) and li(x), as predicted by Cramer's model, can be proven to be false (i.e pi(x) - li(x) = Omega(sqrt(x)). Finally, I wonder, what prompted you in the direction of this question? – anon Sep 14 2010 at 3:51
Maier's work was also surprising, because, probably as you know, Selberg has proven earlier that (assuming RH) "almost all" intervals of size (log x)^{2} contain just the right amount of primes. – anon Sep 14 2010 at 3:52
Oups ... I would like to point out that anon = kukuriku :-) – anon Sep 14 2010 at 3:52
http://mathoverflow.net/questions/54846?sort=votes
## Converse of the Banach fixed point theorem
Let $f:X \to X$ ($X = R^d$ for some $d$) be a mapping such that $f^n (x) \to x^\ast$ for all $x \in X$ as $n \to \infty$ ($x^\ast$ is unique). Can we say anything about the spectral structure of the gradient matrix $\nabla f (x^\ast)$ ? Do we know that the spectral or the operator norm of this matrix is less than one?
Of course, a priori, there is no reason why $f$ should be differentiable at $x^*$ – Dick Palais Feb 9 2011 at 5:43
## 1 Answer
I think this is a counter-example to what you are asking. Choose a smooth function $a : R \to R$ with $a(0) = 0$, $a(-x) = a(x)$, $a(x) = x^2$ near $x = 0$, $a(x)$ monotonically increasing for $x > 0$, and $a(x)$ everywhere less than $1$. Then if we let $f(x) := (1 - a(x)) x$ we have $f'(0) = 1$, but $f^n(x)$ converges to $0$ for all $x$. This is because $f^n(x)$ is monotonic and bounded, and so approaches a limit, which must be a fixed point and so can only be $0$.
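A quick numerical sketch of this counterexample: taking $a(x) = x^2/(1+x^2)$ (my choice; it satisfies all the stated conditions except that it equals $x^2$ only to leading order near $0$, which is enough for the phenomenon), the map becomes $f(x) = x/(1+x^2)$, with $f'(0) = 1$ yet global convergence of the iterates to $0$:

```python
def a(x):
    # smooth, even, a(0) = 0, ~ x^2 near 0, increasing for x > 0, always < 1
    return x * x / (1.0 + x * x)

def f(x):
    # f(x) = (1 - a(x)) * x simplifies to x / (1 + x^2); note f'(0) = 1
    return (1.0 - a(x)) * x

def iterate(x, n):
    # apply f to x, n times
    for _ in range(n):
        x = f(x)
    return x

# f'(0) = 1 (so no norm bound < 1 at the fixed point), yet the iterates
# from any starting point converge, slowly (roughly like 1/sqrt(n)), to 0.
```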
An example of such an $f$ is $\frac{x}{1+x^2}$. But what if we demand that $f^n(x)$ converges to $x^*$ uniformly? – robot Feb 9 2011 at 13:32
http://mathhelpforum.com/algebra/164790-finding-stretch-compression.html
# Thread:
1. ## Finding stretch or compression
Hey everyone, I'm trying to figure out how to find the stretch or compression of a parabola. I have a parabola with vertex (1,3) that passes through the points (0,5) and (2,5).
The equation I have so far is y = _ (x-1)^2+3, but I can't find the stretch/compression. Thanks for your help.
2. If you mean that the parabola you've found so far is
$y=(x-1)^{2}+3,$ then I would not agree that all three of
$(1,3), (2,5), (0,5)$ are on the parabola.
That is, I don't think you've found the correct formula yet.
3. The parabola is a stretched parabola with the points (0,5) and (2,5) and a vertex of (1,3). I'm trying to find the stretch/compression of this parabola.
4. If you're trying to find the stretch or compression of the parabola, you must be comparing this parabola to a different parabola. You can fit a parabola to the three points you have there. And that would just be a simple parabola. To know if that parabola is stretched or compressed from another parabola, you would have to tell me what the equation for the other parabola is. Do you follow me? I guess what I'm saying is that the very word "stretch" implies that you're starting with one thing, and then stretching it into another thing. Thus, there are two things involved. I only see one thing at the moment.
5. Ackbeet, there is a "_" in front of the square representing, I think, the unknown coefficient. I believe that Jubbly means $y= a(x- 1)^2+ 3$. The "-1" and "+3", of course, guarantee that the parabola passes through (1, 3). Putting x = 2 and y = 5 gives $5= a(2- 1)^2+ 3= a+ 3$. Putting x = 0 and y = 5 gives $5= a(0- 1)^2+ 3= a+ 3$ also, so $a = 2$ in both cases.
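That computation as a small sketch (the helper name is mine): solving $y = a(x-h)^2 + k$ for $a$ from the vertex $(h,k)$ and one other point.

```python
def stretch_factor(vertex, point):
    """Solve y = a*(x - h)**2 + k for a, given the vertex (h, k)
    and one other point (x, y) on the parabola."""
    h, k = vertex
    x, y = point
    return (y - k) / (x - h) ** 2

a = stretch_factor((1, 3), (2, 5))  # both given points yield a = 2
# so the parabola is y = 2*(x - 1)**2 + 3
```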
http://mathhelpforum.com/calculus/134321-rate-change-cone-print.html
Rate of change of a cone
• March 17th 2010, 02:20 PM
Frostking
Rate of change of a cone
I am trying to figure out a practice problem while studying for a calculus final and I do not know how to approach this problem. Any help would be much appreciated. The height of a cone is decreasing at 2 cm per second while its radius is increasing at 3cm per second. When the radius is 4 cm and the height is 6 cm, at what rate is the volume of the cone changing?
I started with the volume for a cone which is: 1/3 pi r^2 h
I thought I could take the derivative of volume with respect to radius, multiply that by the rate of change per second of the radius, and then add the derivative of volume with respect to height multiplied by the change per second of h. When I do all this I get that the volume of the cone is decreasing by 13.8 cm^3 per second, which does not seem reasonable, since the radius is increasing by more than the height is decreasing and the radius is a squared term in the original volume equation! Thanks for all you folks' time and effort! Frostking
• March 17th 2010, 02:26 PM
TKHunny
Quote:
Originally Posted by Frostking
V(t) = 1/3 pi r(t)^2 h(t)
With a little modification.
Now find the derivative with respect to 't'. It will take a product rule. No mysterious adding or other tomfoolery.
$\frac{dV}{dt}\;=\;\frac{\pi}{3}\cdot \left(r^{2}\frac{dh}{dt} + 2r\frac{dr}{dt}\cdot h\right)$
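Plugging in the given values (the function name is mine): with $r=4$, $h=6$, $dr/dt=3$, $dh/dt=-2$, the formula gives $\frac{\pi}{3}\left(16\cdot(-2)+2\cdot 4\cdot 3\cdot 6\right)=\frac{112\pi}{3}\approx 117.3\ \mathrm{cm^3/s}$, so the volume is in fact increasing, as Frostking's intuition suggested.

```python
import math

def cone_volume_rate(r, h, dr_dt, dh_dt):
    # dV/dt = (pi/3) * (r**2 * dh/dt + 2*r*(dr/dt)*h), by the product rule
    return (math.pi / 3.0) * (r ** 2 * dh_dt + 2 * r * dr_dt * h)

rate = cone_volume_rate(r=4, h=6, dr_dt=3, dh_dt=-2)
# (pi/3) * (-32 + 144) = 112*pi/3, about +117.3 cm^3 per second
```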
http://mathoverflow.net/questions/87794/why-is-the-identity-element-of-a-group-denoted-by-e/87797
## Why is the identity element of a group denoted by $e$?
The question was asked by a student, and I did not have a ready answer. I can think of the German word "Einheit", but since in German that is not how the identity element of a group is called, I doubt that is the origin. Any ideas?
"but since in German that is not how the identity element of a group is called" ... Sometimes it is indeed called like this. Also the identity matrix is frequently or at least not rarely called 'Einheitsmatrix'. Another thought: Sometimes the identity element in a multiplicative group is called (perhaps sloppily) Einselement (where 'eins' means 'one'). – quid Feb 7 2012 at 14:09
## 1 Answer
Heinrich Weber uses Einheit and e in his Lehrbuch der Algebra (1896).
That is almost certainly the origin, though it should be noted that one in Russian is "edinica". – Igor Rivin Feb 7 2012 at 14:58
@Igor: The influential early textbooks on algebra tended to be written in German, unfair though that may be to those of us who grew up with English (or Russian). Quite a bit of common terminology and notation in mathematics seems to have originated in German work during the 19th century, such as the symbols `$K,k$` for fields. – Jim Humphreys Feb 7 2012 at 20:55
Well, Weber surely popularized the term. But his friend Dedekind used "einheit" before him to mean either a unit in a field, or a unit measure in geometry, and I'll bet if you look in his work you'll find it for groups. Probably if you dig into the 19th century you can find a series of earlier and earlier, vaguer and vaguer, uses of the term for a group identity. – Colin McLarty Dec 29 at 17:35
http://mathoverflow.net/questions/110378/analogue-of-a-set-with-n-binary-operations/110379
## analogue of a set with n binary operations
So a group is a type of structure with one binary operation that satisfies some list of axioms. A ring is a structure that has two binary operations that satisfy some list of axioms. Do there exist structures with n independent binary operations (meaning it isn't possible to decompose the structure into collections of structures with two or fewer binary operations) that satisfy some list of axioms? Is it possible to define a consistent structure of this sort? If someone could point me in the direction of an article or something, or offer your own point of view, I would appreciate it.
It is unclear what you mean by independent. Indeed, I suspect from your parenthetical remark that you actually mean some form of dependence. (It is possible that you may mean that the operations are algebraically independent; for me this means each f stays out of the clone generated by all the other operations.) Gerhard "Ask Me About System Design" Paseman, 2012.10.22 – Gerhard Paseman Oct 23 at 1:00
Consider a $cat^n$ group, which is a model for a pointed, connected homotopy $n$-type... – David Roberts Oct 23 at 6:15
## 4 Answers
The study of sets with an arbitrary number of operations is called universal algebra.
Also universal algebra isn't limited to binary operations but studies operations of any arity: nullary, unary, binary, ternary, ... , n-ary.
(Universal algebra does however typically restrict itself to axioms that are defined by equations which means fields are excluded from this way of studying algebra.)
Beware however that operations that are ostensibly independent might not be. See for example the earlier mathoverflow question: Can we unify addition and multiplication into one binary operation? To what extent can we find universal binary operations?
In a ring the distributive law connects the addition and multiplication operations, so that they cannot in a sense be independent, but there is nothing to stop a structure with n operations from being consistent if none of the axioms connect any of the operations to each other. As Gerhard Paseman's comment notes, it depends what you mean by independent.
Questions along similar lines to the linked question about universal operations could be asked about any given universal-algebraic variety - given a variety in which every operation is connected to the other operations by axioms, then to what extent can the number of operations be reduced and still define the same class of structures. For questions about reducing the number of operations or axioms see https://www.cs.unm.edu/~mccune/projects/gtsax/
P.S. I'll mention here as a curiosity the topic of n,m-operations. That is operations that not only have a domain-arity but also a co-domain arity. e.g. $*:G \times G \times G \rightarrow G \times G$. Concepts such as associativity can be generalized to n,m-operations but n,m-operations haven't been studied much in universal algebra.
An $n,m$ operation, `$G^{n}\to G^m$` can be viewed as $m$ separate components, each of which is an ordinary $n$-ary operation. Nevertheless, $n,m$ operations serve as the morphisms in Lawvere's category-theoretic approach to universal algebra. The key idea is that substitution, which plays an essential role in clones, can be subsumed under the simpler concept of composition if one uses $n,m$ operations. – Andreas Blass Oct 23 at 3:26
For every symmetric monoidal category $C$ the category of commutative monoids $\mathrm{CMon}(C)$ is again symmetric monoidal, the forgetful functor to $C$ being a monoidal functor. Thus we may iterate this construction and define $\mathrm{CMon}^{(n)}(C)$ for all $n \in \mathbb{N}$. Intuitively, objects of this category have $n$ commutative operations which are compatible with each other. However, this sequence already terminates at $n \geq 1$: every commutative monoid in $C$ has a unique structure as an object in $\mathrm{CMon}(C)$.
For example, for $C=\mathrm{Ab}$, starting with abelian groups ($n=0$), next we get commutative rings ($n=1$). But for a commutative ring $R$ there is exactly one unit $u : \mathbb{Z} \to R$ and exactly one ring homomorphism $\mu : R \otimes R \to R$ making it into a commutative monoid of commutative rings, namely $u(z)=z \cdot 1_R$ and $\mu(a \otimes b)=a \cdot b$.
This explains a little bit why there are so few structures with three or more binary operations.
I don't know that there is anything as nice as the way that the two main operations in a ring fit together. However there are ways to introduce many linked operations which, while mainly done for the purpose of doing so, are perhaps not too overly frivolous. Here is one:
Given a finite set $S$ with $k$ elements, there are $k^{k^2}$ binary operations and each operation $\diamond$ can be represented by its Cayley Table, the $k \times k$ square with $i,j$ entry $i \diamond j$ (assuming a known order). This single operation defines a quasi-group if the table is a Latin Square: each symbol appears once in each row and column. Equivalently (and without an agreed order), for every $a,b$ there are unique $x,y$ with $x \diamond a=b$ and $a \diamond y=b.$ This defines two operations $y=a \backslash b$ and $x=b / a$ with $$b=a \diamond (a \backslash b)=a \backslash (a \diamond b)=(b /a) \diamond a=(b \diamond a) / a.$$ This equational definition of a quasi-group via 3 operations is available for infinite quasi-groups as well. If we introduce the projections $a\pi_1b=a$ and $a\pi_2b=b$ then we can write the previous equations as $$a\pi_2b=(a\pi_1b)\diamond(a\backslash b)=\cdots$$
Two Latin Squares (given by quasigroups $(S,\cdot)$ and $(S,\diamond)$) are orthogonal if the map $(x,y) \to (x\cdot y,x\diamond y)$ is a bijection from $S \times S$ to itself. It is not known how many pairwise orthogonal quasi-groups (aka Mutually Orthogonal Latin Squares MOLS) are possible on a set of size $k.$ Certainly no more than $k-1$ and this is possible when (and perhaps only when) $k$ is a prime power.
So I suppose that a set of $t$ MOLS on the same set could be described as $t$ operations $\diamond_i$ whose individual quasi-group nature is asserted (as before) using another $2t$ operations $a \backslash_i b$ and $a/_i b$ (along with $\pi_1$ and $\pi_2$) and whose pairwise orthogonality is specified using a further $t(t-1)$ operations $\leftarrow_{ij}$ and $\rightarrow_{ij}$ so that $x=a \leftarrow_{ij}b$ and $y=a \rightarrow_{ij} b$ satisfy $(x\diamond_iy,x\diamond_jy)=(a,b).$ Or, if we have no shame, $$((a \leftarrow_{ij}b)\diamond_i(a \rightarrow_{ij} b),(a \leftarrow_{ij}b)\diamond_j(a \rightarrow_{ij} b))=(a\pi_1b,a\pi_2b).$$
On one hand $\leftarrow_{ij}=\rightarrow_{ji}$ so maybe a total count of $t^2+2t+2$ is too greedy. On the other hand, maybe more operations such as $a\diamond^i b=b\diamond_ia$ could be shoehorned in. But I will stop there.
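The two defining conditions above, the Latin-square property of a single Cayley table and orthogonality of a pair, are easy to check mechanically. A small sketch over symbols $0,\dots,k-1$ (the mod-3 tables are a standard orthogonal pair for $k=3$; function names are mine):

```python
from itertools import product

def is_latin_square(T):
    """Cayley table T (list of rows over symbols 0..k-1) defines a
    quasi-group iff each symbol appears exactly once in every row
    and in every column."""
    k = len(T)
    syms = set(range(k))
    rows_ok = all(set(row) == syms for row in T)
    cols_ok = all({T[i][j] for i in range(k)} == syms for j in range(k))
    return rows_ok and cols_ok

def are_orthogonal(T1, T2):
    """Two Latin squares are orthogonal iff (x*y, x#y) hits every
    ordered pair of symbols exactly once."""
    k = len(T1)
    pairs = {(T1[i][j], T2[i][j]) for i, j in product(range(k), repeat=2)}
    return len(pairs) == k * k

# Z_3 addition, and the table of (i + 2j) mod 3: an orthogonal pair
A = [[(i + j) % 3 for j in range(3)] for i in range(3)]
B = [[(i + 2 * j) % 3 for j in range(3)] for i in range(3)]
```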
I've decided to expand my comment since I like my interpretation of your idea of independence.
Let's use the language of universal algebra, where we take a well understood case of an underlying set A and some system F of total functions f of finite arity n (so $f:A^n\longrightarrow A$). For increased graspability, I will assume A is a finite set and F is a finite nonempty tuple, where in the question all the functions in F have $n=2$, but I will allow $n$ the freedom to vary.
One reading of the poster's notion of independence is related to indecomposability of F: given A,F (I am omitting the brackets of the traditional notation), it should not be possible to represent it in a nontrivial fashion as some amalgam of A,G and A,H, where G and H are smaller tuples. (I will pretend that the mechanism of tuple concatenation is not allowed, e.g. F = G concat H is illegal.) In particular, each f in F is not derivable from the other operations in F. Using 1 in A as a symbol for a constant function, which I will also call a function of arity 0, the algebras A,+,%,1 and A,+,%,1,g are different, but if g is essentially the function derived from the term (x%(x+1)), then the second algebra does not meet the notion of independence.
A fuller exposition of this notion can be seen in looking at certain closed collections of functions on A, called clones. At http://en.wikipedia.org/wiki/Post's_lattice one can see containment relations as well as lists of generators for each clone. Most of the generators are binary functions, but some functions of higher arity are needed since there are infinitely many classes of such functions. By itself, each generating tuple F is a tuple for such an independent algebra {0,1},F .
Now for larger sets, there are larger examples, including the n-quasigroup examples suggested by Aaron Meyerowitz. It is not clear to me that such examples are independent in the above sense, however. There are also n-semilattices as well as lattices enriched with n extra binary operations; I will let you search the general algebra literature for those.
There are also primal algebras, which have F as a one-tuple on a finite set A, such that the single (usually binary) function f in F can generate all other functions on A. Also there are quasiprimal algebras which are like primal algebras, except their clones are maximal but incomplete: adding any other function outside their clone to the signature F would generate the clone of all functions.
This may read more like an advertisement for clone theory than an answer, but the post did ask for other points of view.
http://mathhelpforum.com/trigonometry/22866-urgent-help-graphs-sine-cosine-2.html
# Thread:
1. Originally Posted by aikenfan
I'm sorry, I'm a bit confused...On 68, I thought that the Amplitude came from the graph...the line goes up to 2 (but the grid goes up to 3)
you labeled a point on the graph to be $(\pi, 3)$, how is it that the amplitude is 2? (you do not label a grid using coordinates, so if that's what you tried to do, you are double wrong on this problem. you define your grid on your axis, anyone will assume that any coordinate point you write applies to the curve)
2. I think that I made a mistake...I am going to go back and fix that and come back...the problem posted on the first page of this thread shows the curve going up to 2...
3. This is my new answer... I believe this is correct now...the numbers that were given confused me a bit (the line only went to 2, not 3) but I had thought to go to 3....
4. Originally Posted by aikenfan
This is my new answer... I believe this is correct now...the numbers that were given confused me a bit (the line only went to 2, not 3) but I had thought to go to 3....
no, the period for 68 is $4 \pi$, it is k that is 1/2
5. Ok, i've fixed that as well...I believe they should be correct now...other than that small mistake....Do you think it looks correct now
6. Originally Posted by aikenfan
Ok, i've fixed that as well...I believe they should be correct now...other than that small mistake....Do you think it looks correct now
i don't think you explicitly wrote the period for 70, which is $\frac {4 \pi}3$. other than that, everything looks good
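For reference, both periods quoted in this thread follow from $T = 2\pi/k$: $k=\tfrac12$ gives $T=4\pi$, and $T=\tfrac{4\pi}{3}$ corresponds to $k=\tfrac32$. A one-line check (helper name is mine):

```python
import math

def period(k):
    # period of y = sin(k*x) or y = cos(k*x) is 2*pi / k
    return 2 * math.pi / k
```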
7. I want to thank you so much for all of your help and for being patient! I appreciate it very much!
8. Originally Posted by aikenfan
I want to thank you so much for all of your help and for being patient! I appreciate it very much!
you're very much welcome. i like your style. it is evident that you are trying to get through this stuff and not just relying on others for answers
http://nrich.maths.org/183/note
### Homes
There are to be 6 homes built on a new development site. They could be semi-detached, detached or terraced houses. How many different combinations of these can you find?
### Let's Investigate Triangles
Vincent and Tara are making triangles with the class construction set. They have a pile of strips of different lengths. How many different triangles can they make?
### Teddy Town
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
# A City of Towers
## A City of Towers
In a certain city houses had to be built in a particular way.
There had to be two rooms on the ground floor and all other rooms had to be built on top of these.
Families were allowed to build just one room for each person living in the house.
So a house for two people would look like this:
but a house for three people could look like one of these:
There are some families of seven people living in the town.
In how many different ways can they build their houses?
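One way to model this (my reading of the rules): a house is two side-by-side towers of unit rooms, one over each ground-floor room, so counting houses for $n$ people amounts to listing ordered pairs of positive tower heights summing to $n$, with mirror-image houses counted as different.

```python
def houses(n):
    """Ordered pairs (left_height, right_height) of positive integers
    summing to n: one possible model of an n-person house with two
    ground-floor rooms."""
    return [(left, n - left) for left in range(1, n)]

seven = houses(7)  # [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]
```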
### Why do this problem?
This problem is an investigation into combinations of a number of cubes. It is a practical activity which involves visualising and relating $3$D shapes to their representation on paper. Young children are often introduced to sets of regular polyhedra and similar sorts of shapes; less often do they systematically explore shapes made up from cubes.
### Possible approach
You could start with this story as an introduction to the problem. Alternatively you could simply talk through the problem as it is written. Ideally, it would be good to supply interlocking cubes or other cube bricks and $2$ cm squared paper or plain paper for recording. It might help to begin the challenge all together before asking children to work in pairs on the problem so that they are able to talk through their ideas and compare their results with a partner.
Some children may need help recording their models but do encourage them to record in whatever way they feel is useful. If necessary, you could demonstrate on the interactive whiteboard. If $2$ cm cubes have been used then they can lay their shape on the paper and see how it fits into the squares. Alternatively, children might just sketch their models on plain paper or, if you have enough cubes, they can keep each model.
In the plenary, as well as comparing results, it would be good to spend time talking about how the children approached the problem. Some might have started straight away with seven cubes, others might have tried four cubes, then five, etc. Some children might have made the models, some might have been able to picture the houses and draw them without using cubes. It can be useful to discuss the advantages and disadvantages of each different method. Depending on the children's experience, you can also draw attention to those that have used a systematic way of finding all the houses. If most of the children have not developed a system, you could line up models in a particular order for all to see so that they notice the system themselves. This way, they may be able to spot any that are missing.
### Key questions
How many cubes are there in this one? Would it be a good idea to count them?
Are all your houses different from each other?
Could you put this cube in a different place?
How will you draw your houses?
### Possible extension
Some children could investigate other numbers of cubes or create their own rules for building houses.
### Possible support
You may like to suggest that some children start by finding all the houses for four people, then five etc.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://mathoverflow.net/revisions/41497/list
## Return to Question
4 deleted 1092 characters in body
For the problem I am interested in, I have additional information about the matrix $A$. For example, I know it is a generalized Vandermonde matrix.
Then the results of Schlickewei, Hans Peter; Viola, Carlo, "Generalized Vandermonde determinants", Acta Arith. 95 (2000), no. 2, 123–137, seem to imply that if the ratios of the entries are not roots of unity, then the choice of the exponent set is rather limited: it is always finite and, except for a few exceptional cases, actually very small.
Here is a brief summary,
My requirement is for $n=4$: the entries of the matrix have modulus 1, the matrix is a generalised Vandermonde matrix in the sense of Schlickewei and Viola, and in light of the $n=3$ case I do know that no proper subdeterminant vanishes. I also know the entries are algebraic integers, and we can assume the ratios are not roots of unity.
My question is:
Do we know that the solution set is small, i.e., as in the non-exceptional case mentioned in the above paper?
3 added 46 characters in body
Let $A$ be an $n \times n$ matrix all of whose entries have modulus 1.
Suppose the matrix $A$ is singular.
We will assume without loss of generality that all the entries in the first row and the first column of the matrix are 1.
Observe that when $n=2$ the matrix $A$ is then singular if and only if $a_{2,2}=1$ as well.
A slightly less trivial observation is that the same thing happens when $n=3$: the matrix $A$ is singular if and only if two of the rows or columns are identical.
\begin{equation} \left|\begin{array}{ccc} 1 & 1 & 1 \\ 1 & \alpha_{2,2} & \alpha_{2,3} \\ 1 & \alpha_{3,2} & \alpha_{3,3} \end{array}\right| = 0 \end{equation}
So the matrix $A$ is singular iff $(\alpha_{2,2}-1)(\alpha_{3,3}-1)=(\alpha_{2,3}-1)(\alpha_{3,2}-1)$.
Let us assume without loss of generality that $\alpha_{2,2} \neq 1$ and $\alpha_{3,2} \neq 1$.
Consider the circles $C_1(t)= (\alpha_{2,2}-1) (e^{2 \pi i t}-1)$ and $C_2(t)=(\alpha_{2,3}-1) (e^{2 \pi i t}-1)$, $t\in [0,1]$.
The two circles are either identical, in which case $\alpha_{i,2}=\alpha_{i,3}$, that is, the second and third columns are identical, or else, since two distinct circles can intersect in at most two points, we similarly get that two of the rows or columns are identical.
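The determinant identity for the $n=3$ case is easy to confirm numerically for random unimodular entries (function names are mine):

```python
import cmath
import random

def det3(a22, a23, a32, a33):
    # determinant of [[1, 1, 1], [1, a22, a23], [1, a32, a33]],
    # expanded along the first row
    return (a22 * a33 - a23 * a32) - (a33 - a23) + (a32 - a22)

def identity_gap(a22, a23, a32, a33):
    # the determinant vanishes iff (a22 - 1)(a33 - 1) = (a23 - 1)(a32 - 1);
    # this returns det minus (lhs - rhs), which should always be 0
    lhs = (a22 - 1) * (a33 - 1)
    rhs = (a23 - 1) * (a32 - 1)
    return det3(a22, a23, a32, a33) - (lhs - rhs)

def unimodular():
    # random complex number on the unit circle
    return cmath.exp(2j * cmath.pi * random.random())
```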
Now, probably it is too much to expect the same result for all $n$.
But my requirement is only for $n=4$: is it true that a similar result holds for $n=4$?
Edit: I forgot to mention that I am interested in the case when the matrix is singular and none of its submatrices are singular. (Thanks @Gerry Myerson for pointing it out.)
In fact,
For the problem I am interested in, I have additional information about the matrix $A$. For example, I know it is a generalized Vandermonde matrix.
Then the results of Schlickewei, Hans Peter; Viola, Carlo, "Generalized Vandermonde determinants", Acta Arith. 95 (2000), no. 2, 123–137, seem to imply that if the ratios of the entries are not roots of unity, then the choice of the exponent set is rather limited: it is always finite and, except for a few exceptional cases, actually very small.
Here is a brief summary,
My requirement is for $n=4$: the entries of the matrix have modulus 1, the matrix is a generalised Vandermonde matrix in the sense of Schlickewei and Viola, and in light of the $n=3$ case I do know that no proper subdeterminant vanishes. I also know the entries are algebraic integers, and we can assume the ratios are not roots of unity.
My question is:
Do we know that the solution set is small, i.e., as in the non-exceptional case mentioned in the above paper?
Thank you,
http://physics.stackexchange.com/questions/tagged/wavefunction?page=1&sort=votes&pagesize=50
# Tagged Questions
A complex scalar field that describes a quantum mechanical system. The square of the modulus of the wave function gives the probability of the system to be found in a particular state.
9answers
3k views
### About the complex nature of the wave function?
1. Why is the wave function complex? I've collected some layman explanations but they are incomplete and unsatisfactory. However in the book by Merzbacher in the initial few pages he provides an ...
2answers
252 views
### Wavefunction collapse and gravity
If gravity can be thought of as both a wave (the gravitational wave, as predicted to exist by Albert Einstein and certain calculations) and a particle (the graviton), would it make sense to apply ...
3answers
253 views
### If superposition is possible in QM, why do we often assume systems are already in their eigenstates?
My understanding is that an arbitrary quantum-mechanical wavefunction can be written as a linear combination of eigenfunctions of some Hermitian operator, most commonly the Hamiltonian; when a ...
2answers
249 views
### Was uncertainty principle inferred by Fourier analysis?
I would like to know: did Heisenberg chance upon his Uncertainty Principle by performing Fourier analysis of wavepackets, after assuming that electrons can be treated as wavepackets?
2answers
222 views
### Superconducting Wavefunction Phase (Feynman Lectures)
In Volume 3, Section 21-5 of the Feynman lectures (superconductivity), Feynman makes a step that I can't quite follow. To start, he writes the wavefunction of the ground state in the following form ...
7answers
450 views
### Is it wrong to talk about wave functions of macroscopic bodies?
Does a real macroscopic body, like table, human or a cup permits description as a wave function? When is it possible and when not? For example in the "Statistical Physics, Part I" by Landau & ...
1answer
405 views
### Must the derivative of the wave function at infinity be zero?
I came across a problem in Griffiths where the derivative of the wave function (with respect to position in one dimension) evaluated at $\pm\infty$ is zero. Why is this? Is it true for any function ...
3answers
360 views
### Meaning of $\int \phi^\dagger \hat A \psi \:\mathrm dx$
While analysing a problem in quantum mechanics, I realized that I don't fully understand the physical meanings of certain integrals. I have been interpreting: $\int \phi^\dagger \hat A \psi$ ...
3answers
520 views
### Electrons - What is Waving?
If an electron is a wave, what is waving? So many answers on the internet say "the probability that a particle will be at a particular location"... so... the electron is a physical manifestation of ...
3answers
143 views
### Time Varying Potential, series solution
Suppose we have a time varying potential $$\left( -\frac{1}{2m}\nabla^2+ V(\vec{r},t)\right)\psi = i\partial_t \psi$$ then I want to know why is the general solution written as $\psi = \dots$
2answers
304 views
### Does quantum mechanics allow faster than light (FTL) travel?
Let's suppose I initially have a particle with a nice and narrow wave function[1] (I will leave these unnormed): $$e^{-\frac{x^2}{a}}$$ where $a$ is some small number (to make it narrow). Let's also ...
1answer
195 views
### Relativistic contraction for a wave packet and uncertainty on momentum
Consider an electron described by a wave packet of extension $\Delta x$ for experimentalist A in the lab. Now assume experimentalist B is flying at a very high speed with regard to A and observes the ...
3answers
997 views
### What is the relation between position and momentum wavefunctions in quantum physics?
I have read in a couple of places that $\psi(p)$ and $\psi(q)$ are Fourier transforms of one another (e.g. Penrose). But isn't a Fourier transform simply a decomposition of a function into a sum or ...
2answers
320 views
### Amplitude of Probability amplitude. Which one is it?
QM begins with a Born's rule which states that probability $P$ is equal to a modulus square of probability amplitude $\psi$: $$P = \left|\psi\right|^2.$$ If I write down a wave function like this ...
2answers
306 views
### Exactly how is the constant measured velocity of light deduced from Maxwell's equation?
For electromagnetic radiation the velocity of propagation is $c = 1/\sqrt{\mu_0 \epsilon_0}$. Since both $\mu_0$ and $\epsilon_0$ do not vary in any inertial frame, then $c$ must be constant in any ...
1answer
34 views
### Tip of a spreading wave-packet: asymptotics beyond all orders of a saddle point expansion
This is a technical question coming from mapping of an unrelated problem onto dynamics of a non-relativistic massive particle in 1+1 dimensions. This issue is with asymptotics dominated by a term ...
1answer
175 views
### Boundary conditions from single-valuedness of spherical wavefunctions
This question is a follow-up to David Bar Moshe's answer to my earlier question on the Aharanov-Bohm effect and flux-quantization. What I forgot was that it is not the wavefunction that must be ...
1answer
205 views
### Young's double slit
Am I right to think the (general) probability distribution of photon in a double slit experiment at the screen has the form $|\psi|^2 = c e^{\alpha x^2}\cos^2(\beta x)$? (Due to the superposition of ...
3answers
223 views
### Can a wavefunction be solved to any arbitrary precision, given enough computer time?
I learned that the wavefunction for the hydrogen atom can be solved analytically (we did the derivation in class), but that for more complicated atoms it is "impossible" to solve and that only ...
2answers
363 views
### Is the free electron wavefunction stable?
The wavefunction of a free electrons is variously described as a plane wave or a wave packet. I am fairly happy with the wave packet, as it is localised. But if we change to the electron's rest ...
4answers
250 views
### Does the wave nature of a particle refer to the wave function?
In quantum mechanics when we talk about the wave nature of particles are we referring in fact to the wave function? Does the wave function describes the probability of finding a particle (ex: ...
3answers
258 views
### How to compute the expectation value $\langle x^2 \rangle$ in quantum mechanics?
$$\langle x^2 \rangle = \int_{-\infty}^\infty x^2 |\psi(x)|^2 \text d x$$ What is the meaning of $|\psi(x)|^2$? Does that just mean one has to multiply the wave function with itself?
1answer
252 views
### Help me understand the first equation in Landau & Lifshitz's Quantum Mechanics
While I've covered a basic course in Quantum Mechanics, I'm self-studying Landau & Lifshitz's book to help me understand what's going on. Unfortunately, I'm stuck on the very first equation in ...
1answer
141 views
### Expected value inequality
Why is $\langle p^2\rangle >0$ where $p=-i\hbar{d\over dx}$, (noting the strict inequality) for all normalized wavefunctions? I would have argued that because we can't have $\psi=$constant, but ...
3answers
338 views
### Smoothness constraint of wave function
Is there anything in the physics that enforces the wave function to be $C^2$? Are weak solutions to the Schroedinger equation physical? I am reading the beginning chapters of Griffiths and he doesn't ...
3answers
587 views
### Historical background of wave function collapse
I wonder what were the main experiments that led people to develop the concept of wave function collapse? (I think I am correct in including the Born Rule within the general umbrella of the collapse ...
1answer
33 views
### Connection between a simple matter wave and Heisenberg's uncertainty relation
When looking at the wave function of a particle, I usually prefer to write $$\Psi(x,t) = A \exp(i(kx - \omega t))$$ since it reminds me of classical waves for which I have an intuition ($k$ ...
1answer
122 views
### What does the appearance of a classical particle fundamentally reduce to?
I've been reading an article that describes what seems to be a classical particle as a regularity in the global wavefunction over a quantum configuration space: When you actually see an electron ...
1answer
663 views
### Confusion between the de Broglie wavelength of a particle and wave packets
So I learned that the de Broglie wavelength of a particle, lambda = h/p, where h is Planck's constant and p is the momentum of the particle. I also learned that a quantum mechanics description of a ...
1answer
308 views
### Expectation values-Wavefunction
I'm a bit puzzled about an excercise in which I have to find the expectation values for position and momentum. Normally this should be pretty easy but in this case I just don't get the point. ...
2answers
143 views
### Interpretation of $e|\psi|^2$ as electron density
In solid state physics the electron density is often equated to $e|\psi|^2$. However, the Sakurai says (Chapter 2.4, Interpretation of the Wave Function, p. 101) that adopting such a view leads "to ...
1answer
321 views
### Even and Odd States of a 1D finite potential well
Is it possible for a particle trapped in a 1D finite potential well to evolve from a even state to an odd state and vice-versa? Why?
1answer
258 views
### wavefunction collapse and uncertainty principle
We all know that wavefunction collapse when it is observed. Uncertainty principle states that $\sigma_x \sigma_p \geq \frac {\hbar}{2}$. When wavefunction collapse, doesn't $\sigma_x$ become $0$?, as ...
1answer
343 views
### Where does the wave function of the universe live? Please describe its home
Where does the wave function of the universe live? Please describe its home. I think this is the Hilbert space of the universe. (Greater or lesser, depending on which church you belong to.) Or maybe ...
2answers
318 views
### Is the electron wave function defined during photon emission
I have heard the term quantum leap to describe the (instantaneous?) transition from a higher energy orbital to a lower energy orbital. Yet, I understand that this transition time has now been ...
1answer
191 views
### Projection of states after measurement
Continuing from the my previous 2-state system problem, I am told that the observable corresponding to the linear operator $\hat{L}$ is measured and we get the +1 state. Then it asks for the ...
1answer
329 views
### Finding $\psi(x,t)$ for a free particle starting from a Gaussian wave profile $\psi(x)$
Consider a free-particle with a Gaussian wavefunction, $$\psi(x)~=~\left(\frac{a}{\pi}\right)^{1/4}e^{-\frac12a x^2},$$ find $\psi(x,t)$. The wavefunction is already normalized, so the next thing to ...
2answers
451 views
### How do I figure out the probability of finding a particle between two barriers?
Given a delta function $\alpha\delta(x+a)$ and an infinite energy potential barrier at $[0,\infty)$, calculate the scattered state, calculate the probability of reflection as a function of ...
3answers
220 views
### Is this interpretation of $\psi=\frac{1}{\sqrt{\pi a^{3}}}e^{-r/a}$ correct?
Apologies if this is stating the obvious, but I'm a non-physicist trying to understand Griffiths' discussion of the hydrogen atom in chapter 4 of Introduction to Quantum Mechanics. The wave equation ...
3answers
206 views
### How do I integrate $\frac{1}{\Psi}\frac{\partial \Psi}{\partial x} = Cx$
How do I integrate the following? $$\frac{1}{\Psi}\frac{\partial \Psi}{\partial x} = Cx$$ where $C$ is a constant. I'm supposed to get a Gaussian function out of the above by integrating but don't ...
5answers
217 views
### wave superposition of electrons and quarks
Is quantum wave superposition of electrons and quarks possible? If not, can different types of elementary particles be mixed in wave superposition?
3answers
272 views
### What is the rationale behind representing a state function by a complex valued function in QM?
What is the rationale behind representing a state function of an electron with a complex valued function $\Psi$. If only the probabilistic argument was required then why not represent it with just a ...
2answers
789 views
### Speed of a particle in quantum mechanics: phase velocity vs. group velocity
Given that one usually defines two different velocities for a wave, these being the phase velocity and the group velocity, I was asking their meaning for the associated particle in quantum mechanics. ...
1answer
752 views
### Probability current
Conservation of probability: Suppose a wavefunction has ${\partial \mathbb P \over \partial t} = -t f(x,t)$ and ${\partial j \over \partial x} = i f(x,t)$. How does it follow that ${\partial \mathbb P \over \partial t}\dots$
2answers
114 views
### Vector representation of wavefunction in quantum mechanics?
I am new to quantum mechanics, and I just studied some parts of "wave mechanics" version of quantum mechanics. But I heard that wavefunction can be represented as vector in Hilbert space. In my eye, ...
3answers
292 views
### Confused over complex representation of the wave
My quantum mechanics textbook says that the following is a representation of a wave traveling in the +$x$ direction:$$\Psi(x,t)=Ae^{i\left(kx-\omega t\right)}\tag1$$ I'm having trouble visualizing ...
1answer
89 views
### Confused over the presence of 2 expressions for $\Psi(x,t)$
I'm following Griffiths' Introduction to Quantum Mechanics, and I see that he's got 2 different expressions for $\Psi(x,t)$. One of them is ...
1answer
133 views
### What does the notation $|x_1,x_2\rangle$ mean?
I would like clarification on an equation in the paper "Free matter wave packet teleportation via cold-molecule dynamics", L. Fisch and G. Kurizki, Europhysics Letters 75 (2006), pp. 847-853, DOI: ...
1answer
134 views
### Why don't cancelling wavefunctions for two different particles give zero total wavefunction?
Let $\left|a\right>=e^{i(kx-\omega t)}$, $\left|b\right>=-e^{i(kx-\omega t)}$ be two neutral particles in the 1D free space without any interaction. Then ...
1answer
56 views
### In the expansion of the scattered wave function, why do these two functions have the same index?
See Griffiths Quantum Mechanics, eq. 11.21. Evidently, $$\psi(r,\theta,\phi)=Ae^{ikz}+A\sum\limits_{l,m}^{\infty}C_{l,m}h_{l}(kr)Y_{l}^{m}(\theta,\phi).$$ But I don't see why the $l$th Hankel function ...
http://math.stackexchange.com/questions/221613/how-do-we-find-the-length-of-the-following-curve
# How do we find the length of the following curve?
$r = \sin^2(\theta/3)$
I can't integrate the following expression:
$$\int \sqrt{\left(\sin^2 \frac x 3\right)^2 + \left(\frac 2 3 \sin \frac x 3 \cos \frac x 3\right)^2}dx$$
I got this in my mid-term and I was wondering what the hell is wrong with the teacher.
-
3
-1 Maybe the problem isn't with the teacher. Who else could be the problem? I mean, teachers do give bad problems sometimes and that is sort of crappy for the students, but for a student to assume they know so much as to know without a doubt that the problem is with the teacher is arrogance. And, if you disrespect your teacher to us, why would we have any incentive to teach you? We know how you treat your teachers. – Graphth Oct 26 '12 at 16:07
He said the integrals would be easy to solve, so we don't spend too much time trying to solve it. – Gladstone Asder Oct 26 '12 at 16:11
1
Well, I figured it out and I didn't think it was all that tough. Perhaps it is a bit harder than what the teacher said, but that's hard for me to know. Maybe if you changed your tone a bit, you might get some help. – Graphth Oct 26 '12 at 16:17
## 1 Answer
The integral is not easy, but doable. I will not worry about constants. Make the change of variable $\phi=\theta/3$. Or not. Bring a $\sin\phi$ outside the square root. (Technically it should be $|\sin\phi|$, but if $\phi$ does not stray beyond $[0,\pi]$ we don't need to worry.) We are integrating $$\sin\phi\sqrt{\sin^2\phi+(4/9)\cos^2\phi}.$$ Inside the square root, replace $\sin^2\phi$ by $1-\cos^2\phi$. So we need to find the integral of $$\sin\phi\sqrt{9-5\cos^2\phi}.$$ Make the substitution $u=\cos\phi$. We end up needing to integrate $$\sqrt{9-5u^2}.$$ Now it is somewhat downhill. Let $u=\dfrac{3}{\sqrt{5}}v$.
With particular limits, things might be easier. For with appropriate limits the definite integral of $\sqrt{9-5u^2}$ will be the perhaps easily found area of a nice part of a circle.
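The substitution chain can be checked numerically; a sketch (names and quadrature scheme are mine, and constants are kept here, so the values below are for $\int_0^\pi \sin\phi\sqrt{9-5\cos^2\phi}\,d\phi$ rather than the full arc length):

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Before the substitution u = cos(phi) ...
before = midpoint(lambda p: math.sin(p) * math.sqrt(9 - 5 * math.cos(p) ** 2),
                  0, math.pi)
# ... and after it.
after = midpoint(lambda u: math.sqrt(9 - 5 * u * u), -1, 1)

# Antiderivative of sqrt(9 - 5u^2), obtained via u = (3/sqrt(5)) v
# (equivalently, the circular-segment area the answer alludes to).
def F(u):
    return (u * math.sqrt(9 - 5 * u * u) / 2
            + 9 / (2 * math.sqrt(5)) * math.asin(math.sqrt(5) * u / 3))

exact = F(1) - F(-1)
assert abs(before - after) < 1e-6   # substitution preserves the integral
assert abs(after - exact) < 1e-6    # closed form matches the quadrature
```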
-
http://math.stackexchange.com/questions/218661/how-strong-is-an-egg?answertab=oldest
# How Strong is an Egg?
You have two identical eggs. Standing in front of a 100 floor building, you wonder what is the maximum number of floors from which the egg can be dropped without breaking it. What is the minimum number of tries needed to find out the solution?
-
1
@Dennis: You can have more than two tries: an egg doesn’t necessarily break when you drop it. – Brian M. Scott Oct 22 '12 at 12:28
3
@Dennis: Of course, we probably are to assume that a successful drop doesn’t weaken the egg, which seems rather unlikely! – Brian M. Scott Oct 22 '12 at 12:30
1
@GiovanniDeGaetano: You've proven that the minimum number of tries is $\leq 20$. In fact, if you go through your argument carefully, I think you actually get an upper bound of $19$. [Worst case: try floors $10, 20, \dots, 90, 91, 92, \dots, 99, 100$. I think you have to try floor $100$ in case it lands unbroken from floor $99$ since you aren't told that you know it will definitely break if dropped from floor $100$.] But you might be able to get a bit better by testing $\sqrt{M}$ floors above your previous one, where $M$ is the number of remaining steps. – Michael Joyce Oct 22 '12 at 13:06
1
1. The egg breaks, and you have one egg to test for the remaining $k-1$ floors. By the first paragraph, this implies you will need $k$ moves in total. 2. The egg doesn't break; in this case, the first $k$ floors don't matter, and you are playing with 2 eggs and $100-k$ floors, and thus you need $1+f(100-k)$ moves in this case. So this should let you code a solution to the problem. – only Oct 22 '12 at 13:16
3
The answer is $0$: If you drop an egg onto the pavement below from a first-floor (or higher) window, then it will break. – John Bentin Oct 22 '12 at 13:29
## 3 Answers
Now that you have got the answer by yourself, let me post what I was going to, as well. (A slow formal proof that $14$ is the minimum, and the answer for general $N$ in place of $100$.)
Just to be clear, the assumption in the problem is that all answers are possible, from $0$ to $100$ inclusive: it is possible the egg breaks when dropped from the very first floor itself, it is possible the egg can survive a fall from even the $100$th floor, and everything in between. (Also, the result of a fall is either that the egg breaks, and can no longer be used, or that it doesn't break and the fall has no effect on it whatsoever.)
Let us first solve the problem where you have only one egg. Then it should be clear that the only solution is to drop it from the first floor, then if it doesn't break drop it from the second floor, and so on up to $100$ floors. [Proof: Suppose you first drop the egg from a height $h_1$, then if it doesn't break you drop it from a height $h_2$, and so on. Then,
• $h_1$ must be $1$. Because if $h_1 > 1$, and it breaks when you drop it from height $h_1$, then you have the egg no more, and have no way of distinguishing between the potential answers $0, \dots, h_1-1$.
• $h_2$ must be $h_1 + 1$, and in general $h_{n+1} = h_n + 1$, for a similar reason: if the egg breaks when you drop it from $h_{n+1} > h_{n} + 1$, then you have no way of distinguishing between the potential answers $h_{n}, \dots, h_{n+1}-1$. So $h_1, h_2, h_3, \dots$ are $1, 2, 3, \dots$ respectively. $\Box$]
So now with two eggs, suppose you drop it from a height $h_1$, then if it doesn't break you drop it from a height $h_2$, and so on. (Obviously to be optimal this will be an increasing sequence, since if the egg doesn't break when dropped from some height, it is never necessary to drop it from a smaller height as you already know the result.) For convenience, denote $h_0 = 0$. If on some drop $h_{n+1}$ the egg breaks ($n \ge 0$), then you are left with the task of distinguishing between the possible answers $h_n, \dots, h_{n+1}-1$ with just one egg, which by the reasoning of the first part above, takes (up to) $h_{n+1} - h_{n} - 1$ drops (basically you drop the egg from $h_{n} + 1$, then from $h_{n} + 2$, and so on up to $h_{n+1} -1$). So the total number of drops needed in this scenario is $n+1$ (for the drops $h_1$ to $h_{n+1}$) followed by $h_{n+1} - h_{n} - 1$ drops, for a total of $n + 1 + h_{n+1} - h_{n} - 1$, namely $n + h_{n+1} - h_{n}$ drops (where $n\ge 0$). In the worst case, we will incur the maximum cost of this over all $n$, and that is what we want to minimize: we want to minimize $$\begin{align}\max( &0 + h_1,\\ &1 + h_2 - h_1, \\ &2 + h_3 - h_2, \\ &\dots,\\ &n + h_{n+1} - h_{n})\end{align}$$ where $n$ now is such that $h_{n+1} = 100$ (if your egg never breaks, you'll keep using the first egg until you drop it from the $100$th floor, so we can wlog assume that the $h_i$s form a finite sequence ending with $100$). Note that the sum of all these quantities telescopes, and it is $\frac{n(n+1)}{2} + 100$. So since the sum of $n+1$ numbers is that much, the largest of them is at least $\frac{1}{n+1}$th of it, namely $$\frac{1}{n+1}\left(\frac{n(n+1)}{2} + 100\right) = \frac{n}{2} + \frac{100}{n+1} \ge 10\sqrt{2} - \frac12 \approx 13.6,$$ so we need at least $14$ drops.
And the method for actually achieving $14$ drops suggests itself: to make the maximum equal when the sum is fixed, the general way is to make all of them as equal as possible, which suggests we can take $h_1 = 14$, then $h_2 - h_1 = 13$ (so $h_2 = 14 + 13 = 27$), then $h_3 = 14 + 13 + 12$, and so on up to $h_{11} = 14 + 13 + \dots + 4 = 99$, and then $h_{12} = 100$.
(There are many other solutions, e.g. $h_1 = 7$, then $h_2 = 7 + 13$, $h_3 = 7 + 13 + 12$ and so on up to $h_{14} = 7 + (13 + 12 + \dots + 2 + 1) = 100$, but the salient fact is that $13 + 12 + \dots + 2 + 1 = 91 < 100$, while $14 + 13 + \dots + 2 + 1 = 105 > 100$. If instead of $100$ we were given $105$, the answer would still be $14$, and the method would be unique.)
Of course, we did not here use any special property of $100$, and repeating the above argument for general $N$ shows that the number of steps needed is: $$\left\lceil \sqrt{2N} - \frac{1}{2} \right\rceil$$
A simpler alternative way of saying this is: $$\text{the smallest number $n$ such that $\frac{n(n+1)}{2} \ge N$.}$$
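The bound can also be double-checked by brute force. A sketch (names mine) of the dynamic program suggested in the comments, $f(\text{eggs},k)=1+\min_x\max\bigl(f(\text{eggs}-1,x-1),\,f(\text{eggs},k-x)\bigr)$ with $f(1,k)=k$, compared against the closed form $\left\lceil\sqrt{2N}-\tfrac12\right\rceil$:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def min_drops(eggs, floors):
    """Worst-case optimal number of drops with `eggs` eggs and
    `floors` candidate floors (thresholds 0..floors all possible)."""
    if floors == 0:
        return 0
    if eggs == 1:
        return floors                                 # forced linear scan
    return 1 + min(max(min_drops(eggs - 1, x - 1),    # egg breaks at floor x
                       min_drops(eggs, floors - x))   # egg survives floor x
                   for x in range(1, floors + 1))

assert min_drops(2, 100) == 14
# The closed form holds across a range of building heights:
for n in range(1, 200):
    assert min_drops(2, n) == math.ceil(math.sqrt(2 * n) - 0.5)
```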
-
A simpler argument than the above would have been to calculate what's the largest $N$ for which we can solve the problem in at most $n$ drops, and we'd have found $N$ to be $1 + 2 + \dots + n = \frac{n(n+1)}{2}$. – ShreevatsaR Oct 23 '12 at 13:08
Note: On the face of it the two expressions look different (the second one algebraically gives $n = \left\lceil \sqrt{2N + \frac14} - \frac{1}{2} \right\rceil$), but because for integer $n$ we have $\frac{n(n+1)}{2}$ also an integer, we can see that $$\frac{n(n+1)}{2} \ge N \iff \frac{n(n+1)}{2} \ge N - \frac18 \iff n \ge \sqrt{2N} - \frac12,$$ so they are the same. – ShreevatsaR Oct 23 '12 at 13:23
what if the egg hatches in between, did you take that possibility into account?? :):) – dineshdileep Oct 23 '12 at 13:25
(btw, very nice proof!) – dineshdileep Oct 23 '12 at 13:25
I found the solution. Here it is:
The easiest way to do this would be to start from the first floor and drop the egg. If it doesn't break, move on to the next floor. If it does break, then we know the maximum floor the egg will survive is `0`. If we continue this process, we will easily find out the maximum number of floors the egg will survive with just one egg. So the maximum number of tries is `100`, which occurs when the egg survives even at the 100th floor.
Can we do better? Of course we can. Let's start at the second floor. If the egg breaks, then we can use the second egg to go back to the first floor and try again. If it does not break, then we can go ahead and try on the 4th floor (in multiples of `2`). If it ever breaks, say at floor `x`, then we know it survived floor `x-2`. That leaves us with just floor `x-1` to try with the second egg. So what is the maximum number of tries possible? It occurs when the egg survives 98 or 99 floors. It will take 50 tries to reach floor 100 and one more try on the 99th floor, so the total is 51 tries. Wow, that is almost half of what we had last time.
Can we do even better? Yes we can. What if we try at intervals of 3? Applying the same logic as the previous case, we need a max of 35 tries to find out the information (33 tries to reach 99th floor and 2 more on 97th and 98th floor).
````Interval – Maximum tries
1 – 100
2 – 51
3 – 35
4 – 29
5 – 25
6 – 21
7 – 20
8 – 19
9 – 19
10 – 19
11 – 19
12 – 19
13 – 19
14 – 20
15 – 20
16 – 21
````
So picking any one of the intervals with 19 maximum tries would be fine.
Instead of taking equal intervals, we can increase the number of floors by one less than the previous increment. For example, let’s first try at floor 14. If it breaks, then we need 13 more tries to find the solution. If it doesn’t break, then we should try floor 27 (14 + 13). If it breaks, we need 12 more tries to find the solution. So the initial 2 tries plus the additional 12 tries would still be 14 tries in total. If it doesn’t break, we can try 39 (27 + 12) and so on. Using 14 as the initial floor, we can reach up to floor 105 (14 + 13 + 12 + … + 1) before we need more than 14 tries. Since we only need to cover 100 floors, 14 tries is sufficient to find the solution.
Therefore, 14 is the least number of tries to find out the solution.
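The decreasing-interval strategy described above can be checked exhaustively. Here is a small Python sketch (my own naming) that simulates the 14, 27, 39, … first-egg drops for every possible break floor and confirms that no case needs more than 14 tries:

```python
def worst_case_drops(break_floor):
    # break_floor: lowest floor from which an egg breaks (101 = survives all 100)
    floors, floor, step = [], 0, 14
    while floor < 100:
        floor = min(floor + step, 100)
        step -= 1
        floors.append(floor)          # first-egg drop points: 14, 27, 39, ...
    drops, prev = 0, 0
    for f in floors:
        drops += 1                    # drop the first egg at floor f
        if f >= break_floor:          # it breaks: scan prev+1 .. f-1 with egg two
            return drops + min(break_floor - prev, f - 1 - prev)
        prev = f
    return drops                      # survived every drop up to floor 100

# every possible break floor is resolved in at most 14 drops
assert max(worst_case_drops(b) for b in range(1, 102)) == 14
```

The worst cases are exactly the ones where the first egg breaks on its first or second drop, which is what makes the shrinking intervals balance out.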
-
I understand your argument that it can be done in 14 tries, but why exactly is that the least number? – Jason DeVito Oct 23 '12 at 12:07
@JasonDeVito Using 14 as the initial floor, we can reach up to floor 105 (14 + 13 + 12 + … + 1) before we need more than 14 tries. – Jaguar Oct 24 '12 at 3:38
I understand that 14 works. What I was wondering is why, say, 13 wouldn't work. Obviously 13 wouldn't work with your chosen strategy, but who's to say there isn't some crazy approach where $13$ is sufficient? – Jason DeVito Oct 24 '12 at 12:23
The critical strip is the interval $C:=[k,k+1]$ such that, when an egg is dropped from a height $\leq k$ it survives, and when it is dropped from a height $\geq k+1$ it breaks.
The numbers $$h_r(n)\qquad(r\geq0,\ n\geq0)$$ ("allowed length for $r$ eggs and $n$ trials") are defined as follows: If we know that $C$ lies in a certain interval of length $\ell\leq h_r(n)$ we can locate $C$ with $r$ eggs in $n$ trials, but if we know only that $C$ lies in a certain interval of length $\ell>h_r(n)$ then there is no deterministic algorithm that allows us to locate $C$ with $r$ eggs in $n$ trials for sure. Obviously $$h_0(n)=1\quad(n\geq0)\ ,\qquad h_r(0)=1\quad(r\geq0)\ .$$ The numbers $h_r(n)$ satisfy the recursion $$h_r(n)=h_{r-1}(n-1)+h_r(n-1)\qquad(r\geq1,\ n\geq1)\ .\qquad(*)$$ Proof. Assume $C$ lies in a certain interval $I$ of length $\ell:=h_{r-1}(n-1)+h_r(n-1)$. We may as well assume that the lower end of $I$ is at level zero. Drop the first egg at height $h_{r-1}(n-1)$. If it breaks then $C$ is contained in the interval $[0,h_{r-1}(n-1)]$ and can be located with the remaining $r-1$ eggs in $n-1$ trials. If it survives then $C$ is contained in the interval $[h_{r-1}(n-1),\ell]$ of length $h_r(n-1)$ and can be located with the $r$ eggs in $n-1$ trials. This proves $h_r(n)\geq\ell$.
Conversely, assume that we know only that $C$ lies in a certain interval of length $\ell'>\ell$ and that there is an algorithm that locates $C$ with $r$ eggs in $n$ trials. This algorithm would tell us the height $k$ at which we should drop the first egg. If $k>h_{r-1}(n-1)$ and the egg breaks, or if $k\leq h_{r-1}(n-1)$ and the egg survives, it would be impossible to finish the task, as the remaining interval that contains $C$ is larger than allowed for the remaining resources. It follows that $h_r(n)\leq\ell$.
From $(*)$ we obtain $$h_1(n)=n+1\ ,\quad h_2(n)={1\over2}(n^2+n+2),\quad h_3(n)={1\over6}(n^3+5n+6)\qquad(n\geq0)\ .$$ As $h_2(13)<100<h_2(14)$ we need at least $14$ trials to locate $C$ (or eliminate $C\subset[0,100]$) in the original setup.
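The recursion $(*)$ and its boundary conditions translate directly into code; a quick Python check of the values used above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def h(r, n):
    # h_r(n): the longest interval in which C can be located with r eggs in n trials
    if r == 0 or n == 0:
        return 1
    return h(r - 1, n - 1) + h(r, n - 1)

assert h(1, 5) == 6            # h_1(n) = n + 1
assert h(2, 13) == 92          # h_2(13) < 100 ...
assert h(2, 14) == 106         # ... <= h_2(14), so 14 trials are necessary and sufficient
assert h(3, 3) == (3**3 + 5*3 + 6) // 6   # matches the closed form for h_3
```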
-
http://math.stackexchange.com/questions/tagged/brownian-motion+statistics
Tagged Questions
Strong markov property on max of brownian motion
For $B_t$ Brownian Motion with drift $\mu<0$, I have the max value, $X = \max_{0<t<\infty}B_t$ . I need to prove with the Strong Markov Property that, $P(X>c+d)=P(X>c)P(X>d)$ a. It ...
Max of Brownian motion with drift is finite almost surely
For $B_t$ Brownian Motion with drift $\mu<0$, I need to prove that the max value, $X = \max_{0<t<\infty}B_t$ is finite almost surely, ie $P(X<\infty)=1$. Now, I know that because the mean ...
http://unapologetic.wordpress.com/2009/10/06/vector-valued-functions/?like=1&source=post_flair&_wpnonce=9916d21231
# The Unapologetic Mathematician
## Vector-Valued Functions
Now we know how to modify the notion of the derivative of a function to deal with vector inputs by defining the differential. But what about functions that have vectors as outputs?
Well, luckily when we defined the differential we didn’t really use anything about the space where our function took its values but that it was a topological vector space. Indeed, when defining the differential of a function $f:\mathbb{R}^m\rightarrow\mathbb{R}^n$ we need to set up a new function that takes a point $x$ in the Euclidean space $\mathbb{R}^m$ and a displacement vector $t$ in $\mathbb{R}^m$ as inputs, and which gives a displacement vector $df(x;t)$ in $\mathbb{R}^n$ as its output. It must be linear in the displacement, meaning that we can view $df(x)$ as a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$. And it must satisfy a similar approximation condition, replacing the absolute value with the notion of length in $\mathbb{R}^n$: for every $\epsilon>0$ there is a $\delta>0$ so that if $\delta>\lVert t\rVert_{\mathbb{R}^m}>0$ we have
$\displaystyle\lVert\left[f(x+t)-f(x)\right]-df(x)t\rVert_{\mathbb{R}^n}<\epsilon\lVert t\rVert_{\mathbb{R}^m}$
From here on we’ll just determine which norm we mean by context, since we only have one norm on each vector space.
Okay, so we can talk about differentials of vector-valued functions: the differential of $f$ at a point $x$ (if it exists) is a linear transformation $df(x)$ that turns displacements in the input space into displacements in the output space, and does so in the way that most closely approximates the action of the function itself. But how do we define the function?
If we pick an orthonormal basis $e_i$ for $\mathbb{R}^n$ we can write the components of $f$ as separate functions. That is, we say
$\displaystyle f(x)=f^i(x)e_i$
Now I assert that the differential can be taken component-by-component, just as continuity works: $df(x)=df^i(x)e_i$. On the left is the differential of $f$ as a vector-valued function, while on the right we find the differentials of the several real-valued functions $f^i$. The differential exists if and only if the component differentials do.
First, from components to the vector-valued function. Clearly this definition of $df(x)$ gives us a linear map from displacements in $\mathbb{R}^m$ to displacements in $\mathbb{R}^n$. But does it satisfy the approximation inequality? Indeed, for every $\epsilon>0$ we can find a $\delta$ so that all the inequalities
$\displaystyle\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert<\frac{\epsilon}{n}\lVert t\rVert$
are satisfied when $\delta>\lVert t\rVert>0$. Of course, there are different $\delta$s that work for each component, but we can pick the smallest of them. Then it’s a simple matter to find
$\displaystyle\begin{aligned}\left\lVert\left[f(x+t)-f(x)\right]-df(x)t\right\rVert&=\left\lVert\left(\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right)e_i\right\rVert\\&\leq\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert\lVert e_i\rVert\\&=\sum\limits_{i=1}^n\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert\\&<\sum\limits_{i=1}^n\frac{\epsilon}{n}\lVert t\rVert\\&=\epsilon\lVert t\rVert\end{aligned}$
so if the component functions are differentiable, then so is the function as a whole.
On the other hand, if the differential $df(x)$ exists then for every $\epsilon>0$ there exists a $\delta>0$ so that if $\delta>\lVert t\rVert>0$ we have
$\displaystyle\lVert\left[f(x+t)-f(x)\right]-df(x)t\rVert<\epsilon\lVert t\rVert$
But then it’s easy to see that
$\displaystyle\begin{aligned}\left\lvert\left[f^k(x+t)-f^k(x)\right]-df^k(x)t\right\rvert&=\sqrt{\left\lvert\left[f^k(x+t)-f^k(x)\right]-df^k(x)t\right\rvert^2}\\&\leq\sqrt{\sum\limits_{i=1}^n\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert^2}\\&=\lVert\left[f(x+t)-f(x)\right]-df(x)t\rVert\\&<\epsilon\lVert t\rVert\end{aligned}$
and so each of the component differentials exists.
Finally, I should mention that if we also pick an orthonormal basis $\tilde{e}_j$ for the input space $\mathbb{R}^m$ we can expand each component differential $df^i(x)$ in terms of the dual basis $dx^j$:
$\displaystyle df^i(x)=\frac{\partial f^i}{\partial x^1}dx^1+\dots+\frac{\partial f^i}{\partial x^m}dx^m=\frac{\partial f^i}{\partial x^j}dx^j$
Then we can write the whole differential $df(x)$ out as a matrix whose entry in the $i$th row and $j$th column is $\frac{\partial f^i}{\partial x^j}$. If we write a displacement in the input as an $m$-dimensional column vector we find our estimate of the displacement in the output as an $n$-dimensional column vector:
$\displaystyle\begin{pmatrix}df^1(x;t)\\\vdots\\df^n(x;t)\end{pmatrix}=\begin{pmatrix}\frac{\partial f^1}{\partial x^1}&\dots&\frac{\partial f^1}{\partial x^m}\\\vdots&\ddots&\vdots\\\frac{\partial f^n}{\partial x^1}&\dots&\frac{\partial f^n}{\partial x^m}\end{pmatrix}\begin{pmatrix}t^1\\\vdots\\t^m\end{pmatrix}$
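As a concrete illustration (not from the original post), the fact that the matrix of partial derivatives represents $df(x)$ can be sanity-checked numerically with a forward-difference approximation; here is a Python/NumPy sketch for a sample map $f:\mathbb{R}^2\rightarrow\mathbb{R}^3$:

```python
import numpy as np

def f(x):
    # a sample vector-valued function f: R^2 -> R^3
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def jacobian(f, x, eps=1e-6):
    # approximate the matrix of partials df^i/dx^j by forward differences
    x = np.asarray(x, dtype=float)
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (f(x + step) - fx) / eps
    return J

# exact Jacobian of f at (1, 2): rows are the partials of each component
exact = np.array([[2.0, 1.0], [np.cos(1.0), 0.0], [0.0, 4.0]])
assert np.allclose(jacobian(f, np.array([1.0, 2.0])), exact, atol=1e-4)
```

Multiplying this matrix by a column of input displacements gives exactly the estimated output displacement of the displayed formula.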
Posted by John Armstrong | Analysis, Calculus
http://www.math.uah.edu/stat/games/PokerDice.html
$$\newcommand{\P}{\mathbb{P}}$$ $$\newcommand{\E}{\mathbb{E}}$$ $$\newcommand{\R}{\mathbb{R}}$$ $$\newcommand{\N}{\mathbb{N}}$$ $$\newcommand{\bs}{\boldsymbol}$$ $$\newcommand{\var}{\text{var}}$$
## 3. Simple Dice Games
In this section, we will analyze several simple games played with dice--poker dice, chuck-a-luck, and high-low. The casino game craps is more complicated and is studied in the next section.
### Poker Dice
The game of poker dice is a bit like standard poker, but played with dice instead of cards. In poker dice, 5 fair dice are rolled. We will record the outcome of our random experiment as the (ordered) sequence of scores:
$\bs{X} = (X_1, X_2, X_3, X_4, X_5)$
Thus, the sample space is $$S = \{1, 2, 3, 4, 5, 6\}^5$$. Since the dice are fair, our basic modeling assumption is that $$\bs{X}$$ is a sequence of independent random variables and each is uniformly distributed on $$\{1, 2, 3, 4, 5, 6\}$$.
Equivalently, $$\bs{X}$$ is uniformly distributed on $$S$$:
$\P(\bs{X} \in A) = \frac{\#(A)}{\#(S)}, \quad A \subseteq S$
In statistical terms, a poker dice hand is a random sample of size 5 drawn with replacement and with regard to order from the population $$D = \{1, 2, 3, 4, 5, 6\}$$. For more on this topic, see the chapter on Finite Sampling Models. In particular, in this chapter you will learn that the result of Exercise 1 would not be true if we recorded the outcome of the poker dice experiment as an unordered set instead of an ordered sequence.
#### The Value of the Hand
The value $$V$$ of the poker dice hand is a random variable with support set $$\{0, 1, 2, 3, 4, 5, 6\}$$. The values are defined as follows:
0. None alike. Five distinct scores occur.
1. One Pair. Four distinct scores occur; one score occurs twice and the other three scores occur once each.
2. Two Pair. Three distinct scores occur; two scores occur twice each and the other score occurs once.
3. Three of a Kind. Three distinct scores occur; one score occurs three times and the other two scores occur once each.
4. Full House. Two distinct scores occur; one score occurs three times and the other score occurs twice.
5. Four of a Kind. Two distinct scores occur; one score occurs four times and the other score occurs once.
6. Five of a Kind. One score occurs five times.
Run the poker dice experiment 10 times in single-step mode. For each outcome, note that the value of the random variable corresponds to the type of hand, as given above.
#### The Probability Density Function
Computing the probability density function of $$V$$ is a good exercise in combinatorial probability. In the following exercises, we will need the two fundamental rules of combinatorics to count the number of dice sequences of a given type: the multiplication rule and the addition rule. We will also need some basic combinatorial structures, particularly combinations and permutations (with types of objects that are identical).
The number of different poker dice hands is $$\#(S) = 6^5 = 7776$$.
$$\P(V = 0) = \frac{720}{7776} \approx 0.09259$$.
Proof:
Note that the dice scores form a permutation of size 5 from $$\{1, 2, 3, 4, 5, 6\}$$; there are $$6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 = 720$$ such permutations.
$$\P(V = 1) = \frac{3600}{7776} \approx 0.46296$$.
Proof:
The following steps form an algorithm for generating poker dice hands with one pair. The number of ways of performing each step is also given:
1. Select the score that will appear twice: $$6$$
2. Select the 3 scores that will appear once each: $$\binom{5}{3}$$
3. Select a permutation of the 5 numbers in parts (a) and (b): $$\binom{5}{2, 1, 1, 1}$$
$$\P(V = 2) = \frac{1800}{7776} \approx 0.23148$$.
Proof:
The following steps form an algorithm for generating poker dice hands with two pair. The number of ways of performing each step is also given:
1. Select two scores that will appear twice each: $$\binom{6}{2}$$
2. Select the score that will appear once: $$4$$
3. Select a permutation of the 5 numbers in parts (a) and (b): $$\binom{5}{2, 2, 1}$$
$$\P(V = 3) = \frac{1200}{7776} \approx 0.15432$$.
Proof:
The following steps form an algorithm for generating poker dice hands with three of a kind. The number of ways of performing each step is also given:
1. Select the score that will appear 3 times: $$6$$
2. Select the 2 scores that will appear once each: $$\binom{5}{2}$$
3. Select a permutation of the 5 numbers in parts (a) and (b): $$\binom{5}{3, 1, 1}$$
$$\P(V = 4) = \frac{300}{7776} \approx 0.03858$$.
Proof:
The following steps form an algorithm for generating poker dice hands with a full house. The number of ways of performing each step is also given:
1. Select the score that will appear 3 times: $$6$$
2. Select the score that will appear twice: $$5$$
3. Select a permutation of the 5 numbers in parts (a) and (b): $$\binom{5}{3, 2}$$
$$\P(V = 5) = \frac{150}{7776} \approx 0.01929$$.
Proof:
The following steps form an algorithm for generating poker dice hands with four of a kind. The number of ways of performing each step is also given:
1. Select the score that will appear 4 times: $$6$$
2. Select the score that will appear once: $$5$$
3. Select a permutation of the 5 numbers in parts (a) and (b): $$\binom{5}{4, 1}$$
$$\P(V = 6) = \frac{6}{7776} \approx 0.00077$$.
Proof:
There are 6 choices for the score that will appear 5 times.
Run the poker dice experiment 1000 times and note the apparent convergence of the relative frequency function to the density function.
Find the probability of rolling a hand that has 3 of a kind or better.
Answer:
0.2130
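This probability, and the whole density function above, can be verified by brute-force enumeration of the $$6^5 = 7776$$ hands; a short Python check (the shape-based classification is mine):

```python
from itertools import product
from collections import Counter

def hand_value(hand):
    # classify a hand by the multiset shape of its score counts
    shape = tuple(sorted(Counter(hand).values(), reverse=True))
    return {(1, 1, 1, 1, 1): 0, (2, 1, 1, 1): 1, (2, 2, 1): 2,
            (3, 1, 1): 3, (3, 2): 4, (4, 1): 5, (5,): 6}[shape]

tally = Counter(hand_value(h) for h in product(range(1, 7), repeat=5))
assert tally == {0: 720, 1: 3600, 2: 1800, 3: 1200, 4: 300, 5: 150, 6: 6}
assert sum(tally[v] for v in (3, 4, 5, 6)) / 6**5 == 1656 / 7776   # three of a kind or better
```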
In the poker dice experiment, set the stop criterion to the value of $$V$$ given below. Note the number of hands required.
1. $$V = 3$$
2. $$V = 4$$
3. $$V = 5$$
4. $$V = 6$$
### Chuck-a-Luck
Chuck-a-luck is a popular carnival game, played with three dice. According to Richard Epstein, the original name was Sweat Cloth, and in British pubs, the game is known as Crown and Anchor (because the six sides of the dice are inscribed clubs, diamonds, hearts, spades, crown and anchor). The dice are over-sized and are kept in an hourglass-shaped cage known as the bird cage. The dice are rolled by spinning the bird cage.
Chuck-a-luck is very simple. The gambler selects an integer from 1 to 6, and then the three dice are rolled. If exactly $$k$$ dice show the gambler's number, the payoff is $$k : 1$$. As with poker dice, our basic mathematical assumption is that the dice are fair, and therefore the outcome vector $$\bs{X} = (X_1, X_2, X_3)$$ is uniformly distributed on the sample space $$S = \{1, 2, 3, 4, 5, 6\}^3$$.
Let $$Y$$ denote the number of dice that show the gambler's number. Then $$Y$$ has the binomial distribution with parameters $$n = 3$$ and $$p = \frac{1}{6}$$:
$\P(Y = k) = \binom{3}{k} \left(\frac{1}{6}\right)^k \left(\frac{5}{6}\right)^{3 - k}, \quad k \in \{0, 1, 2, 3\}$
Let $$W$$ denote the net winnings for a unit bet. Then
1. $$W = - 1$$ if $$Y = 0$$
2. $$W = Y$$ if $$Y \gt 0$$
The probability density function of $$W$$ is given by
1. $$\P(W = -1) = \frac{125}{216}$$
2. $$\P(W = 1) = \frac{75}{216}$$
3. $$\P(W = 2) = \frac{15}{216}$$
4. $$\P(W = 3) = \frac{1}{216}$$
Run the chuck-a-luck experiment 1000 times and note the apparent convergence of the empirical density function of $$W$$ to the true probability density function.
The expected value and variance of $$W$$ are
1. $$\mathbb{E}(W) = -\frac{17}{216} \approx -0.0787$$
2. $$\text{var}(W) = \frac{57815}{46656} \approx 1.239$$
Run the chuck-a-luck experiment 1000 times and note the apparent convergence of the empirical moments of $$W$$ to the true moments. Suppose you had bet \$1 on each of the 1000 games. What would your net winnings be?
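The distribution and moments of $$W$$ can be confirmed by enumerating all $$6^3 = 216$$ equally likely rolls; a quick Python check with exact rational arithmetic (betting on the number 1, without loss of generality):

```python
from fractions import Fraction
from itertools import product
from collections import Counter

def net(roll, pick=1):
    # net winnings for a unit bet on `pick`: payoff k:1 if k dice match, else -1
    k = roll.count(pick)
    return k if k > 0 else -1

wins = [net(r) for r in product(range(1, 7), repeat=3)]
assert Counter(wins) == {-1: 125, 1: 75, 2: 15, 3: 1}

E = Fraction(sum(wins), 216)
var = Fraction(sum(w * w for w in wins), 216) - E * E
assert E == Fraction(-17, 216)           # ≈ -0.0787
assert var == Fraction(57815, 46656)     # ≈ 1.239
```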
### High-Low
In the game of high-low, a pair of fair dice are rolled. The outcome is
• high if the sum is 8, 9, 10, 11, or 12.
• low if the sum is 2, 3, 4, 5, or 6
• seven if the sum is 7
A player can bet on any of the three outcomes. The payoff for a bet of high or for a bet of low is $$1:1$$. The payoff for a bet of seven is $$4:1$$.
Let $$Z$$ denote the outcome of a game of high-low. Find the probability density function of $$Z$$.
Answer:
$$\P(Z = h) = \frac{15}{36}$$, $$\P(Z = l) = \frac{15}{36}$$, $$\P(Z = s) = \frac{6}{36}$$, where $$h$$ denotes high, $$l$$ denotes low, and $$s$$ denotes seven.
Let $$W$$ denote the net winnings for a unit bet. Find the expected value and variance of $$W$$ for each of the three bets:
1. high
2. low
3. seven
Answer:
Let $$W$$ denote the net winnings on a unit bet in high-low.
1. Bet high: $$\E(W) = -\frac{1}{6}$$, $$\var(W) = \frac{35}{36}$$
2. Bet low: $$\E(W) = -\frac{1}{6}$$, $$\var(W) = \frac{35}{36}$$
3. Bet seven: $$\E(W) = -\frac{1}{6}$$, $$\var(W) = \frac{125}{36}$$
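These values can likewise be checked by enumerating the 36 equally likely rolls of the pair of dice; a short Python sketch:

```python
from fractions import Fraction
from itertools import product

sums = [a + b for a, b in product(range(1, 7), repeat=2)]

def net(s, bet):
    # net winnings for a unit bet; high and low pay 1:1, seven pays 4:1
    if bet == "seven":
        return 4 if s == 7 else -1
    return 1 if ((s >= 8) if bet == "high" else (s <= 6)) else -1

for bet, var in [("high", Fraction(35, 36)), ("low", Fraction(35, 36)),
                 ("seven", Fraction(125, 36))]:
    w = [Fraction(net(s, bet)) for s in sums]
    E = sum(w) / 36
    assert E == Fraction(-1, 6)
    assert sum(x * x for x in w) / 36 - E * E == var
```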
http://crypto.stackexchange.com/questions/292/how-does-asymmetric-encryption-work/516
How does asymmetric encryption work?
I've always been interested in encryption but I have never found a good explanation (beginners explanation) of how encryption with public key and decryption with private key works.
How does it encrypt something with one key and decipher it with another key?
-
It would be useful if you comment on the existing answers where these are not yet good enough for you, and/or what more you are searching with your bounty. (Or if you already selected a winner.) – Paŭlo Ebermann♦ Aug 26 '11 at 11:41
5 Answers
Generally speaking, the public key and its corresponding private key are linked together through their internal mathematical structure; such keys are not "just" arbitrary sequences of random bits. The encryption and decryption algorithms exploit that structure.
One possible design for a public key encryption system is that of a trapdoor permutation. A trapdoor permutation is a mathematical function which is a permutation of some space, such that computing the function in one way is easy, but the reverse is hard, unless you know some information on how the trapdoor permutation was built. Encryption is applying the function on a message, decryption is about using the trapdoor to reverse the function.
For RSA, a well-known asymmetric encryption algorithm, consider a rather big (say 300 digits or more) integer n. n is composite: it is the product of two big prime integers p and q. There are more such composite integers than there are particles in the observable universe, so there is no problem in generating such an integer. n is called the modulus because we will compute things "modulo n". It turns out that:
• it is easy to compute cubes modulo n (given m, compute m3 mod n);
• if p and q were chosen appropriately (I skip the details), raising to the cube is a permutation modulo n (no two integers modulo n will have the same cube, so any integer between 0 and n-1 has a unique cube root modulo n);
• computing the cube root of an integer modulo n is extremely hard
• ... unless you know p and q, in which case it becomes computationally easy;
• recovering p and q from n (that's factorization) is extremely hard.
So the public key is n, the trapdoor permutation is raising to the power 3, and the reverse is computing a cube root; knowledge of p and q is the private key. The private key can remain "private" even if the public key is public, because recomputing the private key from the public key is a hard problem.
There is a bit more to RSA than the description above (in particular, RSA does not necessarily use the power 3), but the gist of the idea is there: a function which is one-way for everybody, except for the one who selected the function in the first place, with a hidden internal structure which he can use to reverse the function.
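A toy version of this in Python makes the trapdoor concrete. The primes here are absurdly small (real RSA moduli are hundreds of digits), and real RSA also needs padding, so this is purely illustrative:

```python
p, q = 61, 53                  # the secret primes (the "trapdoor" knowledge)
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120; computable only if you know p and q
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

m = 65                         # a message, encoded as an integer modulo n
c = pow(m, e, n)               # encrypt: anyone can do this with just (n, e)
assert pow(c, d, n) == m       # decrypt: only possible knowing d, hence p and q
```

Anyone holding only `n` and `e` would have to factor `n` to find `d`; for 61 × 53 that is trivial, which is exactly why real keys use enormous primes.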
SSL (now known as TLS) is a more complex beast which usually uses some asymmetric encryption during the initial stages of the connection, so that client and server end up having a "shared secret" (a commonly known sequence of bits); the shared secret is then used to apply conventional encryption and decryption to the rest of the data ("conventional" meaning: same key to encrypt and decrypt).
-
"computing the cube root of a random integer modulo n is extremely hard" would be more correct. Computing the cube root of 0, 1, 8, n-27, or interestingly the product modulo n of numbers for which we already know the plaintext, is easy. – fgrieu Aug 26 '11 at 15:13
I would be careful to avoid saying that factorization is "extremely hard" but instead say that "no one knows how to do it efficiently." (And the truly correct statement is that "no one in the public community knows how to do it efficiently.") – Fixee Aug 27 '11 at 6:00
A lot of answers have said "a mathematical structure" which is absolutely right, but I still see there might be a question: how on earth can one exist? I'll try and fill that gap at a simpler level.
So a simple Caesar-shift cipher might look like this: $x+7$. In our really simplistic, dangerous example, the "key" here is $7$. If I have some encrypted text, I know that by subtracting $7$, I get back to $x$. Therefore, this is a really simple example of a symmetric cipher, because knowing the key lets me both do and undo the operation.
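For instance, here is a minimal Python version of such a shift cipher (wrapping around the 26-letter alphabet, lowercase only for brevity):

```python
def caesar(text, key):
    # shift each letter by `key` positions, wrapping around the alphabet
    return "".join(chr((ord(c) - ord("a") + key) % 26 + ord("a")) for c in text)

ciphertext = caesar("attack", 7)
assert ciphertext == "haahjr"
assert caesar(ciphertext, -7) == "attack"   # the same key undoes it: symmetric
```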
As you might have noticed, most domains in cryptography are finite - for example, there are only 26 letters in the alphabet. At some point you need to "wrap around". Mathematics provides us with a technique to do this called modular arithmetic. Essentially, a number under a modulus is the remainder you get when you divide by the modulus. Some examples:
• $4 = 4 \mod 7$
• $8 = 1 \mod 7$ (8/7 = 1 remainder 1)
• $4+7 = 4 \mod 7$ (11/7 = 1 remainder 4)
• $-3 = 4 \mod 7$ (not so hard... what happens when you add 7 to -3?)
As you can see, arithmetic holds under modulo. Undergraduate mathematics courses rigorously establish these truths and if you're interested, read up on Number and Group theory. The next step is to understand that multiplication also holds:
• $2*4 = 1 \mod 7$ (2*4 = 8, as above)
• $5*3 = 1 \mod 7$ (5*3 = 15, 15/7 = 2 remainder 1)
and so on. Multiplication in mathematics often throws up some problems when it comes to inverses - for example, how do I go from 1 back to 5? I can multiply by 5. How do I go back to 2 from 1? Multiply by 2. These examples are not quite right in terms of what I wanted to show, so here are some more:
• $6*3 = 4 \mod 7$ (6*3=18. 18/7 = 2 remainder 4)
• $4*5 = 6 \mod 7$ (4*5=20. 20/7 = 2 remainder 6).
With these examples, I've shown that you can go from 6 to 4 and back to 6 using multiplication, but by multiplying by different things.
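All of the arithmetic above can be checked directly in Python, where `%` is the mod operator:

```python
assert 8 % 7 == 1
assert (4 + 7) % 7 == 4
assert -3 % 7 == 4              # Python's % always returns a result in 0..6
assert (2 * 4) % 7 == 1         # 2 and 4 are multiplicative inverses mod 7
assert (6 * 3) % 7 == 4         # multiplying by 3 takes 6 to 4 ...
assert (4 * 5) % 7 == 6         # ... and multiplying by 5 takes 4 back to 6
# every nonzero residue modulo a prime has a multiplicative inverse:
assert sorted(pow(a, -1, 7) for a in range(1, 7)) == [1, 2, 3, 4, 5, 6]
```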
This is one, very simple way to create such a mathematical structure. The genius of RSA is choosing the numbers involved in such a way that one can easily determine how to get to an encrypted value, but not back again. I've explained it fully in another answer; however, in essence, it is simply a more complicated version of what we've done here. The clever part is understanding those structures and which choices of numbers make good/bad keys and which work/do not.
But you're telling me multiplication is hard to undo: what about division?
Firstly, it really depends on circumstances. Under some circumstances, like the trivial example I present above, finding an inverse or even using division is easy. It's important to think about what division means. In the rationals (any number you can write as a fraction), every nonzero $p$ has a multiplicative inverse $1/p$, satisfying $p \cdot (1/p) = 1$.
However, when considering RSA, note that encryption is $t\times t\times \ldots = t^e = c \mod n$ for some public key $e$. So to compute the inverse, we'd need to compute $c \times (1/t) \times (1/t) \times \ldots = c \times (1/t)^e \mod n$. The reason for this is that each multiplication of $t$ needs to be undone by an inverse $(1/t)$ but it should be clear that if we only have the ciphertext $c$, we don't know $t$ to compute $1/t$.
So our next possible route is to compute $c^{1/e}$ as $t^{e*1/e} = t$. This is equivalent to computing the eth-square root of $c$ (for example, $x^{1/2} = \sqrt{x}$) which is hard to do when under a modulus of the size that RSA requires you use - under certain circumstances. Under others, it's known as the "cube root" attack: see this presentation and this one.
Other public key crypto systems use similar observations - for example, Diffie-Hellman relies on this property: $$a^x = b \mod n$$ Under certain cases of n (for example $(\mathbb{Z}_p, \times )$ i.e. when n is prime and we are interested only in multiplication, and therefore a is greater than or equal to 1) this is hard to reverse. This forms the basis of a number of other public cryptosystems.
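As an illustration (with a toy prime; real parameters are thousands of bits), here is the Diffie-Hellman idea in a few lines of Python:

```python
import secrets

p, g = 23, 5                       # public prime modulus and generator
x = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
y = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
A = pow(g, x, p)                   # Alice publishes A; recovering x from A is
B = pow(g, y, p)                   # the discrete-log problem (hard for large p)
assert pow(B, x, p) == pow(A, y, p)   # both sides derive the same shared secret
```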
-
great answer. The problems I had when I was learning public-private key encrypt was not thinking about the basic math under it. – woliveirajr Aug 26 '11 at 12:50
For the Caesar-cipher you could say that modular subtraction is a decryption algorithm (with the same key as the encryption) or you could say that using the "private" key $+19$ with the encryption algo modular addition undoes the encryption with the "public" key $+7$. The problem being of course that anyone can deduce the private key from the public key. But otherwise it looks already a little bit like asymmetric encryption. – jug Aug 27 '11 at 19:45
@Jug True. For very simple examples this falls down fairly easily; however, the nth-root under a modulus becomes more difficult. Multiplication as a whole often has situations where division is a much more difficult concept (matrices, for example). Obviously it doesn't make a great cryptosystem, but it is a place to start to look for possible trapdoor functions compared to addition, which is usually quite simple to reverse. – Antony Vennard Aug 27 '11 at 20:17
One can repeat what I wrote for modular addition/subtraction quite similar for modular multiplication/division (maybe one should exclude 0 as plain/cipher). Here finding the "private key" is already a little bit more difficult: one has to use Euclid's algorithm. It still doesn't have a trapdoor, but you do if you go to the next more complicated operation: modular exponentiation with its inverse taking modular roots. It can be solved efficiently only if one knows the factorization of the modulus. As finding and multiplying primes is easy, but factoring isn't, one gets a good trapdoor. – jug Aug 28 '11 at 17:38
Your answer already points in the direction I write about in my comment. But in my opinion it is not (yet?) as well written as it could be. I like least the 3 lines starting with "However, when...". Multiplying with (1/t)*...*(1/t) and then the step to taking roots is in my opinion not quite logical (in the sense of why one should multiply and then suddenly try the root). But I think that your answer has great potential. – jug Aug 28 '11 at 17:46
At a basic level, the client (i.e. your browser) and the server negotiate a key exchange algorithm to derive a random session key and then they use that private key to encrypt traffic with a symmetric algorithm.
There's a lot of detail to the process and I wrote about this extensively on my blog: The First Few Milliseconds of an HTTPS Connection
-
Could you try to summarize the article here, in case your site goes off-line? (If that's possible, of course) – Arsen7 Aug 3 '11 at 11:16
I added a small synopsis, but without pictures and captures like I have on my blog – Jeff Moser Aug 3 '11 at 11:27
The simplest way to explain it is that the two keys have a special mathematical relationship that allows one (the secret or private key) to undo what the other (the public key) does.
All asymmetric encryption schemes require some operation that cannot be easily reversed unless you have some secret piece of knowledge. With that piece of knowledge, the operation can be reversed.
One common encryption scheme uses the following relationships:
Given very large numbers m and n, pick a number p, compute:
s = p^m modulo n
Now try to reverse that operation and get p from s, m and n. Given p, m, and n, calculating s is easy, but given m, n, and s, calculating p is very hard.
Given two very large prime numbers, m and n, compute:
k = m x n
Now try to reverse that operation and get m and n from k. Multiplication is simple. Factoring very large near primes is very hard.
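A toy sketch of that asymmetry (with small primes chosen here for illustration; real moduli are hundreds of digits, far beyond trial division):

```python
# Multiplying two primes is one operation; recovering them from the
# product requires a search (naive trial division here, just to show
# the flavor of the asymmetry).
m, n = 10007, 10009          # two small primes (real keys use huge ones)
k = m * n                    # forward direction: easy

def factor(k):
    """Naive trial division: return the smallest factor pair of k."""
    d = 2
    while d * d <= k:
        if k % d == 0:
            return d, k // d
        d += 1
    return k, 1

print(factor(k))   # (10007, 10009)
```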
-
$2^{43112609}$ is a very large number near a prime (it's one more than a prime). And I can factor it in my head. – Fixee Aug 27 '11 at 6:03
By "near prime", I mean a number with very few factors. – David Schwartz Aug 27 '11 at 10:39
@David Schwartz, your use of the phrase "near prime" is sloppy, ambiguous, and confusing. I think that's causing all sorts of problems, so I suggest you change it to something more precise: factoring a product of two large primes is very hard. – D.W. Sep 1 '11 at 20:41
The "How" is already explained well by others, but as far I can see, nobody wrote about "What does asymmetric encryption do?".
It does not, as many think, magically allow two parties Alice and Bob to communicate securely over a public channel.
Instead, it converts an authentic channel from Alice to Bob into a secret channel from Bob to Alice (given a public channel in both directions).
-
http://mathforum.org/mathimages/index.php?title=Newton's_Basin&diff=3342&oldid=3341
# Newton's Basin
### From Math Images
## Revision as of 11:47, 4 June 2009
Newton's Basin
Newton's Basin is a visual representation of Newton's Method, which is a procedure for estimating the root of a function.
Fields: Fractals and Calculus
Created By: Nicholas Buroojy
# Basic Description
Animation Emphasizing Roots
Newton Basin with 3 Roots
This image is one of many examples of Newton's Basin or Newton's Fractal. Newton's Basin is based on a calculus concept called Newton's Method, a procedure Newton developed to estimate a root of an equation.
The colors in a Newton's Basin usually correspond to each individual root of the equation, and can be used to infer where each root is located. The region of each color reflects the set of coordinates (x,y) whose x-values, after undergoing iteration with the equation describing the fractal, will eventually get closer and closer to the value of the root.
The animation emphasizes the roots in a Newton's Basin, whose equation clearly has three roots. The image to the right is also a Newton's Basin with three roots, presented more artistically.
# A More Mathematical Explanation
Note: understanding of this explanation requires: Calculus
The featured image on this page is a visual representation of Newton's Method for calculus expanded into the complex plane. To read a brief explanation on this method, read the following section entitled Newton's Method.
### Newton's Method
Newton's Method for calculus is a procedure to find a root of a polynomial, using an estimated coordinate as a starting point. Usually, the root of a linear equation $y = mx + b$ can be found simply by setting y = 0 and solving for x. However, with higher-degree polynomials, this method can be much more complicated.
Newton devised an iterated method (animated to the right) with the following steps:
• Estimate a starting coordinate on the graph near to the root
• Find the tangent line at that starting coordinate
• Find the root of the tangent line
• Using the root as the x-coordinate of the new starting coordinate, iterate the method to find a better estimate
The results of this method lead to very close estimates to the actual root. Newton's Method can also be expressed:
$f'(x_n) = \frac{\mathrm{\Delta y}}{\mathrm{\Delta x}} = \frac{f(x_n)}{x_n - x_{n+1}}$
$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
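The update rule above translates directly into code; a minimal sketch (the function and starting point are chosen here just for illustration):

```python
# Newton's Method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, steps=20):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Example: estimate sqrt(2) as the positive root of f(x) = x^2 - 2,
# starting from the rough guess x0 = 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)   # 1.414213562...
```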
### Newton's Basin
#### Creating Newton's Basin
Newton Basin with 5 Roots
To produce an interesting fractal, the Newton Method needs to be extended to the complex plane. Newton's Basin is created using a complex polynomial (one whose coefficients and roots may be complex, such as $p(z) = z^3 - 2z + 2$), with real and/or complex roots. In addition, each root in a Newton's Basin fractal is usually given a distinctive color. It is clear that the fractal on the left has a total of five roots colored magenta, yellow, red, green, and blue.
Every pixel in the image is assigned a complex number coordinate. The coordinates are applied to the equation and iterated continually with the output of the previous iteration becoming the input of the next iteration. If the iterations lead the x-values of the coordinates to converge towards a particular root, the pixel is colored accordingly. If the iterations lead to a loop and not a root, then the pixel is usually colored black because the x-values do not converge.
Each root has a set of initial (or pixel) coordinates $x_0$ that converge to the root. This set of complex-valued coordinates is called the root's basin of attraction, which is where the name of this fractal comes from. In addition, some images include shading in each basin. The shading is determined by the number of iterations it takes each pixel to converge to a particular root, and it allows us to see the location of the root more clearly.
Solutions to $p(z) = z^3 - 2z + 2$
For example, the image below was created from the equation $p(z) = z^3 - 2z + 2$. Since this equation is a 3rd-degree complex polynomial, it has three roots, two of which are complex: z = -1.7693, 0.8846 + 0.5897i, and 0.8846 - 0.5897i. The resulting solution map is shown to the right, and you can see that the Newton's Basin created from this complex polynomial has three roots (yellow, blue, and green) that correspond to the solution map.
Newton Basin with 3 Roots
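The pixel-coloring procedure described above can be sketched in a few lines; the roots below are the approximate values quoted in the text, and the iteration cap and tolerances are illustrative choices:

```python
# For each starting point z, iterate the Newton map z -> z - p(z)/p'(z)
# for p(z) = z^3 - 2z + 2, and report which root's basin z lies in
# (an index into `roots`, or -1 if the orbit loops/never settles: black).
roots = [-1.7693, 0.8846 + 0.5897j, 0.8846 - 0.5897j]

def basin(z, max_iter=100, tol=1e-3):
    for _ in range(max_iter):
        dz = 3 * z * z - 2                  # p'(z)
        if abs(dz) < 1e-12:
            return -1                       # derivative ~ 0: step undefined
        z = z - (z ** 3 - 2 * z + 2) / dz   # Newton step
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return -1

print(basin(-2 + 0j))   # 0: converges to the real root -1.7693
print(basin(1 + 0j))    # -1: the orbit cycles 1 -> 0 -> 1 and never converges
```

An actual image is produced by running `basin` over a grid of pixel coordinates and mapping each returned index to a color.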
#### Self-Similarity
As with all other fractals, Newton's Basin exhibits self-similarity. The video to the left is an interactive representation of the continual self-similarity displayed by a Newton's Basin with a root degree of 5 (similar to the fractal shown in the previous section). Towards the end of the video, you will notice that the pixels are no longer adequate to continue magnifying the image...however, the fractal still goes on.
# Teaching Materials
There are currently no teaching materials for this page.
# About the Creator of this Image
Nicholas Buroojy has created many math images including Newton's Lab fractals, Julia and Mandelbrot Sets, Cantor Sets...
# Related Links
### Additional Resources
http://www.chiark.greenend.org.uk/~sgtatham/newton/ for further mathematical explanation
http://mathhelpforum.com/advanced-algebra/91716-solved-addition-multiplication-tables.html
# Thread:
1. ## [SOLVED] addition and multiplication tables
I have these two Cayley tables. The addition table is complete and the multiplication table is not. The instructions say to use the distributive laws, but I can't seem to find the solutions using the distributive laws. I can only find the solution using the definition of a group.
Can someone help me?
ring R = {a, b, c}
| + | a | b | c |
|---|---|---|---|
| a | a | b | c |
| b | b | c | a |
| c | c | a | b |

| x | a | b | c |
|---|---|---|---|
| a | a | a | a |
| b | a | c |   |
| c | a |   |   |
Thanks a bunch!
The addition table shows that $\langle R,+\rangle$ is a group with identity element $a.$ This is the zero element of $R.$
To complete the table for multiplication, take for example $bc.$ From the addition table, $c=b+b.$ Hence, $bc=b(b+b)=bb+bb=c+c=b.$ (You know $bb=c$ from the partially completed table.)
Do the same for $cb$ and $cc.$ (Hint: It will turn out that $c$ is the multiplicative identity, and the ring is a field with 3 elements.)
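The distributive-law computation above can be checked mechanically; the table entries below are taken from the thread, and the three missing products are derived exactly as described:

```python
# Addition table of R = {a, b, c} (a is the zero element), as given.
add = {('a','a'):'a', ('a','b'):'b', ('a','c'):'c',
       ('b','a'):'b', ('b','b'):'c', ('b','c'):'a',
       ('c','a'):'c', ('c','b'):'a', ('c','c'):'b'}

# Known products from the partially completed multiplication table.
mul = {('a','a'):'a', ('a','b'):'a', ('a','c'):'a',
       ('b','a'):'a', ('b','b'):'c',
       ('c','a'):'a'}

# Since c = b + b, distributivity fills in the rest:
mul[('b','c')] = add[(mul[('b','b')], mul[('b','b')])]  # b(b+b) = bb + bb
mul[('c','b')] = add[(mul[('b','b')], mul[('b','b')])]  # (b+b)b = bb + bb
mul[('c','c')] = add[(mul[('b','c')], mul[('b','c')])]  # (b+b)c = bc + bc

print(mul[('b','c')], mul[('c','b')], mul[('c','c')])   # b b c
```

And indeed `c` acts as the multiplicative identity, consistent with the hint.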
http://www.physicsforums.com/showthread.php?t=124396
Physics Forums
To find logarithm of a complex number
Hi,
Well can anyone tell me how to find the natural logarithm of a complex number p + iq.
Also please tell me how to convert it into logarithm to the base 10.
An external link to a webpage (where all the details are given) will be appreciated.
Write the complex number in polar form and it should be evident how to find the logarithm. To convert the logarithm to base 10 use log z = ln z/ln 10.
Of course, since the argument of a complex number (the angle part of the polar form) can have any multiple of $2\pi$ added to it, the log function is, like most complex functions, multi-valued.
the defn of ln(z) is the integral of the differential dw/w along a path from w = 1 to w = z, but not passing through w=0. the ambiguity is in the choice of that path.
if you transform it to r, theta coordinates I am guessing that differential becomes dr/r + i dtheta. since dr/r has a primitive, namely it equals d ln|r|, which is defined everywhere except at 0, that part of the integral is not dependent on the path.
but the other part idtheta, does depend on how much angle is swept out wrt the origin by the whole path.
so the real part of the log of z is just the log of the absolute value |z|, but the complex part equals i times some determination of the angle arg(z).
I didn't really calculate this out just now, but this is probably very close to correct, as this is one of my long time favorite objects. I never understood complex functions or logs until I saw it explained roughly this way in Courant's calculus book (where else?).
that's why $e^{i\pi} = -1$, i.e. minus one has absolute value 1 so real log zero, but angle pi, so complex part $i\pi$. and also why $e^{2i\pi} = 1$, since one determination of the angle of 1 is $2\pi$.
I remember something like Ln(z) = Ln(|z|) + iArg(z) where |z| is the amplitude and arg(z) is the angle in radians between the point z and x axis.
Quote by Kaan I remember something like Ln(z) = Ln(|z|) + iArg(z) where |z| is the amplitude and arg(z) is the angle in radians between the point z and x axis.
And, again, since the angle Arg(z) + $2k\pi$, for any integer $k$, gives the same point, ln(z) = ln(|z|) + i Arg(z) + $2k\pi i$
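These formulas can be checked numerically against Python's `cmath` (a small sanity check, not part of the original thread):

```python
import cmath
import math

z = 3 + 4j

# Principal branch: ln(z) = ln|z| + i*Arg(z), with Arg(z) in (-pi, pi].
principal = math.log(abs(z)) + 1j * math.atan2(z.imag, z.real)
assert abs(principal - cmath.log(z)) < 1e-12

# Every other branch differs by 2*k*pi*i, and each one exponentiates
# back to z, which is exactly the multi-valuedness discussed above.
for k in (-2, -1, 0, 1, 2):
    branch = principal + 2j * math.pi * k
    assert abs(cmath.exp(branch) - z) < 1e-9
print(principal)   # (1.6094379124341003+0.9272952180016122j)
```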
Quote by mathwonk the defn of ln(z) is the integral of the differential dw/w along a path from w = 1 to w = z, but not passing through w=0. the ambiguity is in the choice of that path.
i don't really want to start anything, wonk, but while i believe that is true, is it the definition?
i didn't even think that was the definition for the real logarithm
$$\log(x) \equiv \int_1^x \frac{1}{u} du$$
i thought that the definition had more to do with
$$\log(x y) = \log(x) + \log(y)$$
that's what defines some function as a logarithm.
then it can be shown that for the function:
$$f_A(x) \equiv \int_1^x \frac{A}{u} du$$
$$f_A(xy) = \int_1^{xy} \frac{A}{u} du = \int_1^{x} \frac{A}{u} du + \int_x^{xy} \frac{A}{u} du = \int_1^{x} \frac{A}{u} du + \int_1^{y} \frac{A}{u} du = f_A(x) + f_A(y)$$
so we can say that fA is logarithmic. and the natural logarithm was such that numerator A is 1.
isn't this a difference between what is definition and what is a resulting property?
They're both perfectly valid definitions. Some properties are easier to prove using one choice of definition over the other. An example is this PDF using an alternate definition of the exp(x), and deriving some specific results from that. http://www.math.lsu.edu/~mcgehee/Exp.pdf
Quote by rbj i don't really want to start anything, wonk, but while i believe that is true, is it the definition? i didn't even think that was the definition for the real logarithm $$\log(x) \equiv \int_1^x \frac{1}{u} du$$ i thought that the definition had more to do with $$\log(x y) = \log(x) + \log(y)$$ that's what defines some function as a logarithm. then it can be shown that for the function: $$f_A(x) \equiv \int_1^x \frac{A}{u} du$$ $$f_A(xy) = \int_1^{xy} \frac{A}{u} du = \int_1^{x} \frac{A}{u} du + \int_x^{xy} \frac{A}{u} du = \int_1^{x} \frac{A}{u} du + \int_1^{y} \frac{A}{u} du = f_A(x) + f_A(y)$$ so we can say that fA is logarithmic. and the natural logarithm was such that numerator A is 1. isn't this a difference between what is definition and what is a resulting property?
What makes you think there is such a thing as "the" definition of ln(x)?
Any function can have many different equivalent definitions. Though it is a matter of taste, I would think that a definition that gives a direct formula ($ln(x)= \int_1^x dt/t$) is better than a definition that says the function has such and such properties (ln(x) is the function f such that f(xy)= f(x)+ f(y) and f(1)= 0).
Quote by HallsofIvy What makes you think there is such a thing as "the" definition of ln(x)?
i guess i normally think that a definition comes first, chronologically in the history or pedagogy.
in that sense, i think of logarithms as the generalized exponent of some given base (which is "e" if it's a "natural logarithm" - why e takes on the value that it does has to do with that integral, that wonk was using as a defining expression).
i didn't like it, but in my old calc book (Seeley, and i generally really like the book), they simply defined the natural exponential of a complex number as
$$e^z = e^{\mathrm{Re}(z)} \cos\left(\mathrm{Im}(z)\right) \ + \ i e^{\mathrm{Re}(z)} \sin\left(\mathrm{Im}(z)\right)$$
and then stated that the definition was a good one since it reverted to the previous definition of $e^x$ for a real argument (when Im(z)=0) and satisfied $e^{z+w} = e^z e^w$ for any complex z and w. for me, that was an unsatisfying "definition".
Hello: So if my function was LN [ -exp(cx)/sin(d) - i ] = ? Then, I would decompose it to real and imaginary parts? The larger problem is LN [ -exp(cx)/sin(d) - i ] - LN [ -exp(cx)/sin(d) + i ] where c and d are constants. ???
Quote by BCox Hello: So if my function was LN [ -exp(cx)/sin(d) - i ] = ? Then, I would decompose it to real and imaginary parts?
Assuming that -exp(cx)/sin(d) is real, that is of the form ln(a - i) with a = -exp(cx)/sin(d). Rewrite a - i in "polar form", $re^{i\theta}$ with $r = \sqrt{\exp(2cx)/\sin^2(d) + 1} = \sqrt{\exp(2cx) + \sin^2(d)}/\sin(d)$ and $\theta = \arctan(\sin(d)/\exp(cx))$. Then $\ln(a - i) = \ln(r) + \theta i + 2k\pi i$ or
$$\ln(-\exp(cx)/\sin(d) - i) = \frac{1}{2}\ln(\exp(2cx) + \sin^2(d)) - \ln(\sin(d)) + \arctan(\sin(d)/\exp(cx))\,i + 2k\pi i$$
The larger problem is LN [ -exp(cx)/sin(d) - i ] - LN [ -exp(cx)/sin(d) + i ] where c and d are constants. ???
Hmm... perhaps a rephrase of the question is more appropriate. I want to take the limit of the following function as x -> infinity (1/d) * arcot [ - exp(cx) / sin(d) ] where d is (-pi,0) Now a prior condition would stipulate that the above has to go to zero as x tends to infinity. But I am more interested at the rate at which the function goes to zero as x -> infinity. For one case as d->0, the first term tends to infinity. But I have another condition telling me that the whole function must approach zero; so that the second term arcot() must approach zero at a faster rate than 1/d. But analytically, what is the rate that (1/d) * arcot [ - exp(cx) / sin(d) ] ~ ? as x->infinity
I'm not really a "mathematician", but for what its worth, you could put the complex number into a power series and hopefully try to see what it converges too...no? :( ??
?? A power series is necessarily a function. How do you put a number in a power series?
Quote by HallsofIvy ?? A power series is necessarily a function. How do you put a number in a power series.
right. It's tricky
http://mathoverflow.net/questions/87024/criterions-for-reflexiveness-of-sheaves-and-a-special-case/87029
## Criterions for Reflexiveness of sheaves and a special case
A coherent sheaf $V$ on a say noetherian scheme is called reflexive if the canonical map $V \rightarrow \mathcal Hom_{\mathcal O_X}(\mathcal Hom_{\mathcal O_X}(V,\mathcal O_X),\mathcal O_X)$ is an isomorphism of sheaves.
In principle, one can define this notion also for quasicoherent sheaves, and this is what my question is about: does one have criteria about when a quasicoherent module is reflexive? Or is the question of reflexiveness in general very hard to answer?
What I am particularly interested in as a special case:
If one has an infinite chain of inclusions $V_1 \subset V_2 \subset ...$ of reflexive sheaves, then one knows of course that each $V_i$ is reflexive. But is the union of all these sheaves also reflexive?
-
## 4 Answers
I'll add another example to the mix even in the ring setting (as opposed to the sheaf setting). Fix a DVR $(R, \langle x \rangle)$. Then the fraction field $K(R) = \bigcup_n (x^{-n}R)$, is an ascending union of free (reflexive) modules. But clearly $K(R)$ is not itself reflexive.
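To spell out why this union fails to be reflexive (a short check under the setup of this answer): any $R$-linear map $K(R) \to R$ must vanish, so both duals are zero.

```latex
% For a DVR (R, (x)) and f in Hom_R(K(R), R):
% f(x^{-m}) = x^n f(x^{-(m+n)}) \in x^n R for every n, so each value
% lies in \bigcap_n x^n R = 0 by Krull's intersection theorem.
\[
  \operatorname{Hom}_R\!\bigl(K(R), R\bigr) = 0
  \quad\Longrightarrow\quad
  \operatorname{Hom}_R\!\bigl(\operatorname{Hom}_R(K(R),R), R\bigr) = 0
  \neq K(R),
\]
% so the canonical map K(R) -> K(R)^{**} cannot be an isomorphism.
```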
On the other hand, under mild conditions on the scheme $X$, for example if it is normal, then for a coherent module being reflexive is equivalent to being S2 (Serre's second condition). The S2 condition behaves fairly well for quasi-coherent modules and might be useful for you. For example, see the paper(s) of Hartshorne: "Generalized divisors ..."
-
Let $R$ be a discrete valuation ring and $V$ a free $\mathcal O_X$-module of infinite rank on $X=\mathrm{Spec}(R)$. Then, neither $\mathcal Hom(V,\mathcal O_X)$ nor $\mathcal Hom(\mathcal Hom(V,\mathcal O_X),\mathcal O_X)$ is quasi-coherent. In particular, $V$ is not reflexive.
-
Let $O$ be the valuation ring of a valuation $v$ having a subgroup of the reals different from the integers as its value group. Let $v_1>v_2>\ldots >0$ be a monotonically decreasing sequence of values converging to $0$ and choose elements $x_i\in O$ such that $v(x_i)=v_i$. Then $O x_1\subset O x_2\subset\ldots \subset M$, where $M$ is the maximal ideal of $O$. Moreover
$M=\bigcup\limits_{i}Ox_i$.
But $M$ is not reflexive: $(O:M)=O$, hence $(O:(O:M))=(O:O)=O$.
However: this may only tell us that we should define reflexivity for non-coherent modules / sheaves in a different way.
-
I now clearly see the problems. Thanks to all for the nice examples and hints! – Veen Jan 30 2012 at 17:23
The problem with this attempt is that the dual of a direct sum is a direct product. So let $X=\mathrm{Spec} k$ and $V$ an infinite dimensional vector space over $k$. Then the (double) dual of $V$ is much larger than $V$ so they cannot be isomorphic.
-
http://mathoverflow.net/questions/23547/
## Does pi contain 1000 consecutive zeroes (in base 10)?
The motivation for this question comes from the novel Contact by Carl Sagan. Actually, I haven't read the book myself. However, I heard that one of the characters (possibly one of those aliens at the end) says that if humans compute enough digits of $\pi$, they will discover that after some point there is nothing but zeroes for a really long time. After this long string of zeroes, the digits are no longer random, and there is some secret message embedded in them. This was supposed to be a justification of why humans have 10 fingers and increasing computing power.
Anyway, apologies for the sidebar, but this all seemed rather dubious to me. So, I was wondering if it is known that $\pi$ does not contain 1000 consecutive zeroes in its base 10 expansion? Or perhaps it does? Of course, this question makes sense for any base and digit. Let's restrict ourselves to base 10. If $\pi$ does contain 1000 consecutive $k$'s, then we can instead ask if the number of consecutive $k$'s is bounded by a constant $b_k$.
According to the wikipedia page, it is not even known which digits occur infinitely often in $\pi$, although it is conjectured that $\pi$ is a normal number. So, it is theoretically possible that only two digits occur infinitely often, in which case $b_k$ certainly exist for at least 8 values of $k$.
Update. As Wadim Zudilin points out, the answer is conjectured to be yes. It in fact follows from the definition of a normal number (it helps to know the correct definition of things). I am guessing that a string of 1000 zeroes has not yet been observed in the over 1 trillion digits of $\pi$ thus computed, so I am adding the open problem tag to the question. Also, Douglas Zare has pointed out that in the novel, the actual culprit in question is a string of 0s and 1s arranged in a circle in the base 11 expansion of $\pi$. See here for more details.
-
The movie has excellent special effects and Jodie Foster. – Will Jagy May 5 2010 at 4:59
It's a conjecture: any possible pattern occurs in the decimal record of $\pi$ (so, $\pi$ is normal in base 10). – Wadim Zudilin May 5 2010 at 5:16
In the book, it was a pattern of 0s and 1s arranged in a circle in the base 11 expansion of pi. If pi is normal, then this pattern should occur at some point, but in the novel this shows up surprisingly soon. en.wikipedia.org/wiki/Contact_(novel) – Douglas Zare May 5 2010 at 5:32
If you want to find a given sequence of 1000 digits in pi (for example, 1000 zeros), then, given that it's a reasonable hypothesis that digits really do occur "randomly" (and hang the novel), you'll have to look through about 10^1000 digits before you find your pattern. So you can be pretty sure that we've not found 1000 zeroes yet! – Kevin Buzzard May 5 2010 at 8:00
I assume you know about the Feynman point? en.wikipedia.org/wiki/Feynman_point – Theo Johnson-Freyd May 5 2010 at 8:08
## 4 Answers
Summing up what others have written, it is widely believed (but not proved) that every finite string of digits occurs in the decimal expansion of pi, and furthermore occurs, in the long run, "as often as it should," and furthermore that the analogous statement is true for expansion in base b for b = 2, 3, .... On the other hand, for all we are able to prove, pi in decimal could be all sixes and sevens (say) from some point on.
About the only thing we can prove is that it can't have a huge string of zeros too early. This comes from irrationality measures for pi which are inequalities of the form $|\pi-(p/q)|>q^{-9}$ (see, e.g., Masayoshi Hata, Rational approximations to $\pi$ and some other numbers, Acta Arith. 63 (1993), no. 4, 335-349, MR1218461 (94e:11082)), which tell us that such a string of zeros would result in an impossibly good rational approximation to pi.
As for the irrationality measure of $\pi$, Hata's longstanding record was recently beaten: V.Kh. Salikhov, Russ. Math. Surv. 63, No. 3, 570-572 (2008); translation from Usp. Mat. Nauk 63, No. 3, 163-164 (2008). – Wadim Zudilin May 5 2010 at 6:08
@Wadim, thanks for the update. – Gerry Myerson May 5 2010 at 23:52
@Gerry: Thanks for reminding me of why the zeroes should be far away. – Wadim Zudilin May 6 2010 at 11:53
matwbn.icm.edu.pl/ksiazki/aa/aa63/aa6344.pdf – Charles May 21 2010 at 4:05
@Charles, thanks for the link to the Hata paper. – Gerry Myerson May 21 2010 at 4:18
Mahler's paper [1] shows that you can't have too many zeros too soon (or any other string of identical digits, or anything else that would give too good of a rational approximation). In this context the result is weak: there's no string of 1000 zeros starting after the 1000/41 st digit of pi... but it's easy enough to calculate 24 digits of pi as it is.
Even assuming that Salikhov's result holds for numbers this small, it won't exclude more than 150 digits (that is, no spans of 1000 zeros in the first 1150 decimal digits of pi).
[1]: K. Mahler, On the approximation of π, Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen Series A, Mathematical Sciences 56 (1953), pp. 30-42.
Thanks Charles. By 'after' do you mean 'before'? – Tony Huynh May 21 2010 at 15:26
Perhaps I was unclear. Mahler's result says that you can't have a string of 1000 zeros in the decimal expansion of pi appear just after the 24th digit, or before then -- that is, no such string in the first 1024 digits. (That this is 2^10 is coincidence; it's just 1000 (1 + 1/(42 - 1)) rounded down.) – Charles May 21 2010 at 20:46
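The bound described above is simple arithmetic; a hedged sketch (the helper below is my own restatement of the bound, not code from either paper):

```python
# An irrationality measure |pi - p/q| > q**(-mu) rules out a run of L zeros
# starting at decimal position d whenever d + L >= mu * d, i.e. d <= L/(mu - 1):
# the truncation p/10**d would otherwise be an impossibly good approximation.
def last_excluded_start(L, mu):
    return L // (mu - 1)

print(last_excluded_start(1000, 42))         # Mahler's mu = 42 gives 24
print(1000 + last_excluded_start(1000, 42))  # hence no such run in the first 1024 digits
```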
A similar question (1 million consecutive 7s in the decimal expansion of pi) has been discussed by Timothy Gowers in a text published in 2006 (see Reuben Hersh's collection 18 Unconventional Essays on the Nature of Mathematics).
His (quite classical) heuristic arguments in favor of "yes" were even used for a study on the influence of authority on persuasiveness in mathematics (see Matthew Inglis and Juan Pablo Mejia-Ramos, Cognition and Instruction, Routledge, 2009).
I do not know what kind of effect this has on your question, but it might be a good thing to look at in this context. In 1995, a formula for the $n$-th digit of $\pi$ written in hexadecimal was found. See the wikipedia article, http://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula.
http://mathhelpforum.com/advanced-algebra/60380-module-homomorphism-ring-subset.html
# Thread:
1. ## module homomorphism, ring, subset
Background: Let $R$ be a ring, $S \subseteq R$ a multiplicatively closed subset, and let $M$, $N$ be $R-$modules. Let $\phi : M \rightarrow N$ be an $R-$module homomorphism, and let $\phi_s : S^{-1}M \rightarrow S^{-1}N$ denote the map defined by $\phi_s(\frac{m}{s})=\frac{\phi(m)}{s}.$ You may assume without proof that $\phi_s$ is a well-defined $S^{-1}R-$module homomorphism.
Prove:
(a) Prove that if $\phi$ is injective, then $\phi_s$ is injective.
(b) Prove that if $\phi$ is surjective, then $\phi_s$ is surjective.
(c) Prove that if $M$ is a finitely generated free $R-$module with basis $\{ b_1,\ldots, b_m \}$ then $S^{-1}M$ is a finitely generated free $S^{-1}R-$module with basis $\{ \frac{b_1}{1},\ldots, \frac{b_m}{1} \}.$
(d) Now suppose $R$ is an integral domain and that $m$, $n$ are positive integers. Prove that if $R^m \cong R^n$, then $m=n$. (Hint: consider $S=R-\{0\}$.)
2. Originally Posted by Erdos32212
(a): if $\phi_S(m/s)=0,$ then $\frac{\phi(m)}{s}=0.$ thus there exists $t \in S$ such that $t \phi(m)=0.$ hence: $\phi(tm)=t\phi(m)=0.$ but $\phi$ is injective. thus $tm=0,$ i.e. $\frac{m}{s}=0.$
(b) if $x=\frac{n}{s} \in S^{-1}N,$ then since $\phi$ is surjective, there exists $m \in M$ such that $\phi(m)=n.$ thus: $x=\frac{\phi(m)}{s}=\phi_S(m/s).$ hence $\phi_S$ is surjective.
(c) let $x=\frac{v}{s} \in S^{-1}M.$ then $v=\sum_{i=1}^m r_ib_i, \ r_i \in R.$ thus: $x=\sum_{i=1}^m \frac{r_i}{s}\frac{b_i}{1},$ which proves that $I=\{\frac{b_i}{1} : i=1,2, \cdots, m \}$ generates $S^{-1}M$ as an $S^{-1}R-$module, because $\frac{r_i}{s} \in S^{-1}R.$
to prove that the set $I$ is linearly independent, we assume that $\sum_{i=1}^m \frac{r_i}{s_i}\frac{b_i}{1}=0,$ which after clearing denominators gives us: $\sum_{i=1}^m t_ir_ib_i=0,$ where $t_i \in S.$ since $\{b_i: \ i=1,2, \cdots, m\}$
is assumed to be a basis for $M$, it is a set of linearly independent elements of $M$. thus $t_i r_i = 0, \ i = 1, 2, \cdots , m.$ hence $\frac{r_i}{s_i}=0, \ i=1,2, \cdots , m.$
(d) let $S=R- \{0 \},$ which is obviously multiplicatively closed because R is a domain. now $Q(R)=S^{-1}R$ is the field of fractions of $R.$ on the other hand we have:
$\bigoplus_{i=1}^m Q(R)=\bigoplus_{i=1}^m S^{-1}R \simeq S^{-1}R^m \simeq S^{-1}R^n \simeq \bigoplus_{i=1}^n S^{-1}R=\bigoplus_{i=1}^n Q(R). \ \ \ \ (1)$
now since $Q(R)$ is a field, the LHS and RHS of (1) are two $Q(R)-$vector spaces. since they are isomorphic, their dimensions must be equal. (recall from linear algebra!) thus $m=n.$
http://mathhelpforum.com/calculus/142167-confused-taylor-series-problem.html
# Thread:
1. ## Confused with this Taylor series problem.
The problem says, "Write down the Taylor series of $\cos(x)$ about $x=0$ and sum the series $\sum_{n=0}^{\infty} (-\pi ^2)^n / (2n)!\, 9^n$"
First question is, when do you stop writing terms of the series if it doesn't tell you?
Also, what does the second series have to do with the first one? This seems like two totally different problems crammed into one. Am I wrong?
2. The Taylor series for $\cos x$ is
$\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}$
our series
$\sum_{n=0}^{\infty} \frac{ (-\pi^2)^n}{(2n)! 9^n}$
$\sum_{n=0}^{\infty} \frac{ (-1)^n (\pi^2)^n}{9^n} \frac{1}{(2n)!}$
$\sum_{n=0}^{\infty} \frac{(-1)^n }{(2n)!}\left(\frac{\pi}{3}\right)^{2n}$
ahaa what do you think now
3. I am still not totally sure what they have to do with each other.
4. the only difference is that the second summation has $\frac{\pi}{3}$ where the first has $x$,
so $\sum_{n=0}^{\infty} \frac{(-1)^n }{(2n)!}\left(\frac{\pi}{3}\right)^{2n}= \cos \frac{\pi}{3}= \frac{1}{2}$
5. Originally Posted by Mattpd
The problem says, "Write down the Taylor series of $\cos(x)$ about $x=0$ and sum the series $\sum_{n=0}^{\infty} (-\pi ^2)^n / (2n)!\, 9^n$"
First question is, when do you stop writing terms of the series if it doesn't tell you?
You don't. A Taylor polynomial stops after a finite number of terms. A Taylor series is an infinite series.
Also, what does the second series have to do with the first one? This seems like two totally different problems crammed into one. Am I wrong?
The Taylor series for cos(x) is $\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n}$
The sum you are given is either $\sum_{n=0}^\infty \frac{(-\pi^2)^n\, 9^n}{(2n)!}$ or $\sum_{n=0}^\infty \frac{(-\pi^2)^n}{(2n)!\, 9^n}$ since, in what you wrote, it is not clear whether the $9^n$ is supposed to be in the numerator or denominator.
In either case, notice that we have $(2n)!$ in the denominator, as in the $\cos(x)$ series. Now, because of the " $x^{2n}$" in the $\cos(x)$ series, try to write all the powers with exponent $2n$. We can write $(-\pi^2)^n= (-1)^n \pi^{2n}$ and, of course, $9^n= 3^{2n}$
That is, depending upon whether the " $9^n$" is in the numerator or denominator we get
$\sum_{n=0}^\infty \frac{(-\pi^2)^n\, 9^n}{(2n)!}= \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}\left(3\pi\right)^{2n}= \cos(3\pi)= -1$
or
$\sum_{n=0}^\infty \frac{(-\pi^2)^n}{(2n)!\, 9^n}= \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}\left(\frac{\pi}{3}\right)^{2n}= \cos\left(\frac{\pi}{3}\right)= \frac{1}{2}$.
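A quick numerical sanity check of the $\cos(\pi/3) = \frac{1}{2}$ value (plain Python, independent of the thread):

```python
import math

# Partial sums of sum_{n>=0} (-pi^2)^n / ((2n)! * 9^n) converge to cos(pi/3)
total = sum((-math.pi ** 2) ** n / (math.factorial(2 * n) * 9 ** n)
            for n in range(25))
print(total)  # ~0.5
assert abs(total - math.cos(math.pi / 3)) < 1e-12
```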
http://mathoverflow.net/questions/26297?sort=oldest
## Which finite groups are generated by n involutions?
One of the interesting problems in abstract polytope theory is to determine, for a given finite group, when that group is the automorphism group of a regular abstract polytope. This is equivalent to the following question: Given a finite group G, when is G generated by involutions $\rho_0, \ldots, \rho_n$ such that $(\rho_i \rho_j)^2 = 1$ if $|i - j| \geq 2$ and such that for all $I, J \subset \{0, \ldots, n\}$ we have $\langle \rho_i \mid i \in I \rangle \cap \langle \rho_i \mid i \in J \rangle = \langle \rho_i \mid i \in I \cap J \rangle$?
The last property can be difficult to check, so let's relax that requirement for now. If a finite group G is generated by n involutions such that non-adjacent generators commute, what can we say about the structure or size of G? Of particular interest: what if G is simple?
Here are a few simple observations:
1. For each n, the smallest (abstract) n-polytope has an automorphism group that is isomorphic to the direct product of n copies of $C_2$, corresponding to the trivial Coxeter diagram on n nodes. So a finite group G cannot be the automorphism group of an abstract regular n-polytope for $n > \log_2(|G|)$.
2. A (nontrivial) group generated by involutions has even order.
3. The abelianization of a group generated by involutions such that nonadjacent generators commute is a quotient of the group in (1), the direct product of n copies of $C_2$.
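To make the relaxed requirement concrete (this example is mine, not from the question): $S_4$ is generated by the involutions $\rho_0 = (1\,2)$, $\rho_1 = (2\,3)$, $\rho_2 = (3\,4)$, and the non-adjacent pair $\rho_0, \rho_2$ commutes. A brute-force check in Python:

```python
def compose(p, q):
    """Compose permutations given as tuples: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

# rho0 = (1 2), rho1 = (2 3), rho2 = (3 4), acting on {0, 1, 2, 3}
rho = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]
ident = (0, 1, 2, 3)

assert all(compose(r, r) == ident for r in rho)  # each generator is an involution
rr = compose(rho[0], rho[2])
assert compose(rr, rr) == ident                  # (rho0 rho2)^2 = 1

# breadth-first closure under right multiplication by the generators
group, frontier = {ident}, {ident}
while frontier:
    frontier = {compose(g, r) for g in frontier for r in rho} - group
    group |= frontier
print(len(group))  # 24, so the generators give all of S4
```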
I am having trouble following your simple observations. I thought the smallest $n$-polytope was the $n$-simplex, with symmetric automorphism group. Also, the trivial group has odd order but is generated by involutions (zero of them). – S. Carnahan♦ May 29 2010 at 5:51
In 1), I meant the smallest abstract polytope, not the smallest convex polytope. In 2), I meant nontrivial. I'll update to clarify. – Gabe Cunningham May 31 2010 at 3:23
what is an abstract polytope? – Mariano Suárez-Alvarez May 31 2010 at 4:24
Okay, Wikipedia has enlightened me a bit. I guess the smallest abstract polytope has two cells of each dimension $0,\ldots,n-1$. I don't understand the "strongly connected" axiom well enough to see why this puts a lower bound on the size of the automorphism group. Aren't there abstract $n$-polytopes with trivial automorphism group for all $n>1$? – S. Carnahan♦ May 31 2010 at 6:44
Oh, the regularity hypothesis implies the lower bound. Never mind. – S. Carnahan♦ May 31 2010 at 6:47
## 3 Answers
Have you read "Regular Polytopes" by H.S.M. Coxeter? or any other text book on reflection groups or Coxeter groups?
In a nutshell, the strategy is to write down the Gram matrix of an inner product that is preserved by the reflections and so by the group generated by the reflections. Then the group is finite if and only if this inner product is positive definite.
The groups I'm dealing with are not just Coxeter groups but rather quotients of Coxeter groups. Also, the question here isn't so much "when are these groups finite" but "which finite groups arise this way". For example: is there a presentation for the Monster group satisfying the above criteria? – Gabe Cunningham May 28 2010 at 19:26
To be precise, without the last condition (involving subgroups generated by involutions indexed by $I$ and $J$), they are finite quotients of a single $\textit{right-angled}$ Coxeter group with $n$ generators with RACG graph $K_n\setminus C_n.$ I am mystified why they should classify the automorphism groups of regular abstract polytopes, though. – Victor Protsak May 28 2010 at 21:48
It is only with the last condition that they classify the automorphism groups of regular abstract polytopes. In fact, you can build a regular abstract polytope out of such a group out of the cosets of certain subgroups. The precise construction is given in chapter 2 of Schulte and McMullen's book Abstract Regular Polytopes. – Gabe Cunningham May 29 2010 at 1:44
Really quite a few finite simple groups are generated by three involutions, two of which commute (n=3).
For instance, this paper provides a (revised) proof that almost all sporadic groups have such a generating set:
Mazurov, V. D. "On the generation of sporadic simple groups by three involutions, two of which commute." Sibirsk. Mat. Zh. 44 (2003), no. 1, 193–198; translation in Siberian Math. J. 44 (2003), no. 1, 160–164 MR1967616 DOI:10.1023/A:1022028807652
Its references provide a large list of other simple groups with this property:
• almost all groups of Lie type in char 2: MR1131150
• almost all alternating groups: MR1172472
• almost all groups of Lie type in odd char: MR1454692 (low rank exceptions) and MR1601503 (all large rank)
Since one of the reviews was inaccurate, I quickly checked through the sources I had access to, and the following is probably quite close to accurate: Every finite simple group other than:
• A6, A7, A8, S4(3)=U4(2), M11, M22, M23, McL
• L3(q), U3(q) for all prime powers q
• L4(q) for even prime powers q
has a generating set consisting of three involutions, two of which commute. In particular the Monster group does have such a generating set (a short proof in the first paper, an earlier proof due to Simon Norton in a letter).
I (quickly) verified that A6, A7, A8, S4(3), M11, M22, M23 have no generating set of involutions where non-adjacent involutions commute (even for more than three generators). You can bound n by the 2-rank of the group: having lots of commuting involutions means you have a large elementary abelian subgroup, and so n ≤ 5 for these groups.
Thank you for the detailed answer, Jack. I'll look into these to see if they satisfy the intersection property as well. – Gabe Cunningham May 31 2010 at 15:48
Gabe, Dimitri Leemans does a lot of work on this kind of question - have you got all his stuff?
http://math.stackexchange.com/questions/254356/choosing-a-5-member-team-out-of-12-girls-and-10-boys/254359
Choosing a 5 member team out of 12 girls and 10 boys
We must choose a 5-member team from 12 girls and 10 boys. How many ways are there to make the choice so that there are no more than 3 boys on the team?
The correct answer is $\binom{22}{5} - \binom{12}{1}\binom{10}{4} - \binom{10}{5}$.
I understand the $\binom{22}{5}$ part, but I am confused about the other two parts; I do not know how to get them. Can anyone help me understand how to reach the solution?
2 Answers
From the total number of ways to form a $5$-member team we subtract the counts for the cases in which exactly $4$ boys or exactly $5$ boys are chosen, since those are the teams with more than $3$ boys. The first case happens when we choose $4$ boys and $1$ girl and the second happens when we choose $5$ boys and $0$ girls. This gives us $\binom{12+10}{5}-\binom{12}{1}\binom{10}{4}-\binom{12}{0}\binom{10}{5}$.
Thank you. I didn't think it would be that simple. – Wooooop Dec 9 '12 at 7:53
That solution proceeds by starting with $\binom{22}5$, the total number of possible $5$-person teams, and subtracting the $\binom{12}1\binom{10}4$ teams that have one girl and four boys and the $\binom{10}5$ teams that have five boys.
The problem could also be solved by noting that there are $\binom{12}2\binom{10}3$ teams with two girls and three boys, $\binom{12}3\binom{10}2$ teams with three girls and two boys, $\binom{12}4\binom{10}1$ teams with four girls and one boy, and $\binom{12}5$ teams with five girls, and calculating the sum
$$\binom{12}2\binom{10}3+\binom{12}3\binom{10}2+\binom{12}4\binom{10}1+\binom{12}5\;,$$
but it’s pretty clearly more efficient to calculate
$$\binom{22}5-\binom{12}1\binom{10}4-\binom{10}5\;.$$
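Both computations are easy to verify numerically with the standard library's `math.comb` (Python 3.8+):

```python
from math import comb

# subtracting the bad cases vs. adding the good cases
by_complement = comb(22, 5) - comb(12, 1) * comb(10, 4) - comb(10, 5)
by_cases = (comb(12, 2) * comb(10, 3) + comb(12, 3) * comb(10, 2)
            + comb(12, 4) * comb(10, 1) + comb(12, 5))
print(by_complement, by_cases)  # 23562 23562
```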
I haven't thought about doing it that way! Brian, you have been helping me a lot, and I really appreciate that someone with your knowledge is on here helping others out. Thank you. – Wooooop Dec 9 '12 at 7:57
@kevlar: I enjoy it, and you’re very welcome. Quite a few problems of this general type can be attacked either by subtracting bad cases or adding good ones; it’s worth thinking of both possibilities and then deciding which one will involve less calculation. – Brian M. Scott Dec 9 '12 at 8:00
http://physics.stackexchange.com/questions/tagged/moon?sort=active&pagesize=15
# Tagged Questions
### What is our estimated running speed on Moon's surface?
I was wondering if we have the chance to run on the Moon's surface, how would you expect it look like? I expect our velocity will increase for the same work we do on Earth, but not sure if this will ...
### Is there a simple yet accurate formula for where on Earth the Sun and Moon are directly overhead?
I'm trying to improve a site that shows the region of the Earth currently under daylight, and I need a formula that, given the current time, tells where (latitude/longitude) the sun and moon are ...
### Why are there more lunar maria on the near side than on the far side?
It is widely known that the side of the moon that faces us has a particularly large number of dark patches, which resemble seas of liquid, hence, "maria". It is strange, though, that the far side of ...
### Strange things about new moon [duplicate]
I have some strange and infantile questions about new moon. I want to know how is it possible that the Moon is not visible at night and also at day it is not Sun eclipse? I will explain the problem in ...
### Why don't we see solar and lunar eclipses often?
Since we see the new moon at least once in a month when the Moon gets in between of the Sun and the Moon at the night and as far as I know if this happens during the day, you'll get to see a solar ...
### Why do we always see the same side of the Moon?
I am puzzled why we always see the same side of the Moon even though it is rotating around its own axis apart from revolving around the earth. Shouldn't this only be possible if the Moon is not ...
### Has anyone on Earth ever seen the dark side of the moon and if so where are the pictures? [duplicate]
If the Moon rotates then we should see the dark side right? But as far as I know the Moon only shows one side to Earth, how can this be if it is rotating?
### What kind of cooling mechanism could be used in outer space?
This question arises out of this question on Quora - Apollo 11: 1969 Moon Landing: Did Neil Armstrong really land on the moon? I'm convinced with most of the explanations provided in the first ...
### If the moon was rapid enough would it be able to orbit the earth from a close distance?
If the moon was close in orbit that it's surface was like 100 km away from the earth's surface. And it had a large enough angular velocity will it be able to hold orbit? If this was possible, is ...
### Where can I search for high quality telescope images of Earth's moon?
I am developing a sensor calibration capability that compares a telescope lunar observation to a physics-based radiometric model. I'd like to find some high quality lunar images to test our ...
### Speed of the Moon
Why the motion of the Moon looks very slow in the sky? Doesn't it need the high speed in order to escape the earth's gravity?
### Spot of my light on the moon
This is a funny question, but worth answering. The distance between the moon and the Earth is 384,400 km. The speed of light is 299792.458 km/s. It will take 1.3 seconds (Approx.) for my laser beam to ...
### Why does the moon sometimes appear giant and a orange red color near the horizon?
I've read various ideas about why the moon looks larger on the horizon. The most reasonable one in my opinion is that it is due to how our brain calculates (perceives) distance, with objects high ...
### When will the Moon reach escape velocity?
From what I know, the Moon is accelerating away from the Earth. Do we know when it will reach escape velocity? How do we calculate this?
### What role has our Moon played in creating a persistent geomagnetic field?
The question comes from a comment by Mark Rovetta on my earlier question about the Earth's core going cold.
### Why doesn't the Moon fall upon Earth?
Why doesn't the Moon, or for that matter anything rotating another larger body, ever fall into the larger body?
### Re: recoil (bounce) of objects under varying gravitational forces
First, I don't have a strong knowledge of physics, so please forgive my lack of precision in defining my question. Consider an airless free-fall situation where a steel ball and a balsa ball (with ...
### How does earth carry moon with it, if it can not force moon to touch it by gravitational force?
earth's gravitational force is acting on its moon in such a way that it forces moon to rotate round its orbit by centripetal force and carries it while rotating round the sun by gravitational force. I ...
### What can be the lightest possible moon launch vehicle?
I tried calculating this, but it gets too complicated. Assume, we have a Moon orbit station and ISS on Earth orbit. We have a Moon base. We want to send a tourist for a week on the Moon and back. We ...
### Is it possible that I just saw Jupiter's moons?
Today at about 18:00 I was looking for Venus near the moon and I saw a short bright line. I thought that maybe I was seeing Venus' crescent but it was perpendicular to the crescent of the moon. I then ...
### Does Celestia take the lunar orbit precession into account?
Does the lunar orbit plane rotate (8 year period precession)? Does Celestia take this into account while drawing the moon orbit?
### Is the moon a planet?
Can our moon qualify as a planet? With regard or without regard to the exact definition of the planet, can the moon be considered as planet as Mercury, Venus and Earth etc. not as the satellite of the ...
### Best observing techniques for a Total Lunar Eclipse?
Is there any good tricks to observing a total lunar eclipse that I should be aware of? Just wanting to know what to do to be prepared for the upcoming one, but please post in general for future ...
### How long will our artifacts last in moon & space?
Given all the different space probes and equipment that have been either launched into space or lying on the moon. How long will they last before they get decayed into dust or some unrecognizable ...
### Planet's Moon attrated by sun [closed]
I'm currently writing a code to generate solar system and $N$ number of planets / moons. I use real data to test (earth / sun / moon data). I succeeded in placing the earth and make it orbit around ...
### The Moon during the day
Why do we see the Moon during the day only on certain days and not every day?
### How do the “tidal forces warming moons” theories hold when apart from heating from expansion, there may be also cooling from contraction?
I can understand a temporary heating, from the tital forces exerted on the moon but wouldn't there be cooling as well eventually when particles "give in" to contraction? Wouldn't they eventually net a ...
### Why did the june 2011 lunar eclipse last so long?
It was kind of hard to miss the lunar eclipse this week, although I didn't see it in person (Sod's law means that on every relatively major astronomical event clouds cover where I am). From what I ...
### Why is a new moon not the same as a solar eclipse?
Forgive the elementary nature of this question: Because a new moon occurs when the moon is positioned between the earth and sun, doesn't this also mean that somewhere on the Earth, a solar eclipse ...
### Can a moon have another large body as a satellite, and are there any examples of such?
In my mind, I'm comparing it to the Sun-Earth-Moon system. After all, the Earth is primarily a satellite of the Sun, but the Moon is still gravitationally bound to the Earth. Could something like this ...
### Where to find Lunar Eclipse data
I am wondering whether there's a good resource to find data about upcoming Lunar Eclipses. For example, showing the percentage of the eclipse over time. Such as: 17:23 GMT - 0%, 17:40 GMT - 10%, etc. ...
### Does the Moon's core still contain significant heat?
On earth, using earth-sheltering techniques can significantly reduce the temperature fluctuations on a structure. Would the same statement be true as well on the Moon? Does the Moon's core still ...
### If the Moon had gravity as good as the Earth and a magnetic field could it have supported life?
If the Moon had gravity as good as Earth and a magnetic field could it have supported life? Because if the Moon had gravity, it could have retained water more than is present today on the surface. ...
### How can the Moon have such a strong effect on the ocean?
The gravitational acceleration on Earth is approximately $10 \mathrm{m}/\mathrm{s}^2$. Compared to this, the tidal effect of the Moon's gravity gives a local variation in the acceleration of ...
### Can you tell just from its gravity whether the Moon is above or below you?
If you are on a place of Earth where the Moon is currently directly above or directly below you, you experience a slightly reduced gravitational acceleration because of Moon's gravity. This is what ...
### In what ways can a lunar eclipse occur?
In what ways can a lunar eclipse occur? Also, on what percentage of the Earth are they usually viewable? I am aware that there are multiple configurations that constitute a lunar eclipse (umbral, ...
### Observing Jupiter's non-Galilean moons
What strength of telescope is required to observe some of the non-Galilean moons of Jupiter? My current telescope at 50 magnification resolves the Galilean moons well, but I'm guessing it's far ...
### The Moon is slowly moving away from the earth. Does this mean that a total solar eclipse wasn't possible at some point in earth's history?
When the moon was closer to earth, was it still possible to witness a total solar eclipse millions of years ago? Or was the viewable space so small that it was impractical to even witness it?
### Phases of the moon video
I am an educator, and I am looking for a specific video. In the video, they ask some middle school students and some college graduates about why the moon has phases. Most of the students in both the ...
### How did the radiative flux of each gas giant planet change with respect to time (since their formation)?
We know that each gas giant planet was warmest when it was young. This warmth came from internal heating from both radioactive decay and from gravitational potential energy. This warmth, in turn, ...
### What distinguishes a moon from orbiting space debris? Or in other words, when is a satellite “too small” to be a moon?
The Wikipedia article on Natural Satellites doesn't really give an adequate distinction as to what distinguishes a moon from other orbiting bodies. What I am looking for is a classification that ...
### Does the Earth help stabilize changes in the moon's obliquity as well?
We know that the moon helps stabilize changes in Earth's obliquity. But what about Earth and the moon? Are some of the obliquity-stabilizing effects (of the moon on the Earth) communicated through ...
### How do the day/night temperature variations of moons compare to those of their planets?
Does the planet's eclipse have a significant impact on the flux of light hitting the moon? Does tidal locking have any effect on the day-night difference of the planet?
### Shadow of a Jovian moon over the Great Red Spot
Where can I find pictures of the shadow of any of the Jovian moons partially covering the Great Red Spot? A series of such pictures over time would even be better. The idea is to learn more about the ...
### Why don't we see solar and lunar eclipses often? [duplicate]
Since we see the new moon at least once in a month, when the Moon gets in between the Sun and the Earth, and as far as I know if this happens during the day, you'll get to see a solar ...
### Impact location that created the moon
I was reading an article today about the 1000th orbit of the Lunar Reconnaissance Orbiter, and as many of you know NASA created an animation that simulates the history of the moon. It is speculated ...
### Is there any evidence for the claim that the moon was once part of the Earth?
There is a hypothesis that says a part of the Earth was split away and became the Moon. Is there any scientific evidence for this claim?
### Scientific value of timing total lunar occultations
Is there still scientific value in timing and reporting total lunar occultations? Why would I time total lunar occultations (grazing occultations are out of the question)? When I reported ...
### Stability of moons around tidally locked exoplanets
Can someone send me pointers to work (either theoretical or simulations) showing (in)stability of satellite orbits around tidally locked exoplanets? I want to know firstly if satellite orbits can ...
### How can one predict the length of the Synodic month? Why is it irregular?
I'd like to write a program that uses the exact (down to the second) amount of time from one new moon (or full moon) to the next. Yet, I am told that this period is irregular. Yet, it seems to be ...
http://math.stackexchange.com/questions/94610/smoothing-a-sobolev-function/192888
# Smoothing a Sobolev function
Let $u \in H^1({\mathbb R}^n)$, $n \geq 2$. Let $\varphi \in C^\infty_0({\mathbb R}^n)$ with $\varphi \geq 0$. Let $\eta$ be a smoothing kernel with $\eta \in C^\infty_0({\mathbb R}^n)$, $\eta \geq 0$, $\int \eta \,dx = 1$. For $t > 0$, define $\eta_t$ by $\eta_t(x)=\frac{1}{t^n}\eta(\frac{x}{t})$. Define ${\tilde u}$ by $${\tilde u}(x)= \begin{cases} u(x); &\text{if } \varphi(x)=0, \\ \\ \int_{{\mathbb R}^n} \eta_{\varphi(x)}(y-x) u(y) dy; & \text{if } \varphi(x) > 0. \end{cases}$$
My question is, is ${\tilde u}$ in $H^1({\mathbb R}^n)$?
Fixed it. Using the equation environment is really bad TeXing. In this site you can just use . If you're TeXing your own stuff, I suggest either the align environment or the gather environment. – Patrick Da Silva Dec 28 '11 at 3:18
I do not know the answer. May I ask you why you are considering this strange "regularization"? – Siminore Apr 23 '12 at 17:09
If you formally differentiate $\tilde{u}$, are you sure you can avoid troubles if $\varphi$ approaches zero? It goes in the denominator, and, in principle, this may produce a singularity. – Siminore Apr 23 '12 at 17:13
## 1 Answer
It's not an answer, but there are just some ideas. Maybe it will help.
We can write $$\widetilde u(x)=\int_{\Bbb R^n}\eta(t)u(x+\varphi(x)t)dt,$$ since it's true when $\varphi(x)=0$, and when it's not the case we use a substitution.
When $u$ is a test function, it appears that $\widetilde u\in L^2(\Bbb R^n)$ and $$\partial_j\widetilde u(x)=\int_{\Bbb R^n}\eta(t)\sum_{k=1}^n\left(\delta_{jk}+\partial_j\varphi(x)\,t_k\right)\partial_k u(x+\varphi(x)t)\,dt,$$ which proves that $\widetilde u\in H^1(\Bbb R^n)$.
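The change of variables behind the formula $\widetilde u(x)=\int\eta(t)\,u(x+\varphi(x)t)\,dt$ is easy to sanity-check numerically. Below is a minimal one-dimensional sketch (the question takes $n \ge 2$, but the substitution $y = x + \varphi(x)t$ is the same); the concrete choices of $u$, $\eta$, and the value of $\varphi(x)$ are illustrative only, not taken from the question.

```python
import numpy as np

def bump(s):
    """The standard smooth mollifier profile, supported in (-1, 1)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    m = np.abs(s) < 1
    out[m] = np.exp(-1.0 / (1.0 - s[m] ** 2))
    return out

def integrate(vals, grid):
    """Trapezoid rule on a uniform grid."""
    dx = grid[1] - grid[0]
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

s = np.linspace(-1, 1, 20001)
Z = integrate(bump(s), s)                    # normalize so that ∫ eta = 1
eta = lambda t: bump(t) / Z

u = lambda y: np.sin(3 * y)                  # a smooth stand-in for u
x, phi_x = 0.4, 0.25                         # a point where phi(x) > 0

# direct mollification: ∫ eta_t(y - x) u(y) dy with t = phi(x)
y = np.linspace(x - phi_x, x + phi_x, 20001)
direct = integrate(eta((y - x) / phi_x) * u(y) / phi_x, y)

# substituted form: ∫ eta(t) u(x + phi(x) t) dt
t = np.linspace(-1, 1, 20001)
subst = integrate(eta(t) * u(x + phi_x * t), t)

assert abs(direct - subst) < 1e-8
```

The two quadratures use grids related by exactly the affine map $y = x + \varphi(x)t$, so they agree to floating-point accuracy, mirroring the exact substitution.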
http://mathoverflow.net/questions/61684/generalized-vector-bundles-with-singularities-on-riemann-surfaces
## Generalized vector bundles with singularities on Riemann surfaces
Let $X$ be a Riemann surface of genus $g \geq 2$ or in other words a complex curve.
Let $P_1, \ldots, P_m$ be points in $X$ and let $E \rightarrow X$ be a surjective map that is a complex $n$-dimensional vector bundle map on $X \setminus \{P_1, \ldots, P_m\}$, since at $P_1, \ldots, P_m$ the local triviality condition is not satisfied.
I am looking for a general notion for this phenomenon, in the spirit of "locally free sheaves correspond to vector bundles". I am not sure whether coherent sheaves are the right objects, because I do not want the fibre dimension to vary but only to lose local triviality at a finite number of points.
Do you want the fibers over the $P_i$'s to have a linear structure? Do you want $E$ to be an analytic space? Do you require the map $E\to X$ to be continuous/holomorphic? – Qfwfq Apr 14 2011 at 12:15
may be parabolic bundles are what you are looking for ? – Niels Apr 15 2011 at 7:14
http://mathhelpforum.com/advanced-statistics/31855-clearification-poisson-distribution.html
# Thread:
1. ## Clarification on Poisson Distribution
Q: A parking lot has 2 entrances. Cars arrive at entrances 1 and 2 according to Poisson distributions with averages of 3/hour and 4/hour, respectively. What is the probability that a total of 3 cars will arrive at the parking lot in a given hour? (Assume arrivals are independent at the 2 entrances.)
Solution:
$$\lambda_p = \lambda_1 + \lambda_2 = 3 + 4 = 7$$
$$P(3) = \frac{7^3}{3!} e^{-7} = 0.0521295$$
Is this correct?
2. Originally Posted by lllll
Q: A parking lot has 2 entrances. Cars arrive at entrances 1 and 2 according to Poisson distributions with averages of 3/hour and 4/hour, respectively. What is the probability that a total of 3 cars will arrive at the parking lot in a given hour? (Assume arrivals are independent at the 2 entrances.)
Solution:
$$\lambda_p = \lambda_1 + \lambda_2 = 3 + 4 = 7$$
$$P(3) = \frac{7^3}{3!} e^{-7} = 0.0521295$$
Is this correct?
Yes
RonL
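The fact being used — that a sum of independent Poisson variables is Poisson with the summed rate — can be checked numerically. A short sketch (the rates are the ones from the problem; the conditioning cross-check is my own addition):

```python
import math

lam1, lam2 = 3.0, 4.0          # arrival rates at entrances 1 and 2
lam = lam1 + lam2              # total arrivals ~ Poisson(lam1 + lam2)

def poisson_pmf(k, rate):
    """P(N = k) for N ~ Poisson(rate)."""
    return rate ** k * math.exp(-rate) / math.factorial(k)

p3 = poisson_pmf(3, lam)

# cross-check by conditioning on how many of the 3 cars used entrance 1
p3_split = sum(poisson_pmf(k, lam1) * poisson_pmf(3 - k, lam2)
               for k in range(4))

assert abs(p3 - 0.0521295) < 1e-5
assert abs(p3 - p3_split) < 1e-12
```

The split sum equals $e^{-7}\sum_k \binom{3}{k}3^k 4^{3-k}/3! = e^{-7}\,7^3/3!$, which is exactly the thread's formula.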
http://matthewkahle.wordpress.com/2010/09/
The fundamental group of random 2-complexes
Eric Babson, Chris Hoffman, and I recently posted major revisions of our preprint, “The fundamental group of random 2-complexes” to the arXiv. This article will appear in Journal of the American Mathematical Society. This note is intended to be a high level summary of the main result, with a few words about the techniques.
The Erdős–Rényi random graph $G(n,p)$ is the probability space on all graphs with vertex set $[n] = \{ 1, 2, \dots, n \}$, with edges included with probability $p$, independently. Frequently $p = p(n)$ and $n \to \infty$, and we say that $G(n,p)$ asymptotically almost surely (a.a.s) has property $\mathcal{P}$ if $\mbox{Pr} [ G(n,p) \in \mathcal{P} ] \to 1$ as $n \to \infty$.
A seminal result of Erdős and Rényi is that $p(n) = \log{n} / n$ is a sharp threshold for connectivity. In particular if $p > (1+ \epsilon) \log{n} / n$, then $G(n,p)$ is a.a.s. connected, and if $p < (1- \epsilon) \log{n} / n$, then $G(n,p)$ is a.a.s. disconnected.
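The connectivity threshold is easy to see in simulation. A minimal sketch (the choices of $n$, the constants $1.5$ and $0.5$, and the trial counts are arbitrary illustrative values, not taken from the paper):

```python
import math
import random

def gnp_connected(n, p, rng):
    """Sample G(n,p) and test connectivity with a union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    components = n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    components -= 1
    return components == 1

rng = random.Random(1)
n, trials = 400, 20
above = sum(gnp_connected(n, 1.5 * math.log(n) / n, rng) for _ in range(trials))
below = sum(gnp_connected(n, 0.5 * math.log(n) / n, rng) for _ in range(trials))
# above the threshold almost every sample is connected; below it almost none is
assert above > below
```

Below the threshold the expected number of isolated vertices is about $n^{1/2}$, so a connected sample is essentially never seen, while above it the expectation is about $n^{-1/2}$.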
Nathan Linial and Roy Meshulam introduced a two-dimensional analogue of $G(n,p)$, and proved an analogue of the Erdős-Rényi theorem. Their two-dimensional analogue is as follows: let $Y(n,p)$ denote the probability space of all 2-dimensional (abstract) simplicial complexes with vertex set $[n]$ and edge set ${[n] \choose 2}$ (i.e. a complete graph for the 1-skeleton), with each of the ${ n \choose 3}$ triangles included independently with probability $p$.
Linial and Meshulam showed that $p(n) = 2 \log{n} / n$ is a sharp threshold for vanishing of first homology $H_1(Y(n,p))$. (Here the coefficients are over $\mathbb{Z} / 2$. This was generalized to $\mathbb{Z} /p$ for all $p$ by Meshulam and Wallach.) In other words, once $p$ is much larger than $2 \log{n} / n$, every (one-dimensional) cycle is the boundary of some two-dimensional subcomplex.
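For small $n$ one can compute $\dim H_1(Y(n,p); \mathbb{Z}/2)$ directly from the ranks of the boundary maps: with a complete 1-skeleton, $\dim H_1 = \left(\binom{n}{2} - (n-1)\right) - \operatorname{rank}\partial_2$. A minimal sketch (not the authors' code; the GF(2) rank routine and parameter choices are my own):

```python
import itertools
import random

def gf2_rank(rows, ncols):
    """Rank over GF(2) of rows given as integer bitmasks."""
    basis = [0] * ncols            # basis[b] has leading bit b, or 0
    rank = 0
    for row in rows:
        cur = row
        while cur:
            b = cur.bit_length() - 1
            if basis[b] == 0:
                basis[b] = cur
                rank += 1
                break
            cur ^= basis[b]
    return rank

def betti1_mod2(n, p, seed=0):
    """dim H_1(Y(n,p); Z/2): complete 1-skeleton, each triangle kept w.p. p."""
    rng = random.Random(seed)
    edges = list(itertools.combinations(range(n), 2))
    eidx = {e: i for i, e in enumerate(edges)}
    d2 = []                        # boundary of each kept triangle, as an edge bitmask
    for a, b, c in itertools.combinations(range(n), 3):
        if rng.random() < p:
            d2.append((1 << eidx[(a, b)]) | (1 << eidx[(a, c)]) | (1 << eidx[(b, c)]))
    # complete 1-skeleton => rank of d_1 is n - 1
    return len(edges) - (n - 1) - gf2_rank(d2, len(edges))
```

With $p=0$ this returns the cycle rank $\binom{n}{2} - n + 1$ of the complete graph, and with $p=1$ it returns $0$, matching the full 2-skeleton of the simplex.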
Babson, Hoffman, and I showed that the threshold for vanishing of $\pi_1 (Y(n,p))$ is much larger: up to some log terms, the threshold is $p = n^{-1/2}$. In other words, you must add a lot more random two-dimensional faces before every cycle is the boundary of not just any subcomplex, but of the continuous image of a topological disk. A precise statement is as follows.
Main result Let $\epsilon >0$ be arbitrary but constant. If $p \le n^{-1/2 - \epsilon}$ then $\pi_1 (Y(n,p)) \neq 0$, and if $p \ge n^{-1/2 + \epsilon}$ then $\pi_1 (Y(n,p)) = 0$, asymptotically almost surely.
It is relatively straightforward to show that when $p$ is much larger than $n^{-1/2}$, a.a.s. $\pi_1 =0$. Almost all of the work in the paper is showing that when $p$ is much smaller than $n^{-1/2}$ a.a.s. $\pi_1 \neq 0$. Our methods depend heavily on geometric group theory, and on the way to showing that $\pi_1$ is non-vanishing, we must show first that it is hyperbolic in the sense of Gromov.
Proving this involves some intermediate results which do not involve randomness at all, and which may be of independent interest in topological combinatorics. In particular, we must characterize the topology of sufficiently sparse two-dimensional simplicial complexes. The precise statement is as follows:
Theorem. If $\Delta$ is a finite simplicial complex such that $f_2 (\sigma) / f_0(\sigma) \le 1/2$ for every subcomplex $\sigma$, then $\Delta$ is homotopy equivalent to a wedge of circles, spheres, and projective planes.
(Here $f_i$ denotes the number of $i$-dimensional faces.)
Corollary. With hypothesis as above, the fundamental group $\pi_1( \Delta)$ is isomorphic to a free product $\mathbb{Z} * \mathbb{Z} * \dots * \mathbb{Z} / 2 * \mathbb{Z}/2$, for some number of $\mathbb{Z}$'s and $\mathbb{Z}/2$'s.
It is relatively easy to check that if $p = O(n^{-1/2 - \epsilon})$ then with high probability subcomplexes of $Y(n,p)$ on a bounded number of vertices satisfy the hypothesis of this theorem. (Of course $Y(n,p)$ itself does not, since it has $f_0 = n$ and roughly $f_2 \approx n^{5/2}$ as $p$ approaches $n^{-1/2}$.)
But the corollary gives us that the fundamental group of small subcomplexes is hyperbolic, and then Gromov’s local-to-global principle allows us to patch these together to get that $\pi_1 ( Y(n,p) )$ is hyperbolic as well.
This gives a linear isoperimetric inequality on $\pi_1$ which we can "lift" to a linear isoperimetric inequality on $Y(n,p)$.
But if $Y(n,p)$ were simply connected and satisfied a linear isoperimetric inequality, then every $3$-cycle would be contractible using a bounded number of triangles, and this is easy to rule out with a first-moment argument.
There are a number of technical details that I am omitting here, but hopefully this at least gives the flavor of the argument.
An attractive open problem in this area is to identify the threshold $t(n)$ for vanishing of $H_1( Y(n,p), \mathbb{Z})$. It is tempting to think that $t(n) \approx 2 \log{n} / n$, since this is the threshold for vanishing of $H_1(Y(n,p), \mathbb{Z} / m)$ for every integer $m$. This argument would work for any fixed simplicial complex, but it doesn't apply in the limit; Meshulam and Wallach's result holds for fixed $m$ as $n \to \infty$, so in particular it does not rule out torsion in integer homology that grows with $n$.
As far as we know at the moment, no one has written down any improvements to the trivial bounds on $t(n)$, that $2 \log{n} / n \le t(n) \le n^{-1/2}$. Any progress on this problem will require new tools to handle torsion in random homology, and will no doubt be of interest in both geometric group theory and stochastic topology.
http://mathoverflow.net/questions/51531/theorems-that-are-obvious-but-hard-to-prove/51532
Theorems that are ‘obvious’ but hard to prove
There are several well-known mathematical statements that are 'obvious' but false (such as the negation of the Banach--Tarski theorem). There are plenty more that are 'obvious' and true. One would naturally expect a statement in the latter category to be easy to prove -- and they usually are. I'm interested in examples of theorems that are 'obvious', and known to be true, but that lack (or appear to lack) easy proofs.
Of course, 'obvious' and 'easy' are fuzzy terms, and context-dependent. The Jordan curve theorem illustrates what I mean (and motivates this question). It seems 'obvious', as soon as one understands the definition of continuity, that it should hold; it does in fact hold; but all the known proofs are surprisingly difficult.
Can anyone suggest other such theorems, in any areas of mathematics?
Perhaps the isoperimetric inequality. – Péter Komjáth Jan 9 2011 at 11:28
Perhaps the Kepler Conjecture: en.wikipedia.org/wiki/Kepler_conjecture – Aaron Meyerowitz Jan 9 2011 at 11:57
A former colleague of mine used to say (to students), "A theorem is obvious if a proof instantly springs to mind," a maxim I like a lot. I think what you are talking about is theorems where a plausible argument instantly springs to mind but falls short of being a proof. – gowers Jan 9 2011 at 15:12
I am tempted to vote to close as subjective and argumentative given the comments on the existing answers. Can we narrow the definition of 'obvious' being used? Something like gowers' definition is good, but depends a lot on one's training. Perhaps something like "if you asked an undergraduate if it were true, they'd bet yes." – Qiaochu Yuan Jan 9 2011 at 16:46
I disagree that the Jordan curve theorem is "obvious" but admits a surprisingly difficult proof. The proof for curves with reasonable regularity is not difficult, while the truth of the theorem for wild curves is not so obvious, I think. (At least, I think it is reasonable to argue that most people's sense of this being intuitively clear comes from imagining a rather regular curve in the plane, not a wild one.) – Emerton Jan 10 2011 at 16:46
## 52 Answers
If $I_1,I_2,\dots$ are intervals of real numbers with lengths that sum to less than 1, then their union cannot be all of $[0,1]$. It is quite common for people to think this statement is more obvious than it actually is. (The "proof" is this: just translate the intervals so that the end point of $I_1$ is the beginning point of $I_2$, and so on, and that will clearly maximize the length of interval you can cover. The problem is that this argument works just as well in the rationals, where the conclusion is false.)
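The failure over the rationals is easy to make concrete: enumerate the rationals in $[0,1]$ and put an interval of length $\epsilon/2^{k+1}$ around the $k$-th one; the union covers every rational yet has total length at most $\epsilon$. A numerical sketch of the first 1000 steps (the enumeration order and the value of $\epsilon$ are arbitrary choices for illustration):

```python
from fractions import Fraction

def rationals_in_unit_interval(n):
    """First n distinct rationals in [0,1], enumerated by denominator."""
    seen, out = set(), []
    q = 1
    while len(out) < n:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(r)
                if len(out) == n:
                    break
        q += 1
    return out

def covered_length(intervals):
    """Total length of a union of intervals given as (lo, hi) pairs."""
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

eps = 0.1
qs = rationals_in_unit_interval(1000)
ivals = [(float(r) - eps / 2 ** (k + 2), float(r) + eps / 2 ** (k + 2))
         for k, r in enumerate(qs)]
# every one of the first 1000 rationals is covered, yet the union
# has length below eps, so most of [0,1] remains uncovered
assert covered_length(ivals) < eps
```

This is exactly why the "translate the intervals" argument cannot work: it proves nothing that would distinguish $[0,1]$ from $[0,1]\cap\mathbb{Q}$.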
Can you expand on this? – Lennart Meier Jan 9 2011 at 20:40
@Lennart: enumerate the rationals, and take an interval of length $\epsilon / 2^n$ around the $n$th rational. You get a countable collection of intervals with lengths summing to $\epsilon$ whose union contains all the rationals (never mind just those in $[0,1]$). – Chris Eagle Jan 9 2011 at 21:07
@Joe and @Harry, it is of course trivial if you know that there is a countably additive measure that extends lengths of intervals. But that result is not trivial, and many people are tempted to think that the simple statement about intervals is just plain obvious. – gowers Jan 13 2011 at 12:16
Using Lebesgue measure, you can do it. However, you probably need to know this fact (or something equivalent) in order to construct Lebesgue measure. The proof in my book uses compactness and reduces to the case of finitely many intervals. (Measure, Topology, and Fractal Geometry, Lemma 5.1.1) – Gerald Edgar Mar 4 2011 at 17:50
The proof (any proof) of Heine Borel can be adapted to simultaneously prove the statement here (along with Heine Borel at the same time). For example, consider a covering of $[0,1]$ by open intervals and let $S$ be the set of $x$ so that $[0,x]$ is covered by a finite union of these intervals such that $\sum_{k=1}^n |I_k \cap (- \infty,x]| > x$. $S$ is nonempty since $0 \in S$, and the supremum of $S$ can't be less than $1$. The other proof of Heine Borel (shrinking closed intervals intersect) can also be adapted to give a direct proof of this fact (and countless other basic facts). – Phil Isett Aug 27 2011 at 3:39
$\mathbb R^n$ is not homeomorphic to $\mathbb R^m$ unless $m = n$.
1+. I think this is the key point of the answer of Georges Elencwajg. It's not so really about one specific topological dimension and its computation, but rather that we can distinguish the affine spaces at all. I wonder how many decades (centuries?) mathematicians were convinced of this fact without having a proof? – Martin Brandenburg Jan 12 2011 at 8:01
The most deadly example I know is the Hauptvermutung in dimensions 2 and 3 (in dimension $>3$, it is the ultimate "obvious but false" theorem). The Hauptvermutung, or "Main Conjecture" states that any two triangulations of a polyhedron are combinatorially equivalent, i.e. they become isomorphic after subdivision.
The Hauptvermutung is so obvious that it gets taken for granted everywhere, and most of us learn algebraic topology without ever noticing this huge gap in its foundations (of the text-book standard simplicial approach). It is implicit every time one states that a homotopy invariant of a simplicial complex, such as simplicial homology, is in fact a homotopy invariant of a polyhedron, unless one also proves independence relative to triangulation.
The Hauptvermutung for 2-manifolds was proven by Radó, and for 3-manifolds by Moïse in 1953. It is a genuinely deep, difficult theorem.
Edit: This answer is essentially taken from Page 4 of The Hauptvermutung Book.
At least since the introduction of singular homology, there is no gap in the foundations of algebraic topology. But I agree that the Hauptvermutung has a feeling like being obvious. – Lennart Meier Jan 10 2011 at 18:46
By "subdivision," I believe you mean PL-subdivision, making the Hauptvermutung a little less obvious. An essentially equivalent statement to its falsity in dimension 5 that I find very unintuitive is that there exists a 4-dimensional simplicial complex $K$ which is not a triangulation of a manifold, but whose suspension $\Sigma K$ is a triangulation of a 5-sphere. – Richard Stanley Jan 12 2011 at 0:59
How about the fact that a sphere is the surface of minimal area that bounds a given volume? (BTW, if it isn't geometrically obvious to you, and you understand a little physics and about surface tension, then the roundness of bubbles is a "proof".)
This is my idea of an "obvious" answer that is hard to prove, i.e. one that has a physically persuasive justification. – roy smith Jan 10 2011 at 5:17
The fact that bubbles are round only demonstrates a local, not gloabal, minimum, surely? The bubble can't explore arbitrary regions of phase space: it can only go downhill. – Max Jan 10 2011 at 9:10
@Max: If it was only a local minimum then some bubbles would be round, and others would form different shapes. – George Lowther Jan 19 2011 at 23:31
@George: suppose you have a double-well model with one well lower than the other. Also suppose that the physical states are (for whatever reason) always created in the higher well and that the barrier between the wells is higher than the fluctuations in the system at the given temperature. Surely then the physical system wouldn't see the global minimum. So this argument doesn't work even at the physical level (never mind rigorous treatment)$\ldots$ – Marek Jul 29 2011 at 21:09
The Jordan curve theorem! Of course in this case the real problem is the meaning of "closed curve".
Wasn't this already mentioned as the motivating example in the OP? – Todd Trimble Jan 9 2011 at 16:10
@Sean: I think "easy" here should be taken to mean "without machinery." With enough machinery, everything is simple... – Qiaochu Yuan Jan 9 2011 at 17:30
@Sean: Maybe they are hard just because the machinery to make it simple hasn't been invented yet :) – José Figueroa-O'Farrill Jan 9 2011 at 18:14
It is not guaranteed by a silly counting argument that there are short simple statements with irreducibly hard proofs. – gowers Jan 10 2011 at 7:50
But there aren't infinitely many that are readable within a human lifetime, the criterion you were using for proofs. – gowers Jan 10 2011 at 20:13
It takes Russell and Whitehead several hundred pages to prove that $1+1=2$ in Principia Mathematica. They then say that "the above proposition is occasionally useful."
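For contrast, in a modern proof assistant the arithmetic content is definitional; a Lean 4 one-liner (this relies on built-in natural-number literals, so it is not a fair comparison with Russell and Whitehead's from-scratch development):

```lean
theorem one_plus_one : 1 + 1 = 2 := rfl
```

Both sides reduce to the same numeral, so reflexivity closes the goal.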
I've always wondered about the next sentence: "It is used at least three times, in $\ast 113 \cdot 66$ and $\ast 120 \cdot 123 \cdot 472$." [my emphasis]. Does this express some sort of dry humour or is it meant seriously? – Theo Buehler Jan 12 2011 at 3:58
There are a number of facts in multivariable calculus that are obvious but hard to prove. For instance, the change-of-variables formula in a multiple integral is very easy to justify heuristically by talking about little parallelepipeds but troublesome (as I discovered to my cost in a course I once gave) to justify rigorously. And the same goes for the inverse function theorem: although the proof can be made quite transparent and the need for continuous differentiability makes good intuitive sense, there seems to be an irreducible core of actual work needed (in particular, the use of a fixed-point theorem to replace the use of the intermediate-value theorem in the 1D case).
I'd be quite glad to be told that this answer was wrong. If anyone knows of a link to an exposition of these results, particularly the first, that does proper justice to their intuitive obviousness, I'd be very pleased to hear about it.
I'm afraid you're right. The change of variables formula for multiple integrals is a notorious "Is it really this hard?" moment in mathematical exposition. – Pete L. Clark Jan 11 2011 at 18:14
The inverse and implicit function theorems are actually equivalent. – Paul Siegel Jan 15 2011 at 15:57
That $\mathbb R^n$ has topological dimension $n$. In a similar vein that affine space $\mathbb A^n_k$ over a field $k$ has Zariski dimension $n$.
Isn't the problem how to **define** topological dimension? With the "right" definition, is the proof hard? – Daniel Moskovich Jan 9 2011 at 15:21
Dear Daniel, indeed defining topological dimension required amazing ingenuity, from Lebesgue among others. He later explained that his intuition came from contemplating a brick wall and noticing that some points had to be covered by 3(=2+1) bricks! However even granting this, proving that $\mathbb R^n$ has dimension $n$ remains a difficult problem needing techniques of algebraic topology to be solved, even though the definition is on the level of general topology. (To be continued) – Georges Elencwajg Jan 9 2011 at 16:11
(Continuation) This is confirmed by Munkres in his well-known text-book Topology. After 10 pages (§50) devoted to dimension theory, he concludes "we do not ask you to prove...that the topological dimension of an $m$-manifold is precisely $m$. And for good reason; the proof requires the tools of algebraic topology." Another indication of hardness is the "invariance of domain" theorem: non-empty open subsets of $\mathbb R^n$ and $\mathbb R^m$ are never homeomorphic unless $n=m$.This does not involve the definition of "dimension" but is quite a difficult theorem (first proved by Brouwer). – Georges Elencwajg Jan 9 2011 at 16:32
The trefoil knot is knotted.
I don't think this is hard to prove, as long as you don't insist on a "useful" proof. See Tietze's 1908 proof. The moment you have any presentation of the fundamental group (Wirtinger/Dehn/whatever), you find a representation onto a non-abelian group and you finish. – Daniel Moskovich Jan 9 2011 at 18:37
@Daniel: that seems like another example of hiding behind machinery to me. You at least need to know that fundamental groups are homotopy invariants and what the fundamental group of the circle is. If you wrote out the complete proofs of all the results you're depending on, would you still consider the resulting proof "easy"? – Qiaochu Yuan Jan 9 2011 at 20:37
I strongly disagree with what everyone is saying. There's the formulation, then there's the proof. The formulation says (or at least immediately implies) that a certain 3-manifold (the trefoil complement) is not the solid torus. The proof is about as straightforward as any statement about 3-manifolds could possibly be. Moreover, the proof was found almost immediately when the techniques existed- it was never an "open problem". So yes- if I wrote out all the proofs of everything I use, I'd consider it long but easy. Indeed, I think the fundamental group proof is easiest. – Daniel Moskovich Jan 9 2011 at 20:57
Modulo the Reidemeister moves I think an easier proof is noticing that the trefoil is $\mathbb{F}_2$ colorable and the unknot isn't. – solbap Jan 9 2011 at 21:19
8
@Daniel: in the context of the OP's clarification in the comments ("I want theorems for which a plausible argument springs to mind at the level of sophistication required to understand the statement, but for which a proof requires a higher level of sophistication.") I think it is totally reasonable to argue that the statement of the result requires a much lower level of sophistication than any of its proofs. (The plausible argument here is something like "if you try it with a physical trefoil knot it's obviously knotted.") – Qiaochu Yuan Jan 9 2011 at 23:48
There is a whole class of examples of the following general form: There is an obvious candidate for the solution to an optimization problem, and the obvious candidate is in fact best, but it's very hard to prove that it's best. Two of the examples mentioned in the comments—isoperimetric inequalities and sphere packing—fall into this class. Lower bounds in computational complexity furnish other examples, although our knowledge in this area is so pitiful that the best examples are still conjectural.
I like these examples better than the topological ones like the Jordan curve theorem and the invariance of domain, because there is room to argue that (for example) what makes the Jordan curve theorem hard is that modern mathematics has an exceedingly general definition of a Jordan curve that includes monsters that are non-rectifiable, nowhere differentiable, etc. The "man in the street" doesn't have these monsters in mind when judging that the Jordan curve theorem is obvious. In contrast, if we take something like "the kissing number of the sphere is 12," the man-in-the-street's conception of a counterexample is really no different from the mathematician's. It's just that the man in the street will be convinced after a few minutes of playing with velcro balls and the mathematician won't.
-
1
Right. Any Jordan curve the "man in the street" is thinking of is basically piecewise linear. – Qiaochu Yuan Jan 9 2011 at 22:32
12
Well, unless they've never seen a circle, piecewise $C^\infty$ ... – Yemon Choi Jan 10 2011 at 1:40
2
I don't draw PL-approximations to circles... – Steven Gubkin Jan 10 2011 at 15:45
23
I don't even draw PL approximations to polygons! – George Lowther Jan 11 2011 at 2:54
1
There is a trick of using your pinky knunckle as the point of a hand compass which allows you to draw "perfect" circles without the aid of a metal compass. I always impress students during office hours with this trick. – Steven Gubkin Jan 11 2011 at 20:55
The consistency of Peano Arithmetic. This is provably hard to prove, and I think that most would agree that it is obviously true (if not, why are we still doing mathematics?)
-
9
Peano arithmetic doesn't have to be consistent in order for us to meaningfully do mathematics. It just has to have the property that the shortest contradiction is so long that it is impossible to write down before the end of the universe! – Qiaochu Yuan Jan 9 2011 at 23:00
9
Wait a minute. If there is a contradiction, then everything is provable, right? So you can get nice short contradictions. I guess it could still be that the shortest proof of a contradiction must be very long. – Jeff Strom Jan 9 2011 at 23:13
2
The consistency of Peano arithmetic is not "hard to prove" in the intended sense of "hard." Logical strength is not the same as psychological difficulty or length of proof or any plausible notion of the hardness of finding a proof. – Timothy Chow Jan 10 2011 at 22:43
2
Gentzen's proof cheats as well; if you harbor serious doubts about the consistency of PA then you're likely to harbor doubts about such a strong induction principle. Cheating is inevitable. That doesn't mean that it's hard to cheat. – Timothy Chow Jan 11 2011 at 14:56
Subgroups of free groups are free. The plausible argument is that any relation satisfied in a subgroup must somehow translate to a relation satisfied in the larger group. Nowadays I guess most people see the proof which proceeds through the fact that the fundamental group of a graph is free, but it's not trivial to set up this machinery (even if one uses a purely combinatorial definition of fundamental group). I don't know how hard the algebraic proof is; perhaps it's easier.
-
6
The original algebraic proof (due to Nielsen) is not that bad. The basic idea is that you show that any subgroup has a generating set $S$ such that if $w$ is a reduced word in $S$, then when you plug the free group elements corresponding to elements of $S$ into $w$, at least one letter survives from each "letter" in $w$. The argument that you can do this is (slightly) similar to row reduction in matrices. There's a readable account of this towards the beginning of Lyndon and Schupp's classic book on combinatorial group theory that is well worth reading. – Andy Putman Jan 10 2011 at 4:21
10
For me, the Theorem that subgroups of free groups are free, is not obvious at all. – Martin Brandenburg Jan 12 2011 at 8:03
3
For me neither. The plausible argument above also "shows" that submonoids of a free monoid are free. – Jan Weidner Aug 3 2011 at 16:01
1
Well, I didn't say it was right! – Qiaochu Yuan Aug 3 2011 at 16:50
I think that the ergodic theorem is a good example of this. In down-to-earth terms it says that if you have a box full of gas then the average velocity of all of the gas particles at a given time (the space average) equals the average velocity of a single given particle over time (the time average). This can be regarded as at least a partial theoretical justification for the fact that gas in a container reaches an equilibrium state over time. And what could be more obvious than that?
Yet the ergodic theorem revealed itself as frustratingly difficult to prove. You might think that the challenge would be to just come up with the right precise formulation of the problem; indeed, I don't think it was until people started to identify the measure theoretic underpinnings of probability theory that this was really possible. But while any student with a semester of measure theory under his/her belt can understand the modern formulation of the pointwise ergodic theorem, I highly doubt that very many could supply a correct proof without a hint. For some reason, the proof simply demands an ingenious combinatorial trick.
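For the simplest ergodic systems the statement can at least be checked numerically. The following is a small editorial Python sketch (not part of the original answer): it uses an irrational rotation of the circle, a standard ergodic example, and compares the Birkhoff time average of $f(x)=\cos(2\pi x)$ along one orbit with its space average, which is $0$.

```python
import math

# Irrational rotation x -> x + alpha (mod 1): a standard ergodic system.
# For f(x) = cos(2*pi*x) the space average over [0, 1) is 0, and the
# ergodic theorem says the time average along an orbit converges to it.
alpha = math.sqrt(2) - 1   # irrational rotation number
x = 0.123                  # arbitrary starting point
n_steps = 100_000

total = 0.0
for _ in range(n_steps):
    total += math.cos(2 * math.pi * x)
    x = (x + alpha) % 1.0

time_average = total / n_steps
print(time_average)  # very close to the space average 0
```

For rotations this convergence follows from an elementary geometric-series estimate; the hard content of the pointwise ergodic theorem is that it holds almost everywhere for a general measure-preserving transformation.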
-
9
I would not view the ergodic theorem as obvious. – Igor Rivin Jan 9 2011 at 21:22
Chess is not a forced win for black.
-
13
@Professor Borcherds: if this is a theorem, could you give some indications of the proof? – Pete L. Clark Jan 9 2011 at 17:40
1
All the more surprising, since the game which is just like chess, but where each side gets to make two moves in a row, is provably not a forced win for black (proof: exercise) – Igor Rivin Jan 9 2011 at 21:24
8
Along the same lines is, "Chess is a forced win for black if White gives queen odds." – Timothy Chow Jan 9 2011 at 21:49
3
@Tim van Beek: no, but there's an obvious reason white can at least draw. An example of a silly chess variant where white actually has a demonstrable winning strategy is when the first check wins the game. – Chris Eagle Jan 9 2011 at 22:56
As an undergraduate, I remember having precisely this feeling when encountering (a version of) the weak Nullstellensatz, which says that the maximal ideals in $\mathbb C[x_1,\ldots, x_n]$ are the sets of all polynomials vanishing at a fixed point $(a_1,\ldots, a_n)$. This must be pretty obvious: what else could a maximal ideal be?
However, the statement now does not look so "obvious" to me anymore... and I don't know if this is a good or a bad thing :-)
-
1
It's straightforward to see that any maximal ideal with residue field C has this property, so the weak Nullstellensatz is equivalent to the statement that all residue fields are C. I don't think this is "obvious," exactly. Certainly similar obvious statements are false. For example, one might guess that all residue fields of an infinite direct product of copies of R are equal to R, and this is completely wrong. – Qiaochu Yuan Jan 9 2011 at 20:42
A theorem is "obvious" when one does not see an immediate obstruction (for instance a counter-example). Of course it may be true or false, depending on whether you are lucky or not. An obviously true theorem whose proof is notoriously difficult is the existence of solutions to linear PDEs $P(i\nabla_x)u=f$ for constant coefficient operators (the Malgrange-Ehrenpreis theorem). I don't mean elliptic, hyperbolic, parabolic PDEs, or PDEs of principal type. No, just PDEs. It is not only true but somehow sharp, because it becomes false when the coefficients are non-constant, even with analytic coefficients (H. Lewy's counter-example).
Dick: At first glance, the Fourier transform reduces the question to the resolution of an algebraic equation $P(\xi)\hat u(\xi)=\hat f(\xi)$. The difficulty is whether $\xi\mapsto\hat f(\xi)/P(\xi)$ is the Fourier transform of a distribution. Because $P$ may vanish, and $P^{-1}(0)$ can be quite singular, this is not a piece of cake. Malgrange had to prove his division theorem to solve it.
-
2
Denis, thanks a lot for baiting me. I appreciate very much the linguistic lessons I've learned. Of course, there are people to whom "the existence of non-trivial solutions to linear PDEs for constant coefficients operators" is obvious. Enjoy. – Wadim Zudilin Jan 9 2011 at 14:07
1
I believe "H. Levy" should by "H. Lewy", c.f. en.wikipedia.org/wiki/Hans_Lewy. – Pete L. Clark Jan 9 2011 at 17:38
1
@Dick: Yes, it basically does. But the problem is then to solve this, and it is not obvious that there is an inverse of this polynomial function in the space of (tempered) distributions. To prove this you need some tricks (a priori estimates to use Hahn-Banach / Bernstein-Sato polynomials / an explicit formula for the inverse / ...) Also the idea to use the Fourier transformation is not obvious for every mathematician. – Johannes Hahn Jan 10 2011 at 0:15
I agree this question is interesting, but only in a psychological rather than mathematical sense, i.e. the only reason the Jordan curve theorem seems obvious is that we do not appreciate the generality of the definition of "continuous", rather taking our simplest intuitive examples as typical. Indeed the proof for smooth functions is pretty easy (cf. Guillemin and Pollack), and how many of us distinguish intuitively between (piecewise) smooth and continuous functions? For instance young students assume the intermediate value theorem is obvious because they do not appreciate the local nature of the definition of continuity, i.e. they intuitively assume that the intermediate value theorem is the definition of continuity, as indeed it was in a less rigorous time. Of course the proof of the IVT is a justification of the reasonableness of the definition of continuity. As Moishezon remarked to us as students: "Even if it is obvious, you still have to prove it". Or as Tate said after giving an irresistible pictorial argument in first year honors calc for the continuity of a composition of continuous functions: "Of course this is NOT a proof! I have merely rendered it intuitively plausible!" (a statement I did not believe at the time).
Problems in freshman calculus: 1) Give a characterization of a function g such that g is a primitive of a given Riemann integrable function f. Is it enough to assume that g is continuous and differentiable wherever f is continuous, and that g has derivative equal to f at such points? E.g. is a continuous function which is differentiable with derivative zero a.e. a constant function? If not, what assumptions do you have to add?
2) Give an intrinsic characterization of a function g that is a primitive of some unknown Riemann integrable function on [a,b]. Is it enough to assume that g is Lipschitz continuous?
I guess i would give more credence to this if it concerned say theorems that have physically compelling arguments that are hard to make mathematically rigorous, such as Riemann's arguments for the existence of meromorphic functions of second kind with arbitrary poles.
When someone says it is "obvious" that Euclidean space R^n has dimension n, they are really saying that any definition for which this is false is a bad definition, not that it is easy to give an appropriate definition, nor that it is easy to prove the theorem even for a good definition. So this is just an imprecise use of language.
Let me pose a little fun question: Since everyone knows that if n < m, there can be a continuous surjection, but no homeomorphism from R^n to R^m, what about a continuous injection from R^m to R^n? What is the obvious answer? Is it also the correct answer? How much does your response draw on some non obvious mathematical reasoning?
My best idea in the direction of the original question is: "why is a straight line the shortest smooth curve joining two points?"
-
That every continuous vector field on ${\bf S}^2$ has a zero is pretty "obvious" (when you think about the image of trying to comb the hair on a billiard ball) yet takes considerable machinery to prove.
-
1
But see Milnor's proof of the Hairy Ball Theorem in the American Mathematical Monthly, July 1978, pp. 521-524. (I wrote about this here: topologicalmusings.wordpress.com/2008/07/22/… .) – Todd Trimble Jan 9 2011 at 15:36
1
I misstated Lefschetz' easy argument. If the vector at p is non zero, then on a small disc around p all vectors are roughly parallel to that one. It is possible then to see that on the disc which is the complement of that one, the degree of the map defined by the vectors on the boundary is ±2. (Collapsing the external disc onto the inner disc, by deflating the balloon, reflects the vectors on the boundary circle in the tangent lines to that circle. This fixes the vectors at top and bottom, rotating the rest 180 deg every quarter circle. thus there is a zero outside the original disc.) – roy smith Jan 12 2011 at 18:21
That the surface of a sphere is not homeomorphic to the real plane.
This may be unfair, in that it requires a good understanding of continuous functions. But it is intuitively obvious at a significantly lower level of mathematical sophistication than is required for the proof.
Then again, it is almost equally "obvious" at the same level of sophistication that you can't turn a sphere inside out.
So the notion of "obvious" in this sense is too crude to distinguish between true statements and false ones, and the question hides a bias. It would be more balanced also to ask whether there are "obvious" statements which it is hard to prove false.
-
2
compactness is a n important concept. – roy smith Jan 10 2011 at 3:58
The notorious Dehn's Lemma, and its generalizations, the Loop Theorem and the Sphere Theorem. It was a bone in the throat of 3-manifold topology for almost half a century, despite being "obvious", until proven by Papakyriakopoulos in 1957.
A comment on why it is obvious: the only singularities one can possibly imagine a disc having are things like "stretch a feeler out and around and around through the disc"- and it's obvious how to re-embed to get rid of those. Dehn's Lemma is a statement in the PL category, not in the topological category, so nothing pathological can occur. There's nothing which could possibly go wrong- no way you could possibly create a singularity in a DISC which you can't kill by re-embedding. But... prove it!
-
7
-1: All kinds of stuff could go wrong! – Richard Kent Jan 9 2011 at 16:23
2
I don't know much about 3-manifolds, so perhaps my intuition is undeveloped--that said, when I found out about the sphere theorem, I did a double-take. Were someone to have asked me if I thought this was true before I ran across it, I would have guessed it was wildly optimistic. – Daniel Litt Jan 10 2011 at 18:39
2
@Richard: Almost all undergraduates I know would answer neither "yes" nor "no" but would stare blankly, not understanding the question. – Timothy Chow Aug 26 2011 at 18:26
The Kneser-Poulsen conjecture says that if a finite set of (labeled) unit balls in $\mathbb{R}^n$ is rearranged so that in the new configuration, no pairwise distance is increased, then the volume of the union of the balls does not increase. This was finally proved by Bezdek and Connelly in dimension 2 but remains open in higher dimensions.
There are several other notorious elementary problems in geometry that might qualify, e.g., the equichordal point problem, though this one is not quite as "obvious" as the Kneser-Poulsen conjecture.
-
2
It seems to me far from obvious that that result should be true in all dimensions, especially after the dramatic disproof of the Borsuk conjecture due to Kahn and Kalai or the disproof of the Busemann-Petty conjecture. – gowers Jan 13 2011 at 16:23
"Global regularity of the Navier-Stokes equation" is not yet in this category, but once a proof is found, I am sure it will be.
More generally, there are many PDE which are "obviously" solvable for physical reasons, but for which actually proving existence (particularly in "global", "non-perturbative" situations, and requiring strong (regular) solutions rather than weak ones), is extremely difficult. A typical example is the Boltzmann equation, for which good global regularity results have only become available recently, with the work of Villani and others.
EDIT: Admittedly, many of the global regularity problems become a lot easier if one applies a physically reasonable truncation. For instance, global regularity for Boltzmann is much easier if one can somehow restrict the particle velocities to never exceed some upper bound $c$. But then the non-obvious fact moves elsewhere; rather than global regularity, the issue is whether one has sufficiently quantitative bounds that these thresholds rarely get triggered. Physically, it is intuitively obvious that a Boltzmann gas is not routinely churning out particles travelling at close to the speed of light; but it is remarkably difficult to quantify and then establish this rigorously.
-
That the identity map of the circle is not nullhomotopic. [When one thinks about it, it is pretty much equivalent to the Brouwer fixed point theorem, which is not as obvious.]
-
2
This was my introduction to many sophisticated mathematical ideas. When teaching several variable calculus I asked my self how I could convince students that Stokes theorem was useful, and found that it could be used to prove this fact, hence also the fundamental theorem of algebra, Brouwer fix point theorem, etc... Equivalently, why is the one form "dtheta" locally exact but not exact? – roy smith Jan 20 2011 at 6:04
The Carpenter's rule: a planar linkage can be straightened without the links running into each other. Although the statement had initially seemed obvious, its truth or falsity was a matter of debate among the experts for several years until Bob Connelly, Eric Demaine, and Günter Rote finally proved it. (The analogous statement in 3 dimensions is actually false.)
-
3
And the analogous statement in dimensions greater than 3 is true: they can always be straightened! $\mathbb{R}^3$ is the exception. – Joseph O'Rourke Aug 26 2011 at 23:51
I would mention the triangulation and smooth stratification theorems for algebraic varieties and variants thereof (analytic, real analytic etc.) These results are quite useful and I would say they seem obvious, at least from my experience. However, it is tricky to find complete proofs in the literature (especially in the real analytic case, which implies all the rest). I would say this is not because the proof is that difficult; it is not, but it's a bit tedious to spell out all the details.
While I'm at it, let me also mention that proofs of these theorems can be found in the references given in the answers to this MO question: http://mathoverflow.net/questions/36050/embeddings-and-triangulations-of-real-analytic-varieties (many thanks again to Mohan and Benoit).
-
The independence of the Parallel Postulate (especially since the proof, which consists of demonstrating that elliptic geometry satisfies the other axioms, is not hard, yet took 2500 years to find).
-
8
How is this obvious? – Andres Caicedo Jan 9 2011 at 21:37
1
Do you mean hyperbolic geometry? or is that the same as elliptic geometry? Spherical geometry certainly does not satisfy all the other axioms. – Sean Tilson Jan 9 2011 at 23:02
2
obvious after the fact :) – Igor Rivin Jan 10 2011 at 1:26
6
If this was obviuos, there wouldn't have been a whole bunch of people trying to prove the dependence in history. In old times it was more counterintuitive than obvious, it seems. – Lennart Meier Jan 10 2011 at 18:49
Inspired by the "trefoil knot is knotted" answer, how about the fact that Reidemeister moves generate isotopy of PL knots? This is pretty obvious but a full proof requires a lot of machinery. Indeed, the proof was not known to Reidemeister, who took the fact that his eponymous moves generate isotopy as an unproven axiom. (See Daniel Moskovich's comment.)
-
1
Are you sure? I think Reidemeister did prove it in Knottentheorie (1932), entirely combinatorially. There is a "general position" issue, which is an issue for all of PL topology- but it's no harder for the Reidemeister Theorem than anywhere else. See mathoverflow.net/questions/15217/… – Daniel Moskovich Jan 10 2011 at 0:52
1
I stand corrected. It didn't make it into the original edition of Knotentheorie, but was proved 6 years earlier by Reidemeister (1926), and independently by Alexander and Briggs (1927), as I found by nosing around in the math library this morning. All details are there, including general position arguments. Wikipedia, to my surprise, is entirely accurate: en.wikipedia.org/wiki/Reidemeister_move – Daniel Moskovich Jan 10 2011 at 13:40
Inspired by the trefoil knot example: "If two knots are smoothly* isotopic, then their complements are homeomorphic." I'm not sure exactly how hard the proof is, but it certainly seems obvious, and I don't think there is a one line proof.
*Thanks to Richard Kent for pointing out that I need this adverb.
-
7
You need to put "smoothly" in front of "isotopic," otherwise the statement is false: all knots are isotopic (by shrinking all the "knottedness" to a point, a.k.a. bachelor's unknotting). Usually, one fixes this by defining knots to be equivalent if they are ambient isotopic, in which case you get the homeomorphism for free. (In the smooth case, the proof amounts to proving that the isotopy extends to the 3-sphere.) – Richard Kent Jan 10 2011 at 0:36
1
That was a misleading passage in Wikipedia. I tried to clarify it. – Douglas Zare Jan 11 2011 at 1:30
1
Glad to help, David. I agree with the assessment that the extension is obvious but not easy. – Richard Kent Jan 11 2011 at 4:22
4
@Daniel I disagree about which is the natural formalism. My physical intuition for why you can't untie a trefoil is that it would have to pass through itself, which I capture with the injectivity hypothesis. The fact that I would have to move space out of the way is completely irrelevant. After all, if I put a knotted rope in a vacuum, it is still knotted! – David Speyer Jan 11 2011 at 14:50
The first of the Tait Conjectures seems intuitively obvious:
Any reduced diagram of an alternating link has the fewest possible crossings.
This 19th century conjecture is difficult to prove, with the proof coming only in 1987 by Kauffman, Murasugi, and Thistlethwaite, using the Jones Polynomial. The discovery of this proof was a huge coup for quantum topology; a quantum invariant was used to prove a difficult classical open problem.
While this is certainly hard to prove, I don't think it's unexpectedly hard to prove. Knot diagrams modulo Reidemeister moves form a rather complicated algebraic structure; and there's no reason to expect that any statement about knot diagrams should be easy to prove.
-
In the same vein, chatting: My advisor used to say "An interesting theorem is a true theorem which looks false". Well, tastes and colors... ;-)
-
http://mathhelpforum.com/trigonometry/196412-isosceles-triangle-problem-using-right-triangle-trig.html
1. ## Isosceles triangle problem Using right triangle trig
I am not understanding how to solve this one, so any help would be greatly appreciated!
Problem: An isosceles triangle has equal sides of length 10 cm and a base of 8 cm. Find the angles of the triangle. Hint: draw a perpendicular bisector at the base.
Here's where I started: I have two right triangles from the bisection, so working with one triangle I have a hypotenuse of 10 cm and a base of 4 cm. I know I need to use cos with 4/10, but I was not sure if it needed to be an inverse cos to get the angle. I got an angle of 66.4 degrees, but I am not sure if I solved that right. (I am not in trig; our teacher just did a short intro to right triangle trig and now I'm stuck.)
2. ## Re: Isosceles triangle problem Using right triangle trig
It's not cos(4/10), it's $\displaystyle \begin{align*} \cos{\theta} = \frac{4}{10} \end{align*}$. Yes, you need to use the inverse cosine to evaluate the angle. Make sure your calculator is in degree mode.
3. ## Re: Isosceles triangle problem Using right triangle trig
Since $\cos \theta = \frac{4}{10}$, $\theta = \cos^{-1} \frac{4}{10} \approx 66.4$ degrees as you've said. To finish, note that you'll have the same angle on the other side of the base of your triangle and that the three angles should sum to 180 degrees.
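As a quick numerical check of the computation above (a short Python sketch, not part of the original thread):

```python
import math

# Bisecting the isosceles triangle gives a right triangle with
# hypotenuse 10 and adjacent side 4, so cos(theta) = 4/10.
base_angle = math.degrees(math.acos(4 / 10))   # each base angle
apex_angle = 180 - 2 * base_angle              # remaining angle at the apex

print(round(base_angle, 1))  # 66.4
print(round(apex_angle, 1))  # 47.2
```

The two base angles and the apex angle sum to 180 degrees, as they must.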
4. ## Re: Isosceles triangle problem Using right triangle trig
cshanholtzer- thank you. I wanted to verify I was getting this right. I am taking a trig class next semester because I need to build on my math skills. I appreciate your help.
http://mathoverflow.net/questions/19802?sort=newest
## How much complex geometry does the zeta-function of a variety know?
From the Weil conjectures we know the relation between the zeta-function and the cohomology of the variety; however, there is certainly more information contained in the zeta-function, and the question remains whether it can be used to compute further geometric invariants of the variety, such as the Chern classes. For instance, can one spot a Calabi-Yau manifold just by looking at the zeta-function? Is the zeta-function a birational invariant, or something stronger?
And consider the roots and poles of the zeta-function. Their absolute values are determined by the Riemann Hypothesis; nevertheless the "phases" still appear to be very mysterious. Are there any good explanations for them, e.g. in the case of elliptic curves?
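To make the "phases" concrete in the elliptic curve case: for $E/\mathbb{F}_p$ the numerator of the zeta-function is $1 - a_p t + p t^2$ with $a_p = p + 1 - \#E(\mathbb{F}_p)$, and the Riemann Hypothesis here (Hasse's bound) says $|a_p| \le 2\sqrt{p}$, so one can write $a_p = 2\sqrt{p}\cos\theta_p$; the angle $\theta_p$ is the phase in question. A naive Python sketch (an editorial illustration, with an arbitrarily chosen curve $y^2 = x^3 + x + 1$):

```python
import math

def count_points(a, b, p):
    """Count points on y^2 = x^3 + a*x + b over F_p (including infinity)."""
    squares = {(y * y) % p for y in range(p)}
    affine = 0
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            affine += 1            # single point with y = 0
        elif rhs in squares:
            affine += 2            # two square roots of rhs
    return affine + 1              # plus the point at infinity

p = 5
N = count_points(1, 1, p)          # curve y^2 = x^3 + x + 1 over F_5
a_p = p + 1 - N                    # trace of Frobenius
theta = math.acos(a_p / (2 * math.sqrt(p)))   # the Frobenius "phase"

print(N, a_p)                      # 9 -3
assert abs(a_p) <= 2 * math.sqrt(p)  # Hasse bound = RH for curves
```

The distribution of these angles as $p$ varies is the subject of the Sato-Tate conjecture.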
-
I am bewildered by this question. The title is "how much complex geometry does..." but then you talk about the Weil conjectures, which are about algebraic varieties over finite fields. Your question is about varieties---but over which base field? – Kevin Buzzard Mar 30 2010 at 9:36
Hi Kevin, what I mean is whether it is possible to spot the complex geometry of the variety over C from the zeta-function, which is about the variety over F_{p^n}. And if the local zeta-function is not enough, will the global L-function be so? – Bo Peng Mar 30 2010 at 9:45
1
I think the question is something like this: if we know the zeta function over F_p, then we know the Betti numbers over C. But the zeta function has more information than the Betti numbers: it also has Frobenius eigenvalues, and maybe they give us additional information about the complex variety. – Qiaochu Yuan Mar 30 2010 at 11:50
Doesn't the action of Aut(C) preserve the zeta-function but generally change the complex geometry? If I'm right, such examples would put limits on what the zeta function can know. – David Feldman Dec 20 2010 at 17:21
## 3 Answers
Although the question is phrased a bit sloppily, there is a standard interpretation: Given a smooth complex proper variety $X$, choose a smooth proper model over a finitely generated ring $R$. Then one can reduce modulo maximal ideals of $R$ to get a variety $X_m$ over a finite field, and ask what information about $X$ it retains. As has been remarked, the zeta function of $X_m$ gives back the Betti numbers of $X$.
I believe Batyrev shows in this paper that the zeta function of a Calabi-Yau is a birational invariant and deduces from this the birational invariance of Betti numbers for Calabi-Yau's. And then, Tetsushi Ito showed here that knowledge of the zeta function at all but finitely many primes contains info about the Hodge numbers. (He did this for smooth proper varieties over a number field, but a formulation in the 'general' situation should be possible.)
For an algebraic surface, once you have the Hodge numbers, you can get the Chern numbers back by combining the fact that
$c_2=\chi_{top},$
the topological Euler characteristic, and Noether's formula:
$\chi(O_X)=(c_1^2+c_2)/12.$
I guess this formula also shows that if you know a priori that $m$ is a maximal ideal of ordinary reduction for both $H^1$ and $H^2$ of the surface, then you can recover the Chern numbers from the zeta functions, since $H^1(O_X)$ and $H^2(O_X)$ can then be read off from the number of Frobenius eigenvalues of slope 0 and of weights one and two.
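As a sanity check on Noether's formula, one can plug in the classical Chern numbers of a few standard rational surfaces (a small editorial Python sketch; the Chern numbers below are the well-known values, all with $\chi(O_X)=1$):

```python
# Sanity check of Noether's formula chi(O_X) = (c1^2 + c2)/12
# for three standard rational surfaces; c2 equals the topological
# Euler characteristic.
surfaces = {
    "P^2":                (9, 3),   # (c1^2, c2)
    "P^1 x P^1":          (8, 4),
    "P^2 blown up at pt": (8, 4),
}

for name, (c1_sq, c2) in surfaces.items():
    chi = (c1_sq + c2) / 12
    print(name, chi)   # each rational surface has chi(O_X) = 1
```

Note that $P^1 \times P^1$ and $P^2$ blown up at a point share the same pair $(c_1^2, c_2)$, consistent with the example below of two surfaces with equal zeta functions but different cup products.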
You might be amused to know that the homeomorphism class of a simply-connected smooth projective surface can be recovered from the isomorphism class of $X_m$. (One needs to formulate this statement also a bit more carefully, but in an obvious way.) However, not from the zeta function. If you compare $P^1\times P^1$ and $P^2$ blown up at one point, the zeta functions are the same but even the rational homotopy types are different, as can be seen from the cup product in rational cohomology. See this paper.
Added: Although people can see from the paper, I should have mentioned that Ito even deduces the birational invariance of the Hodge (and hence Betti) numbers for smooth minimal projective varieties, that is, varieties whose canonical classes are nef. Regarding the last example, I might also point out that this is a situation where the real homotopy types are the same.
Added again: I'm sorry to return repeatedly to this question, but someone reminded me that Ito in fact does not need the zeta function at 'all but finitely many primes.' He only needs, in fact, the number of points in the residue field itself, not in any extension.
-
+1: a very useful answer. To go along with my previous comment, let me note that Batyrev construes Calabi-Yau in the weakest possible way (and thus has the strongest possible result): he requires only that the canonical bundle be trivial. So in particular this includes abelian varieties. – Pete L. Clark Mar 30 2010 at 23:06
Very nice answer! But either you have a minor error at the end or I misunderstand you. $P^1 \times P^1$ is not homotopic to $P^2$ blown up at a point. In $H^2(P^1 \times P^1, \mathbb{Z})$, every class has even self intersection; this is not true in $P^2$ blown up at a point. – David Speyer Mar 31 2010 at 12:16
Hmm. Perhaps I'm mistaken, but the real homotopy type is determined by the real cohomology ring, according to the theorem of Deligne-Griffiths-Morgan-Sullivan. This is not saying the spaces are homotopic. As mentioned, they aren't even rationally homotopic, exactly because of the intersection form you allude to. – Minhyong Kim Mar 31 2010 at 12:21
I see, I did not realize that "real homotopy" was a technical term. Thanks! – David Speyer Mar 31 2010 at 13:50
This is essentially a string of comments that would not fit into the box, with one answer in the middle.
@Kevin: for instance, we could take $X$ to be a variety over a $p$-adic field $K$ with good reduction. I agree that the question should begin by making this explicit. The Betti numbers have an interpretation in terms of $\ell$-adic cohomology, so are independent of the choice of embedding $K \hookrightarrow \mathbb{C}$.
@OP: Note that the zeta function cannot be a birational invariant because the Betti numbers are not birational invariants: if you blow up e.g. a surface at a point, $B_2$ increases by $1$. The fact that, as Felipe says, the zeta function is a rational isogeny invariant of abelian varieties -- along with the implication that, for algebraic curves, the zeta function depends only on the isogeny class of the Jacobian -- is the strongest invariance statement I know of along these lines.
The question about Calabi-Yau's seems interesting, although one should say exactly what one means by a Calabi-Yau variety; there is more than one definition.
The question about the Frobenius eigenvalues seems prohibitively vague to me: I do not know what it means to "explain" them.
-
For an elliptic curve the global zeta function should determine it up to isogeny. You cannot expect anything more. I am not much of an analytic number theorist but I believe you might be able to extract the number field if you are given an analytic function which is the L-function of some elliptic curve over some number field. Once this is done, you can read off the traces of Frobenius, hence the $\ell$-adic rep, hence the curve up to isogeny.
In general, I think the question is too ill-posed to get meaningful answers.
-
http://quant.stackexchange.com/questions/43/is-there-a-standard-model-for-market-impact/277
# Is there a standard model for market impact?
Is there a standard model for market impact? I am interested in the case of high-volume equities sold in the US, during market hours, but would be interested in any pointers.
-
## 2 Answers
There is a family of models that is so commonly used among practitioners that it can be almost regarded as standard. For a survey, check out Rob Almgren's entry in the Encyclopedia of Quantitative Finance. Check out also Barra, Axioma and Northfield's handbooks. In general, the impact term per unit traded currency is of the form
$$MI \propto \sigma_n \cdot \text{(participation rate)}^\beta$$
where the exponent is somewhere between 1/2 and 1, depending on the model being used, and the participation rate is the percentage of total volume traded during the trading interval itself. When including the total MI in optimization, the models commonly used are the "3/2" model and the "5/3" model, in which the costs are proportional to (dollar value being traded for asset $i$)$^{3/2}$ or $^{5/3}$ respectively. Since the term is not quadratic (and not solvable by a quadratic optimizer), some people approximate it by a linear term plus a quadratic one, or by a piecewise-linear convex function.
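To make the family concrete, here is a toy numerical sketch with $\beta = 1/2$; the coefficient and all inputs are made-up illustrative values, not a calibrated model:

```python
# Sketch of the impact family MI ∝ sigma * (participation rate)^beta.
# coeff = 1 and beta = 1/2 are illustrative choices, not calibrated values.

def market_impact(sigma_daily, trade_shares, interval_volume, beta=0.5, coeff=1.0):
    """Impact per unit traded currency, as a fraction of price."""
    participation = trade_shares / interval_volume
    return coeff * sigma_daily * participation ** beta

# Example: 2% daily vol, trading 50k shares against 1M shares of interval volume.
mi = market_impact(0.02, 50_000, 1_000_000)
print(f"{mi:.4%}")  # prints 0.4472%
```

With $\beta = 1$ the same function gives the linear-impact variant, which is what a quadratic cost term in the optimizer corresponds to.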
-
I don't believe that there is a "standard" model (per se); in fact, there are many considerations around market impact models, so you would need to be more specific. At the most basic level, you might define market impact as $P_{\text{first fill}} - P_{\text{last fill}}$ once your order is actually in the order book (e.g. not including other costs like "opportunity cost"). This doesn't take into account any other trades that may be taking place at the same time or other events that might be impacting the price beyond your order. It also doesn't help you to forecast market impact on an impending order (which would require some knowledge of time of day, volume, volatility, etc.).
That being said, I would certainly recommend reading "Optimal Trading Strategies" (Kissell, Glantz 2003) which gives a good overview (in addition to covering other transaction cost subjects).
-
http://en.wikipedia.org/wiki/Appell%e2%80%93Humbert_theorem
# Appell–Humbert theorem
In mathematics, the Appell–Humbert theorem describes the line bundles on a complex torus or complex abelian variety. It was proved for 2-dimensional tori by Appell (1891) and Humbert (1893), and in general by Lefschetz (1921).
## Statement
Suppose that T is a complex torus given by V/U where U is a lattice in a complex vector space V. If H is a Hermitian form on V whose imaginary part is integral on U×U, and α is a map from U to the unit circle such that
$\alpha(u+v) = e^{i\pi E(u,v)}\alpha(u)\alpha(v)\$
then
$\alpha(u)e^{\pi H(z,u)+H(u,u)\pi/2}\$
is a 1-cocycle on U defining a line bundle on T.
The Appell–Humbert theorem (Mumford 2008) says that every line bundle on T can be constructed like this for a unique choice of H and α satisfying the conditions above.
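As a sanity check, the cocycle condition $a_{u+v}(z)=a_u(z+v)\,a_v(z)$ for the factor above can be verified numerically on an example. In the sketch below, the lattice $U=\mathbb{Z}+\mathbb{Z}i$, the form $H(z,w)=z\bar{w}$, and the semicharacter $\alpha(m+ni)=(-1)^{mn}$ are illustrative choices of mine satisfying the hypotheses (in particular $E=\operatorname{Im}H$ is integral on $U\times U$):

```python
import cmath
import itertools

# Check a_{u+v}(z) = a_u(z+v) * a_v(z) for the Appell-Humbert factor
# a_u(z) = alpha(u) exp(pi H(z,u) + pi H(u,u)/2) on U = Z + Z*i.

def H(z, w):
    return z * w.conjugate()        # Hermitian: H(w, z) = conj(H(z, w))

def alpha(m, n):
    return (-1) ** (m * n)          # satisfies alpha(u+v) = e^{i pi E(u,v)} alpha(u) alpha(v)

def a(m, n, z):
    u = complex(m, n)
    return alpha(m, n) * cmath.exp(cmath.pi * (H(z, u) + H(u, u) / 2))

z0 = 0.3 + 0.7j
box = list(itertools.product(range(-2, 3), repeat=2))
for (m1, n1), (m2, n2) in itertools.product(box, repeat=2):
    v = complex(m2, n2)
    lhs = a(m1 + m2, n1 + n2, z0)
    rhs = a(m1, n1, z0 + v) * a(m2, n2, z0)
    assert abs(lhs - rhs) <= 1e-8 * max(abs(lhs), 1.0), ((m1, n1), (m2, n2))
print("cocycle condition holds on the box [-2, 2]^2")
```

The check works because the ratio of the two sides is $e^{2\pi i E(u,v)}$, which equals 1 exactly when $E$ takes integer values on the lattice.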
## Ample line bundles
Lefschetz proved that the line bundle L associated to the Hermitian form H is ample if and only if H is positive definite, and in this case L^3 is very ample. A consequence is that the complex torus is algebraic if and only if there is a positive definite Hermitian form whose imaginary part is integral on U×U.
## References
• Appell, P. (1891), "Sur les fonctions périodiques de deux variables", J. de math Sér IV 7: 157–219
• Humbert, G. (1893), "Théorie générale des surfaces hyperelliptiques", J. de math Sér IV 9: 29–170, 361–475
• Lefschetz, Solomon (1921), "On Certain Numerical Invariants of Algebraic Varieties with Application to Abelian Varieties", (Providence, R.I.: American Mathematical Society) 22 (3): 327–406, ISSN 0002-9947, JSTOR 1988897
• Lefschetz, Solomon (1921), "On Certain Numerical Invariants of Algebraic Varieties with Application to Abelian Varieties", (Providence, R.I.: American Mathematical Society) 22 (4): 407–482, ISSN 0002-9947, JSTOR 1988964
• Mumford, David (2008) [1970], Abelian varieties, Tata Institute of Fundamental Research Studies in Mathematics 5, Providence, R.I.: American Mathematical Society, ISBN 978-81-85931-86-9, MR 0282985, OCLC 138290
http://math.stackexchange.com/questions/204529/prove-if-lim-n-to-inftya-n-l-and-a-n-a-for-all-n-then-l-geq-a
# Prove: If $\lim_{n\to\infty}a_n = L$ and $a_n > a$ for all $n$ then $L \geq a$
Prove: If $\lim_{n\to\infty}a_n = L$ and $a_n > a$ for all $n$ then $L \geq a$
Proof: We know from the definition of the limit that $\forall \epsilon > 0\ \exists N$ s.t. $\forall n>N$, $|a_n - L| < \epsilon$. Now since $a_n > a$ for all $n$...
I am not really sure where to go from here. Is it the case that all sequences defined by this statement are monotone non-increasing? Then intuitively we could say $a_n = L$ for sufficiently large $n$. Thus, by transitivity $L > a$
-
## 3 Answers
Suppose that $L<a$. We put $\varepsilon=a-L>0$. Since $\displaystyle\lim_{n\rightarrow\infty}a_n=L$, there exists $N_0\in \mathbb{N}$ such that $$|a_n-L|<\varepsilon \quad\forall n\geq N_0.$$ Then $a_n-L<a-L$ for all $n\geq N_0$, that is, $a_n<a$ for all $n\geq N_0$. This contradicts the assumption $a_n>a$ for all $n$. Hence $L\geq a$. In the above argument, we have seen that we only need $a_n>a$ for sufficiently large $n$.
Moreover, in the general case we do not have $L>a$. Indeed, although $\displaystyle a_n=\frac{1}{n}>0$ for all $n$, we have $\displaystyle L=\lim_{n\rightarrow\infty}a_n=\lim_{n\rightarrow\infty}\frac{1}{n}=0$.
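Numerically, the closing example looks like this (an illustrative sketch):

```python
# a_n = 1/n is strictly positive for every n, yet its limit is 0:
# strict inequalities need not survive passage to the limit.

terms = [1 / n for n in range(1, 10_001)]
assert all(t > 0 for t in terms)   # a_n > 0 for every n shown
assert terms[-1] < 1e-3            # but the terms approach L = 0
print(min(terms), "-> limit 0, so only L >= 0 is guaranteed")
```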
-
@CodeKingPlusPlus: Please see the solution and extended disscusion on your question. – blindman Oct 5 '12 at 4:56
Where and what is the extended discussion? – CodeKingPlusPlus Oct 5 '12 at 14:10
Hint: To show that $L\geqslant a$, it is enough to show that $a\leqslant L+\varepsilon$, for every $\varepsilon\gt0$. Now, $a_n\to L$ hence, for every $\varepsilon\gt0$, there exists $n_\varepsilon$ such that for every $n\geqslant n_\varepsilon$, $a_n$ is such that...
-
Why is it enough to show that $a \leq L +\epsilon,$ for every $\epsilon > 0$? If it is, that is what I have. – CodeKingPlusPlus Sep 29 '12 at 20:05
Suppose by contradiction that $L<a\leq L+\epsilon \ \forall \epsilon$... – Marco Sep 29 '12 at 20:30
@PlusPlus If $a\leqslant L+\varepsilon$ for every $\varepsilon\gt0$, then $a\leqslant\inf\{L+\varepsilon\mid\varepsilon\gt0\}=\ldots$ – Did Sep 29 '12 at 20:54
Does that mean we take $\epsilon$ to be practically zero? Where did you get that implication $a \leq inf\{L+\epsilon|\epsilon >0\}$ – CodeKingPlusPlus Oct 1 '12 at 0:03
For any set $S$, [$a\leqslant s$ for every $s$ in $S$] is logically equivalent to [$a\leqslant\inf S$]. Is this your question? – Did Oct 1 '12 at 5:32
The sequence need not be monotone. But, here's a hint for one approach towards a proof:
Try arguing by contradiction: Assume $L<a$. Now choose an $\epsilon>0$ so that $L+\epsilon<a$. What can you say about the terms $a_n$ of the sequence for large $n$?
-
my intuition tells me that for large values of $n$ that I can say: $L = a$ and then $a + \epsilon$ is not less than $a$ – CodeKingPlusPlus Sep 29 '12 at 22:19
@CodeKingPlusPlus For large $n$, you would know $|a_n-L|<\epsilon$; and this would imply $a_n<L+\epsilon$. But $L+\epsilon<a$, so... – David Mitra Sep 29 '12 at 23:20
http://math.stackexchange.com/questions/93452/a-question-on-taylor-series-and-polynomial/206673
# A question on Taylor Series and polynomial
Suppose $f(x)$ that is infinitely differentiable in $[a,b]$.
For every $c\in[a,b]$ the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$ is a polynomial.
Is true that $f(x)$ is a polynomial?
I can show it is true if for every $c\in [a,b]$, there exists a neighborhood $U_c$ of $c$, such that
$$f(x)=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n\quad\text{for every }x\in U_c,$$ but, this equality is not always true.
What can I do when $f(x)\not=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$?
-
– t.b. Dec 22 '11 at 13:39
Put $F_n:=\bigcap_{k\geq n}\{x\in [a,b], f^{(k)}(x)=0\}$ and apply Baire's category theorem. – Davide Giraudo Dec 22 '11 at 13:39
I'm left wondering if the stronger assumptions here permit some more elementary proof. – leonbloy Dec 22 '11 at 14:44
If the expression is a polynomial then there should be an $n$ such that for every $k \geq n$ you must have $f^{(k)}(c)=0$. If the function is not a polynomial and the series terminates for some $k$, the function is not analytic at the given point, such as $e^{-x^{-2}}$ at $x=0$. – Peter Tamaroff Feb 19 '12 at 5:19
@t.b. Would you (or @Davide) mind typing up a correct answer (possible just taken from MO), perhaps as community wiki? (Or, I can do it if no one else wants to). There are currently 10 incorrect answers (some deleted), and no correct answers. – Jason DeVito Oct 22 '12 at 19:44
## 6 Answers
1. All polynomials are power series but not all power series are polynomials. For a certain power series $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k = a_0 + a_1 (x-c)^1 + a_2 (x-c)^2 + a_3 (x-c)^3 + \cdots$ to be a polynomial of degree $n$, we need $a_k = 0$ for all $k>n$.
2. If $f(x)$ is infinitely differentiable in the interval $[a,b]$, then for every $k \in \mathbb{N}$, $f^{(k)}(x) \in \mathbb{R}$ i.e. exists as a finite number. The Taylor Series of $f(x)$ in the neighbourhood of $c$ is $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$ and
3. If the remainder, $R_N(x) = f(x) - \sum\limits_{k=0}^N \cfrac{f^{(k)}(c)}{k!}(x-c)^k$ for a certain $N \in \mathbb N$, converges to $0$ then $f(x) = \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$
4. Taylor's inequality: If $|f^{(N+1)}(x)|≤ B$ for all $x$ in the interval $[a, b]$, then the remainder $R_N(x)$ (for the Taylor polynomial to $f(x)$ at $x = c$) satisfies the inequality $$|R_N(x)|≤ \cfrac {B}{(N+ 1)!}|x − c|^{N+1}$$ for all $x$ in $[c − d, c + d]$ and if the right hand side of this inequality converges to $0$ then $R_N(x)$ also converges to $0$.
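As an illustration of point 4, here is a numerical check (a sketch of mine, not part of the argument) for $f(x)=\sin x$ at $c=0$, where $B=1$ bounds every derivative:

```python
import math

# Taylor polynomial of sin at c = 0 of degree N, and the bound
# |R_N(x)| <= B |x - c|^(N+1) / (N+1)!  with B = 1.

def taylor_sin(x, N):
    # sin's nonzero Taylor coefficients: (-1)^k x^(2k+1)/(2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(N // 2 + 1))

N = 7
for x in [0.1 * i for i in range(-20, 21)]:
    remainder = abs(math.sin(x) - taylor_sin(x, N))
    bound = abs(x) ** (N + 1) / math.factorial(N + 1)
    assert remainder <= bound + 1e-15, (x, remainder, bound)
print("Taylor's inequality holds for sin on [-2, 2] with N =", N)
```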
According to your question, supposing that $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$, $\forall c \in [a,b]$ is a polynomial which translates to $$\text{given } c\in[a,b],\ \ \exists n_c\in \mathbb N \ (\text{ $n_c$ depends on c}) \quad|\quad\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k=P_{n_c}(x)$$ $$\quad \quad \quad\quad \quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad \text { and} \ \forall k>n_c, \ k\in \mathbb N, \ {f^{(k)}(c)}=0$$
This is true because if one looks at the finite sum $N\ge n_c$, $$\displaystyle\sum^N_{k=0} a_k(x-c)^k=\sum^N_{k=0}\sum^k_{i=0}a_k\binom ki(-1)^{k-i} c^{k-i}x^{i}=\sum^N_{i=0}x^{i}\sum^N_{k=i}a_k\binom ki(-1)^{k-i} c^{k-i}$$ if this is a polynomial $P_{n_c}(x)$ of degree $n_c$, then $$\forall i>n_c,\ \ \displaystyle \sum^N_{k=i}a_k\binom ki(-1)^{k-i} c^{k-i}=0$$ Solving this system of equations gives that $\forall n_c<k\le N, \ \ a_k=0$ and
$$a_k=\cfrac{f^{(k)}(c)}{k!}=0\implies f^{(k)}(c)=0, \ \ \forall k>n_c$$ This holds when $N\rightarrow \infty$
Since $n_c$ depends on each $c\in[a,b]$, it is sufficient to take $\displaystyle n=\max_{c\in[a,b]} (n_c)$ such that for any $c\in [a,b]$ and for any $k>n,\ \ k\in \mathbb N$, we have $f^{(k)}(c)=0$.
Thus, the Taylor series is of $f$ is a polynomial of degree $\displaystyle n=\max_{c\in[a,b]} (n_c)$ because $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k=P_n(x)$.
At this point it is sufficient to prove that $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k=P_n(x)$ using the Taylor Remainder Theorem (#4).
We've already found out that $f^{(k)}(c) = 0,\space \forall k>n$, thus $f^{(n+1)}(x) = 0$ or simply $f^{(n+1)}(x) \le 0$ (to work with inequalities) which implies that $B = 0$. At this point it is clear that $|R_N(x)|≤ \cfrac {B}{(N+ 1)!}|x − c|^{N+1} = 0$ and we can conclude that $R_N(x)$ converges to $0$ and that $f(x) = \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k = P_n(x)$.
$f$ is a polynomial.
-
The point is that $k$ is allowed to depend on $c$, whereas our $k$ is independent of $c$. – Jason DeVito Oct 22 '12 at 19:42
why the down-vote? – F'Ola Yinka Nov 11 '12 at 16:33
I didn't downvote, but as I said in my comment, this doesn't answer the OPs question. The OPs question is this: "We know that for each point $c$, the Taylor series at $c$ is a polynomial. Why is the original function a polynomial?" We do not know that at each point $c$, the taylor series is a polynomial of degree $n$ for some $n$, because $n$ can vary as $c$ varies. In particular, your use of point $4$ is invalid because it's possible that at some point $c$, there is no universal $n$ that works any interval containing $c$. – Jason DeVito Nov 11 '12 at 18:21
@JasonDeVito I didn't get your point the first time. I'll fix the answer after working on it. Thanks. – F'Ola Yinka Nov 11 '12 at 18:32
Well, I didn't make my point very well the first time - sorry about that! – Jason DeVito Nov 11 '12 at 19:01
As I confirmed here, if for every $c\in[a,b]$, the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$ is a polynomial, then for every $c\in[a,b]$ there exists a $k_c$ such that $f^{(n)}(c)=0$ for $n>k_c$.
If $\max(k_c)$ is finite, we're done: $f(x)$ is a polynomial of degree $\le\max(k_c)$.
If $\max(k_c)=\infty$ it means there is an infinite number of unbounded $k_c$'s, but $f$ is infinitely differentiable, so (hand waving) the $c$'s can't have a limit point, i.e. although $\max(k_c)=\infty$ it can't be $\lim_{c\to c_\infty}k_c=\infty$ for some $c_\infty\in[a,b]$ because that would mean $k_{c_\infty}=\infty$, i.e. not a polynomial.
So the infinite number of unbounded $k_c$'s need to be spread apart, e.g. like a Cantor set.
Does this suggest a counterexample or can a Cantor-like distribution of $k_c$'s never be infinitely differentiable?
-
To begin with, the Taylor series is an approximation of any polynomial at a value between a and b, given that the polynomial is differentiable on the closed interval between a and b.
If the equality is not always true, there exists some neighborhood of c inside a and b on which the polynomial is not differentiable.
In this sense, the assumption that the polynomial is differentiable on the closed interval between a and b fails, and hence the coefficients of the terms in the Taylor series cannot be found.
Simply not using this formula in that case is the solution.
-
Unless I'm missing something, why doesn't the following work?
Pick a $c\in[a,b]$. By assumption $g(x)=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!} (x-c)^n$ is a polynomial (I'm assuming this is supposed to mean that the series converges to a polynomial function on some nonzero size interval around $c$). This says that $g^{(k)}(c) = 0$ for $k>d$ where $d$ is the degree of that polynomial. The Taylor series $g(x)=\sum\limits_{n=0}^d \cfrac{g^{(n)}(c)}{n!} (x-c)^n$, which is just a polynomial, has the same value as $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!} (x-c)^n$ on some interval and therefore the coefficients are equal. We conclude that $f^{(k)}(c)=g^{(k)}(c) = 0$ for $k>d$.
To show that $f(x)$ agrees with its expansion around $c$ consider the lagrange form of the remainder. $$f(x) = \sum_{n=0}^k \frac{f^{(n)}(c)}{n!} (x-c)^n + f^{(k+1)}(h)\frac{(x-c)^{(k+1)}}{(k+1)!}$$ where $h$ lies between $c$ and $x$ and $x\in [a,b]$. That equality holds for $x\in[a,b]$ is guaranteed since $f$ is $k+1$ times differentiable on $[a,b]$.
We choose $k$ so that $k+1>d$, this guarantees $f^{(k+1)}=0$ and simplifies the above to $$f(x) = \sum_{n=0}^k \frac{f^{(n)}(c)}{n!} (x-c)^n$$ where $x\in [a,b]$. Or in other words f is a polynomial.
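The two facts this argument leans on — derivatives of order above the degree vanish everywhere, and the Taylor expansion of a polynomial about any $c$ reproduces it — can be checked mechanically. The sketch below uses an illustrative cubic and my own helper functions:

```python
# For f(x) = 5 - x + 2x^3 (degree d = 3), verify that all derivatives of
# order > d vanish identically and that the Taylor expansion about any c
# reproduces f. Polynomials are coefficient lists [a0, a1, ...].

from math import factorial

def deriv(coeffs):
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def evaluate(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

f = [5, -1, 0, 2]
d = len(f) - 1

# derivatives of order > d are identically zero
g = f
for _ in range(d + 1):
    g = deriv(g)
assert evaluate(g, 1.7) == 0

# sum_n f^(n)(c)/n! (x - c)^n agrees with f for several c and x
for c in [-2.0, 0.0, 1.5]:
    derivs, g = [], f
    for n in range(d + 1):
        derivs.append(evaluate(g, c))
        g = deriv(g)
    for x in [-1.0, 0.5, 3.0]:
        taylor = sum(derivs[n] / factorial(n) * (x - c) ** n for n in range(d + 1))
        assert abs(taylor - evaluate(f, x)) < 1e-9, (c, x)
print("Taylor series about every c is the polynomial itself")
```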
-
The problem is the assumption that $f^{(k+1)}(h)=0$, when you actually only know that $f^{(k+1)}(c)=0$. Note that $d$ depends on $c$. (Or rather, you have to prove that $d$ can be chosen independently of $c$.) (By the way: the series being a polynomial just means what you concluded, that for all $c$ there exists $d$ such that $f^{(k)}(c)=0$ for all $k>d$. Note that polynomials converge everywhere, so reference to "nonzero size interval" is unnecessary.) – Jonas Meyer Dec 24 '11 at 3:56
The Taylor theorem states that every function which is differentiable over an interval [a,b] can be rewritten as a polynomial. It doesn't mean the function was a polynomial in the first place. Consider sin(x), which is uniformly differentiable. The Taylor polynomial (for any interval (a,b)) is actually the function itself, but sin(x) is not a polynomial.
-
I think the OP means that the Taylor series is finite. – Javier Badia Aug 23 '12 at 14:33
You're using the word polynomial with two different meanings, saying that $\sin x$ can be written as a polynomial, but saying it is not a polynomial. If something can be written as a polynomial, then it is a polynomial. If you want to distinguish, you could say $\sin x$ can be written as an infinite polynomial, but not a finite polynomial. But, without the adjective in front, polynomial will almost always be taken to mean finite polynomial. So, I don't think your answer adds anything. – Graphth Sep 28 '12 at 20:20
The situation is not as complex as it seems.
Let $n$ be a nonnegative integer.
Let Coo denote infinitely differentiable.
For starters if $f(x)$ is a polynomial of degree $n$ then the expansion around every $c$ (or $x$) must be of degree at most $n$.
Since $f(x)$ is Coo everywhere taking the $n+1$ th derivative of $f$ at $c$ ( or $x$ ) must always give $0$.
For the same reason : taking the $n$ th derivative of $f$ at $x$ must always give the same value $a$.
Hence we get $f(x) = 0 + a + ...$
Analogue taking the $n+2$ th deriv must give $bx$ agreed upon for all $c$ because $f(x)$ is Coo.
Hence we get $f(x) = 0 + a + bx + ...$
(By induction) We can continue this and then end up with $f(x) = 0 + a + bx + cx^2 + ... + £ x^n$ or in other words the polynomial that equals $f(x)$.
Conclusion: the same upper bound $n$ on the degree of the Taylor expansion around every $c$ (or $x$), together with $f$ being Coo (everywhere), is both sufficient and almost necessary.
Actually it is only necessary to be $C^{n+1}$; however, a polynomial is automatically Coo, so these conditions are both sufficient and necessary.
Thus the situation is completely understood.
QED.
-
I didn't downvote, but I responded in a similar way on FOla Yinka's answer: The point is that the $n$ that works at each point is allowed to vary as $c$ varies. So it's not "same degree $n$ from every $c$", but rather, "at each $c$, some $n$". Further, there are plenty of $C^{\infty}$ functions which are not equal to their Taylor series at any point, so saying $f(x) = 0+a+....$ is not necessarily valid. – Jason DeVito Nov 11 '12 at 18:24
Let me look at your argument line by line. First, the "if $f$ is a polynomial" line is irrelevant because it's assuming the conclusion. What you say in the next line, "Since $f$ is $C^\infty$ everywhere, taking the $n+1$st derivative must give $0$" is incomplete. What's assumed is that at each point $c$, there is an $n_c$ such that taking the derivative $n_c$ or more times, then plugging in $c$ gives $0$. What's not necessarily true is that $n_c =n_{d}$ for two different choices $c$ and $d$. This fact needs to be proven. (Continued) – Jason DeVito Nov 17 '12 at 19:45
What could go wrong (before you prove the fact) is that at each $c$, $n_c$ is finite, but if you take the supremum of the $n_c$ over all $c$ you get $\infty$. Said another way, it could be that $f$ looks a like a degree one polynomial (in terms of derivatives at one point), a degree $2$ polynomial at another point, a degree $3$ at another point,...., so that overall, it doesn't look like it has finite degree. – Jason DeVito Nov 17 '12 at 19:47
Could you point out exactly where you prove it? It looks like you just assume it starting with your third line. – Jason DeVito Nov 17 '12 at 21:36
"if a function is a polynomial in an interval": this isn't relevant since we don't know $f$ is a polynomial in any interval whatsoever. (We cannot assume $f$ is analytic anywhere in its domain, so local Taylor polynomials needn't necessarily agree with $f$ beyond a single point.) This answer shows that if $f$ is a polynomial then its local Taylor polynomials have lesser-or-equal degrees and in fact are equal to $f$. However, this line of reasoning says nothing about what happens if we do not assume $f$ is a polynomial, which is what you would need to do to answer the original question. – anon Nov 24 '12 at 12:47
http://math.stackexchange.com/questions/6466/subgroups-klein-bottle
Subgroups - Klein bottle
Let $G$ be the fundamental group of the Klein bottle,
$G = \langle x,y \ ; \ yxy^{-1}=x^{-1} \rangle = {\mathbb Z} \rtimes {\mathbb Z} \ .$
What are the nilpotent subgroups of $G$?
I was only able to find a normal series of abelian subgroups with cyclic quotients in $G$, namely
$1\leq \langle y^2 \ ; \ \ \rangle\leq \langle x,y^2 \ ; \ xy^2=y^2x \rangle\leq G \ .$
Since I'm not an algebraist, I'm sorry if this a silly question. Thanks!
-
2
Does nilpotence of a subgroup correspond to something interesting in the covering space? – Aaron Mazel-Gee Oct 11 '10 at 0:22
@da Fonseca: Every cyclic subgroup is abelian, hence nilpotent. – Arturo Magidin Oct 11 '10 at 1:05
@Aaron: I don't know. – da Fonseca Oct 11 '10 at 4:32
Is there a shorter (yet elementary) argument showing that $G$ is not nilpotent? – Florian Pei Oct 11 '10 at 4:32
Shorter than what? The commutator $[x^a,y] = x^{-a}y^{-1}x^ay = x^{-2a}$, so $[x,y,{}_n x]\neq 1$ for all positive integers $n$ (where $[a,{}_1b] = [a,b]$ and $[a,{}_{n+1}b] = [a,{}_n b,b]$). So the group is not nilpotent. – Arturo Magidin Oct 11 '10 at 6:05
2 Answers
Every cyclic subgroup is abelian, hence nilpotent. Every such subgroup is generated by an element of the form $x^ay^b$ with $a$ and $b$ integers.
In addition, $y^2$ commutes with $x$. Since $y^2$ is central, to determine the commutator of two elements we only need to consider the parity of the exponents of $y$ in their normal forms. Elements of the form $x^ay^{2b}$ and $x^ry^{2s}$ commute; elements of the form $x^ay^{2k+1}$ and $x^r$ commute if and only if $r=0$; and elements of the form $x^ay^{2k+1}$ and $x^ry^{2s+1}$ commute if and only if $a=r$. That gives you lots of abelian subgroups that are isomorphic to $\mathbb{Z}\times\mathbb{Z}$.
What about higher nilpotency? Suppose you have two noncommuting elements in your subgroup. It is not hard to check that $[x^ay^{2b+1},x^r] = x^{2r}$ and $[x^ay^{2b+1},x^r y^{2s+1}] = x^{2a-2r}$. But no nontrivial power of $x$ commutes with an element of the form $x^a y^{2b+1}$; and further commutators will just yield further nontrivial powers of $x$ that still do not commute with an element of the form $x^ay^{2b+1}$. So if your subgroup $H$ has an element of the form $h_1=x^a y^{2b+1}$, and $h$ is any nontrivial element of $H$ that is not a power of $h_1$, then $h_1$ and $h$ do not commute, and $h_1$ does not commute with any of $[h_1,h]$, $[h_1,h,h_1]$, $[h_1,h,h_1,h_1],\ldots,[h_1,h,h_1,\ldots,h_1]$, etc. But if $H$ is nilpotent of class $c$, then any commutator of weight $c$ would be central in $H$. Thus, $H$ cannot be nilpotent.
So the only nilpotent subgroups of $G$ are abelian, and they are given as above.
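These computations can be verified mechanically by writing elements $x^a y^b$ as pairs $(a,b)$, with the multiplication induced by $yxy^{-1}=x^{-1}$; the sketch below (helper names are my own) checks the commutator identities used above:

```python
# Elements x^a y^b of the Klein-bottle group as pairs (a, b), with
# (a1, b1)(a2, b2) = (a1 + (-1)^b1 * a2, b1 + b2), so y x y^-1 = x^-1.

def sgn(b):
    return -1 if b % 2 else 1

def mul(g, h):
    (a1, b1), (a2, b2) = g, h
    return (a1 + sgn(b1) * a2, b1 + b2)

def inv(g):
    a, b = g
    return (-sgn(b) * a, -b)

def comm(g, h):  # [g, h] = g^-1 h^-1 g h
    return mul(mul(mul(inv(g), inv(h)), g), h)

x, y = (1, 0), (0, 1)
assert mul(mul(y, x), inv(y)) == inv(x)   # defining relation

# [x^a y^(2b+1), x^r] = x^(2r)  and  [x^a y^(2b+1), x^r y^(2s+1)] = x^(2a-2r)
for a in range(-3, 4):
    for b in range(-2, 3):
        for r in range(-3, 4):
            assert comm((a, 2 * b + 1), (r, 0)) == (2 * r, 0)
            for s in range(-2, 3):
                assert comm((a, 2 * b + 1), (r, 2 * s + 1)) == (2 * a - 2 * r, 0)
print("commutator identities verified")
```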
-
That's great! Thanks. – da Fonseca Oct 11 '10 at 4:34
Here's a topological version of Arturo's argument. (I'd leave this as a comment, but I don't have enough rep.)
Every covering space of the Klein bottle is a surface, and it's fairly easy to convince yourself that the infinite-degree ones have cyclic, hence nilpotent, fundamental group.
It remains to consider the finite-degree covering spaces. Well, these are all closed surfaces of Euler characteristic zero (because Euler characteristic is nilpotent multiplicative), so by the classification of surfaces they are either tori or Klein bottles. As already noted, the Klein-bottle group itself is not nilpotent, so only the abelian subgroups are left.
-
Very nice answer! Forgive me, but what do you mean by "Euler characteristic is nilpotent". Thanks. – da Fonseca Oct 11 '10 at 4:38
Sorry, that was a typo. I meant 'multiplicative'. – HJRW Oct 11 '10 at 11:43
http://mathhelpforum.com/calculus/199856-prove-limit.html
|
# Thread:
1. ## Prove this limit
Prove that $\lim_{x \to 1} (x^2 - 2x + 4) = 3$.
2. ## Re: Prove this limit
If a function $f$ is continuous at $x_0$, then its limit as $x \to x_0$ is $f(x_0)$.
3. ## Re: Prove this limit
Using what "basis"? If you know about "continuous functions", and in particular that all polynomials are continuous for all values of x, then it is sufficient to say, as emakarov does, that the limit is $(1)^2- 2(1)+ 4= 1- 2+ 4= 3$.
If, on the other hand, you have only the definition of limit to work with, you need to look at $|f(x)- L|= |x^2- 2x+ 4- 3|= |x^2- 2x+ 1|= |x-1|^2< \epsilon$. What can you say if you choose $\delta= \sqrt{\epsilon}$, so that $|x- 1|< \delta$ becomes $|x- 1|< \sqrt{\epsilon}$?
4. ## Re: Prove this limit
The function $f(x) = x^2 - 2x + 4$ is continuous everywhere.
Therefore $\lim_{x \to 1} (x^2 - 2x + 4) = (1)^2 - 2(1) + 4 = 3$
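The $\delta = \sqrt{\epsilon}$ choice above can be spot-checked numerically (this is only a sanity check, of course, not a substitute for the proof):

```python
import math
import random

def f(x):
    return x**2 - 2*x + 4

# For a given eps, take delta = sqrt(eps).  Then |x - 1| < delta forces
# |f(x) - 3| = |x - 1|^2 < eps.  Sample points strictly inside the interval.
random.seed(0)
for eps in (1.0, 0.1, 1e-4, 1e-8):
    delta = math.sqrt(eps)
    for _ in range(10000):
        x = 1 + 0.999 * random.uniform(-delta, delta)
        assert abs(f(x) - 3) < eps
print("delta = sqrt(eps) verified on all sampled points")
```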
http://mathoverflow.net/questions/82917?sort=newest
## R-matrices, crystal bases, and the limit as q -> 1
I am seeking references for precise statements and rigorous proofs of some facts about the actions of quantum root vectors and $R$-matrices on crystal bases for finite-dimensional representations of quantum groups. I am very new to crystal bases, so I would also appreciate corrections if my questions are not well-formulated. I am putting the questions first, followed by the motivation for those who are curious.
## Questions
Do you know references (with proofs) for the following statements:
1. Let $V$ be a finite-dimensional $U_q(\mathfrak{g})$-module. Then the divided powers $E_{\beta}^{(t)}, F_\beta^{(t)}$ of the quantum root vectors have matrix coefficients given by Laurent polynomials in $q$, with respect to the global crystal basis for $V$.
2. Let $V,W$ be finite-dimensional $U_q(\mathfrak{g})$-modules. Then the matrix coefficients of the $R$-matrix $R_{V,W}$ are Laurent polynomials in $q$, with respect to the tensor product of the global crystal bases for $V,W$.
I believe I have proofs for these statements, but it would be nice to just reference something definitive instead of writing the proofs out myself.
## Background
Let $U_q(\mathfrak{g})$ be the quantized enveloping algebra of $\mathfrak{g}$ for $q$ not a root of unity, with generators $E_i,F_i,K_i$ corresponding to the simple roots of $\mathfrak{g}$. Using an action of the braid group of $\mathfrak{g}$ on $U_q(\mathfrak{g})$ one can define quantum root vectors $E_\beta,F_\beta$ for all positive roots $\beta$. (This depends on a choice of decomposition of the longest word of the Weyl group, so assume that we have fixed such a decomposition.)
Let $R_{U,V}$ be the action of the $R$-matrix on $U \otimes V$, (as in Chari-Pressley or Klimyk-Schmudgen, say) so $\tau \circ R_{U,V}$ is the braiding. I would like to make sense of the statement that `$R_{V,W} \to \mathrm{id}_{V \otimes W}$` as $q \to 1$ (and hence the braiding tends to the flip as $q \to 1$). This is not trivial because for different $q$'s, the operators $R_{V,W}$ are really operators on different vector spaces. This is where the crystal bases come in.
As I understand it, a crystal basis for a module has the property that the matrix coefficients of the generators $E_i,F_i$ of $U_q(\mathfrak{g})$ (and divided powers of the generators) are given by universal Laurent polynomials in $q$ whose coefficients are independent of $q$. Using this basis we can think of all of the algebras $U_q(\mathfrak{g})$ for various $q$'s acting on the same vector space. The point is that Laurent polynomials are continuous and well-defined at $q=1$, i.e. they are specializable to $q=1$.
Taking the tensor product of the crystal bases for $V$ and $W$, we can think of all of the $R$-matrices for various $q$'s acting on the same space as well, and it makes sense to ask if this family of $R$-matrices is continuous in $q$, and if so, whether it can be extended to $q=1$.
The formula for the action of the $R$-matrix is a big sum of products of operators of the form
`$$ \frac{1}{[t]_{q_\beta}!} E_\beta^t \otimes F_\beta^t$$`
with coefficients given by Laurent polynomials in $q$. Putting the $q$-factorial under, say, the $E_\beta^t$ term gives the divided power $E_\beta^{(t)}$. If the quantum root vectors and their divided powers act by Laurent polynomials, then the $R$-matrix does as well, and hence everything in sight is continuous in $q$, can be specialized to $q=1$, and it is clear that at $q=1$ the $R$-matrix is just the identity.
## 1 Answer
I never found a precise reference for the statement about the R-matrix, so I ended up writing it up myself. The precise statements and proofs can be found in $\S 4.1$ of my paper with Alex Chirvasitu, Remarks on quantum symmetric algebras, available here.
http://mathoverflow.net/questions/65652/symplectic-classes-on-rational-surfaces
## symplectic classes on rational surfaces.
Hi. I have a stupid question. Let $M$ be a blow-up of the complex projective plane at $k$ generic points. Then we can choose an orthogonal basis (with respect to the cup product) $H, E_1, \cdots, E_k$ of $H^2(M;\mathbb{Z})$ such that $H^2 = 1, E_i^2 = -1$ for each $i=1,\cdots,k$. Then my question is,
For a given class $C = aH + b_1E_1 + \cdots b_kE_k \in H^2(M;\mathbb{Z})$, how can we check whether $C$ is a symplectic class or not? (I mean, how can we know there exists a symplectic form which represents $C$?). For example, if $k=2$, is there a symplectic form which represents a class $2H - E_1 - E_2$?
I'd really appreciate any comments. Thank you in advance.
## 1 Answer
This answer has been rewritten and includes more details.
First of all, I highly recommend the article by Paul Biran, From Symplectic Packing to Algebraic Geometry and Back, available on the page http://www.math.tau.ac.il/~biranp/Publications/Pubications.html , especially Theorem 3.2.
Your question basically asks "what is the symplectic cone of $\mathbb CP^2$ blown up at a finite number of points?" This question was answered by Paul Biran (check Theorem 3.2 from the above article), though the answer is not 100% explicit. Also, it is known that the symplectic cone of $\mathbb CP^2$ blown up at up to $9$ points coincides with the Kahler cone if the points are chosen so that the resulting surface has only $-1$ curves (in particular it is Fano if the number of points is at most $8$). This permits one to answer your last question (that is done below). In fact, Kahler cones of Fano surfaces are rather classical objects and all basic questions about them can be answered.
I would like to add that a certain conjecture from algebraic geometry, the Harbourne-Hirschowitz conjecture, implies that the Kahler cone of $\mathbb CP^2$ blown up at a very generic collection of points coincides with its symplectic cone. (Literally, this conjecture says the following: any integral curve with negative self-intersection on the blow-up of $\mathbb CP^2$ at a set of points in very general position is a smooth rational curve with self-intersection $-1$. In order to deduce the statement that the symplectic cone coincides with the Kahler one you have to use SW theory.) The Harbourne-Hirschowitz conjecture is open even for $\mathbb CP^2$ blown up at $10$ points, and the famous Nagata conjecture is a partial case of it.
Now let us answer the last bit of the question. The class $2H-E_1-E_2$ is not symplectic. The symplectic cone of $\mathbb CP^2$ blown up in $2$ points coincides with the Kahler cone of $\mathbb CP^2$ blown up in two points. And $H-E_1-E_2$ is a rational $-1$-curve on this surface, while $(2H-E_1-E_2)\cdot (H-E_1-E_2)=0$, so $2H-E_1-E_2$ is not ample, and hence not symplectic.
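The final computation can be spot-checked with the intersection form in the basis $H, E_1, \dots, E_k$, which is diagonal with entries $(+1, -1, \dots, -1)$. A minimal sketch (my own, not part of the original answer):

```python
# Intersection form on H^2 of CP^2 blown up at k points, in the basis
# H, E_1, ..., E_k (classes encoded as coefficient tuples):
# H^2 = +1, E_i^2 = -1, mixed products vanish.

def dot(c1, c2):
    return c1[0] * c2[0] - sum(u * v for u, v in zip(c1[1:], c2[1:]))

C = (2, -1, -1)   # the class 2H - E_1 - E_2
X = (1, -1, -1)   # the rational (-1)-curve H - E_1 - E_2

assert dot(X, X) == -1   # X really is a (-1)-class
assert dot(C, C) == 2    # C^2 = 4 - 1 - 1 > 0, so positivity alone doesn't rule C out
assert dot(C, X) == 0    # but C . X = 0, so C is not ample, hence not symplectic
```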
Thank you very much!! I always appreciate your help :) – YCho May 22 2011 at 13:22
You are very welcome :) ! – Dmitri May 22 2011 at 13:36
@Dmitri: I didn't think the question was that involved: Isn't it true that if $C$ is a symplectic class, then $C^2>0$ (the symplectic class is a volume form) and $C\cdot X>0$ for any complex curve $X$ (since every complex curve is a symplectic submanifold). So $C$ has to satisfy $C^2=a^2-\sum b_i^2 >0$ and $aA-\sum b_iB_i>0$ when $X=AH+\sum_i B_iE_i$ is represented by any complex curve. Maybe I'm missing something obvious. – Paul May 22 2011 at 16:44
OK, I get it, its the converse that is tricky. – Paul May 22 2011 at 20:05
Paul, yes, indeed $C\cdot X$ must be positive; the point is that if we blow up $\mathbb CP^2$ at $n\ge 9$ generic points the number of rational $-1$-curves is infinite, and the structure of this infinite set is quite involved. If you blow up $\mathbb CP^2$ at $8$ points the number of $-1$ curves is around 250 (I am too lazy right now to make the precise calculation) – Dmitri May 22 2011 at 23:32
http://physics.stackexchange.com/questions/44732/is-string-theory-formulated-in-flat-or-curved-spacetime/44738
# Is String Theory formulated in flat or curved spacetime?
String Theory is formulated in 10 or 11 (or 26?) dimensions, where it is assumed that all of the space dimensions except for 3 (large) space dimensions and 1 time dimension form a compact manifold of very small size. The question is: what is assumed about the curvature of the large 3 space and 1 time dimensions? If these dimensions are assumed to be flat, then how is String Theory ever able to reproduce the equations of General Relativity, which require curved spacetime in the presence of mass-energy (of course, the actual source term for General Relativity is the stress-energy tensor)?
On the other hand if String Theory is formulated on a general curved space-time with an unknown metric (usually signified by $g_{\mu\nu}$) how do the equations of General Relativity that puts constraints on $g_{\mu\nu}$ arise from string theory?
It is well known that General Relativity requires a spin-2 massless particle as the "force mediation" particle (similar to the photon as the spin-1 massless force mediation particle of electromagnetism). It is also well known that String Theory can accommodate the purported spin-2 massless particle as the oscillation of a closed string. But how does this graviton particle relate to the curvature of the large dimensions of space-time?
I am aware that "How does String Theory predict Gravity?" is somewhat similar to this question, but I do not think it actually contains an answer to this question, so please don't mark it as a duplicate. I would especially appreciate an answer that could be understood by a non-theoretical ("String Theory") physicist - hopefully the answer would be at a level higher than a popular non-mathematical explanation. In other words, assume the reader of the answer understands General Relativity and particle physics, but not String Theory.
Update from Comment to Clarify: If you start with flat space then $g_{\mu\nu}$ isn't the metric tensor since you assumed flat space. If you start with arbitrarily curved space why would and how could you prove that the components of the graviton give you the metric tensor? I am interested in the strongly curved space case since that is where GR differs the most from Newtonian gravity. In flat space you could sort of consider weak Newtonian gravity to be the result of the exchange of massless spin-2 particles. But strong gravity needs actual space curvature to be equivalent to GR.
Considering quantum gravity to be a gauge theory too, the graviton (which cannot be avoided in ST) is described by the metric tensor field, which gives the curvature of spacetime and couples to the energy-momentum tensor, which takes the role of the charge. That is how I imagine it to work, at least in situations with not too strong gravitational fields. – Dilaton Nov 21 '12 at 2:57
@dilaton, I understand the graviton cannot be avoided. Your comment maybe kinda makes sense. But if you start with flat space then $g_{\mu\nu}$ isn't the metric since you assumed flat space. If you started in arbitrary curved space why would and how could you prove that the components of the graviton are actually the metric tensor? I guess I am interested in the strongly curved space since that is where GR differs the most from Newtonian gravity. In flat space you could consider weak Newtonian gravity from exchange of massless particles. But strong gravity needs space curvature. – FrankH Nov 21 '12 at 4:01
This doesn't really have anything to do with string theory. Your question is more along the lines of: how can a linearized field theory look like a full nonlinear geometric theory? The answer and procedure take quite a few pages to write down, but mathematically it's actually very much like what happens with nonabelian gauge theory, where you can ask a similar question, e.g. "how can gluons be quanta?". In the case of gravity, unfortunately the whole story has been obfuscated by a group of people who confuse themselves needlessly (possibly for historical reasons), but it really is quite simple – Columbia Nov 21 '12 at 6:58
## 1 Answer
String theory may be considered as a framework to calculate scattering amplitudes (or other physically meaningful, gauge-invariant quantities) around a flat background; or any curved background (possibly equipped with nonzero values of other fields) that solves the equations of motion. The curvature of spacetime is physically equivalent to a coherent state (condensate) of closed strings whose internal degrees of freedom are found in the graviton eigenstates and whose zero modes and polarizations describe the detailed profile $g_{\mu\nu}(X^\alpha)$.
Einstein's equations arise as equations for the vanishing of the beta-functions – derivatives of the (continuously infinitely many) world sheet coupling constants $g_{\mu\nu}(X^\alpha)$ with respect to the world sheet renormalization scale – which is needed for the scaling conformal symmetry of the world sheet (including the quantum corrections), a part of the gauge symmetry constraints of the world sheet theory. Equivalently, one may realize that the closed strings are quanta of a field and calculate their interactions in an effective action from their scattering amplitudes at any fixed background. The answer is, once again, that the low-energy action is the action of general relativity; and the diffeomorphism symmetry is actually exact. It is not a surprise that the two methods produce the same answer; it is guaranteed by the state-operator correspondence, a mathematical fact about conformal field theories (such as the theory on the string world sheet).
The relationship between the spacetime curvature and the graviton mode of the closed string is that the former is the condensate of the latter. They're the same thing. They're provably the same thing. Adding closed string excitations to a background is the only way to change the geometry (and curvature) of this background. (This is true for all other physical properties; everything is made out of strings in string theory.) On the contrary, when we add closed strings in the graviton mode to a state of the spacetime, their effect on other gravitons and all other particles is physically indistinguishable from a modification of the background geometry. Adjustment of the number and state of closed strings in the graviton mode is the right and only way to change the background geometry. See also
http://motls.blogspot.cz/2007/05/why-are-there-gravitons-in-string.html?m=1
Let me be a bit more mathematical here. The world sheet theory in a general background is given by the action $$S = \int d^2\sigma\,g_{\mu\nu}(X^\alpha(\sigma)) \partial_\alpha X^\mu(\sigma)\partial^\alpha X^\nu(\sigma)$$ It is a modified Klein-Gordon action for 10 (superstring) or 26 (bosonic string theory) scalar fields in 1+1 dimensions. The functions $g_{\mu\nu}(X^\alpha)$ define the detailed theory; they play the role of the coupling constants. The world sheet metric may always be (locally) put to the flat form, by a combination of the 2D diffeomorphisms and Weyl scalings.
Now, the scattering amplitudes in (perturbative) string theory are calculated as $$A = \int {\mathcal D} h_{\alpha\beta}\cdots \exp(-S)\prod_{i=1}^n \int d^2\sigma V_i$$ We integrate over all metrics on the world sheet, add the usual $\exp(-S)$ dependence on the world sheet action (Euclideanized, to make it mathematically convenient by a continuation), and insert $n$ "vertex operators" $V_i$, integrated over the world sheet, corresponding to the external states.
The key thing for your question is that the vertex operator for a graviton has the form $$V_{\rm graviton} = \epsilon_{\mu\nu}\partial_\alpha X^\mu (\sigma)\partial^\alpha X^\nu(\sigma)\cdot \exp(ik\cdot X(\sigma)).$$ The exponential, the plane wave, represents (the basis for) the most general dependence of the wave function on the spacetime, $\epsilon$ is the polarization tensor, and each of the two $\partial_\alpha X^\mu(\sigma)$ factors arises from one excitation $\alpha_{-1}^\mu$ of the closed string (or with a tilde) above the tachyonic ground state. (It's similar for the superstring but the tachyon is removed from the physical spectrum.)
Because of these two derivatives of $X^\mu$, the vertex operator has the same form as the world sheet Lagrangian (kinetic term) itself, with a more general background metric. So if we insert this graviton into a scattering process (in a coherent state, so that it is exponentiated), it has exactly the same effect as if we modify the integrand by changing the factor $\exp(-S)$ by modifying the "background metric" coupling constants that $S$ depends upon.
So the addition of the closed string external states to the scattering process is equivalent to not adding them but starting with a modified classical background. Whether we include the factor into $\exp(-S)$ or into $\prod V_i$ is a matter of bookkeeping – it is the question which part of the fields is considered background and which part is a perturbation of the background. However, the dynamics of string theory is background-independent in this sense. The total space of possible states, and their evolution, is independent of our choice of the background. By adding perturbations, in this case physical gravitons, we may always change any allowed background to any other allowed background.
We always need some vertex operators $V_i$, in order to build the "Fock space" of possible states with particles – not all states are "coherent", after all. However, you could try to realize the opposite extreme attitude, namely to move "all the factors", including those from $\exp(-S)$, from the action part to the vertex operators. Such a formulation of string theory would have no classical background, just the string interactions. It's somewhat singular but it's possible to formulate string theory in this way, at least in the cubic string field theory (for open strings). It's called the "background-independent formulation of the string field theory": instead of the general $\int\Psi*Q\Psi+\Psi*\Psi*\Psi$ quadratic-and-cubic action, we may take the action of string field theory to be just $\int\Psi*\Psi*\Psi$ and the quadratic term (with all the kinetic terms that know about the background spacetime geometry) may be generated if the string field $\Psi$ has a vacuum condensate. Well, it's a sort of a singular one, an excitation of the "identity string field", but at least formally, it's possible: the whole spacetime may be generated purely out of stringy interactions (the cubic term), with no background geometry to start with.
+1 good look answer – Neo Nov 21 '12 at 7:54
Wow, this is such a very nice answer and I will have to read the TRF article linked to too :-). In addidion to nicely explain how gravity really works, this post efficiently proves any claims, often thrown around in popular science magazines that ST is no good because it is not background independent etc, wrong. I like this answer a lot, +1 and stared – Dilaton Nov 21 '12 at 8:10
Thanks for your interest, folks! Lurscher: in the background-independent formulation, the "physical" vacuum state with the ordinary regular metric tensor etc. is a condensate of many strings in a very complicated state. The only state without any particle excitations is the state with the values of all fields set to zero, i.e. $\Psi=0$ in this case. Everything else has some quanta, by definition. They're zero-momentum quanta but they're there. So $\Psi=0$ in the background-independent variables is a different state - different vacuum, if you wish - than the normal one. – Luboš Motl Nov 21 '12 at 15:51
The string field theory action is a functional integral over all shapes of half-strings, $\int {\mathcal D}X(\sigma)$. It's surely not just an integral over the spacetime, it also includes all the integrals over the nonzero modes. Please, this is not a blog entry about string field theory and a full introduction to basics of string theory doesn't belong to any answers on SE, anyway. Please find some intro to string field theory if you want to understand it. – Luboš Motl Nov 21 '12 at 15:53
Interesting, it will take me a long time to digest every line of it. As I understand from what I read, varying the background metric in the path integral has no relevance in string theory. Is it so? – Prathyush Nov 21 '12 at 17:35
http://physics.stackexchange.com/questions/12330/calculating-angles-for-tetrahedral-molecular-geometry
# Calculating angles for tetrahedral molecular geometry
Let's say I have a molecule like CH3F (i.e. fluoromethane), and I'm able to measure the angle $\theta$ between the C-H bonds. Given $\theta$, what is the angle between the C-F bond and the C-H bonds?
## 1 Answer
Use a coordinate system with the C at the origin and the F on the $z$ axis, and one of the H's in the $xz$ plane. Let $\alpha$ be the angle between the C-F and C-H bonds and $\theta$ be the angle between two C-H bonds. The Cartesian components of unit vectors in the directions of the 3 H's are $$(\sin\alpha,0,\cos\alpha),(\sin\alpha\cos 2\pi/3,\sin\alpha\sin 2\pi/3,\cos\alpha),(\sin\alpha\cos 4\pi/3,\sin\alpha\sin 4\pi/3,\cos\alpha).$$ The dot product of any two of these should equal $\cos\theta$: $$\cos\theta=\sin^2\alpha\cos 2\pi/3+\cos^2\alpha=-{1\over 2}\sin^2\alpha+\cos^2\alpha={3\over 2}\cos^2\alpha-{1\over 2}.$$ The solution is $$\cos\alpha=\pm\sqrt{1+2\cos\theta\over 3}.$$ The negative root is the relevant one here, since we know $\alpha$ is obtuse.
If you happen to remember that the angle for methane is $\theta=\cos^{-1}(-1/3)$, you can check that this makes sense: the above formula gives $\alpha=\theta$ in that case, as it should.
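The closed-form answer is easy to wrap in a short numerical check (my own sketch, not part of the original answer); the methane case $\theta = \cos^{-1}(-1/3)$ serves as the test:

```python
import math

def alpha_from_theta(theta):
    """Angle between the C-F and C-H bonds, given the H-C-H angle theta.
    Obtuse root of cos(theta) = (3/2) cos^2(alpha) - 1/2."""
    return math.acos(-math.sqrt((1 + 2 * math.cos(theta)) / 3))

# Methane check: theta = arccos(-1/3) must give alpha = theta.
theta_methane = math.acos(-1.0 / 3.0)
assert abs(alpha_from_theta(theta_methane) - theta_methane) < 1e-12

# Example: slightly pinched H-C-H angles push the C-F bond further out.
print(math.degrees(alpha_from_theta(math.radians(108))))
```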
http://www.physicsforums.com/showthread.php?t=592619
Physics Forums
## Quadratic Reciprocity
If a is a quadratic nonresidue of the odd primes p and q, then is the congruence $x^2 \equiv a (\text{mod } pq)$ solvable?
Obviously, we want to evaluate $\left( \frac{a}{pq} \right)$. I factored a into its prime factors and used the law of QR and Euler's Criterion to get rid of the Legendre symbols needed to evaluate $\left( \frac{a}{pq}\right)$. I don't believe that this helped, though, because I get that it is conditionally solvable, which I don't think is possible from the way the question is worded. (To be exact, I concluded that if a has only one prime factor, then it is unsolvable unless it is 2. It is solvable for every other case.)
Any help is appreciated.
Hi, Joe, I'd go straight at the definition: if there is a solution x to the congruence modulo pq, then $x^2 - a = kpq$ for some integer k; but then the same x would solve the congruences mod p and mod q.
Quote by joeblow If a is a quadratic nonresidue of the odd primes p and q, then is the congruence $x^2 \equiv a (\text{mod } pq)$ solvable? Obviously, we want to evaluate $\left( \frac{a}{pq} \right)$. I factored a into its prime factors and used the law of QR and Euler's Criterion to get rid of the legendre symbols needed to evaluate $\left( \frac{a}{pq}\right)$. I don't believe that this helped, though, because I get that it is conditionally solvable, which I don't think is possible from the way the question is worded. (To be exact, I concluded that if a has only one prime factor, then it is unsolvable unless it is 2. It is solvable for every other case.) Any help is appreciated.
"a is not a quadratic residue modulo p" means "there don't exist integers x,k s.t. $x^2=a+pk$.
From here it follows that if a is not a quad. res. modulo p, q then it can't be a quad. res. modulo pq, since then $x^2=a+rpq = a+(rp)q\Longrightarrow$ a is a quad. res. mod q, against the given data.
DonAntonio
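DonAntonio's contrapositive (a nonresidue mod p stays a nonresidue mod pq) can be confirmed by brute force for small primes. A quick sketch (my addition); note it also illustrates why evaluating the Jacobi symbol $\left( \frac{a}{pq} \right)$ doesn't settle solvability: here $\left( \frac{a}{pq} \right) = (-1)(-1) = +1$ even though $a$ is a nonresidue mod $pq$.

```python
def is_qr(a, n):
    """True iff x^2 = a (mod n) has a solution."""
    return any(x * x % n == a % n for x in range(n))

# If a is a nonresidue mod p and mod q, it is a nonresidue mod pq:
# any solution mod pq would reduce to a solution mod p (and mod q).
for p, q in [(3, 5), (3, 7), (5, 7), (7, 11)]:
    for a in range(1, p * q):
        if not is_qr(a, p) and not is_qr(a, q):
            assert not is_qr(a, p * q)
print("no counterexamples found")
```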
http://math.stackexchange.com/questions/281254/question-on-cech-cohomology
# Question On Cech Cohomology
In Section 16 of the topology book by Bott and Tu, there is a path fibration $\Omega S^2 \to PS^2 \to S^2$. The $E_2$ page of the spectral sequence of this fibration is $$E_2^{p,q}=H^p(S^2,H^q(\Omega S^2)).$$
This is the Cech cohomology of $S^2$ with values in $H^q(\Omega S^2)$. My question is: why are all columns in $E_2$ except $p=0$ and $p=2$ zero? Why can we show this by using the universal coefficient theorem of singular cohomology? Thanks!
Edit: I know how to calculate the Cech cohomology of a manifold with a good cover, but this seems to be a direct limit of groups. Since we don't know whether $H^q(\Omega S^2)$ is free, we can't get $E_2=H^p(S^2) \otimes H^q(\Omega S^2)$, so how do we get the result?
$S^2$ can be given a CW complex structure, so Cech cohomology with constant coefficient is naturally isomorphic to Singular cohomology with that coefficient. – Sanchez Jan 18 at 7:47
@Sanchez Thank you! :) – Jiangnan Yu Jan 18 at 8:01
http://mathoverflow.net/revisions/103005/list
From Rick Kenyon's open problem list:
What are the minimal number of squares needed to tile an $a \times b$ rectangle?
Kenyon showed the correct order is $\log a$ assuming $a/b$ is bounded with $b \leq a$. However, there is plenty of room for improvement in the constant factor, and an exact formula seems far, far away.
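For orientation, the naive greedy tiling — repeatedly slicing off the largest possible square — uses a number of squares equal to the sum of the continued-fraction quotients of $a/b$. This is only an upper bound, and a poor one in general (for an $a \times 1$ strip it uses $a$ squares). A minimal sketch; the function name is my own:

```python
def greedy_square_count(a: int, b: int) -> int:
    """Number of squares used by the greedy (continued-fraction) tiling
    of an a x b rectangle: repeatedly slice off the largest square.
    This is an upper bound on the minimum, not the optimum."""
    count = 0
    while b:
        q, r = divmod(a, b)  # q squares of side b fit side by side
        count += q
        a, b = b, r
    return count

print(greedy_square_count(13, 5))  # 6 squares: sides 5, 5, 3, 2, 1, 1
print(greedy_square_count(5, 5))   # 1
print(greedy_square_count(6, 4))   # 3 squares: sides 4, 2, 2
```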
|
http://nrich.maths.org/5589/solution
|
# Sextet
##### Stage: 5 Challenge Level:
Oliver from Olchfa School and Simon from Elizabeth College, Guernsey both proved a general property of the sequence, namely that each term is the difference of the two previous terms. From this they found that the sequence is a repeating cycle of six values. You may like to consider more general sequences $a, b, b-a, ...$ with the property that each term is the difference of the two previous terms and investigate whether such sequences are always cyclical.
This is Oliver's solution:
Let $A_n = x^n + {1\over x^n}$ and suppose $x + {1\over x} = 1$, so that $A_1 = 1$; note that this equation has no real solutions.
We have $A_0 = 1 + 1 = 2$.
For $A_2 = x^2+ {1\over x^2}$: since $(x + {1\over x})^2 = x^2 + 2 + {1\over x^2}= 1$, we get $A_2 = 1 - 2 = A_1 - A_0 = -1$.
For $A_3 = x^3+ {1\over x^3}$: since $A_2 = A_2 A_1 = (x^2 + {1\over x^2})(x + {1\over x}) = x^3 + x + {1\over x} + {1\over x^3} = A_3 + A_1$ (using $A_1 = 1$), therefore $A_3 = A_2 - A_1 = -1 -1 = -2$.
In general $A_{n-1} = A_{n-1}A_1 = (x^{n-1} + {1\over x^{n-1}})(x + {1\over x}) = x^n + x^{n-2} + {1\over x^n} + {1\over x^{n-2}} = A_n + A_{n-2}$, therefore $A_n = A_{n-1} - A_{n-2}$.
From $A_0=2$ and $A_1=1$ we can generate the whole sequence of $A_n$ as follows: 2, 1, -1, -2, -1, 1, 2, 1, -1, ... We can see that the sequence is a repeating pattern of 2, 1, -1, -2, -1, 1 for successive values of $n$ with a period of 6.
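Oliver's recurrence and the period-6 cycle are easy to check numerically; a quick sketch:

```python
# Generate A_n from the recurrence A_n = A_{n-1} - A_{n-2} proved above,
# starting from A_0 = 2 and A_1 = 1, and observe the period-6 cycle.
def A(n_max):
    seq = [2, 1]
    for _ in range(2, n_max + 1):
        seq.append(seq[-1] - seq[-2])
    return seq

seq = A(17)
print(seq[:6])  # [2, 1, -1, -2, -1, 1]
assert all(seq[n] == seq[n % 6] for n in range(len(seq)))  # period 6
```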
Rupert from Wales High School noted that:
If $x + {1\over x}=1$ then $x^2-x + 1 = 0$ and $x\neq -1$, so $(x+1)(x^2 - x +1)=x^3 +1 =0$. Hence $x^3 = -1$, so $x^4 = -x$, $x^5 = -x^2$, $x^6 = -x^3 = 1$ and $x^7 = x$. From this it is easy to show that $x^n + {1\over x^n}$ takes the values 1, -1, -2, -1, 1, 2, 1, -1, .... cyclically.
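Rupert's observation can also be verified with complex arithmetic: the roots of $x + 1/x = 1$ are $x = e^{\pm i\pi/3}$, so $x^n + x^{-n} = 2\cos(n\pi/3)$. A small sketch:

```python
import cmath
import math

# The roots of x + 1/x = 1 are x = exp(±iπ/3), so x³ = -1 and
# x^n + x^(-n) = 2·cos(nπ/3), which cycles with period 6.
x = cmath.exp(1j * math.pi / 3)
assert abs(x + 1 / x - 1) < 1e-12   # x satisfies x + 1/x = 1
assert abs(x ** 3 + 1) < 1e-12      # x³ = -1

values = [round((x ** n + x ** -n).real) for n in range(1, 13)]
print(values)  # [1, -1, -2, -1, 1, 2, 1, -1, -2, -1, 1, 2]
```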
|
http://cms.math.ca/10.4153/CMB-2008-044-8
|
Canadian Mathematical Society
www.cms.math.ca
# On the Maximal Spectrum of Semiprimitive Multiplication Modules
http://dx.doi.org/10.4153/CMB-2008-044-8
Canad. Math. Bull. 51(2008), 439-447
Published:2008-09-01
Printed: Sep 2008
• Karim Samei
## Abstract
An $R$-module $M$ is called a multiplication module if for each submodule $N$ of $M$, $N=IM$ for some ideal $I$ of $R$. As defined for a commutative ring $R$, an $R$-module $M$ is said to be semiprimitive if the intersection of maximal submodules of $M$ is zero. The maximal spectra of a semiprimitive multiplication module $M$ are studied. The isolated points of $\Max(M)$ are characterized algebraically. The relationships among the maximal spectra of $M$, $\Soc(M)$ and $\Ass(M)$ are studied. It is shown that $\Soc(M)$ is exactly the set of all elements of $M$ which belongs to every maximal submodule of $M$ except for a finite number. If $\Max(M)$ is infinite, $\Max(M)$ is a one-point compactification of a discrete space if and only if $M$ is Gelfand and for some maximal submodule $K$, $\Soc(M)$ is the intersection of all prime submodules of $M$ contained in $K$. When $M$ is a semiprimitive Gelfand module, we prove that every intersection of essential submodules of $M$ is an essential submodule if and only if $\Max(M)$ is an almost discrete space. The set of uniform submodules of $M$ and the set of minimal submodules of $M$ coincide. $\Ann(\Soc(M))M$ is a summand submodule of $M$ if and only if $\Max(M)$ is the union of two disjoint open subspaces $A$ and $N$, where $A$ is almost discrete and $N$ is dense in itself. In particular, $\Ann(\Soc(M))=\Ann(M)$ if and only if $\Max(M)$ is almost discrete.
Keywords: multiplication module, semiprimitive module, Gelfand module, Zariski topology
MSC Classifications: 13C13 - Other special types
|
http://mathoverflow.net/questions/36431?sort=oldest
|
eigenvalues of edge regular graphs
In graph theory, an edge regular graph is defined as follows. Let G = (V,E) be a regular graph with v vertices and degree k. G is said to be edge regular if there is also an integer λ such that:
Every two adjacent vertices have λ common neighbors.
A graph of this kind is sometimes said to be an er(v,k,λ).
I want to know about the eigenvalues of an edge regular graph. How can we find the eigenvalues of such a graph?
It is clear that one eigenvalue is k. I would not expect that the parameters (v,k,λ) alone determine the other eigenvalues. – Tsuyoshi Ito Aug 23 2010 at 11:41
2 Answers
In the event that every non-adjacent pair has a fixed number $\mu$ of common neighbors, this is a strongly regular graph srg$(v,k,\lambda,\mu)$, for which the 3 distinct eigenvalues and their multiplicities are known: http://en.wikipedia.org/wiki/Strongly_regular_graph (in this case the graph has diameter 2 or is a disjoint union of isomorphic complete graphs). An er graph with $k=2$ and $\lambda=0$ is a disjoint union of cycles (none of length 3) and could have a wide range of eigenvalues. If you want connected examples, then with $v=12$, $k=3$ and $\lambda=0$ you could have a wide variety of graphs obtained by connecting each point to three others without creating triangles (such as the skeleton of a hexagonal prism; the skeleton of a dodecahedron, with $v=20$, is a similar example). All would have 3 as the unique largest eigenvalue, but the rest of the spectrum could be many things.
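As a concrete check of the er(12, 3, 0) example above, here is a small sketch that builds the hexagonal prism, verifies it is 3-regular with $\lambda = 0$, and confirms that 3 is an eigenvalue (the all-ones vector is an eigenvector of any $k$-regular graph with eigenvalue $k$):

```python
# Hexagonal prism: two hexagons 0..5 and 6..11 joined by rungs i -- i+6.
n = 12
adj = {v: set() for v in range(n)}
for i in range(6):
    for u, w in [(i, (i + 1) % 6), (i + 6, (i + 1) % 6 + 6), (i, i + 6)]:
        adj[u].add(w)
        adj[w].add(u)

assert all(len(adj[v]) == 3 for v in adj)      # 3-regular
assert all(not (adj[u] & adj[w])               # lambda = 0: adjacent
           for u in adj for w in adj[u])       # vertices share no neighbor

# For a k-regular graph the all-ones vector is an eigenvector of the
# adjacency matrix with eigenvalue k, so 3 is the largest eigenvalue here.
ones = [1] * n
Av = [sum(ones[w] for w in adj[v]) for v in range(n)]
print(Av == [3] * n)  # True
```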
For the special case when every two non-adjacent vertices have exactly $c_1$ or $c_2$ common neighbours, there are some eigenvalue bounds in my paper:
http://www.tandfonline.com/doi/abs/10.1080/03081080600867210
|
http://mathhelpforum.com/advanced-applied-math/120555-optimization-time-dependent-constraints-need-help-please.html
|
# Thread:
1. ## optimization with time dependent constraints, need help please~
Dear all,
I am trying to obtain the optimal solution from constraints that are represented in time function:
Minimize F = 100Ppv + 2Pdg
subject to:
$Ppv(t) + Pdg(t) \geq PL(t)$
$\sum Ppv(t) \leq 0.4 \times \sum PL(t)$
t is from 1 to 24 and
PL is given as below:
$PL(t) = 2$ for $1\leq t \leq 6$
$PL(t) = 8$ for $7\leq t \leq 18$
$PL(t) = 4$ for $19\leq t \leq 24$
This is an electrical system optimization problem that needs to find the optimal power supply from the solar panel (PV) and the diesel generator (dg) with a given load demand for 24 hours.
The first constraint is to ensure that the combined power from pv and dg will meet the load demand at any time. The second constraint is that the total supply from the pv is limited to 40% of the load demand. The objective is to minimize the cost of this system with the optimal Ppv and Pdg.
I have problems solving this question, as most of the maths reference book examples show only decision variables x1, x2, ...etc. that are time independent.
Your kind assistance is appreciated. Thank you
2. Originally Posted by lpy
There is something wrong with this question. First, you do not say how the time dependence of F is to be dealt with. Also, are you sure that you can demand any power you want from the solar panel at any hour?
As it stands it looks like three independent optimisation problems.
CB
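A hedged illustration of the last point: if one assumes the objective is summed over $t$ and both powers are nonnegative, the problem splits into 24 independent hourly problems, and since PV costs 100 per unit versus 2 for the generator, each hourly optimum uses the generator alone (the 40% PV cap is then satisfied trivially). This is one possible reading of the question, not a resolution of the ambiguity noted above:

```python
# Assuming the cost is summed over t and Ppv(t), Pdg(t) >= 0, each hour
# independently minimizes 100*Ppv + 2*Pdg subject to Ppv + Pdg >= PL(t),
# so the cheaper source (dg at 2/unit) carries the whole load.
def PL(t):
    if 1 <= t <= 6:
        return 2
    if 7 <= t <= 18:
        return 8
    return 4  # 19 <= t <= 24

total_cost = 0
for t in range(1, 25):
    Ppv, Pdg = 0, PL(t)  # per-hour optimum under these assumptions
    total_cost += 100 * Ppv + 2 * Pdg

print(total_cost)  # 264 = 2 * (6*2 + 12*8 + 6*4)
```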
3. ## Optimization with time dependent constraints
Hi, thank you for the reply.
The pv power is available during daytime only; the rest of the time the output from the panel is zero, and the summation of this pv power is limited to supplying a portion of the load demand.
Do you have any suggestion where can I find a reference or example with time dependent constraints? Thanks.
|
http://mathoverflow.net/questions/87276/an-equivalence-relation-on-the-power-set-of-the-plane
|
## An equivalence relation on the power set of the plane.
Let $R\subseteq\mathbb{R}^2$. Consider the set of all "horizontal sections" $H_R = \{Rb \mid b\in\mathbb{R}\}$, where $Rb=\{a\in\mathbb{R} \mid (a,b)\in R\}$. Similarly, consider the set of "vertical sections" of $R$, $V_R = \{aR \mid a\in\mathbb{R}\}$, where $aR=\{b\in\mathbb{R} \mid (a,b)\in R\}$. Now define the equivalence relation on $\wp(\mathbb{R}^2)$ such that $R \sim S$ if, and only if, $H_R=H_S$ and $V_R=V_S$.
1. Do you have any reference to this equivalence relation or a similar one?
2. What connections does it have to topology?
3. As an example, can you describe the equivalence class of a disk?
Of course this can be generalized to any set of binary relations, but I want to understand it in the case of the plane.
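The definitions are easy to experiment with on finite subsets of $\mathbb{Z}^2$; a small sketch (note that reflecting across the diagonal swaps the roles of $H$ and $V$, so it does not in general preserve $\sim$):

```python
# A discrete illustration of the definitions: section sets of a finite
# R ⊆ Z², and a check that reflecting across the diagonal swaps H and V.
def sections(R):
    xs = {a for a, _ in R}
    ys = {b for _, b in R}
    H = {frozenset(a for a, b in R if b == y) for y in ys}
    V = {frozenset(b for a, b in R if a == x) for x in xs}
    return H, V

R = {(0, 0), (0, 1), (1, 1)}
H_R, V_R = sections(R)           # H_R = {{0}, {0,1}},  V_R = {{0,1}, {1}}

S = {(b, a) for a, b in R}       # reflect across the diagonal
H_S, V_S = sections(S)
assert (H_S, V_S) == (V_R, H_R)  # reflection swaps the two section sets
assert H_R != H_S                # ... so here R and S are NOT equivalent
```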
## 1 Answer
The equivalence class of the closed unit disk $\{(x,y): x^2 + y^2 \le 1 \}$ consists of the sets $S = \{(x,y) \in [-1,1] \times [-1,1]: |y| \le f(|x|)\}$ where $f$ is a decreasing homeomorphism from $[0,1]$ onto $[0,1]$.
|
http://www.physicsforums.com/showthread.php?t=87875&page=3
|
Physics Forums
Thread Closed
## A proof of RH using quantum physics...:)
I can't prove it :( I cannot prove the existence of the potential even with the variational principle for the Schrödinger equation:
$$J[\phi]=\int_{-\infty}^{\infty}dx(V(x)\phi^{2}/2-(\hbar^{2}/4m)(D\phi)^{2})$$ with the condition $$\int_{-\infty}^{\infty}dx|\phi|^{2}=Constant$$
We don't prove anything; in fact I think that in math you sometimes MUST make assumptions and after that prove or disprove them... at least in physics that is what we do.
I can't prove whether the potential V exists or not, only that if it exists it must be continuous :( but I have a question for you: how would you "disprove" the existence of the potential?..
Recognitions: Homework Help Science Advisor I don't have to. You have to prove it exists if you are claiming a proof of the Riemann hypothesis. Making assumptions you can't prove and then deducing a theorem is mostly useless, since usually you have now got to show something harder than your original claim. Of course that isn't always the case: if you can show the converse is true (i.e. that RH implies there is a potential) then you have created an "equivalent" problem. You have neither explained why your assumption is easier to prove (indeed, you've not proven it), nor shown that it is an equivalent problem. Looks like the million dollars is safe after all.... When will you get tired of claiming a proof for something you barely comprehend? And when will you stop going on about this conspiracy of snobbish mathematicians? If I, someone who knows no number theory or quantum physics, can spot the errors in your idea, which is not written in a manner that conforms with any of the basic requirements of publication, what makes you think that anyone else should bother to read it, let alone reply to your submission? The errors are easy to spot, and they will dismiss such a submission out of hand, as they ought to. Incidentally, can you disprove that there are infinitely many arithmetic progressions of primes with any common difference? If not, then I claim I've just solved the twin prime conjecture. See how easy it is? I'm sure I can write down a couple of "trivial" observations which, if true, would net me about $6,000,000 (US) from the Clay Institute. Of course I could never prove any of them is true, but according to you that isn't important, is it?
An "approximate" solution to the potential comes from putting $$\psi=\exp(iW/\hbar)$$ (1) as its WKB solution (this solution works fine for "big" masses, for example m = 1 pound) with $$W=\int[2m(E_{n}-V)]^{1/2}dx$$ or, differentiating (1), we would get the potential: $$-\hbar^{2}\left(\frac{1}{\psi}\frac{d\psi}{dx}\right)^{2}=2m(E_{n}-V)$$ From this we can get the potential and substitute it into our second order differential equation, so we get a new "approximate" equation of the form: $$a\frac{d^{2}\psi}{dx^{2}}+\frac{b}{\psi}\left(\frac{d\psi}{dx}\right)^{2}+c\psi=0$$ Then you can now apply the existence theorem to prove that $\psi$ exists... but if the $\psi$ functions exist, by expression (1) the potential also exists... I really hate mathematicians. They help physicists of course, but are always with the nasty rigour, rigour and rigour.... As Fourier once said (sorry for the quote if it is not exact), "they prefer a good building with a good entrance and a good appearance rather than that the building is useful". The extreme rigour is bad; it doesn't help the development of mathematics...
Quote by eljose: I really hate mathematicians. They help physicists of course, but are always with the nasty rigour, rigour and rigour.... As Fourier once said (sorry for the quote if it is not exact), "they prefer a good building with a good entrance and a good appearance rather than that the building is useful". The extreme rigour is bad; it doesn't help the development of mathematics...
LOL!! I am really not sure how a mathematician would react to that particular statement (IANAMathematician), but you know there are reasons why physics works the way it works and why mathematics works the way it works. In terms of logic, physics works using inductive logic (in fact most science works on inductive logic), which makes sense since we are observers of certain facts and try to "induce" generalisations from those facts and make further predictions. In other words, it's a bottom-up approach: we start with the leaf and try to reach the roots (the roots in the case of physics being convergence towards a theory of everything). Mathematics doesn't and shouldn't work using inductive logic. This is because any nonsense in mathematics only generates more nonsense, and there is no way to cross-check things until some good amount of time has been lost. This is certainly not the case with physics, where predictions can be verified and a hypothesis can be overthrown or accepted based on results. However, this does not stop mathematicians from thinking in an inductive way, and that's the source of their creativity in most cases. In fact, most abstractions in mathematics were realised in an inductive way; nonetheless, they were pinned down with axioms and rigorous arguments. If it weren't so, we would never have seen the face of applied mathematics at any point of time in history.
As for Fourier's statement, I am not sure whether he said it or not, but if he did say it, and in the context in which we are talking, then he, I am afraid, was certainly wrong. I wouldn't mind saying this to him even if he were standing in front of me right now. As an engineer I respect him, but this statement only shows his short-sightedness towards the world of mathematics.
The point of the whole post is "leave mathematics alone; it's a great tool for many real developments, but how mathematics itself develops is well left to those who know it".
-- AI
Recognitions:
Homework Help
Science Advisor
Quote by eljose: I really hate mathematicians. They help physicists of course, but are always with the nasty rigour, rigour and rigour.... As Fourier once said (sorry for the quote if it is not exact), "they prefer a good building with a good entrance and a good appearance rather than that the building is useful". The extreme rigour is bad; it doesn't help the development of mathematics...
Extreme rigour? My God, all we're asking you to do is justify your claim. Since you want to be a physicist, go on, give me some evidence that your claim is true (this actually is not impossible, since the zeroes of the zeta function are related to spectra in QM). However, since you want to use the word proof, you must accept that you must meet the current burden of proof.
Quoting Fourier out of context doesn't help you. There are times in physics when certain mathematical requirements are ignored, e.g. convergence of series, stating that a function equals its Fourier series, arbitrarily truncating infinite divergent sequences, using intuition for what ought to be true, and so on. Indeed mathematicians even do this to give simplified explanations of phenomena. However, just as a physicist must make sure his loose reasoning fits with any evidence, so must a mathematician make sure he can do it rigorously as well. I would hazard that this is what Fourier was talking about: not that wild speculation and false claims were better than rigour, but that sometimes it is better to play around with things and ignore the minor details that will generally take care of themselves.
I have justified my claim two posts above yours.... Take the solution $$\psi=\exp(iW/\hbar)$$ Then you have that the potential is proportional to: $$\frac{a}{\psi^{2}}\left(\frac{d\psi}{dx}\right)^{2}+b$$ with a and b constants including the energies $E_{n}$. Add this quadratic term to the SE and apply the existence theorem... then you see that $\psi$ exists, so the potential function will also exist... unless you argue again against my approximate deduction... Well, if you think my approach is wrong due again to the nasty rigour, I will say that the WKB approach is valid for m = 1 (although not exact) and that the SE is exact for whatever real mass.. I have also sent my manuscript to several physics journals to see what they've got to answer me. $$V=\frac{a}{\psi^{2}}\left(\frac{d\psi}{dx}\right)^{2}+b$$ (this comes from the WKB approach).. $$W=\int[2m(E_{n}-V)]^{1/2}dx$$ is related to the zeta function, as the energies $E_{n}$ are the zeroes of $Z(1/2+is)$. As you can see, the W function is related to the eigenfunctions $\psi$ and the potential V, so you can form in this case an "approximate" non-linear differential equation of the form: $$c_{1}\frac{d^{2}\psi}{dx^{2}}+c_{2}\frac{a}{\psi^{2}}\left(\frac{d\psi}{dx}\right)^{2}+c_{3}\psi=0\qquad(3)$$ where here I have used the WKB approach and the SE equation. Then apply the existence theorem to this approximate equation to get that $\psi$ will exist, and as the potential is related by the WKB approach to the $\psi$ functions, you get that the potential MUST also exist.....
Recognitions: Homework Help Science Advisor And W is what? Remember, we aren't all physicists. Since your post doesn't even mention zeta at all, we are left wondering what the hell W has to do with the zeroes of the zeta function.
As an example, we can always say that for a WKB approach the function always exists. Let the WKB differential equation be: $$\epsilon y''+b(x)y=0$$ with $\epsilon \ll 1$, for example $\epsilon=10^{-34}$. Then its WKB solution is: $$y=\exp\left(\int[b(x)]^{1/2}dx\right)$$ so that $$(y'/y)^{2}=b(x)$$ Then, substituting this solution for the b(x) function into the original equation, we would get $$\epsilon y''+(y')^{2}/y=0$$ From this last equation and the existence theorem you would get that the y function exists for whatever b(x) in the WKB approach; but if y exists, then b(x) also exists.
Recognitions: Homework Help Science Advisor And what has all that got to do with creating anything with the spectrum equal to the nontrivial zeroes of the zeta function? I still can't see any mention of zeta in that at all. Not that any of your latex works. If you can't simply and clearly, starting from the basics, explain why you can create a potential V(x) such that the eigenvalues of some differential operator are exactly the non-trivial zeroes of the zeta function, that this operator is Hermitian, and that thus the zeroes do indeed all have real part 1/2, you should not bother posting. That is all that is asked of you: to explain what you claim to have shown.
For any potential V (this includes the one that provides the roots of the zeta function as its "energies") we can use the WKB approach, so $$\psi=\exp(iW/\hbar)$$ (1). Then we take a look at our existence equation $$F(x,\psi)=AV(x)\psi+B\psi$$ From equation (1) we would get that $$V(x)=c+\frac{d}{\psi^{2}}\left(\frac{d\psi}{dx}\right)^{2}$$ Then we now substitute our expression for the potential in F(x,y)... $$F(x,\psi,d\psi/dx)=A\left(c\psi+\frac{d}{\psi}\left(\frac{d\psi}{dx}\right)^{2}\right)+B\psi$$ Now if we apply the existence theorem we would get that F and its partial derivative with respect to $\psi$ are continuous (the WKB solution is never 0), so for any potential (including the one that gives the zeros of the Riemann zeta function) we get that the eigenfunctions exist.. And if $\psi$ exists, so does $$V(x)=E_{n}+\frac{\hbar^{2}}{2m}\frac{1}{\psi}\frac{d^{2}\psi}{dx^{2}}$$ from the SE equation... We have just proved that the functions exist.. And of course the potential is unique, as also the $\psi$ functions depend on n and the energies are a function of n, $E_{n}=f(n)$. As you can see, Matt, I have given an expression for the potential in terms of the second derivative of the $\psi$ function and the roots of the Riemann zeta function $E_{n}$; using the existence theorem we prove the $\psi$ functions exist, and using the SE equation we prove that the potential can be written in terms of these functions... The $L^{2}(\mathbb{R})$ functions are $$\psi=\sin(W/\hbar)$$ where we have called $$W=\int[2m(E_{n}-V)]^{1/2}dx$$
Recognitions: Homework Help Science Advisor That is meaningless. What is psi? What is W? For god's sake, are you incapable of explaining anything? What existence theorem (state it in full)? What makes you think the relevant functions satisfy the hypothesis of the existence theorem that you've declined to state properly in this thread? How can an existence theorem, which surely can only tell you whether psi exists GIVEN a V, tell you that V exists? What now is f in E_n = f(n)? What you've said is: given psi, then V exists; given V exists, then psi exists. That is circular logic and is complete bollocks.
Recognitions:
Homework Help
Science Advisor
to illustrate the circularity:
Quote by eljose: for any potential V (this includes the one that provides the roots of the zeta function as its "energies") we can use the WKB approach so $$\psi=\exp(iW/\hbar)$$ (1)
So we're assuming that V exists — why does V exist? Anyway, V apparently exists, so we can deduce that psi exists.
then we take a look at our existence equation $$F(x,\psi)=AV(x)\psi+B\psi$$ from equation (1) we would get that $$V(x)=c+\frac{d}{\psi^{2}}\left(\frac{d\psi}{dx}\right)^{2}$$ then we now substitute our expression for the potential in F(x,y)... $$F(x,\psi,d\psi/dx)=A\left(c\psi+\frac{d}{\psi}\left(\frac{d\psi}{dx}\right)^{2}\right)+B\psi$$ now if we apply the existence theorem we would get that F and its partial derivative with respect to $\psi$ are continuous (the WKB solution is never 0), so for any potential (including the one that gives the zeros of the Riemann zeta function) we get that the eigenfunctions exist.. and if $\psi$ exists, so does $$V(x)=E_{n}+\frac{\hbar^{2}}{2m}\frac{1}{\psi}\frac{d^{2}\psi}{dx^{2}}$$ from the SE equation... we have just proved that the functions exist..
So now you're concluding that since psi exists, V exists. But the only reason you thought V existed was because psi exists.
see, that is circular.
so
1. Why is there a V such that the spectrum of the differential operator is exactly the zeroes of the zeta function? You have not explained this. AGAIN.
NO, with the existence theorem I have proved that the $\psi$ function exists.... and from the existence of $\psi$ we deduce the existence of the potential... You will agree that you can express $$F(x,\psi,d\psi/dx)=A\left(c\psi+\frac{d}{\psi}\left(\frac{d\psi}{dx}\right)^{2}\right)+B\psi$$ by using the WKB function $$\psi=\exp(iW/\hbar)$$ And another quote: you have the integral equation for the potential... $$\int_{C}[2m(E_{n}-V(x))]^{1/2}dx=2\pi(n+1/2)\hbar$$ From this you could deduce the existence of the potential, couldn't you?... as we know that the "energies" or roots of the Riemann function exist, where C is a line between two points a and b where E=V.. Now apply the existence theorem for integral equations and you get that the potential will exist; in fact our integral equation is of the form: $$\int_{a}^{b}dx\,K(n,f(x))=g(n)$$ where $g(n)$ is $2\pi(n+1/2)\hbar$
Recognitions: Homework Help Science Advisor eljose, I have no idea what the WKB function is, as i said before. psi is a function that satisfies a differential operator. the V(x) is part of the definition of the differential operator. thus the V must exist and satisfy certain properties before you can conclude psi exists. hencve you cannot use the existence of psi to deduce the existence of V(x) by rearranging a differential equation. that is what you have said you are doing. you might not intend that but that is because you comminucation skills are not sufficioent to write mathematics in english and expect people to understand you. that is an observation, not a criticism. either prove V exists or show why does psi exist in a way that is independent of V(x)
Recognitions: Gold Member Homework Help Science Advisor Are you talking about WKB-approximations, eljose?
Yes Arildno, I'm talking about the WKB approach, as it can be done for mass m = 1, so $\hbar^{2}\approx 10^{-68}$ (small enough, isn't it?).... Then, by using the Bohr–Sommerfeld quantization formula (valid for WKB), you get that the potential must satisfy an integral equation of the form: $$\int_{a}^{b}dx\,K(n,V(x))=g(n)$$ From this integral equation we could obtain numerical values for the potential V(x) and "construct" it (unless Mr. Matt Grime, the super-mathematician defender of extreme rigour in math, disagrees).
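For what it is worth, the Bohr–Sommerfeld condition can at least be sanity-checked on a potential where everything is known. For the harmonic oscillator $V(x)=x^2/2$ (with $m=\omega=\hbar=1$) the action integral over a full cycle equals $2\pi E$, so $\oint[2m(E-V)]^{1/2}dx = 2\pi(n+1/2)\hbar$ reproduces the exact levels $E_n = n + 1/2$. A numerical sketch of this special case only — nothing here touches the hypothetical zeta potential:

```python
import math

# Check ∮ [2m(E - V)]^(1/2) dx = 2π(n + 1/2)ħ on V(x) = x²/2
# (m = ω = ħ = 1, exact levels E_n = n + 1/2).  Substituting
# x = a·sinθ with a = sqrt(2E) tames the square-root singularity
# at the turning points, leaving ∫ a²·cos²θ dθ over [-π/2, π/2].
def action(E, steps=10_000):
    a = math.sqrt(2 * E)
    h = math.pi / steps                      # θ step over [-π/2, π/2]
    total = 0.0
    for i in range(steps + 1):
        theta = -math.pi / 2 + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid weights
        total += w * (a * math.cos(theta)) ** 2
    return 2 * total * h                     # full cycle = 2 × half-period

for n in range(4):
    E = n + 0.5
    print(n, round(action(E), 6), round(2 * math.pi * (n + 0.5), 6))
```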
Recognitions: Homework Help Science Advisor I do not deny that, given a set of complex numbers satisfying certain properties, one can construct a potential with these and only these as its spectrum. However, I have yet to see any stated reason why this will work for the zeta function, and that from it one can conclude the Riemann hypothesis. Approximate solutions are not acceptable. Opinion of what is and isn't rigorous is completely irrelevant, jose. Since you are claiming to do maths you must accept the standards of mathematics. If we were taking "suggestive" reasons then of course RH is true, since we know it is empirically true for such a range of s that it cannot help but be true for all s. But that is different from proving it. You are the person who keeps claiming to have a proof. If you don't like what that really entails then stop abusing the word.
Thread Closed
http://mathoverflow.net/questions/73046/turning-a-measurable-function-to-a-bijection/73047
## Turning a measurable function to a bijection
Let $f:(0,1)\rightarrow (0,1)$ be a Borel measurable function such that for every $y$ in $(0,1)$, $f^{-1}(y)$ is a Borel set and $\mu(f^{-1}(y))=0$, and also $\mu (f((0,1)))=1$, where $\mu$ is the Lebesgue measure. Is it possible to build a function $g:A\rightarrow A$, where $A\subseteq[0,1]$ is a Borel set and $\mu (A)=1$, such that $g$ is a bimeasurable bijection and $g|_A=f|_A$?
## 1 Answer
No, consider $f(x) = 2x \mod 1$.
Are there any additional conditions on f that will make it true? – BBB Aug 18 2011 at 9:57
All the conditions I can think of are essentially tautological. – Tapio Rajala Aug 18 2011 at 10:46
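The counterexample admits a direct finite check. In this sketch (the sample points and the use of exact rational arithmetic are my own choices), every $y$ in $(0,1)$ is seen to have the two distinct preimages $y/2$ and $(y+1)/2$, so $f$ remains 2-to-1 after removing any null set, and no restriction of it to a full-measure set is injective:

```python
from fractions import Fraction

def f(x):
    """f(x) = 2x mod 1, computed exactly on rationals."""
    return 2 * x % 1

# Every y in (0,1) has exactly the two preimages y/2 and (y+1)/2 under f.
for y in (Fraction(k, 101) for k in range(1, 101)):
    pre1, pre2 = y / 2, (y + 1) / 2
    assert f(pre1) == y == f(pre2)   # both really map to y ...
    assert pre1 != pre2              # ... and they are distinct points of (0,1)
    assert 0 < pre1 < 1 and 0 < pre2 < 1
print("f is 2-to-1 at every sampled y, so no full-measure restriction is injective")
```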
http://mathoverflow.net/revisions/9134/list
## Return to Question
2 fixed up the formatting in a hacky way, \newline doesn't seem to work?
Thinking of arbitrary tensor products of rings, $A=\otimes_i A_i$ ($i\in I$, an arbitrary index set), I have recently realized that $Spec(A)$ should be the product of the schemes $Spec(A_i)$, a priori in the category of affine schemes, but actually in the category of schemes, thanks to the string of equalities (where $X$ is a not necessarily affine scheme)
$$Hom_{Schemes} (X, Spec(A))= Hom_{Rings}(A,\Gamma(X,\mathcal O))=\prod_ {i\in I}Hom_{Rings}(A_i,\Gamma(X,\mathcal O))=\prod_ {i\in I}Hom_{Schemes}(X,Spec(A_i))$$
Since this looks a little too easy, I was not quite convinced it was correct but a very reliable colleague of mine reassured me by explaining that the correct categorical interpretation of the more down-to-earth formula above is that the category of affine schemes is a reflective subcategory of the category of schemes. (Naturally the incredibly category-savvy readers here know that perfectly well, but I didn't at all.)
And now I am stumped: I had always assumed that infinite products of schemes don't exist and I realize I have no idea why I thought so!
Since I am neither a psychologist nor a sociologist, arguments like "it would be mentioned in EGA if they always existed " don't particularly appeal to me and I would be very grateful if some reader could explain to me what is known about these infinite products.
1
# Arbitrary products of schemes don't exist, do they?
Thinking of arbitrary tensor products of rings, $A=\otimes_i A_i$ ($i\in I$, an arbitrary index set), I have recently realized that $Spec(A)$ should be the product of the schemes $Spec(A_i)$, a priori in the category of affine schemes, but actually in the category of schemes, thanks to the string of equalities (where $X$ is a not necessarily affine scheme)
$$Hom_{Schemes} (X, Spec(A))= Hom_{Rings}(A,\Gamma(X,\mathcal O))=\prod_ {i\in I}Hom_{Rings}(A_i,\Gamma(X,\mathcal O))=\prod_ {i\in I}Hom_{Schemes}(X,Spec(A_i))$$
Since this looks a little too easy, I was not quite convinced it was correct but a very reliable colleague of mine reassured me by explaining that the correct categorical interpretation of the more down-to-earth formula above is that the category of affine schemes is a reflective subcategory of the category of schemes. (Naturally the incredibly category-savvy readers here know that perfectly well, but I didn't at all.)
And now I am stumped: I had always assumed that infinite products of schemes don't exist and I realize I have no idea why I thought so!
Since I am neither a psychologist nor a sociologist, arguments like "it would be mentioned in EGA if they always existed " don't particularly appeal to me and I would be very grateful if some reader could explain to me what is known about these infinite products.
http://mathoverflow.net/questions/91583?sort=votes
## The rank of a symmetric space
I would like to work a theorem on a article who deals with the rank one symmetric spaces.
I looked up the definition of symmetric spaces of rank one, but I did not find a satisfactory definition. So what is the meaning of rank, intuitively and mathematically? Please answer if anybody has already worked with rank-one symmetric spaces.
What is unsatisfactory with the definition on Wikipedia? – Yemon Choi Mar 19 2012 at 0:51
the definition on Wiki is a technical definition: "is the maximum dimension of a subspace of the tangent space (to any point) on which the curvature is identically zero". I need a deeper, more meaningful (intuitive) definition, maybe another equivalent form of that definition. – Abdelmajid Khadari Mar 19 2012 at 1:02
I feel like this definition is not very practical. – Abdelmajid Khadari Mar 19 2012 at 1:04
A minor addition to Alex's answer: The same geometric characterization goes through in the case of compact symmetric spaces, only the assertion about (totally-geodesic) flat submanifolds becomes local and the curvature of the manifold is $\ge 0$, with curvature $>0$ iff the symmetric space has rank 1. – Misha Mar 20 2012 at 11:28
## 2 Answers
First the algebraic definition. A (non-compact) symmetric space is of the form $G/K$, where $G$ is a (non-compact) semisimple Lie Group defined over $\mathbb{R}$, and $K$ is a maximal compact subgroup of $G$.
Then the rank of a symmetric space is the dimension of the "maximal $\mathbb{R}$-split torus", i.e. the maximal dimension of an abelian diagonalizable over $\mathbb{R}$ subgroup of $G$.
The geometric meaning is that the rank is the dimension of the maximal flat submanifold of the symmetric space. If the rank is $1$, then the maximal flats are geodesics, and the symmetric space turns out to be negatively curved.
If the rank is larger than one, then the symmetric space is only non-positively curved. However, higher-rank symmetric spaces have spectacular rigidity properties (e.g. Margulis superrigidity, arithmeticity and the normal subgroup property come to mind).
There are only three families of rank 1 symmetric spaces,
1) hyperbolic $n$-space, corresponding to the Lie group $SO(n,1)$.
2) complex hyperbolic $n$-space, corresponding to the Lie group $SU(n,1)$.
3) quaternionic hyperbolic $n$-space, corresponding to the Lie group $Sp(n,1)$.
There is also one exceptional example:
4) the Cayley upper half plane, corresponding to the Lie group $F_4^{-20}$.
The spaces 3) and 4) have some but not all of the rigidity properties of higher rank (in particular, superrigidity and arithmeticity, but not the normal subgroup property).
thank you very much Pr. Alex Eskin, i find this answer very helpful, and i accept your answer. – Abdelmajid Khadari Mar 19 2012 at 1:13
This question reminds me of when I was a graduate student. At some point Gelfand asked me "What is the rank of a symmetric space" and I just spat back the usual definition, something like what Matrix found in Wikipedia. Gelfand shook his head as if I had said something really stupid and proceeded to explain:
Euclidean space, hyperbolic space, complex projective space (and so on) are rank one. Why? Because if you have two pairs of points and the distance between them is the same, then there is an isometry that takes one pair of points to the other. ONE invariant is all you need to determine whether two pairs of points are the same up to isometries.
The Grassmannian of two-planes in ${\mathbb R}^4$ has rank two : you need two invariants to determine if two pairs of points are equivalent up to isometry. Take two planes in four-space passing through the origin. Draw a circle with center zero in one plane. Project it orthogonally onto the second plane. You get an ellipse, but you cannot compare it to the circle because it lives on a different plane so project it back to the first plane. The minor and major axes of your ellipse (with respect to the circle) are two invariants that are preserved by any isometry of the pair of planes. Conversely if you have two pairs of planes that have the same two invariants, then there is an isometry of the Grassmannian that takes one pair of planes to the other.
I went back home and the uninsightful book I was reading on symmetric spaces went back to the library the next day.
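Gelfand's two invariants can be computed directly: if the columns of $A$ and $B$ are orthonormal bases of two 2-planes in $\mathbb{R}^4$, the singular values of $A^TB$ are the cosines of the principal angles, which measure exactly the two semi-axes of the projected circle. The sketch below (the particular planes and rotation are my own illustrative choices, in plain stdlib Python) checks that an isometry of $\mathbb{R}^4$ leaves the pair of invariants unchanged:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def singular_values_2x2(M):
    """Singular values of a 2x2 matrix, from the eigenvalues of M^T M."""
    G = matmul(transpose(M), M)
    tr = G[0][0] + G[1][1]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return sorted(math.sqrt(max(lam, 0.0)) for lam in ((tr - disc) / 2, (tr + disc) / 2))

a, b = 0.3, 1.1                                   # the two principal angles, by construction
A = [[1, 0], [0, 1], [0, 0], [0, 0]]              # plane spanned by e1, e2
B = [[math.cos(a), 0], [0, math.cos(b)],
     [math.sin(a), 0], [0, math.sin(b)]]          # a tilted 2-plane

inv = singular_values_2x2(matmul(transpose(A), B))   # the cosines [cos(b), cos(a)]

# Move BOTH planes by the same rotation of R^4 (an isometry of the pair):
t = 0.7
R = [[math.cos(t), 0, -math.sin(t), 0], [0, 1, 0, 0],
     [math.sin(t), 0, math.cos(t), 0], [0, 0, 0, 1]]
inv_rot = singular_values_2x2(matmul(transpose(matmul(R, A)), matmul(R, B)))
print(inv, inv_rot)   # the same pair of invariants before and after
```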
Wow. But where do we find beautiful definitions or explanations like this? – Deane Yang Mar 19 2012 at 10:55
To be fair, though, this interpretation of rank is explained carefully and precisely in Helgason's "Differential Geometry, Lie Groups, and Symmetric Spaces", where he explains, in terms of Weyl chambers, exactly how the invariants arise. – Robert Bryant Mar 19 2012 at 15:23
@Robert: Thanks for the reference! If I remember correctly, Helgason is more interested in developing Cartan's theory of symmetric spaces so it is natural that it does not give you this sort of insight up front. However, is there any place where the theory is developed from a metric viewpoint? There are a few pages in Busemann's Geometry of Geodesics, but really very little. – alvarezpaiva Mar 19 2012 at 18:24
@alvarezpavia: Try Eberlein's book "Geometry of nonpositively curved manifolds" and/or Ballmann, Gromov, Schroeder "Manifolds of nonpositive curvature". What Gelfand told you is just the Cartan decomposition: Your symmetric space is $X=G/K$, where $G$ is a reductive group and $K$ maximal compact. Cartan decomposition: $G=KA_+K$, where $A_+$ is Weyl chamber. Thus, if $o\in X$ is fixed by $K$, then every orbit $Kx$ in $X$ intersects $A_+$ exactly once, at a point $y=c(x)$. Thus, the pairs of points $(o,x)$ are parameterized by $y\in A_+$. Dimension of $A_+$ is the rank of the space $X$. QED – Misha Mar 20 2012 at 7:50
@alvarezpavia: By the way, if you think of the 2-point invariant of pairs of points $(p,x)$ in rank $n$ symmetric space that Gelfand told you about, as "vector-valued distance" on $X$ (which is the ordinary distance if $n=1$) then you obtain some interesting and useful geometry on $X$ with "triangle inequalities" that generalize the ordinary ones, etc. – Misha Mar 20 2012 at 7:55
http://physics.stackexchange.com/questions/15680/only-head-on-collision/15695
Only Head On collision?
Let us consider two spheres A and B. Suppose they are interacting with each other (in a broad sense one can say they are colliding). For the time being, let us refer to "striking by coming in contact" as collision. Now suppose sphere B collided in such a way that its direction of motion is not along the line joining their centers; in short, it was not a head-on collision. Then can this collision be an elastic one, or is it the case that there can be an elastic collision if and only if it is head-on? I hope the answerer or commentator will justify his answer with reasons.
Welcome to Physics.SE! The problem of glancing elastic collisions is treated exactly in most introductory textbooks, so yes they are possible. Could you clarify what you are confused about? – dmckee♦ Oct 13 '11 at 17:14
A little billiard-table physics should make the issues clear. – Mike Dunlavey Oct 13 '11 at 21:53
3 Answers
A head-on collision is not required for an elastic collision; the collision you described above can be elastic.
To be an elastic collision, momentum and kinetic energy must both be conserved. That is: assume the velocities of spheres 1 and 2 are $\vec{v_1}$ and $\vec{v_2}$ respectively, with the direction of neither along the line joining their centers, and assume the velocities after the collision are $\vec{v_1'}$ and $\vec{v_2'}$. Then an elastic collision requires:
$m_1\vec{v_1} +m_2\vec{v_2} = m_1\vec{v_1'}+m_2\vec{v_2'}$
and
$\frac{1}{2}m_1v_1^2 + \frac{1}{2}m_2v_2^2 = \frac{1}{2}m_1v_1'^2 + \frac{1}{2}m_2 v_2'^2$
Apply some initial conditions, like the angle between the two velocities, and you can solve for the motion of these two spheres after the collision.
So, to sum up: if you want to tell whether a collision is elastic or not, you just need to verify whether the motion of the spheres satisfies the two equations above. Hope this solves your problem.
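To make this concrete, here is a sketch of a glancing (non-head-on) elastic collision between two frictionless spheres; it is my own illustration, not part of the answer above. Only the velocity components along the unit contact normal $\hat n$ (the line of centers) are exchanged, and the code then verifies both conservation laws:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def elastic_collision(m1, m2, v1, v2, n):
    """Post-collision velocities of two smooth spheres; n is the unit
    vector along the line of centers at the moment of contact."""
    mu = m1 * m2 / (m1 + m2)                                  # reduced mass
    J = 2.0 * mu * dot([a - b for a, b in zip(v1, v2)], n)    # impulse along n
    v1p = [a - J / m1 * c for a, c in zip(v1, n)]
    v2p = [a + J / m2 * c for a, c in zip(v2, n)]
    return v1p, v2p

m1, m2 = 1.0, 2.5
v1, v2 = [1.0, 0.2], [-0.4, 0.1]
n = [math.cos(0.6), math.sin(0.6)]   # NOT parallel to v1 - v2: a glancing hit

v1p, v2p = elastic_collision(m1, m2, v1, v2, n)
p   = [m1 * a + m2 * b for a, b in zip(v1, v2)]
pp  = [m1 * a + m2 * b for a, b in zip(v1p, v2p)]
ke  = 0.5 * m1 * dot(v1, v1) + 0.5 * m2 * dot(v2, v2)
kep = 0.5 * m1 * dot(v1p, v1p) + 0.5 * m2 * dot(v2p, v2p)
print(p, pp)    # momentum conserved
print(ke, kep)  # kinetic energy conserved, even though the hit is oblique
```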
You haven't explicitly stated the nature of the contact between the spheres (i.e. is it frictionless?)
If we have ideal frictionless contact between the spheres, then yes the collision can be elastic regardless of whether it is head-on or glancing.
In the real world, with friction, then it does matter whether it is head on or not. For glancing contact, there will be slippage of the contact point during the collision, resulting in some energy dissipation. Also, this glancing contact will torque the spheres, transferring translational kinetic energy into rotational energy about their own centers of mass.
you had written inelastic where the context meant elastic in the second paragraph and I edited it. – anna v Oct 14 '11 at 5:25
An elastic collision only implies that the collision conserves momentum as well as kinetic energy, i.e. $$m_1u_1+m_2u_2 = m_1v_1+m_2v_2$$ and $$\frac{1}{2}m_1u_1^2 + \frac{1}{2}m_2u_2^2 = \frac{1}{2}m_1v_1^2 + \frac{1}{2}m_2v_2^2$$
Whereas in an inelastic collision, only momentum is conserved, while the kinetic energy is not; it is lost as heat, sound, light, etc.
It doesn't matter if the collision is head on or oblique for the collision to be classified as elastic or inelastic.
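In one dimension the two conservation equations above can be solved in closed form; the following sketch (the worked numbers are my own check) shows the standard result:

```python
def elastic_1d(m1, m2, u1, u2):
    """Solve the 1D momentum + kinetic-energy system for the final speeds."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, m2, u1, u2 = 2.0, 1.0, 3.0, -1.0
v1, v2 = elastic_1d(m1, m2, u1, u2)
print(v1, v2)   # both conservation laws hold for these values
```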
http://mathoverflow.net/questions/88733/metric-graphs-and-curvature
## Metric graphs and curvature
Consider a positively weighted connected simple graph $X$ with bounded degree. Denote by $d(x,y)$ the weight on the edge with endpoints $x$ and $y$. Suppose we have the following compatibility axioms:
1. If $x_0,x_1,\ldots,x_{n-1},x_n$ and $y_0,y_1,\ldots,y_{n-1},y_n$ are two shortest paths in the unlabeled graph connecting the same points (i.e. $x_0=y_0$ and $x_n=y_n$), then $$\sum d(x_{i-1},x_i)=\sum d(y_{i-1},y_i)$$
2. If $x_0,x_1,\ldots,x_{n-1},x_n$ is a shortest path in the unlabeled graph connecting $x$ to $y$ and $y_0,y_1,\ldots,y_{m-1},y_m$, with $m>n$, is another path connecting $x$ to $y$, then
$$\sum d(x_{i-1},x_i)<\sum d(y_{i-1},y_i)$$
Define a metric on the set of vertices just adding the various weights that you encounter moving along a shortest path. Suppose this metric is locally finite. The two axioms above say that this metric is well-defined and that it is in some sense compatible with the graph structure: the shortest-paths are geometrically the same and the distance is additive only along shortest paths.
General question: Has somebody studied these objects?
In particular, I am interested in the following questions:
Question: Suppose that our graph (without labels) is a tree with positive isoperimetric constant. Is it still true that it does not have bi-lipschitz embeddings (with the new metric) into a Hilbert space?
Also, let $\delta_1(X)$ be the best nonnegative constant, if it exists, such that every side of a geodesic triangle is contained in the $\delta_1$-neighborhood of the other two sides. (If you like quasi-isometry, let $\delta$ be the infimum of all $\delta_1(Y)$ when $Y$ runs over the quasi-isometric class containing $X$.) I am very tempted to say that $X$ has curvature bounded above by $-\frac{1}{\delta(X)}$.
Question: Has this notion been studied before? More specifically, what happens if I take, as graph, the 1-skeleton of a very good triangulation of a compact negatively curved metrizable manifold with labels given by the induced metric?
Thank you in advance,
Valerio
## 1 Answer
I suspect there is some terminology confusion here. Without the axioms 1 and 2, what you define is just a 1-dimensional polyhedral space (they are usually called metric graphs). Or, more precisely, the set of vertices of a metric graph with the induced distance. The "weights" are usually referred to as "edge lengths".
The axioms impose an additional requirement that the shortest paths of the "weighted" metric are the same as those of the "unweighted" one. This is a very unusual requirement, and I wonder where it comes from. By the way, it is not satisfied in the triangulation example from your last question.
Concerning bi-Lipschitz embeddings: if the edge lengths are bounded away from 0 and infinity, then there is no difference from the unit lengths (the two metrics are bi-Lipschitz equivalent). And if they are not bounded, then any tree (I mean the set of vertices) can be bi-Lipschitz embedded into $\mathbb R$. Assign the $k$-th edge the length $10^k$, then the distance between any two vertices is dominated by the longest edge between them. Now mark one of the vertices as the origin and, for every vertex $x$, let $f(x)$ be the distance from $x$ to the origin. Then $f$ is a bi-Lipschitz map to $\mathbb R$.
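The trick in the previous paragraph can be checked numerically. In this sketch (the complete binary tree and the edge numbering are my own choices), edge $i$ gets length $10^i$, $f(x)=d(x,\text{root})$, and the distortion $|f(x)-f(y)|/d(x,y)$ stays within $[0.8, 1]$ because each distance is dominated by its largest edge:

```python
N = 15                                        # complete binary tree on vertices 0..14
parent = {i: (i - 1) // 2 for i in range(1, N)}
length = {i: 10 ** i for i in range(1, N)}    # edge i joins i to parent[i]

def edges_to_root(x):
    es = set()
    while x != 0:
        es.add(x)
        x = parent[x]
    return es

def dist(x, y):
    # the x-y path is the symmetric difference of the two root paths
    return sum(length[e] for e in edges_to_root(x) ^ edges_to_root(y))

f = {x: dist(x, 0) for x in range(N)}         # the candidate embedding into R

ratios = [abs(f[x] - f[y]) / dist(x, y)
          for x in range(N) for y in range(N) if x != y]
print(min(ratios), max(ratios))               # bounded away from 0, and at most 1
```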
I wonder what you mean by quasi-isometry (in your curvature bound definition). The notion of quasi-isometry I am used to allows arbitrary bi-Lipschitz rescaling, so the supremum and infimum of $\delta$ do not make sense. What you define is Gromov's $\delta$-hyperbolicity. This property is preserved under quasi-isometries, but the actual value of $\delta$ is not.
As for you last question, I presume that you are talking about the lift of the triangulation to the universal cover of the manifold (otherwise it is just a finite graph). In this case the graph is certainly Gromov-hyperbolic because it is quasi-isometric to the universal cover itself (with its negatively curved metric).
Thank you very much for interesting answer. Some questions/comments: 1) it comes from the fact that I want to try to prove some results for locally finite spaces and I am starting from a simpler case, but more general than graphs. I have modeled those axioms thinking about a "travelling salesman graph": graphs whose vertices are cities and the edges are weighted with the distances. 2) many thanks for the example of the tree. In fact I have proved a partial result where I allow that the distances go to zero, but they can diverge only in a controlled way (subexponential more or less); 3) I put – Valerio Capraro Feb 18 2012 at 21:11
the quasi-isometries just because, in case of Cayley graph, maybe one wants to allow certain quasi-isometries (those coming from changing the generating set of the group). But OK, it's not necessary. I know that I am defining just Gromov's hyperbolicity, but I am wondering whether or not looking at the problem in terms of curvature can be of some utility. I see that the graph of a triangulation may not be satisfy those axioms, but the definition of curvature makes sense also without those axioms. I am wondering if there is a nice relation between the curvatures of the manifold and the graph, – Valerio Capraro Feb 18 2012 at 21:20
in particular I am thinking about some convergence (or maybe a controlled error) when $n\rightarrow\infty$ and $T_n$ is the $n$-th barycentric subdivision of a fixed triangulation $T$. – Valerio Capraro Feb 18 2012 at 21:21
http://medlibrary.org/medwiki/Birational_transformation
# Birational transformation
In algebraic geometry, the goal of birational geometry is to determine when two algebraic varieties are isomorphic outside lower-dimensional subsets. This amounts to studying mappings which are given by rational functions rather than polynomials; the map may fail to be defined where the rational functions have poles.
## Birational maps
A rational map from one variety (understood to be irreducible) X to another variety Y, written as a dashed arrow X ⇢ Y, is defined as a morphism from a nonempty open subset U of X to Y. By definition of the Zariski topology used in algebraic geometry, a nonempty open subset U is always the complement of a lower-dimensional subset of X. Concretely, a rational map can be written in coordinates using rational functions.
A birational map from X to Y is a rational map f: X ⇢ Y such that there is a rational map Y ⇢ X inverse to f. A birational map induces an isomorphism from a nonempty open subset of X to a nonempty open subset of Y. In this case, we say that X and Y are birational, or birationally equivalent. In algebraic terms, two varieties over a field k are birational if and only if their function fields are isomorphic as extension fields of k.
A special case is a birational morphism f: X → Y, meaning a morphism which is birational. That is, f is defined everywhere, but its inverse may not be. Typically, this happens because a birational morphism contracts some subvarieties of X to points in Y.
We say that a variety X is rational if it is birational to affine space (or equivalently, to projective space) of some dimension. Rationality is a very natural property: it means that X minus some lower-dimensional subset can be identified with affine space minus some lower-dimensional subset. For example, the circle with equation $x^2 + y^2 - 1 = 0$ is a rational curve, because the formulas
$x=\frac{2\,t}{1+t^2}$
$y=\frac{1-t^2}{1+t^2}\,,$
define a birational map from the affine line to the circle. (Explicitly, the inverse map sends $(x,y)$ to $(1-y)/x$.)
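This can be verified exactly with rational arithmetic; the check below (the sample points are my own choice) confirms both the parametrization and the inverse $t=(1-y)/x$:

```python
from fractions import Fraction

def to_circle(t):
    """The birational parametrization t -> (x, y) of x^2 + y^2 = 1."""
    x = 2 * t / (1 + t * t)
    y = (1 - t * t) / (1 + t * t)
    return x, y

for t in (Fraction(k, 7) for k in range(-20, 21) if k != 0):
    x, y = to_circle(t)
    assert x * x + y * y == 1      # the image point lies exactly on the circle
    assert (1 - y) / x == t        # the inverse map recovers the parameter
print("parametrization and inverse verified on 40 rational points")
```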
More generally, a smooth quadric (degree 2) hypersurface X of any dimension n is rational, by stereographic projection. (For X a quadric over a field k, we have to assume that X has a k-rational point; this is automatic if k is algebraically closed.) To define stereographic projection, let p be a point in X. Then we define a birational map from X to the projective space $\mathbf{P}^n$ of lines through p by sending a point q in X to the line through p and q. This is a birational equivalence but not an isomorphism of varieties, because it fails to be defined where q = p (and the inverse map fails to be defined at those lines through p which are contained in X).
## Minimal models and resolution of singularities
Every algebraic variety is birational to a projective variety. So, for the purposes of birational classification, we can work only with projective varieties, and this is usually the most convenient setting.
Much deeper is Hironaka's 1964 theorem on resolution of singularities: over a field of characteristic 0 (such as the complex numbers), every variety is birational to a smooth projective variety. Given that, we can concentrate on classifying smooth projective varieties up to birational equivalence.
In dimension 1, if two smooth projective curves are birational, then they are isomorphic. But that fails in dimension at least 2, by the blowing up construction. By blowing up, every smooth projective variety of dimension at least 2 is birational to infinitely many "bigger" varieties, for example with bigger Betti numbers.
This leads to the idea of minimal models: can we find a unique simplest variety in each birational equivalence class? The modern definition is that a projective variety X is minimal if the canonical line bundle $K_X$ has nonnegative degree on every curve in X; in other words, $K_X$ is nef. It is easy to check that blown-up varieties are never minimal.
This notion works perfectly for algebraic surfaces (varieties of dimension 2). In modern terms, one central result of the Italian school of algebraic geometry from 1890-1910, part of the classification of surfaces, is that every surface X is birational either to a product $\mathbf{P}^1 \times C$ for some curve C or to a minimal surface Y. The two cases are mutually exclusive, and Y is unique if it exists. When Y exists, it is called the minimal model of X.
## Birational invariants
Main article: Kodaira dimension
At first, it is not clear how to show that there are any algebraic varieties which are not rational. In order to prove this, we need to build up some birational invariants of algebraic varieties.
One useful set of birational invariants are the plurigenera. The canonical bundle of a smooth variety X of dimension n means the line bundle of n-forms,
$\,\!K_X = \Omega^n_X,$
which is the $n$th exterior power of the cotangent bundle of X. For an integer d, the dth tensor power of $K_X$ is again a line bundle. For d ≥ 0, the vector space of global sections $H^0(X, K_X^{d})$ has the remarkable property that a birational map f: X ⇢ Y between smooth projective varieties induces an isomorphism $H^0(X, K_X^{d}) \cong H^0(Y, K_Y^{d})$.
For d ≥ 0, define the dth plurigenus $P_d$ as the dimension of the vector space $H^0(X, K_X^{d})$; then the plurigenera are birational invariants for smooth projective varieties. In particular, if any plurigenus $P_d$ with d > 0 is not zero, then X is not rational.
A fundamental birational invariant is the Kodaira dimension, which measures the growth of the plurigenera $P_d$ as d goes to infinity. The Kodaira dimension divides all varieties of dimension n into n+1 types, with Kodaira dimension -∞, 0, 1, ..., or n. This is a measure of the complexity of a variety, with projective space having Kodaira dimension -∞. The most complicated varieties are those with Kodaira dimension equal to their dimension n, called varieties of general type.
More generally, for any natural summand $E(\Omega^1)$ of the rth tensor power of the cotangent bundle $\Omega^1$ with r ≥ 0, the vector space of global sections $H^0(X, E(\Omega^1))$ is a birational invariant for smooth projective varieties. In particular, the Hodge numbers $h^{r,0} = \dim H^0(X, \Omega^r)$ are birational invariants of X. (Most other Hodge numbers $h^{p,q}$ are not birational invariants, as we see by blowing up.)
The fundamental group π1(X) is a birational invariant for smooth complex projective varieties.
The "Weak factorization theorem", proved in 2002 by Abramovich, Karu, Matsuki, and Włodarczyk, says that any birational map between two smooth complex projective varieties can be decomposed into finitely many blow-ups or blow-downs of smooth subvarieties. This is important to know, but it can still be very hard to determine whether two smooth projective varieties are birational.
## Minimal models in higher dimensions
Main article: Minimal model program
A projective variety X is called minimal if the canonical bundle KX is nef. For X of dimension 2, it is enough to consider smooth varieties in this definition. In dimensions at least 3, we have to allow minimal varieties to have certain mild singularities, for which KX is still well-behaved; these are called terminal singularities.
That being said, the minimal model conjecture would imply that every variety X is either covered by rational curves or birational to a minimal variety Y. When it exists, Y is called a minimal model of X.
Minimal models are not unique in dimensions at least 3, but any two minimal varieties which are birational are very close. For example, they are isomorphic outside subsets of codimension at least 2, and more precisely they are related by a sequence of flops. So the minimal model conjecture would give strong information about the birational classification of algebraic varieties.
The conjecture was proved in dimension 3 by Mori (1988). There has been great progress in higher dimensions, although the general problem remains open. In particular, Birkar, Cascini, Hacon, and McKernan (2010) proved that every variety of general type over a field of characteristic zero has a minimal model.
## Uniruled varieties
Main article: Rational varieties
Another part of the minimal model conjecture would say a lot about the birational classification of varieties covered by rational curves, known as uniruled varieties. These do not have a minimal model, but the conjecture is that every uniruled variety is birational to a Fano fiber space. This leads to the problem of the birational classification of Fano fiber spaces and (as the most interesting special case) Fano varieties. By definition, a projective variety X is Fano if the anticanonical bundle KX* is ample. Fano varieties can be considered the algebraic varieties which are most similar to projective space.
In dimension 2, every Fano variety (known as a Del Pezzo surface) over an algebraically closed field is rational. A major discovery in the 1970s was that starting in dimension 3, there are many Fano varieties which are not rational. In particular, smooth cubic 3-folds are not rational by Clemens-Griffiths (1972), and smooth quartic 3-folds are not rational by Iskovskikh-Manin (1971). Nonetheless, the problem of determining exactly which Fano varieties are rational is far from solved. For example, it is not known whether there is any smooth cubic hypersurface in Pn+1 with n ≥ 4 which is not rational.
## Birational automorphism groups
Algebraic varieties differ widely in how many birational automorphisms they have. Every variety of general type is extremely rigid, in the sense that its birational automorphism group is finite. At the other extreme, the birational automorphism group of projective space Pn over a field k, known as the Cremona group Crn(k), is large (in a sense, infinite-dimensional) for n ≥ 2. For n = 2, we know at least that the complex Cremona group Cr2(C) is generated by the "quadratic transformation"
[x,y,z] ↦ [1/x, 1/y, 1/z]
together with the group PGL(3,C) of automorphisms of P2, by Max Noether and Castelnuovo. By contrast, the Cremona group in dimensions n ≥ 3 is very much a mystery: no explicit set of generators is known.
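A small sanity check (my addition, not from the article): the quadratic transformation is a birational involution, since applying [x,y,z] ↦ [yz, xz, xy] (the same map with denominators cleared) twice multiplies every coordinate by xyz, which is the identity in projective space.

```python
def quad(p):
    """[x, y, z] -> [1/x, 1/y, 1/z], cleared of denominators: [yz, xz, xy]."""
    x, y, z = p
    return (y * z, x * z, x * y)

def proj_equal(p, q):
    """Equality in P^2: coordinates agree up to a common scalar."""
    return all(p[i] * q[j] == p[j] * q[i]
               for i in range(3) for j in range(3))

p = (2, -3, 5)
assert quad(quad(p)) == tuple(2 * (-3) * 5 * t for t in p)  # scaled by xyz
assert proj_equal(quad(quad(p)), p)                         # an involution
```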
Iskovskikh-Manin (1971) showed that the birational automorphism group of a smooth quartic 3-fold is equal to its automorphism group, which is finite. In this sense, quartic 3-folds are far from being rational, since the birational automorphism group of a rational variety is enormous. This phenomenon of "birational rigidity" has since been discovered in many other Fano fiber spaces.
## References
• Birkar, Caucher; Cascini, Paolo; Hacon, Christopher D.; McKernan, James (2010), "Existence of minimal models for varieties of log general type", Journal of the American Mathematical Society 23 (2): 405–468, arXiv:math.AG/0610203, doi:10.1090/S0894-0347-09-00649-3, MR 2601039
• Clemens, C. Herbert; Griffiths, Phillip A. (1972), "The intermediate Jacobian of the cubic threefold", The Annals of Mathematics 95 (2): 281–356, doi:10.2307/1970801, ISSN 0003-486X, JSTOR 1970801, MR 0302652
• Griffiths, Phillip; Harris, Joseph (1978). Principles of Algebraic Geometry. John Wiley & Sons. ISBN 0-471-32792-1
• Hartshorne, Robin (1977). Algebraic Geometry. Springer-Verlag. ISBN 0-387-90244-9
• Iskovskih, V. A.; Manin, Ju. I. (1971), "Three-dimensional quartics and counterexamples to the Lüroth problem", Matematicheskii Sbornik, Novaya Seriya 86: 140–166, doi:10.1070/SM1971v015n01ABEH001536, MR 0291172
• Kollár, János; Mori, Shigefumi (1998), Birational Geometry of Algebraic Varieties, Cambridge University Press, ISBN 0-521-63277-3
• Mori, Shigefumi (1988), "Flip theorem and the existence of minimal models for 3-folds", Journal of the American Mathematical Society 1 (1): 117–253, ISSN 0894-0347, MR 924704
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Birational transformation", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Birational_transformation
http://mathhelpforum.com/algebra/165284-composite-functions-question.html
# Thread:
1. ## Composite Functions Question
Hope I've posted this question in the right place- I think I did.. But anyway.
Let a be a positive number, let f : [2,∞) → R, f (x) = a − x and let g: (−∞, 1] → R,
g(x) = x^2 + a. Find all values of a for which f ◦ g and g ◦ f both exist.
I've no clue how to do this could someone show me?
2. This question is all about domain.
$g\circ f$ demands that $f(t)$ is in the domain of $g$.
Whereas, $f\circ g$ demands that $g(t)$ is in the domain of $f$.
Can you see that $a=3$ works BUT $a=0$ does not?
Now work on it.
3. Yes.. but where do you get the figure 3 from? You don't just randomly try out numbers do you ?
4. No, he is not suggesting that you "try numbers at random"; he is giving two examples so you can see what is happening. We are given that the domain of f is not "all real numbers" but "all real numbers greater than or equal to 2". In order that f(g(x)) exist, we must have $g(x)\ge 2$. That says $x^2+ a\ge 2$ where, remember, x must be less than or equal to 1. The smallest value of $x^2$ on $(-\infty, 1]$ occurs at $x=0$, so what must a be so that $g(0)= a\ge 2$?
In order that g(f(x)) exist, f(x) must be less than or equal to 1. That is, $f(x)= a- x\le 1$. But x must be greater than or equal to 2. What must a be so that $f(2)= a- 2\le 1$?
5. Oh now it makes sense. So a = 2 or 3.
Thanks guys
6. It also could be 2.5.
http://www.sagemath.org/doc/reference/combinat/sage/combinat/rigged_configurations/bij_type_D.html
# Bijection classes for type $$D_n^{(1)}$$
Part of the (internal) classes which run the bijection between rigged configurations and KR tableaux of type $$D_n^{(1)}$$.
AUTHORS:
• Travis Scrimshaw (2011-04-15): Initial version
TESTS:
```sage: KRT = TensorProductOfKirillovReshetikhinTableaux(['D', 4, 1], [[2,1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import KRTToRCBijectionTypeD
sage: bijection = KRTToRCBijectionTypeD(KRT(pathlist=[[-2, 3]]))
sage: TestSuite(bijection).run()
```
class sage.combinat.rigged_configurations.bij_type_D.KRTToRCBijectionTypeD(krt)
Bases: sage.combinat.rigged_configurations.bij_type_A.KRTToRCBijectionTypeA
Specific implementation of the bijection from KR tableaux to rigged configurations for type $$D_n^{(1)}$$.
This inherits from type $$A_n^{(1)}$$ because we use the same methods in some places.
doubling_map()
Perform the doubling map of the rigged configuration at the current state of the bijection.
This is the map $$B(\Lambda) \hookrightarrow B(2 \Lambda)$$ which doubles each of the rigged partitions and updates the vacancy numbers accordingly.
TESTS:
```sage: KRT = TensorProductOfKirillovReshetikhinTableaux(['D', 4, 1], [[4,1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import KRTToRCBijectionTypeD
sage: bijection = KRTToRCBijectionTypeD(KRT(pathlist=[[-1,-4,3,2]]))
sage: bijection.cur_path.insert(0, [])
sage: bijection.cur_dims.insert(0, [0, 1])
sage: bijection.cur_path[0].insert(0, [2])
sage: bijection.next_state(2)
sage: bijection.ret_rig_con
-2[ ]-2
(/)
(/)
(/)
sage: bijection.cur_dims
[[0, 1]]
sage: bijection.doubling_map()
sage: bijection.ret_rig_con
-4[ ][ ]-4
(/)
(/)
(/)
sage: bijection.cur_dims
[[0, 2]]
```
halving_map()
Perform the halving map of the rigged configuration at the current state of the bijection.
This is the inverse map to $$B(\Lambda) \hookrightarrow B(2 \Lambda)$$ which halves each of the rigged partitions and updates the vacancy numbers accordingly.
TESTS:
```sage: KRT = TensorProductOfKirillovReshetikhinTableaux(['D', 4, 1], [[4,1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import KRTToRCBijectionTypeD
sage: bijection = KRTToRCBijectionTypeD(KRT(pathlist=[[-1,-4,3,2]]))
sage: bijection.cur_path.insert(0, [])
sage: bijection.cur_dims.insert(0, [0, 1])
sage: bijection.cur_path[0].insert(0, [2])
sage: bijection.next_state(2)
sage: test = bijection.ret_rig_con
sage: bijection.doubling_map()
sage: bijection.halving_map()
sage: test == bijection.ret_rig_con
True
```
next_state(val)
Build the next state for type $$D_n^{(1)}$$.
TESTS:
```sage: KRT = TensorProductOfKirillovReshetikhinTableaux(['D', 4, 1], [[2,1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import KRTToRCBijectionTypeD
sage: bijection = KRTToRCBijectionTypeD(KRT(pathlist=[[-2, 3]]))
sage: bijection.cur_path.insert(0, [])
sage: bijection.cur_dims.insert(0, [0, 1])
sage: bijection.cur_path[0].insert(0, [3])
sage: bijection.next_state(3)
sage: bijection.ret_rig_con
-1[ ]-1
-1[ ]-1
(/)
(/)
```
class sage.combinat.rigged_configurations.bij_type_D.RCToKRTBijectionTypeD(RC_element)
Bases: sage.combinat.rigged_configurations.bij_type_A.RCToKRTBijectionTypeA
Specific implementation of the bijection from rigged configurations to tensor products of KR tableaux for type $$D_n^{(1)}$$.
doubling_map()
Perform the doubling map of the rigged configuration at the current state of the bijection.
This is the map $$B(\Lambda) \hookrightarrow B(2 \Lambda)$$ which doubles each of the rigged partitions and updates the vacancy numbers accordingly.
TESTS:
```sage: RC = RiggedConfigurations(['D', 4, 1], [[4, 1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import RCToKRTBijectionTypeD
sage: bijection = RCToKRTBijectionTypeD(RC(partition_list=[[],[],[],[1]]))
sage: bijection.cur_partitions
[(/)
, (/)
, (/)
, -1[ ]-1
]
sage: bijection.doubling_map()
sage: bijection.cur_partitions
[(/)
, (/)
, (/)
, -2[ ][ ]-2
]
```
halving_map()
Perform the halving map of the rigged configuration at the current state of the bijection.
This is the inverse map to $$B(\Lambda) \hookrightarrow B(2 \Lambda)$$ which halves each of the rigged partitions and updates the vacancy numbers accordingly.
TESTS:
```sage: RC = RiggedConfigurations(['D', 4, 1], [[4, 1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import RCToKRTBijectionTypeD
sage: bijection = RCToKRTBijectionTypeD(RC(partition_list=[[],[],[],[1]]))
sage: test = bijection.cur_partitions
sage: bijection.doubling_map()
sage: bijection.halving_map()
sage: test == bijection.cur_partitions
True
```
next_state(height)
Build the next state for type $$D_n^{(1)}$$.
TESTS:
```sage: RC = RiggedConfigurations(['D', 4, 1], [[2, 1]])
sage: from sage.combinat.rigged_configurations.bij_type_D import RCToKRTBijectionTypeD
sage: bijection = RCToKRTBijectionTypeD(RC(partition_list=[[],[1,1],[1],[1]]))
sage: bijection.next_state(1)
-2
sage: bijection.cur_partitions
[(/)
, (/)
, (/)
, (/)
]
```
http://math.stackexchange.com/questions/177338/generalized-change-of-variables-theorem
# Generalized Change of Variables Theorem?
Is there a generalized form of the differentiable change of variables theorem for Lebesgue integrals? That is, if we consider the well known change of variables theorem: If $\phi : X \rightarrow X$ is a diffeomorphism of open sets in $\mathbb{R}^n$, $X \subseteq \mathbb{R}^n$ is measurable, and $f : X \rightarrow \mathbb{R}$ is measurable, then:
$$\int_X f(y) dy = \int_X f(\phi(x)) d(\phi(x)) = \int_X f(\phi(x)) |\det D\phi(x)| dx$$
I'd like to weight between some countable set of (simple) diffeomorphic mappings instead of just one; that is, let $\Phi = \{\phi_i \ | \ i \in \mathbb{N}, |\det D\phi_i(x)| = 1\}$. Additionally, I'd like to weight between these transformations as a convex combination, so I define a weighting function $w : X \times \mathbb{N} \rightarrow \mathbb{R}$ with $\sum_i w(\phi_i^{-1}(y),i) = 1$. Then, I'd like to show:
$$\int_X \sum_i w(x,i) f(\phi_i(x)) dx = \int_X f(x) dx$$
My attempt at a proof is: \begin{align} \int_X \sum_i w(x,i) f(\phi_i(x)) dx & = \sum_i \int_X w(x,i) f(\phi_i(x)) dx\\ & = \sum_i \int_X w(x,i) f(\phi_i(x)) |\det D\phi_i(x)|dx\\ & = \sum_i \int_X w(\phi_i^{-1}(y),i) f(y) dy\\ & = \int_X f(y) \sum_i w(\phi_i^{-1}(y),i) dy \\ & = \int_X f(y) dy \end{align}
Trouble is, I'm not all that familiar with measure theory, and I would need to show that my weighting function is measurable in order to invoke the single-mapping change of variables theorem mentioned above. Perhaps I cannot do this without being more explicit about what this function actually is, but at the same time, it's just a simple weight vector, normalized in some unique way, over a countable set; it would be nice if I could say something at this level of generality. Also, perhaps it makes more sense to start with the case where $\Phi$ is finite, which is fine, but I have the same issues with the proof under this assumption.
-
I could see some sort of relation between the Borel measure and differential structure giving you the measurable conditions you need. Then again requiring the weight functions to be measurable would not be an undue burden on this sort of theory. – Nate Iverson Jul 31 '12 at 23:26
yes, I could assume that $w$ is measurable for each $i$, the trouble is every time I want to use a different weighting function I would need to prove it is measurable. It would just be nice if I could say something a little more general about the type of weighting functions that would guarantee this. – anonymous_21321 Aug 1 '12 at 4:15
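None of this settles the measurability question, but the identity itself is easy to sanity-check numerically in the simplest setting (my own example, not from the thread): finitely many translations $\phi_i(x) = x + c_i$, so $|\det D\phi_i| = 1$, with constant weights summing to 1.

```python
import math

# f is smooth with (effectively) compact support, so the trapezoid rule
# on a wide window is essentially exact here
f = lambda x: math.exp(-x * x)

def integrate(g, a=-50.0, b=50.0, n=100001):
    h = (b - a) / (n - 1)
    total = sum(g(a + i * h) for i in range(n)) - 0.5 * (g(a) + g(b))
    return total * h

shifts = [0.0, 1.5, -2.0]        # phi_i(x) = x + c_i, |det D phi_i| = 1
weights = [0.2, 0.5, 0.3]        # constant weights, sum to 1

lhs = integrate(lambda x: sum(w * f(x + c) for w, c in zip(weights, shifts)))
rhs = integrate(f)
assert abs(lhs - rhs) < 1e-9     # the weighted change of variables holds
```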
http://mathhelpforum.com/advanced-algebra/170513-find-basis-print.html
# find the basis
• February 7th 2011, 07:00 PM
Taurus3
find the basis
Let L be the line spanned by v_1=[1, 1, 1]. Find a basis {v_2, v_3} for the plane perpendicular to L, and verify that B= {v_1,, v_2, v_3} is a basis for R3.
Let ProjL denote the projection onto the line L. Find the matrix B for ProjL with respect to the basis B.
Can you just help me get started with the problem. I'll do the rest on my own.
• February 7th 2011, 07:48 PM
TheEmptySet
Quote:
Originally Posted by Taurus3
Let L be the line spanned by v_1=[1, 1, 1]. Find a basis {v_2, v_3} for the plane perpendicular to L, and verify that B= {v_1,, v_2, v_3} is a basis for R3.
Let ProjL denote the projection onto the line L. Find the matrix B for ProjL with respect to the basis B.
Can you just help me get started with the problem. I'll do the rest on my own.
Since $v_1$ is perpendicular to the plane you want to span, it is the plane's normal vector.
Since the equation of the plane must pass though the origin it will have the form
$x+y+z=0$
Can you finish from here?
• February 7th 2011, 08:07 PM
Taurus3
ummm......actually no. Sorry. I mean I know how to find the eigenvalues and then the basis. But this question is weird.
• February 7th 2011, 08:19 PM
TheEmptySet
Quote:
Originally Posted by Taurus3
ummm......actually no. Sorry. I mean I know how to find the eigenvalues and then the basis. But this question is weird.
We don't need eigenvectors or eigenvalues.
We need to find two vectors that span the above plane. e.g we need to find the basis for the null space of this matrix.
$\begin{bmatrix} 1 & 1 & 1& 0\end{bmatrix}$
If it helps you can think of it as this matrix with added rows of zeros.
$\begin{bmatrix} 1 & 1 & 1& 0 \\ 0 & 0 & 0& 0 \\ 0 & 0 & 0& 0 \\ \end{bmatrix}$
Since this is already in reduced row form We know that we have two free parameters. Let $z=t,y=s$ then
$x+t+s=0 \iff x =-t-s$
So the basis of the nullspace is
$\begin{pmatrix} x \\ y \\ z\end{pmatrix} = \begin{pmatrix} -t-s \\ s \\ t\end{pmatrix} = \begin{pmatrix} -t \\ 0 \\ t\end{pmatrix}+ \begin{pmatrix} -s \\ s \\ 0\end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ 1\end{pmatrix}t+\begin{pmatrix} 0\\ -1 \\ 1\end{pmatrix}s$
You can verify that the two above vectors are perpendicular to the first.
Can you finish from here?
• February 7th 2011, 09:03 PM
Taurus3
So for my 2nd question, would it just be B= [(1, 1, 1), (0, -1, 1), (-1, 0, 1)]?
• February 8th 2011, 05:42 AM
TheEmptySet
Quote:
So for my 2nd question, would it just be B= [(1, 1, 1), (0, -1, 1), (-1, 0, 1)]?
No that is the matrix of the linear transformation from the standard basis to your new basis $B$.
Hint: In your new basis $T(v_1)=v_1$ and $T(v_2)=T(v_3)=0$. So what is the matrix of this transformation?
• February 8th 2011, 02:40 PM
Taurus3
this is what I don't get. How do you know T(v1)=v1 and that T(v2)=0?
• February 8th 2011, 04:39 PM
TheEmptySet
Quote:
Originally Posted by Taurus3
this is what I don't get. How do you know T(v1)=v1 and that T(v2)=0?
Since $\displaystyle T(\vec{x})=\text{proj}_{v_1}\vec{x}=\frac{\vec{v_1}\cdot \vec{x}}{||v_1||^2}\vec{v_1}$
This is the definition of projecting one vector onto another. This will verify the above "claim".
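A quick numerical check of this claim (my addition, not part of the thread): projecting each basis vector onto the line L spanned by v1 gives T(v1) = v1 and T(v2) = T(v3) = 0, so the matrix of ProjL in the basis B is diag(1, 0, 0).

```python
v1 = (1, 1, 1)
v2 = (-1, 0, 1)
v3 = (0, -1, 1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_onto_v1(x):
    """Projection of x onto the line L spanned by v1."""
    c = dot(v1, x) / dot(v1, v1)
    return tuple(c * w for w in v1)

assert proj_onto_v1(v1) == (1.0, 1.0, 1.0)   # T(v1) = v1
assert proj_onto_v1(v2) == (0.0, 0.0, 0.0)   # T(v2) = 0
assert proj_onto_v1(v3) == (0.0, 0.0, 0.0)   # T(v3) = 0
# so the matrix of Proj_L in the basis B = {v1, v2, v3} is diag(1, 0, 0)
```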
http://mathhelpforum.com/calculus/96250-derivatives-tangent-lines.html
# Thread:
1. ## Derivatives and tangent lines
I'm having some trouble on these questions. On other questions for this I was using derivatives to get the answers but on these I don't know how to apply it. Can someone explain how to use derivatives to find these answers?
1. Find the equation of the tangent line to the curve given by y=2xe^x. At (0,0).
2. Find the exact points (x,y) on the curve y=x ln x where the tangent line is horizontal.
3. Find the exact point (x,y) on the curve y= sin^2 x, 0 < x < pi, where the tangent is horizontal.
2. Originally Posted by goldenroll
I'm having some trouble on these questions. On other questions for this I was using derivatives to get the answers but on these I don't know how to apply it. Can someone explain how to use derivatives to find these answers?
1. Find the equation of the tangent line to the curve given by y=2xe^x. At (0,0).
2. Find the exact points (x,y) on the curve y=x ln x where the tangent line is horizontal.
3. Find the exact point (x,y) on the curve y= sin^2 x, 0 < x < pi, where the tangent is horizontal.
1. $y'=2xe^x+2e^x\Rightarrow{m}=2$ at $x=0$.
Then the line tangent at (0,0) is given by $(y-0)=2(x-0)$.
2. Horizontal $\Rightarrow{y'}=0$. Then $y'=1+\ln{x}=0\Rightarrow{x}=\frac{1}{e}$
Write the line in the same manner as the first problem I laid out.
3. I leave this to you
3. Originally Posted by goldenroll
I'm having some trouble on these questions. On other questions for this I was using derivatives to get the answers but on these I don't know how to apply it. Can someone explain how to use derivatives to find these answers?
1. Find the equation of the tangent line to the curve given by y=2xe^x. At (0,0).
The derivative is the slope of the tangent line. The tangent line at (0,0) is y= m(x- 0)+ 0= mx where m is the derivative. What is the derivative of $y= 2xe^x$? What is that when x= 0?
2. Find the exact points (x,y) on the curve y=x ln x where the tangent line is horizontal.
A line is horizontal when its slope is 0. What is the derivative of x ln x? Set that equal to 0 and solve for x.
3. Find the exact point (x,y) on the curve y= sin^2 x, 0 < x < pi, where the tangent is horizontal.
Same as (2).
4. Oh so on the first one am i using the chain rule? since its 2xe^x + 2e^x and then I apply the formula for finding a equation of a line.
5. Originally Posted by goldenroll
Oh so on the first one am i using the chain rule? since its 2xe^x + 2e^x and then I apply the formula for finding a equation of a line.
No, not quite. You use the product rule. IE: $\frac{d}{dx}[u\cdot{v}]=u\cdot{v'}+v\cdot{u'}$
In this case, $u=2x$, and $v=e^x$
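All three answers can be confirmed with a numerical derivative (my own check, not from the thread); a central difference approximates the slope well enough here.

```python
import math

def slope(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# 1. y = 2x e^x has slope 2 at x = 0, so the tangent there is y = 2x
assert abs(slope(lambda x: 2 * x * math.exp(x), 0.0) - 2.0) < 1e-6

# 2. y = x ln x has a horizontal tangent at x = 1/e
assert abs(slope(lambda x: x * math.log(x), 1.0 / math.e)) < 1e-6

# 3. y = sin^2 x has a horizontal tangent at x = pi/2 on (0, pi)
assert abs(slope(lambda x: math.sin(x) ** 2, math.pi / 2)) < 1e-6
```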
http://mathoverflow.net/questions/1685?sort=oldest
Range of a Certain Linear Operator
Consider the following hermitian form on the Sobolev space $H^1(I)$ of an interval $I$: $g(u,v):= \int_I \left(\frac{du}{dt}\frac{dv}{dt} - \rho(t)\, u v\right)dt$, where $\rho$ is a nice bounded function on $I$. The Riesz representation theorem gives us a bounded linear operator $A$ on the Hilbert space $H^1(I)$ such that $(Au,v) = g(u,v)$, where $(\cdot\,,\cdot)$ is the inner product of $H^1(I)$. The question is: can you find sufficient conditions on $\rho$ for $A$ to have a dense range?
-
1 Answer
This looks a bit like it could be a H/W exercise, but having started to type something up I might as well give most of it. I'm assuming all your functions are real-valued, since otherwise your form isn't hermitian.
Start by noting that a hermitian, bounded linear operator on Hilbert space has dense range if and only if it's injective. (Let $v\in H$: then $v$ is orthogonal to $Tu$ for all $u$ if and only if $Tv=T^*v$ is orthogonal to all $u$, i.e. if and only if $Tv=0$.)
So your operator $A$ has dense range if and only if it is injective. In particular (again, assuming that we're talking about real-valued $H^1(I)$ here), if $\rho$ is negative a.e. and not identically zero, then the only solution of $Au=0$ is $u=0$ (just consider $g(u,u)$).
Conversely, suppose that $\rho$ is positive on some sub-interval $[a,b]$. Then I think we can find $u\in H^1(I)$ which is supported on $[a,b]$ and is not identically zero, such that $(du/dt)^2 - \rho(t)u(t)^2=0$ almost everywhere. (Hint: just try doing it!)
Depending on how nice your function $\rho$ is supposed to be, that almost answers your question. If you're merely requiring it to be integrable then I'd have to think a bit more on this.
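A finite-dimensional toy version of the first step (my illustration, not part of the answer): for a symmetric operator, ker T = (ran T)⊥, so the range fails to be dense exactly when T has a kernel.

```python
def mat_vec(T, v):
    return tuple(sum(T[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(T)))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# a symmetric, singular 2x2 "operator": kernel spanned by k = (1, -1)
T = ((1, 1), (1, 1))
k = (1, -1)
assert mat_vec(T, k) == (0, 0)            # k lies in ker T

# every vector in the range is orthogonal to k, so the range is not dense
for v in [(1, 0), (0, 1), (3, -7)]:
    assert dot(mat_vec(T, v), k) == 0
```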
-
http://mathhelpforum.com/discrete-math/97173-permutation-involving-restriction.html
# Thread:
1. ## Permutation involving restriction
Decide in how many ways the letters of the word ABRACADABRA can be arranged in a row if C, R and D are not to be together.
My text says the answer is 78,624 ways, but I don't know how they arrived at this.
2. Instead of considering AB and CDR as seperate letters, consider them as 1 and 0 (not actual numbers). And then consider the possible ways you can arrange the four 0's about 11 spots (drawing a diagram might help).
3. Originally Posted by Stroodle
Decide in how many ways the letters of the word ABRACADABRA can be arranged in a row if C, R and D are not to be together.
My text says the answer is 78,624 ways, but I don't know how they arrived at this.
There are several problems with the wording of this question.
What do you think "if C, R and D are not to be together" means?
Does it mean CRD in that order? Or is it any order like DCR or RDC?
Problem: There are two R's.
I spent some time trying to find a reading that gives the given answer.
But I cannot. I may be wrong or I don't understand what "if C, R and D are not to be together" means.
4. Yep. I can't work it out either. I copied the question word for word from my text book, but as you can see my text isn't very good.
I just got a copy of the solutions today and it says:
$\frac{11!}{5!2!2!}-\frac{9!3!}{5!2!2!}=78624$
But I still can't work out where they got the $\frac{9!3!}{5!2!2!}$ from.
5. I assumed that "if C, R and D are not to be together" meant that those three letters must not occur consecutively (in any order). With that interpretation, I made the answer 74,424, as follows.
Total number of arrangements of letters A(×5), B(×2), R(×2), C, D is $\frac{11!}{5!2!2!} = 83,160$.
For each of 3! orderings of C,D,R, number of arrangements of the collection A(×5), B(×2), R, {CDR} is $\frac{9!}{5!2!}$, for a total of $6\times1512 = 9072$.
So we must subtract 9072 from 83,160. But there has been some double counting: the sequence RCDR has been counted as (RCD)R and also as R(CDR), and similarly for RDCR. The number of arrangements of the set A(×5), B(×2), {RCDR} is $\frac{8!}{5!2!}$, so we must add $2\times168=336$ to get the final result 83,160 – 9072 + 336 = 74,424.
6. Awesome. That makes sense. Thanks heaps for that. And thanks for spending the time to help on on such an ambiguous question Plato. Really appreciate it.
Strange that both the back of the textbook and the separate worked solutions have the same incorrect answer...
http://math.stackexchange.com/questions/111133/function-defined-by-infinite-series?answertab=active
|
# Function defined by infinite series
A function $f$ is defined as follow:
$$f(x)=\sum_{n=1}^{\infty}\frac{b_{n}}{(x-a_{n})^{2}+b_{n}^{2}}\;\;, x\in \mathbb R$$
where $(a_{n}, b_{n})$ are points in the $xy$-plane, $b_{n}>0$ for all $n$. When is the function $f(x)$ bounded away from zero? that is $f(x)\geq a>0$, for some $a>0$, for all $x\in \mathbb R$. I believe that this would somehow depend on the points $(a_{n}, b_{n})$, for example if $\{b_{n}\}$ converges to zero or not, but I cannot find out how!
Thanks
*EDIT: I still don't know when $f(x)$ will be bounded away from zero!*
Edit: $f(x)$ is bounded above by some constant, say $C>0$.
Instead of "strictly positive" you can say: "bounded away from zero". – GEdgar Feb 20 '12 at 0:51
Yes, thanks! I fixed it. – Chelsea Feb 20 '12 at 1:10
## 3 Answers
Here's one in the opposite direction. Note that for any $c > 0$ the curve $y/(x^2 + y^2) = c$ is a circle in the upper half plane, tangent to the $x$ axis at the origin. If there is some $r > 0$ such that every circle of radius $r$ with centre on the line $y=r$ contains at least one $(a_n, b_n)$, then $f(x)$ is bounded below.
How about $$\lim_{x\to-\infty}\sum_{n = 0}^{\infty} \frac{1}{(x-n)^2 + 1} = 0$$
And somewhat more generally, if there is $x_0$ such that all $a_n \ge x_0$ and $\sum_{n=1}^\infty \frac{b_n}{(x_0 - a_n)^2 + b_n^2} < \infty$, then $\lim_{x \to -\infty} \sum_{n=1}^\infty \frac{b_n}{(x-a_n)^2 + b_n^2} = 0$ so in that case your sum is not bounded away from $0$. – Robert Israel Feb 20 '12 at 5:42
@Robert Israel: We can assume that $f(x)$ is bounded above, but do the $b_{n}$'s play any role here? – Chelsea Feb 20 '12 at 13:27
How do you see that the limit is zero? – Chelsea Feb 21 '12 at 19:57
@Chelsea: take $x<-k$, $k\in\mathbb{N}$. Then $\sum_{n = 0}^{\infty} \frac{1}{(x-n)^2 + 1}\leq\sum_{n = k}^{\infty} \frac{1}{n^2 + 1}$. – Martin Argerami Feb 25 '12 at 16:29
@Chelsea: Note that if $x<x_0$, then the smaller $x$ the smaller $f(x)$ (because every summand gets smaller). If you want it $f(x)$ to be smaller than $\varepsilon$, then you find $N$ such that $\sum_{n=N}^\infty \frac{b_n}{(x_0-a_n)^2 + b_n^2} < \varepsilon/2$ and look for $x$ so close to $-\infty$, that the beginning of the series gets smaller than $\varepsilon/2$. – savick01 Feb 25 '12 at 16:43
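The tail estimate in the comments above is easy to check numerically. A small sketch (the truncation point `N` and the sample points are arbitrary choices of mine) truncates the series for $a_n = n$, $b_n = 1$ and confirms that the sum shrinks as $x \to -\infty$:

```python
# Partial sum of sum_{n>=0} 1/((x-n)^2 + 1), truncated at N terms
def partial_sum(x, N=200000):
    return sum(1.0 / ((x - n) ** 2 + 1.0) for n in range(N))

# For x <= -k every term is at most 1/((k+n)^2 + 1), so the whole
# sum is bounded by sum_{m>=k} 1/(m^2+1), which is roughly 1/k.
assert partial_sum(-100.0) < 0.011      # bound ~ 1/100
assert partial_sum(-1000.0) < partial_sum(-100.0)
```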
All summands are strictly positive, so if the series converges (or if you accept $+\infty$ as strictly positive) the sum is strictly positive.
So this means that the case $\lim_{x\rightarrow x_{0}} f(x)=0$ is not possible for any $x_{0}\in \mathbb R$?! – Chelsea Feb 20 '12 at 0:42
Of course; again, for any summand the limit is strictly positive. – Robert Israel Feb 20 '12 at 0:53
Changing "strictly positive" to "bounded away from zero" changes the question completely. – Robert Israel Feb 20 '12 at 4:41
My fault! I meant bounded away from zero. – Chelsea Feb 20 '12 at 17:23
http://math.stackexchange.com/questions/40246/matrices-hermitian-and-unitary/40248
|
# Matrices (Hermitian and Unitary)
Hey! I have some short proofs I’m quite stuck on below. I know the definitions but get stuck on how to use them to prove what’s required. If possible, can you please explain how to apply the definition to get these proofs so that I can try again? Thanks!
Q1. Show that if there exists a unitary matrix $P$ such that $P^*AP = D$, where $P^*$ is the conjugate transpose of $P$ and $D$ is a diagonal matrix, then $A$ is a normal matrix. Show that the columns of $P$ form an orthonormal basis of $\mathbb{C}^n$ if and only if $P$ is unitary.
A1. If $P$ is unitary, $PP^*=P^*P=I$. $A$ is normal if $AA^*=A^*A$. I'm not sure where to go from there.
Q2. Show that eigenvectors of a Hermitian matrix corresponding to distinct eigenvalues are orthogonal.
A2. $A$ is Hermitian if $A=A^*$. Eigenvalues of $A$ are given by the equation $Av=\lambda v$. Again, what do I do with this?
Q3. Show that the eigenvalues of a Hermitian matrix are real, and that the eigenvectors corresponding to different eigenvalues are orthogonal.
A3. Similar to the above.
Q4. What does it mean to say that a matrix is unitarily diagonalisable? Prove that if $P$ is a unitary matrix then all of the eigenvalues of $P$ have modulus equal to one. Further, prove that the column vectors of $P$ form an orthonormal set (with respect to the Euclidean inner product).
A4. Is it unitarily diagonalisable when it can be expressed as a matrix with orthonormal vectors? I'm not sure about what the rest of the question even means! :(
## 1 Answer
Here are partial answers to your question:
Q'1. Show that if there exists a unitary matrix $P$ such that $P^*AP = D$, where $P^*$ is the conjugate transpose of $P$ and $D$ is a diagonal matrix, then $A$ is a normal matrix.
A'1. We have $P^*AP=D$. By applying $()^*$ to both sides, we get $P^*A^*P=D^*$.
Now, $(P^*AP)(P^*A^*P)=DD^*$. On the other hand, $(P^*A^*P)(P^*AP)=D^*D$. But $DD^*=D^*D$, since $D$ is diagonal. Hence $(P^*AP)(P^*A^*P) = (P^*A^*P)(P^*AP)$. Therefore $(P^*AA^*P) = (P^*A^*AP)$. Lastly, premultiply both sides by $P$ and postmultiply both sides by $P^*$ to get $AA^*=A^*A$.
Q'3. Show that eigenvalues of a Hermitian matrix are real and that eigenvectors corresponding to distinct eigenvalues are orthogonal.
A'3. First, let $\lambda \in \mathbb{C}$ be an eigenvalue with associated eigenvector $v \neq 0$. Look at the inner product $\langle v,v \rangle$. We have:
$\lambda \langle v,v \rangle = \langle Av,v \rangle = \langle v,A^*v \rangle = \langle v,Av \rangle = \langle v, \lambda v \rangle = \overline{\lambda} \langle v,v \rangle$.
Since $\langle v,v \rangle \neq 0$, we must have $\lambda = \overline{\lambda}$.
This shows that the eigenvalues of a Hermitian matrix are real.
Now let $\lambda$ and $\mu$ be distinct eigenvalues, with associated eigenvectors $v$ and $w$ respectively.
Look at the inner product $\langle v,w \rangle$. We have:
$\lambda \langle v,w \rangle = \langle \lambda v,w \rangle = \langle Av,w \rangle = \langle v,A^*w \rangle = \langle v,Aw \rangle = \langle v,\mu w \rangle = \mu \langle v,w \rangle$
But $\lambda \neq \mu \implies \langle v,w \rangle =0$.
Q'4. Prove that if P is a unitary matrix then all of the eigenvalues of P have modulus equal to one.
A'4. Let $\lambda$ be an eigenvalue of $P$ with associated eigenvector $v \neq 0$. Then:
$\langle v,v \rangle = \langle P^*Pv,v \rangle = \langle Pv,Pv \rangle = \langle \lambda v, \lambda v \rangle = \lambda \overline{\lambda} \langle v,v \rangle$.
Hence $(1-\lambda \overline{\lambda})\langle v,v \rangle=0$. Since $\langle v,v \rangle \neq 0$, $\lambda \overline{\lambda} = 1 \implies |\lambda|^2=1 \implies |\lambda|=1$.
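As a concrete sanity check of A'4 (a toy example of my own, not part of the original answer): the $2\times 2$ rotation matrix is unitary, and computing its eigenvalues from the characteristic polynomial shows they lie on the unit circle.

```python
import cmath
import math

theta = 0.7  # any angle; rotation matrices are real orthogonal, hence unitary
a, b = math.cos(theta), -math.sin(theta)
c, d = math.sin(theta),  math.cos(theta)

# Eigenvalues of [[a, b], [c, d]] from lambda^2 - tr*lambda + det = 0
tr, det = a + d, a * d - b * c
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# The eigenvalues are e^{i theta} and e^{-i theta}: both have modulus 1
assert abs(abs(lam1) - 1.0) < 1e-12
assert abs(abs(lam2) - 1.0) < 1e-12
```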
Much appreciated!!!! :) I will try these again and get back to you if im still stuck. – user4645 May 20 '11 at 11:21
The only question I have about the above is how you went from <Av,w> to <v,A*w>? I understand all the other steps! :) – user4645 May 23 '11 at 10:17
http://unapologetic.wordpress.com/2007/05/24/cardinals-and-ordinals-as-categories/?like=1&source=post_flair&_wpnonce=5b12459335
|
# The Unapologetic Mathematician
## Cardinals and Ordinals as Categories
We can import all of what we’ve said about cardinal numbers and ordinal numbers into categories.
For cardinals, it’s actually not that interesting. Remember that every cardinal number is just an equivalence class of sets under bijection. So, given a cardinal number we can choose a representative set $S$ and turn it into a category $\mathcal{S}$. We let ${\rm Ob}(\mathcal{S})=S$ and just give every object its identity morphism — there are no other morphisms at all in this category. We call this sort of thing a “discrete category”.
For ordinal numbers it gets a little more interesting. First, remember that a preorder is a category with objects the elements of the preorder, one morphism from $a$ to $b$ if $a\leq b$, and no morphisms otherwise.
It turns out that (small) categories like this are exactly the preorders. Let $\mathcal{P}$ be a small category with a set of objects $P$ and no more than one morphism between any two objects. We define a preorder relation $\leq$ by saying $a\leq b$ if $hom_\mathcal{P}(a,b)$ is nonempty. The existence of identities shows that $a\leq a$, and the composition shows that if $a\leq b$ and $b\leq c$ then $a\leq c$, so this really is a preorder.
If now we require that no morphism (other than the identity) has an inverse, we get a partial order. Indeed, if $a\leq b$ and $b\leq a$ then we have arrows $f:a\rightarrow b$ and $g:b\rightarrow a$. We compose these to get $g\circ f:a\rightarrow a$ and $f\circ g:b\rightarrow b$. These must be the respective identities because there’s only the one arrow from any object to itself. But we’re also requiring that no non-identity morphism be invertible, so $a$ has to be the same object as $b$.
Now we add to this that for every distinct pair of objects either $\hom_\mathcal{P}(a,b)$ or $\hom_\mathcal{P}(b,a)$ is nonempty. They can’t both be — that would lead to an inverse — but we insist that one of them is. Here we have total orders, where either $a\leq b$ or $b\leq a$.
Finally, a technical condition I haven’t completely defined (but I will soon): we require that every nonempty “full subcategory” of $\mathcal{P}$ have an “initial object”. This is the translation of the “well-ordered” criterion into the language of categories. Categories satisfying all of these conditions are the same as well-ordered sets.
Okay, that got a bit out there. We can turn back from the theory and actually get our hands on some finite ordinals. Remember that every finite ordinal number is determined up to isomorphism by its cardinality, so we just need to give definitions suitable for each natural number.
We define the category $\mathbf{0}$ to have no objects and no morphisms. Easy.
We define the category $\mathbf{1}$ to have a single object and its identity morphism. If we take the identity morphisms as given we can draw it like this: $\bullet$.
We define the category $\mathbf{2}$ to have two objects and their identity morphisms as well as an arrow from one object to the other (but not another arrow back the other way). Again taking identities as given we get $\bullet\rightarrow\bullet$.
Things start to get interesting at $\mathbf{3}$. This category has three objects and three (non-identity) morphisms. We can draw it like this:
$\begin{matrix}\bullet&\rightarrow&\bullet\\&\searrow&\downarrow\\&&\bullet\end{matrix}$
The composite of the horizontal and vertical arrows is the diagonal arrow.
Try now for yourself drawing out the next few ordinal numbers as categories. Most people get by without thinking of them too much, but it’s helpful to have them at hand for certain purposes. In fact, we’ve already used $\mathbf{2}$!
Now to be sure that this does eventually cover all natural numbers. We can take our ${0}$ to be the category $\mathbf{0}$. Then for a successor “function” we can take any category and add a new object, its identity morphism, and a morphism from every old object to the new one. With a little thought this is the translation into categories of the proof that well-ordered finite sets give a model of the natural numbers, and the same proof holds true here.
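The finite ordinals as categories are simple enough to model directly. Here is a toy encoding of my own (not from the post): the ordinal $\mathbf{n}$ has objects $0,\dots,n-1$ and a unique arrow $i\rightarrow j$ exactly when $i\leq j$, and we can check the category and order axioms mechanically.

```python
# The ordinal n as a category: one arrow (i, j) exactly when i <= j
def hom(n):
    return {(i, j) for i in range(n) for j in range(n) if i <= j}

for n in range(6):
    arrows = hom(n)
    # identities exist
    assert all((i, i) in arrows for i in range(n))
    # composition: arrows i->j and j->k give i->k (transitivity)
    assert all((i, k) in arrows
               for (i, j) in arrows for (j2, k) in arrows if j == j2)
    # antisymmetry and totality: exactly one of i->j, j->i for i != j
    assert all(((i, j) in arrows) != ((j, i) in arrows)
               for i in range(n) for j in range(n) if i != j)
    # n identity arrows plus n(n-1)/2 non-identity arrows
    assert len(arrows) == n * (n + 1) // 2
```

The arrow count makes the drawings above concrete: $\mathbf{2}$ has one non-identity arrow, $\mathbf{3}$ has three, and so on.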
Posted by John Armstrong | Category theory
## 6 Comments »
1. [...] kinds of morphisms, subobjects, and quotient objects Last week I was using the word “invertible” as if it was perfectly clear. Well, it should be, more or [...]
Pingback by | May 29, 2007 | Reply
2. [...] Remember the category with one object and its identity morphism. We draw it as , taking the identity as given. Let’s also consider a category with two objects and two non-identity morphisms, one going in either direction: . Call the objects and , and the four arrows , , , , where goes from to . Then we have and . That is, the two non-identity arrows are inverses to each other, and the two objects are thus isomorphic. [...]
Pingback by | May 30, 2007 | Reply
3. [...] initial object of an ordinal number considered as a category is the least element of the [...]
Pingback by | June 10, 2007 | Reply
4. [...] category I’m interested in is the ordinal . Remember that this consists of the objects and , with one non-identity arrow . We can make this [...]
Pingback by | August 15, 2007 | Reply
5. “we require that every “full subcategory” of \mathcal{P} have an “initial object””
Every non-empty full subcategory probably?
Comment by nikita | December 20, 2007 | Reply
6. Ah, yes.. Of course you’re right.
Comment by | December 20, 2007 | Reply
http://math.stackexchange.com/questions/192431/finding-e-bigl-overliney2-bigm-overliney-vphantomy2-bigr-by-basus
|
# Finding $E\Bigl(\overline{Y^2}\Bigm|\overline{Y\vphantom{Y^2}}\Bigr)$ by Basu's theorem?
Suppose $Y_1,\ldots,Y_n$ are a random sample of normal distribution $\mathcal{N}(\mu,1)$. If $\overline{Y^2}=\displaystyle\frac{1}{n}\sum_{i=1}^n Y_i^2$, how can I find $E\Bigl(\overline{Y^2}\Bigm|\overline{Y\vphantom{Y^2}}\Bigr)$ by Basu's theorem?
Crossposted: stats.stackexchange.com/q/35846/2970 – cardinal Sep 8 '12 at 19:09
– cardinal Sep 8 '12 at 19:15
## 1 Answer
By Basu's theorem $\frac{1}{n-1}\sum_{i=1}^n (Y_i- \bar{Y})^2$ is independent of $\bar{Y}$. Hence $$\mathbb{E} \left(\frac{1}{n-1}\sum_{i=1}^n (Y_i- \bar{Y})^2 \mid \bar{Y} \right) = \mathbb{E} \left(\frac{1}{n-1}\sum_{i=1}^n (Y_i- \bar{Y})^2 \right)$$ Develop both sides. Your expression will appear on the left-hand side.
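Developing both sides gives $E\bigl(\overline{Y^2}\bigm|\bar Y\bigr) = \bar Y^2 + \frac{n-1}{n}$ (since $\sigma^2 = 1$). A small simulation sketch (sample size, $\mu$, and seed are arbitrary choices of mine) checks both the exact algebraic identity $\overline{Y^2} = \bar Y^2 + \frac{1}{n}\sum_i (Y_i - \bar Y)^2$ and that $E[S^2] = 1$:

```python
import random

random.seed(0)
n, mu, reps = 5, 2.0, 20000
s2_sum = 0.0
for _ in range(reps):
    y = [random.gauss(mu, 1.0) for _ in range(n)]
    ybar = sum(y) / n
    y2bar = sum(v * v for v in y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)
    # exact identity: y2bar = ybar^2 + (n-1)/n * s2
    assert abs(y2bar - (ybar ** 2 + (n - 1) / n * s2)) < 1e-9
    s2_sum += s2

# Basu: S^2 is independent of Ybar with E[S^2] = sigma^2 = 1,
# hence E(Y2bar | Ybar) = Ybar^2 + (n-1)/n
assert abs(s2_sum / reps - 1.0) < 0.05
```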
This will actually work for $N(\mu,\sigma^2)$ if you consider the family $\{N(\mu,\sigma^2)\,:\,\mu\in\mathbb{R}\}$ with $\sigma^2$ fixed. For that family, the sample mean is sufficient and the residual sum of squares is ancillary. – Michael Hardy Sep 7 '12 at 17:14
Using Basu for this is serious overkill (I realize that's what the question is asking for, though). It should really only be seen as a simple example to understand what Basu's theorem says and not as a serious means of proof of the desired fact. :-) – cardinal Sep 8 '12 at 19:11
@cardinal : serious overkill is too soft to describe this ;) – vanna Sep 8 '12 at 19:42
http://mathhelpforum.com/discrete-math/118471-relations-question-print.html
|
# Relations Question
• December 4th 2009, 08:38 AM
dre
Relations Question
Let A be the set {a, b, c, d, e, 1, 2, 3, 4, 5, x, y, z, w}
How many different reflexive relations are there and how many different symmetric relations are there? I know the total would be 2^196 because of the power set rule but how can I figure out how many reflexive and symmetric relations there are?
• December 4th 2009, 09:05 AM
Plato
Quote:
Originally Posted by dre
Let A be the set {a, b, c, d, e, 1, 2, 3, 4, 5, x, y, z, w}
How many different reflexive relations are there and how many different symmetric relations are there?
The set of pairs $\Delta _A = \left\{ {\left( {\alpha ,\alpha } \right):\alpha \in A} \right\}$ is known as the diagonal of the table of ordered pairs.
There are $14^2$ pairs in that table, and $\left| {\Delta _A } \right| = 14$.
Therefore there are $14^2-14$ off diagonal pairs.
Any reflexive relation on $A$ must contain ${\Delta _A }$.
So any reflexive relation is the union of ${\Delta _A }$ with any subset of off diagonal pairs.
How many of those are there?
Any symmetric relation on $A$, $\mathcal{S}$, has the property that $\mathcal{S}=\mathcal{S}^{-1}$.
There are $\frac{(14)(15)}{2}$ pairs either on the diagonal or above it.
Any subset of those pairs corresponds to a symmetric relation.
Just take that subset and unite it with its inverse.
How many are there?
• December 5th 2009, 07:46 AM
oldguynewstudent
Quote:
Originally Posted by Plato
The set of pairs $\Delta _A = \left\{ {\left( {\alpha ,\alpha } \right):\alpha \in A} \right\}$ is known as the diagonal of the table of ordered pairs.
There are $14^2$ pairs in that table, and $\left| {\Delta _A } \right| = 14$.
Therefore there are $14^2-14$ off diagonal pairs.
Any reflexive relation on $A$ must contain ${\Delta _A }$.
So any reflexive relation is the union of ${\Delta _A }$ with any subset of off diagonal pairs.
How many of those are there?
Any symmetric relation on $A$, $\mathcal{S}$, has the property that $\mathcal{S}=\mathcal{S}^{-1}$.
There are $\frac{(14)(15)}{2}$ pairs either on the diagonal or above it.
Any subset of those pairs corresponds to a symmetric relation.
Just take that subset and unite it with its inverse.
How many are there?
Thank you for this explanation, but I am still confused about something in ROSEN: I understand that the number of subsets of a set with $n$ elements is $2^n$; I can see that the diagonal of the Cartesian product will contain $n$ elements and that all other elements will total $n^2 - n$. But I don't see why the number of reflexive relations equals $2^q$ where $q = n^2 - n$. Why isn't the number of reflexive relations just equal to $2^r$ where $r = n^2$?
Thanks
• December 5th 2009, 08:08 AM
Plato
Quote:
Originally Posted by oldguynewstudent
Thank you for this explanation, but I am still confused about something in ROSEN: I understand that the number of subsets from A with n elements is $2^n$, I can see that the diagonal of the cartesian product will contain n elements and that all other elements will total $n^2$ - n. But I don't see why the number of reflexive relations equal $2^q$ where q = $n^2$ - n ? Why isn't the number of reflexive relations just equal to $2^r$ where r = $n^2$?
If you will give me the exact reference in Rosen's book I will look at it.
As for your other concern, if $|A|=n$ then how many subsets of $A\times A$ contain $\Delta_A$?
Do you understand that $\left|(A\times A)\setminus \Delta_A\right|=n^2-n?$
How many subsets of $A\times A\setminus \Delta_A$ are there?
Uniting any of those with $\Delta_A$ we get a reflexive relation on $A$
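For small $n$ the two counting formulas, $2^{n^2-n}$ reflexive relations and $2^{n(n+1)/2}$ symmetric relations, can be verified by brute force over all subsets of $A\times A$ (a sketch of my own, not from Rosen):

```python
from itertools import product

def count_relations(n):
    """Enumerate all 2^(n^2) relations on {0,...,n-1} and count
    the reflexive ones and the symmetric ones."""
    pairs = [(i, j) for i in range(n) for j in range(n)]
    reflexive = symmetric = 0
    for bits in product((0, 1), repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        if all((i, i) in R for i in range(n)):
            reflexive += 1
        if all((j, i) in R for (i, j) in R):
            symmetric += 1
    return reflexive, symmetric

for n in range(1, 4):
    reflexive, symmetric = count_relations(n)
    assert reflexive == 2 ** (n * n - n)
    assert symmetric == 2 ** (n * (n + 1) // 2)
```

(The full enumeration grows as $2^{n^2}$, so this only works for tiny $n$; for $|A|=14$ one relies on the formulas.)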
• December 5th 2009, 09:07 AM
oldguynewstudent
I think I get it now
Quote:
Originally Posted by Plato
If you will give me the exact reference in Rosen's book I will look at it.
As for your other concern, if $|A|=n$ then how many subsets of $A\times A$ contain $\Delta_A$?
Do you understand that $\left|(A\times A)\setminus \Delta_A\right|=n^2-n?$
How many subsets of $A\times A\setminus \Delta_A$ are there?
Uniting any of those with $\Delta_A$ we get a reflexive relation on $A$
Thanks for your patience. We skipped the combinatorics chapters. The reference is on p. 525 of the 6th edition, example 16. So because there are $n$ members in the diagonal, we have by the product rule these $n$ combined with the $n - 1$ others (rows or columns in the matrix) to give $n(n - 1)$ off-diagonal pairs. Then we raise 2 to that power to get the number of reflexive relations.
I still have to think it over further to solidify the concept. But at least now I can follow the reasoning.
Happy Holidays! (Pizza)
http://terrytao.wordpress.com/tag/padded-trees/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tag Archive
You are currently browsing the tag archive for the ‘padded trees’ tag.
## Random Martingales and localization of maximal inequalities
7 December, 2009 in math.CA, math.MG, paper | Tags: Assaf Naor, Hardy-Littlewood maximal inequality, padded trees, ultrametric spaces | by Terence Tao | 8 comments
Assaf Naor and I have just uploaded to the arXiv our paper “Random Martingales and localization of maximal inequalities“, to be submitted shortly. This paper investigates the best constant ${C_n}$ in generalisations of the classical Hardy-Littlewood maximal inequality
$\displaystyle \Bigl|\Bigl\{ x \in {\mathbb R}^n: \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy > \lambda \Bigr\}\Bigr| \leq \frac{C_n}{\lambda} \int_{{\mathbb R}^n} |f(y)|\ dy$
for any ${\lambda > 0}$ and any absolutely integrable ${f: {\mathbb R}^n \rightarrow {\mathbb R}}$, where ${B(x,r)}$ is the Euclidean ball of radius ${r}$ centred at ${x}$, and ${|E|}$ denotes the Lebesgue measure of a subset ${E}$ of ${{\mathbb R}^n}$. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to Calderón-Zygmund theory. A similar inequality in fact holds with the Euclidean norm replaced by any other convex norm on ${{\mathbb R}^n}$.
The exact value of the constant ${C_n}$ is only known in ${n=1}$, with a remarkable result of Melas establishing that ${C_1 = \frac{11+\sqrt{61}}{12}}$. Classical covering lemma arguments give the exponential upper bound ${C_n \leq 2^n}$ when properly optimised (a direct application of the Vitali covering lemma gives ${C_n \leq 5^n}$, but one can reduce ${5}$ to ${2}$ by being careful). In an important paper of Stein and Strömberg, the improved bound ${C_n = O( n \log n )}$ was obtained for any convex norm by a more intricate covering norm argument, and the slight improvement ${C_n = O(n)}$ obtained in the Euclidean case by another argument more adapted to the Euclidean setting that relied on heat kernels. In the other direction, a recent result of Aldaz shows that ${C_n \rightarrow \infty}$ in the case of the ${\ell^\infty}$ norm, and in fact in an even more recent preprint of Aubrun, the lower bound ${C_n \gg_\epsilon \log^{1-\epsilon} n}$ for any ${\epsilon > 0}$ has been obtained in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that ${C_n}$ is in fact uniformly bounded in this case.
Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound ${C_n = O(n \log n)}$ is extremely general, applying to a wide class of metric measure spaces obeying a certain “microdoubling condition at dimension ${n}$“; and conversely, at this level of generality, it is essentially the best estimate possible, even with additional metric measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general localisation principle, which roughly speaking asserts that in order to prove a maximal inequality over all scales ${r \in (0,+\infty)}$, it suffices to prove such an inequality in a smaller range ${r \in [R, nR]}$ uniformly in ${R>0}$. It is this localisation which ultimately explains the significance of the ${n \log n}$ growth in the Stein-Strömberg result (there are ${O(n \log n)}$ essentially distinct scales in any range ${[R,nR]}$). It also shows that if one restricts the radii ${r}$ to a lacunary range (such as powers of ${2}$), the best constant improves to ${O(\log n)}$; if one restricts the radii to an even sparser range such as powers of ${n}$, the best constant becomes ${O(1)}$.
Read the rest of this entry »