url
stringlengths 17
172
| text
stringlengths 44
1.14M
| metadata
stringlengths 820
832
|
---|---|---|
http://mathoverflow.net/revisions/30726/list
|
## Return to Answer
3 added 58 characters in body
From a physicist's perspective I think that the latter part Ryan's answer really goes to the heart of the matter. The point is that the VAST majority of physical phenomena are purely local. Consider for example General Relativity. An observer existing for a finite time will probe a finite patch of spacetime. To describe what he sees he solves Einstein's equation:
$R_{\mu \nu}-\frac{1}{2}Rg_{\mu \nu}\sim T_{\mu \nu}$
where in the above $g$ is the metric, $R_{\mu \nu}$ its Ricci curvature and $T$ is a tensor field describing the distribution of energy and matter in spacetime. This is a local differential equation, and since the observer sees a small patch, for the most part he could care less whether the global structure of spacetime is $\mathbb{R}^{4}$ or any other smooth four-manifold.
A crucial point is that unlike derived physical equations, like say the heat equation, equations of fundamental physics (General Relativity, Electrodynamics, Quantum Field Theory, String Theory) are invariant under the Lorentz group of symmetries. This means that, for reasonable physical matter and energy distributions, there is a $finite$ signal propagation speed (the speed of light) and thus far away properties of the differentiable structure of spacetime take a very long time to have local consequences for any fixed observer.
2 Edited to account for Willie's comment.; deleted 5 characters in body
From a physicist's perspective I think that the latter part Ryan's answer really goes to the heart of the matter. The point is that the VAST majority of physical phenomena are purely local. Consider for example General Relativity. An observer existing for a finite time will probe a finite patch of spacetime. To describe what he sees he solves Einstein's equation:
$R_{\mu \nu}-\frac{1}{2}Rg_{\mu \nu}\sim T_{\mu \nu}$
where in the above $g$ is the metric, $R_{\mu \nu}$ its Ricci curvature and $T$ is a tensor field describing the distribution of energy and matter in spacetime. This is a local differential equation, and since the observer sees a small patch, for the most part he could care less whether the global structure of spacetime is $\mathbb{R}^{4}$ or any other smooth four-manifold.
A crucial point is that unlike derived physical equations, like say the heat equation, equations of fundamental physics (General Relativity, Electrodynamics, Quantum Field Theory, String Theory) are invariant under the Lorentz group of symmetries. This means that there is a $finite$ signal propagation speed (the speed of light) and thus far away properties of the differentiable structure of spacetime take a very long time to have local consequences for any fixed observer.
1
From a physicist's perspective I think that the latter part Ryan's answer really goes to the heart of the matter. The point is that the VAST majority of physical phenomena are purely local. Consider for example General Relativity. An observer existing for a finite time will probe a finite patch of spacetime. To describe what he sees he solves Einstein's equation:
$R_{\mu \nu}-\frac{1}{2}Rg_{\mu \nu}\sim T_{\mu \nu}$
where in the above $g$ is the metric, $R_{\mu \nu}$ its Ricci curvature and $T$ is a tensor field describing the distribution of energy and matter in spacetime. This is a local differential equation, and since the observer sees a small patch, for the most part he could care less whether the global structure of spacetime is $\mathbb{R}^{4}$ or any other smooth four-manifold.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174919128417969, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/81012/a-simple-stopping-time-problem/93809
|
A simple stopping time problem.
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
This should be rather standard so I hope somebody with a good background in probability theory would give me a quick solution or a reference.
We are given a threshold positive integer $T>0$. Let $a_1=1$ and for all $k$ with probability one half set $a_k=3a_{k-1}$ or else $a_k=2a_{k-1}$. We will stop the process at smallest time $\tau$ when $a_{\tau} \geq T$. We would like to compute the constant $c$ defined to be,
$E[ \sum_{i=1}^{\tau} a_i ] = c T + o(T)$
Could you estimate $c$ ?
-
3 Answers
From the way you ask, I conclude that you can prove that the limit exists (which by itself is by no means trivial), so I'll just show how to compute it under this assumption.
Let $v(t)$ be $\frac 1t$ times the expectation in question if we stop after we exceed $t>0$ (not necessarily an integer). Then $v(t)=\frac 1t$ for `$0<t<1$` and $v(t)=\frac 1t+\frac 12[v(t/2)+v(t/3)]$ for $t\ge 1$. Now let $F(s)=\int_1^\infty t^{-s}v(t)\frac{dt}{t}$. Using the recurrence, we get that for every $s>0$, $$F(s)=\frac 1{s+1}+\frac 12\left(\int_{1/2}^1 \frac 1t t^{-s}\frac{dt}t+\int_{1/3}^1 \frac 1t t^{-s}\frac{dt}t\right)+\frac 12(2^{-s}+3^{-s})F(s)$$ The limit we are interested in is the same as $\lim_{s\to 0+}sF(s)$. Putting all terms with $F(s)$ to one side, dividing, and passing to the limit, we get $\frac{5}{\log 6}$, which differs from Will's heuristic answer a bit. I cannot say that I really understood his post but it is quite fascinating that he was somehow right with $\log 6$ in the denominator :).
I apologize for computational mistakes in the original post.
-
Could you comment on how you would establish the existence of the limit ? – Nick B. Nov 16 2011 at 0:57
I think some factors of $t$, etc., may be missing from the working shown here. When I apply the method given above I find $c=5/\log 6=2.79055$, which is in good agreement with an experimental value of $2.79 \pm 0.01$. – David Moews Nov 16 2011 at 1:30
yes I think that is not a $1$ ; rather it is a $\frac{1}{s+1}$ but beside that I don't any possible trivial calculation problem. do you ? – Nick B. Nov 16 2011 at 1:31
1
Fedja: I got the same answer using a purely probabilistic arguments. I'll try to write it up later... – Ori Gurel-Gurevich Nov 16 2011 at 5:21
2
please do, Ori ! – Nick B. Nov 16 2011 at 5:52
show 3 more comments
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I think it is possible to find the result using renewal theory. Indeed, the process $(\ln(a_i))$ is a random walk with i.i.d. increments ($\ln(2)$ or $\ln(3)$ with probability $1/2$). The renewal theorem will tell you the structure of the walk when it jumps over a large time (here $\ln(T)$). More precisely when $T \to \infty$ the jump that goes over $\ln(T)$ is a size biaised version of the orignal jump measure, i.e. $\ln(3)$ with probability $\ln(3)/\ln(6)$ and $\ln(2)$ with probability $\ln(2)/\ln(6)$. Furthermore, knowing this jump, the actual position of $\ln(T)$ is uniform in the jump. Easy calculations (if correct) then yield $E[a_\tau] = 3T/\ln(6)$ and $E[a_{\tau-1}]=7T/(6 \ln(6))$. But going down from $a_{\tau-1}$ is easy (the walk is asymptotically the reversed version) and we can compute $E[a_{\tau-1}+ a_{\tau-2}+...]= 12/7 E[a_{\tau-1}]$.
-
I don't believe it. You have $a_k = X_k a_{k-1}$, with$X = 3$or $2$ etc since $log(X)$ actually has positive expectation we'll have $a_k \approx e^{k\mu}$ and $\tau$ no worse than about log(T).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355209469795227, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=1144389
|
Physics Forums
## piece wise continuity
I have a question that asks to show that f(x)=x^2(sin[1/x]) is piecewise continuous in the interval (0,1). I need to show that I partition the interval in to finite intervals and the function is continous within the subintervals and have discontinuities of te first type at the endpoints. I tried using any multiple of pi. That doesn't work. Any hints?
PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug
Recognitions: Homework Help Is that: $$x^2 \sin \left \frac{1}{x} \right$$ ? Because that function is just continuous in (0,1), so it's obviously piecewise continuous. Or does [1/x] mean the integer part of 1/x? If so, then it seems like the natural thing to do is divide up (0,1) into the regions where [1/x] takes distinct values (ie, (1/2,1],(1/3,1/2], etc.).
$$x^2 \sin \left \frac{1}{x} \right$$ Is the correct function, which you indicated. The exact question is as follows: A function f is called piecewise continuous (sectionally continuous) on an interval (a,b) if there are finitely many point a = x(sub o) < x(sub 1) < ... < x(sub n) = b such that (a) f is continuous on each subinterval x(sub o) < x < x(sub 1) , x(sub 1) < x < x(sub 2) , ... , x(sub n-1) < x < x(sub n), and (b) f has discontinuities of the first kind at the point x(sub o),x(sub 1), ... , x (sub n). The function f(x) need not be defined at the points x(sub o),x(sub 1), ... , x (sub n). Show that the following functions are piecewise continuous: f(x) = $$x^2 \sin \left \frac{1}{x} \right$$ , 0 < x < 1
Recognitions:
Homework Help
## piece wise continuity
Right, and do you see why that is continuous in the ordinary sense (or piecewise continuous with a trivial partition of (0,1))?
It seems like the question wants a further partition of (0,1). The left limit doesn't exist here so I'm confused as to why the function has a discontinutiy of type 1 on the trivial interval (0,1)
Recognitions: Homework Help Well the left limit does exist, but this doesn't matter because 0 isn't in your range. The fact is there are no discontinuities anywhere. A function doesn't have to have any discontinuities to be piecewise continuous. The definition you gave above is a little awkward, but it doesn't rule out the possibility that the number of discontinuities is zero (which is finite), and this is certainly allowed.
Maybe I don't understand the definition of the limit then. As we approach 0 the lim sup from the right is not the same as the lim inf from the right, therefore the limit from the right does not exist.
Recognitions: Homework Help What do you get as the limsup and liminf?
0 for both. My bad.
Thread Tools
| | | |
|--------------------------------------------|-------------------------------|---------|
| Similar Threads for: piece wise continuity | | |
| Thread | Forum | Replies |
| | Calculus & Beyond Homework | 7 |
| | Calculus & Beyond Homework | 1 |
| | Introductory Physics Homework | 3 |
| | Linear & Abstract Algebra | 2 |
| | Quantum Physics | 2 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923973798751831, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/166158-partial-fractions.html
|
# Thread:
1. ## Partial Fractions
I know how to do them when there is only linear factors in the denominator but when it has a repeated linear factor and a single linear factor then I get confused.
In my book it has this example (I will provide a picture since it's too long to type)
I can understand where it splits it up into A,B and C but when I get confused is when they do
$x=1$ and let $x=-1$
Are they just pulling random numbers from no where or something? How does that part work?
2. Originally Posted by jgv115
$x=1$ and let $x=-1$
Are they just pulling random numbers from no where or something? How does that part work?
Any numbers will work to solve a system for A,B,C but notice x=1,-1 will cancel terms and make life easier.
3. Plugging in numbers is a standard way of solving for unknown constants.
For example, if two functions f and g are equal, then f(x) = g(x) for all x.
Now, in this example, consider f(x) = 2x + 10
and g(x) = A(x - 1)^2 + B(x + 1)(x - 1) + C(x + 1)
We know f(x) = g(x).
This means that it has to be the case that f(1) = g(1) and f(-1) = g(-1), which is what the problem does when it sets x = 1 and x = -1.
The truth is, the book could've chosen any values x = 10, x = 20, x = 0.5, x = pi, ... and solved, and the solution for A, B, C will be the same.
But why did they choose x = 1 and x = -1?
Look at the expression for g(x) = A(x - 1)^2 + B(x + 1)(x - 1) + C(x + 1), if you could choose any value of x to solve for A, B, and C, what would you pick?
x = 1 gets rid of A and B completely leaving only C
x = -1 gets rid of B and C completely leaving only A
These are 2 values that are easy to solve with, so the book chooses them.
4. What they are doing is choosing values of $\displaystyle x$ that will make two of the terms $\displaystyle 0$, thus eliminating two terms and making it possible to solve for the third. Since this is an equation, you can substitute whatever values of $\displaystyle x$ you like as long as you substitute for every $\displaystyle x$ on both sides.
Since you have $\displaystyle A(x - 1)^2$, to make this $\displaystyle 0$ you need to choose $\displaystyle x = 1$, and since you have $\displaystyle C(x + 1)$, to make this $\displaystyle 0$ you need to choose $\displaystyle x = -1$. Also since you have $\displaystyle B(x - 1)(x + 1)$, choosing either $\displaystyle x = -1$ or $\displaystyle x = 1$ makes it $\displaystyle 0$.
5. oh!!!!!!!!!!!! alright
ok for this question:
$\frac {2x+3}{(x-3)^2}$
How would I set it up?
Would it be
$\frac {A}{(x-3)} + \frac {B}{(x-3)^2}$?
6. If $x = 3$
$2(3) + 10 = A(3 - 1)^2 + B(3 + 1)(3 - 1) + C(3 + 1)$
The A and B disappear only for $x = 1$
The book jumps a bit in this step.
7. Yes, you setup the problem as:
$\frac {2x+3}{(x-3)^2} = \frac {A}{(x-3)} + \frac {B}{(x-3)^2}$
8. NICE!! I GOT IT
If I take the common denominator I get
$2x+3 = A(x-3)+B$
Then let x =3 so B=9
Then I just let x = 1 and sub B in and I got the right answer!
Yay thanks guys I'm so happy
9. When both your LHS and RHS of the equations still have denominators, you can't choose a value that will make any of your denominators zero i.e x=3 when all your denominators are x-3. This would then lead to an undefined value.
If you were in a rush and careless, you might end up leaving those terms out and this could lead to serious miscalculations of your end-result.
10. Yes, what dd86 says is true. The reason you can multiply and then use values for x that would've made denominators zero is quite subtle.
Instead of getting into the details, let me just say:
It is always a good idea to check your answers by plugging them back into your original problem. Even if you are unsure about the correctness of a method, you can have confidence in the answer by checking it with the original problem.
11. Originally Posted by jgv115
Would it be
$\frac {A}{(x-3)} + \frac {B}{(x-3)^2}$?
Yep..
12. mm.. anyone care to explain why you have to write it like that? Or is it just a rule?
13. It is the rule for repeated linear factors.
If you choose otherwise the equation won't balance.
14. Not sure whether it's a rule. It's more like a process to me. A process that saves time and gives you and idea how to solve it.
But the numerators do differ depending on what your denominator is. So you have to be careful with that.
Why don't you try reading up on the cover-up rule for partial fractions? It might help you save some time also. However, I'm not sure whether the examiner in your school would accept it if you used it in an exam...
15. Originally Posted by jgv115
mm.. anyone care to explain why you have to write it like that? Or is it just a rule?
You might expect to write it as $\displaystyle \frac{A}{x - 3} + \frac{B}{x - 3}$. But notice there is a common denominator, so this simplifies to $\displaystyle \frac{A+B}{x-3}$, which does not have the required denominator $\displaystyle (x-3)^2$.
In order to get the required denominator, you need to write $\displaystyle \frac{A}{x - 3} + \frac{B}{(x - 3)^2}$ or $\displaystyle \frac{Ax+B}{(x-3)^2}$. The easiest method is to have the numerator be a polynomial of one less degree than the denominator.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549762606620789, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/73123-2-questions-parametric-implicit-differntiations.html
|
# Thread:
1. ## 2 questions on parametric and implicit differntiations
Ok the curve has parametric equations:
x = cos t y = sin 2t, 0 ≤ t < 2pi
a) Find an expression for dy/dx in terms of parameter t
b) Find the values of the parameter t at the points where dy/dx = 0
An example in my book has confused me and so i was wondering if somebody could show me how to work through these two questions.
Thanks
2. Originally Posted by sharp357
the curve has parametric equations:
x = cos t y = sin 2t, 0 ≤ t < 2pi
a) Find an expression for dy/dx in terms of parameter t
b) Find the values of the parameter t at the points where dy/dx = 0
thanks
a) note that $\frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}}$
b) set $\frac{dy}{dx}$ found in (a) equal to 0 and solve for t.
3. Originally Posted by sharp357
Ok the curve has parametric equations:
x = cos t y = sin 2t, 0 ≤ t < 2pi
a) Find an expression for dy/dx in terms of parameter t
First find $y'$ then find $x'$ and to get $\tfrac{dy}{dx}$ form the fraction $\frac{y'}{x'}$.
b) Find the values of the parameter t at the points where dy/dx = 0
You want to solve $\frac{dy}{dx} = 0 \implies \frac{y'}{x'} = 0 \implies y' = 0$.
Can you finish?
4. ok i differentiated so dy/dx = cos 2t / - sin t
just wondering if this part is right?
5. $y = \sin(2t)$
chain rule ...
$\frac{dy}{dt} = 2\cos(2t)$
$\frac{dx}{dt} = -\sin{t}$ is correct
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7664024829864502, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/80262-how-prove-cube-root-5-not-rational-number.html
|
# Thread:
1. ## How to prove that cube root of 5 is not rational number
whats the easiest way to do this? i have never done this type of problem before
2. Originally Posted by NeedHelp18
whats the easiest way to do this? i have never done this type of problem before
the easiest way?
consider the roots of the polynomial: $x^3 - 5 = 0$
Now apply the rational roots theorem. What can you come up with?
Alternatively, prove this lemma: If $5 | x^3$, then $5 | x$.
then apply the technique used here in post #2
it is a longer, more sophisticated proof, but it is the conventional one, and certainly one you should know, for historical and aesthetic reasons at least
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9109605550765991, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/38623/quantizing-first-class-constraints-for-open-algebras-can-hermiticity-and-noncom?answertab=active
|
# Quantizing first-class constraints for open algebras: can Hermiticity and noncommutativity coexist?
An open algebra for a collection of first-class constraints, $G_a$, $a=1,\cdots, r$, is given by the Poisson bracket $\{ G_a, G_b \} = {f_{ab}}^c[\phi] G_c$ classically, where the structure constants are functions of the dynamical degrees of freedom, $\phi$. When quantizing a gauge theory, a physical state $|\psi\rangle$ has to satisfy the first-class constraints $\widehat{G}_a |\psi\rangle = 0$. From this, one can easily see $[\widehat{G}_a, \widehat{G}_b]|\psi\rangle = 0$. In the quantum version of the theory, the Poisson equation has to be replaced by an operator commutator equation. In general, ${\widehat{f}_{ab}^c}[\widehat{\phi}]$ doesn't commute with $\widehat{G}_c$. One possibility is the right hand side of the equation for the commutator of two constraints is ordered so that the constraint $\widehat{G}_c$ is always on the right in the operator product. However, the resulting product will be nonhermitian in general due to noncommutativity. The commutator of two Hermitian operators is always antihermitian. So, this means the first-class constraint operators have to be nonhermitian. If we want the constraint operators to be hermitian, we require $[\widehat{G}_a, \widehat{G}_b] = i O(\widehat{f}_{ab}{}^c[\widehat{\phi}]\widehat{G}_c)$ where $O$ is some form of operator ordering. However, this operator ordering will in general contain some terms which don't annihilate $|\psi\rangle$ in general because $\widehat{G}_c$ won't always be on the right. How does one get around this?
-
## 3 Answers
I) Let us reformulate OP's question(v1) as
How can hermiticity$^1$ be maintained for the gauge algebra $$\tag{1} [\hat{G}_a , \hat{G}_b ] ~=~ i\hbar~\hat{G}_c ~\hat{f}^{c}{}_{ab}$$ of first-class operator constraints $\hat{G}_a$, if the structure operators$^2$
$$\tag{2} \hat{f}^{c}{}_{ab}~=~f^{c}{}_{ab}(\hat{q}^i,\hat{p}_j)$$ depend on the phase space operators $\hat{q}^i$ and $\hat{p}_j$?
(Note that on the r.h.s. of eq.(1), we let the operator $\hat{G}_c$ stand to the left of the operator $\hat{f}^{c}{}_{ab}$. This is done for purely conventional reasons to follow Ref. 1. This rearrangement just means that we should work with physical bras $\langle \psi |$ rather than physical kets $|\psi \rangle$, which is an equivalent formulation.)
II) Our first point is that the gauge algebra operator identity (1) is just the first in a (possible infinite) tower of operator consistency relations. E.g. the structure operators (2) should satisfy a Jacobi-like operator identity, which in turn involves a new set of higher structure operators, and so forth.
It turns out that the most systematic approach is to recast the gauge symmetry (1) in the Batalin–Fradkin–Vilkovisky (BFV) formalism, which is a generalization of the Hamiltonian BRST method from Yang-Mills theory to arbitrary first-class$^3$ systems (1), even so-called reducible gauge algebras.
The main object in BFV theory is a fermionic BRST charge operator$^4$
$$\tag{3} \hat{Q} ~=~ \hat{G}_a ~\hat{\cal C}^a +\frac{1}{2}\hat{\bar{\cal P}}^c~\hat{f}^{c}{}_{ab}~\hat{\cal C}^b\hat{\cal C}^a +\ldots$$
that squares to zero
$$\tag{4} \hat{Q}^2~=~0.$$
We would for brevity obviously have to leave out a lot of details here, but let us mention that $\hat{\cal C}$ and $\hat{\bar{\cal P}}$ are ghosts and ghost-momenta, which carry ghost number $+1$ and $-1$, respectively. The BRST charge operator $\hat{Q}$ is required to have ghost number $+1$. The gauge algebra (1) is encoded as one of the first operator relations in a (possible infinite) tower of operator relations that are hidden inside the nilpotency condition (4).
The upshot is that the unitarity of the theory is essentially implemented by (among other conditions) requiring Hermiticity of the BRST charge
$$\tag{5} \hat{Q}^{\dagger}~=~ \hat{Q}.$$
Eq. (5) dictates to a large extend what kind of Hermiticity/reality structure that one should impose on the system. In general, these Hermiticity/reality structure conditions will interrelate between the first-class operator constraints $\hat{G}_a$, the structure operators (2), the higher structure operators, etc, cf. Ref. 1.
References:
1. I.A. Batalin and E.S. Fradkin, Operatorial quantization of dynamical systems subject to constraints. A further study of the construction, Annales de l'institut Henri Poincaré (A) Physique théorique, 49 (1988) 145. The pdf and djvu files are available here.
$^1$ We will ignore subtleties with unbounded operators, domains, selfadjoint extensions, etc., in this answer.
$^2$ A semantical side remark: The notion of an open gauge algebra is traditionally a notion in Lagrangian formalism, where the gauge algebra is then broken off-shell. In general, it is less straightforward to identify in the Hamiltonian language, if a gauge systems (1) corresponds to an open gauge algebra in the Lagrangian formalism, or if it doesn't.
$^3$ The BFV formalism has since been further developed to deal with second-class constraints.
$^4$ Expansions of $\hat{Q}$ with other operator orderings (Weyl ordering, Wick ordering, etc.) in the ghost sector are possible, see e.g. section 6 in Ref. 1 for further details. The BRST charge operator $\hat{Q}$ is in principle allowed to depend on $\hbar$.
-
The answers given basically boils down to drop the Hermiticity condition for $\hat{G}_a$. OK. Let's say classically, the Poisson bracket goes as $\{G_a,G_b\}=f_{ab}{}^c G_c$, and after quantization, we require that this translates into $\left[ \hat{G}_a, \hat{G}_b\right]=i\hat{f}_{ab}{}^c \hat{G}_c$ with the operator product on the right hand side taken in precisely this order. The difficulty is, only very particular choices for the operator product ordering for $\hat{G}_a$ can lead to this form of operator ordering on the right hand side. More accurately, maybe we shouldn't think of it as an ordering product prescription as much as a particular choice of $\hbar$ deformation in quantization. In general, for open algebras, it's going to be very hard to find a $\hbar$ deformation with this property. How does one go about finding a deformation with this property?
-
The correct answer makes use of BRST. In short, $\widehat{G}_a$ is nonhermitian in general. Let me explain. In BRST, we augment the gauge and matter fields with ghost fields $\widehat{c}^a$, $\widehat{b}_b$ which satisfy the canonical anticommutation relations $\{\widehat{c}^a, \widehat{c}^b\} = \{\widehat{b}_c, \widehat{b}_d\} = 0$ and $\{ \widehat{c}^a, \widehat{b}_b\}=\delta^a_b$. In addition, we require both ghost fields to be Hermitian. This means the ghost sector has to have an indefinite norm. Define the total ghost number operator as $\widehat{N}_{gh}\equiv \widehat{c}^a\widehat{b}_a$. There is a fermionic operator $\widehat{\Omega}$ with ghost number $+1$, is Hermitian, and is quadratically nilpotent $\widehat{\Omega}^2=0$.
Expand $\widehat\Omega$ as $$\widehat{\Omega} = \widehat{c}^a \widehat{G}_a + \frac{1}{2!}\widehat{c}^a\widehat{c}^b \widehat{b}_c \widehat{f}_{ab}{}^c + \frac{1}{3!2!}\widehat{c}^a\widehat{c}^b\widehat{c}^c\widehat{b}_d\widehat{b}_e\widehat{f}_{abc}{}^{de} + \dots$$ where the $\widehat{G}, \widehat{f}$ operators contain no ghost factors. It's important to observe that $\left(\widehat{c}^a\widehat{c}^b\widehat{b}_c\right)^\dagger = -\widehat{c}^a\widehat{c}^b\widehat{b}_c -\delta^a_c\widehat{c}^b +\delta^b_c\widehat{c}_a$. So, the condition $\widehat{\Omega}^\dagger = \widehat{\Omega}$ translates into infinitely many relations starting with $$\widehat{G}_a=\widehat {G}_a^\dagger-\frac{1}{2}\widehat{f}_{ba}{}^{b}{}^\dagger+\frac{1}{2}\widehat{f}_{ab}{}^{b}{}^\dagger+\dots\,.$$ Anyway, you see the constraints $\widehat{G}_a$ are no longer Hermitian in general.
A physical state satisfies $\widehat{\Omega}|\psi\rangle=0$. If this state has zero ghost number, this reduces to the first class constraint $\widehat{G}_a|\psi\rangle =0$.
It's interesting to observe the special case of quantum gravity in the ADM formalism. There, we have Hamiltonian constraints and diffeomorphism constraints, and they form an open algebra. If we define the extended Hamiltonian as $\widehat{H}^* = \int d^3x\,\{\widehat{b}(x),\widehat{\Omega}\}$ where $\widehat{b}(x)$ is the ghost operator associated with time diffeomorphisms at the spatial point $x$, then the extended Hamiltonian operator is nonhermitian! Replacing it with $\int d^3x\,\{\widehat{ N}(x)\widehat{b}(x),\widehat{\Omega}\}$ where $\widehat{N}(x)$ is some gauge-fixing lapse field operator doesn't change this fact at all.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.918113648891449, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/53230/when-driving-uphill-why-cant-i-reach-a-velocity-that-i-would-have-been-able-to?answertab=oldest
|
# When driving uphill why can't I reach a velocity that I would have been able to maintain if I started with it?
Consider these two situations when driving on a long straight road uphill:
1. Starting at a high velocity $v_h$, which the car is able to maintain.
2. Starting at a lower velocity $v_l$, and then trying to reach $v_h$ while driving uphill.
In my experience I've noticed that in case 2 it is very hard, and sometimes impossible, to reach the velocity $v_h$, even though if the car had started in that velocity it would have been able to maintain it. This observation was confirmed by another person I know.
If we do a simple analysis of the problem assuming the engine outputs some fixed power $P$, it seems that there should be no problem reaching $v_h$.
Is there something in the inner workings of the car (like the transmission or fuel injection for example) that would make it harder than expected to accelerate uphill?
For simplicity let's assume the car is always in the same gear.
-
I'd say that the newtonian-gravity is not relevant. – yohBS Feb 7 at 15:16
I agree, @Qmechanic added this tag. – Joe Feb 7 at 15:32
For an infinitely long uphill you would be able to reach $v_h$ eventually. The question becomes then, why does the distance needed to reach $v_h$ increases "exponentially" with the uphill angle $\theta$. – ja72 Feb 7 at 17:38
@ja72 - I'm not sure I would be able to reach $v_h$ eventually (in the answers below there are some possible reasons why I might not be able) – Joe Feb 7 at 21:11
Here is the macro reason why it will reach $v_h$. The car will always accelerate until it reaches its terminal velocity which I assume is greater than $v_h$. Note that the top speed is less the more the incline. This is acceleration is low, but positive. The governing equation is $a=\frac{P}{m\,v}-\beta v^2-g\,\sin\theta$. – ja72 Feb 8 at 0:03
show 2 more comments
## 6 Answers
Short, short version: It's complicated.
Slightly longer version:
Internal combustion engines have at least two relevant performance characteristics: power and torque. Furthermore the maximum attainable values for both are functions of the current engine speed (RPM).
Acceleration will cease if the current requirement for either power or torque equal the engine's maximum value at the current speed.
Both the power and torque curves (as a function of RPM) start low, rise steadily and eventually turn over and drop off. The requirements for power and torque are both monotonically increasing, which means that there must be a speed where the power requirements curve crosses the power curve. At that speed acceleration drops to zero.
Likewise, there must be a place where the torque requirement crosses the available torque and again, you can't accelerate further from there.
These two crosses can come at different engine speeds.
The result is that you can may be able to maintain a speed that you can not accelerate up to.
Full version:
The full answer to your question would require knowing the relevant curves for your engine as well as the gross vehicular weight, the current effective gear ration between the engine and the road, the slope of the road, the effective rolling friction and the car's drag coefficient. Which is why I'm not going to try to do the full version.
-
I don't understand. Having fixed the gear the car is in, does the power depend on anything else except the RPM? If not then what's the meaning of saying there is a maximum attainable value of the power for a given RPM? – Joe Feb 6 at 20:02
1
@Joe: Every engine has a performance curve of torque vs. RPM and power vs. RPM. Both curves have peaks, after which they fall off with increasing RPM. Depending on the gear you're in, it's possible to have a stable speed (on the slope) which you might not be able to get to from a slower speed without being able to use just the right gear ratio. That's why trucks have transmissions with so many ratios. Otherwise, they could get into a climb where the lower gear would over-rev, and the next higher gear would lug. – Mike Dunlavey Feb 6 at 21:26
@Joe The power and torque also depend on the throttle position and on air flow decisions made by the computer, but if you keep the pedal hard down it will dwell near the max on both curves, so it is fair to use those values for your calculations. – dmckee♦ Feb 6 at 22:09
I understand that this can get very complicated, but a lot of times even in complicated problems there is some simple principle that catches the main effect. @yohBS describes such a principle in his answer, but I'm not completely sure it's correct. So I'm accepting his answer for now, but I'm still open to criticism on this. – Joe Feb 8 at 9:08
@Joe I've expanded a bit on the implications of having two figures of merit. One of the big goals of engine design is to get relative flat power and torque curves, and modern cars are much better this way than old ones. – dmckee♦ Feb 8 at 14:29
It takes force (power from the engine) to accelerate to reach the higher speed. F=ma. When maintaining the desired speed, no additional power is needed from the engine (other than to overcome drag from air resistance.)
-
But the power required to maintained $v_h$ is higher than the one required for $v_l$, so if the engine outputs that power when it's going $v_l$ it already contains the additional power which is supposed to make it accelerate to $v_h$. – Joe Feb 6 at 19:18
1
His question is why can some car maintain a speed $v_1$ up a hill if they hit it fast enough, but if they hit the bottom more slowly can't rise above some other speed $v_2 < v_1$. (And I've had a couple of cars for which this was true). – dmckee♦ Feb 8 at 14:32
I'm modifying my answer. The focus remains on the fuel system.
As far as I can tell the other answers currently provided are essentially "invariant with respect to the direction (i.e. angle) of gravity". With this, I mean that they would apply just as much to the situation where we would reach a flat stretch after having gone downhill. (We tilt the whole landscape until the road uphill becomes flat.)
Assuming that this invariance is not to be observed in experiment, I gather that something significantly associated with acceleration and/or jerk in the car must be variant with respect to the direction of gravity. The only thing I can think of is the fuel system, in particular the fact that, when going uphill, the fuel tank sits relatively lower, and the conjecture that in that situation is it either more difficult or impossible to get a maximal flow of fuel going.
The difference between "very hard" and "impossible" would be explained by the uncertain/unspecified position of the throttle around the start of the climb. (The exact moment where you would push down the throttle completely: still on the flat stretch or already uphill.) If you'd throttle early, you would increase flow easier.
(NB: This does not apply to overloaded or underpowered vehicles, but they wouldn't be able to maintain a high velocity in scenario 1 anyway.)
-
– dmckee♦ Feb 8 at 14:57
– Gugg Feb 8 at 15:25
@Gugg, please see my explanation of being on either side of the torque peak. That explains the difference between hard and impossible. – Sankaran Feb 15 at 17:45
As the velocity increases, power has to stay the same, so the acceleration must decrease. Here's my analysis:
So, say you're on some slope, we can say $c$ meters vertical rise for every $b$ meters traveled along the road.
You travel at speed $v$ meters per second, along the road, so you travel $v \frac{c}{b}$ meters per second vertically. Potential energy is just $mgh$, or in other words, you have an energy of $mg$ joules, per meter. If we multiply the vertical speed with the potential energy required per meter, we get a power of $mgv\frac{c}{b}$ joules per second. If the car is accelerating, that's not the only power the engine has to fill, it also has to fill the increase in kinetic energy. Kinetic energy is $\frac{1}{2} m v^2$. Differentiating with respect to time, we find that the power needed is $\frac{1}{2} m 2 v \frac{dv}{dt}=mva$, where $a$ is the acceleration of the car. So the total power that the engine has to exert is $mva+mgv\frac{c}{b}=mv(a+g\frac{c}{b})$ joules per second.
So, to plug in numbers, lets say the car weighs $1500$kg, is travelling at $18$ meters per second (40mph), and isn't accelerating at all. Lets say there's a $20^\circ$ grade, so that $\frac{c}{b}=\sin(20^\circ)$, and of course $g=9.81$ meters per second per second. Then the formula gives that the power needed is $90590.9$ joules per second, or 121 horsepower. (that's a pretty steep grade though)
If you're going at $30$ on the same hill, but are accelerating at $2$ meters per second per second, you get $144591$ joules per second, or $194$ horsepower.
and the formula shows that as the velocity increases, for power to stay the same, $a$ must also decrease. Depending on what values you plug in, you can definitely switch it up, and wind up with a case where the power needed to accelerate is LESS than the power needed to maintain your current speed. but this depends on which numbers you plug in. It is still the case that to maintain constant power, your acceleration must decrease as your velocity increases.
-
His question is why can some car maintain a speed $v_1$ up a hill if they hit it fast enough, but if they hit the bottom more slowly can't rise above some other speed $v_2 < v_1$. (And I've had a couple of cars for which this was true). – dmckee♦ Feb 8 at 14:32
You're right of course! But I think what I addressed in the post (even though it's simple kinematics) helps clarify a reason it would be difficult, though not impossible (the equation doesn't imply it would ever be impossible w/ constant $P$). ("sometimes impossible" was the question, so, this would cover the other cases, right? So it's relevant!) – NeuroFuzzy Feb 8 at 15:38
Here's my guess:
As you know, internal combustion engines burn fuel. The power output of the engine is a function of both the current RPM and the amount of fuel you inject. But there's a catch: the engine can burn a limited amount of fuel per cycle, and therefore the higher the RPM, the more fuel can be combusted. So (as @dmckee stated), at a given gear the maximum power is an increasing function of the RPM.
Since the gear is fixed, the RPM is a linear function of velocity. Therefore, when you enter the slope at a low velocity, the car is at a low RPM, and the maximal power that your engine can provide is smaller than what it could provide if you'd enter the slope at a higher velocity. This is why you can not reach the terminal velocity that you could have maintained if you started with it in the first place.
BTW, what people do in such situations is kicking down a gear. This keeps you at the same velocity but at a higher RPM, thus increasing the available power that your engine can supply.
Anyway, I'd recommend that you buy a better car.
-
1
So far this seems like the best answer – Joe Feb 7 at 15:36
"at a given RPM the maximum power is an increasing function of the RPM" Only for so long, once the engine is moving fast enough the exhaust valve opens before burning finishes and the power curve turns over. Worse, in some engines (especially those with variable valve timing, the power curve may have more than one local maximum). It's all very ugly from a pencil and paper analysis point of view. In any case, torque matters too, and similar consideration apply to the torque curve. – dmckee♦ Feb 7 at 15:40
Also, although this may fully explain "impossible" (which, according to the OP, only happens sometimes), it doesn't seem to explain, or even allow for, "[merely] very hard". – Gugg Feb 7 at 16:01
@dmckee I agree, but I think the car is not yet at this regime. – yohBS Feb 7 at 19:01
@Gugg In my experience, the phenomena is pretty common. Also, there are surely many other factors involved (the computer regulating fuel injection, exauhst, wind/friction dissipation...) so this simplified model does not capture the difference between "hard" and "impossible". – yohBS Feb 7 at 19:03
show 2 more comments
Here is my take on it. It is somewhat similar to what many others have said but I will try to explain in greater detail, so bear with me in terms of length.
Power from engine First let us understand power-torque relationship for an engine. The working fluid of the engine (expanding combustion gases) apply a torque on the crankshaft. This torque varies with in-cylinder pressure and is net positive (useful) only in the power stroke (one of the four strokes in a 4-stroke engine in most cars). However, let us assume that we have a average net positive torque coming from the engine. This assumption is ok since most automobiles have multiple cylinders that are phase shifted so that at any time at least one or more cylinder are in power stroke.
So lets call this torque $\tau$. The work done by the engine will be $\tau\theta$, or in other words, the average power from the engine will be: \begin{align*} &P_e= \tau_{e}N_{e}2\pi \end{align*}
Power asked for by the wheels The load on the car comes from 1) friction on the road 2) air resistance 3) $mgsin\theta$ to climb a hill at a slope of $\theta$. Since the car does not revolve around its center of mass but only translates, all the load on the car (friction, wind, gravitational body forces) can be considered as some combined torque that is overcome by the torque applied by the engine. We can further analyse this using free body diagram of wheels, but that a different discussion to be had later. Essentially for a constant velocity car climbing say fixed inclined slope and a fixed frictional/air load, the torque demand is fixed and equal to the resistive load. Lets call this $\tau_w$ (wheel). Next lets us assume we are going at a constant velocity of our choice that translates to a wheel RPM (rev/min) of $N_w$. Don't worry I will get to an accelerating car later. For now: \begin{align*} &P_w= \tau_wN_w2\pi \end{align*} Since energy is not accumulated in the car's drive-train (assuming the engine parts don't heat up much after warming up) the power produced by the engine is consumed at the wheels. Again this assumption is fairly accurate since most of the fuel energy either goes to the wheels or leaves as exhaust from the engine, other effects such as frictional heating in the transmission fluid etc are negligibly small \begin{align*} P_w&=P_e\\ \Rightarrow \tau_eN_e&=\tau_wN_w \end{align*}
Now lets say the car just got on the slope and you want to maintian the same velocity as before so you slam the accelerator here is what happens. The new load is some $\tau_w$ and the speed you want is $N_w$ so you are asking the engine to deliver $N_w\tau_w$. If you are in a fixed gear $N_e = GN_w$ where $G$ is the gear ratio. Hence the engine has to deliver a torque of \begin{align*} \tau_{e,\; desired} = \frac{\tau_wN_w}{GN_w} \end{align*} So, not only you have a desired power, you also desire fixed engine RPM (or at least hope to stabilize at). Essentially you are asking for a desired torque.
Let us see what the engine can give us
Engine Load-Map
A typical engine map looks like the figure below (hand drawn so excuse me for wobbly curves). For now ignore the red circles $A$ and $B$. Torque comes from the work output given by the combustion gases. So the max torque curve (for any RPM) is when you are putting most fuel (diesel engine) or least throttle (gasoline engine). The curve drawn in the figure is the max torque for each speed. Even this curve has a peak value at some RPM. As you change engine speed initially you are doing good, i.e., increasing air flow speeds into the engine (that allows higher mass of air into cylinder due to greater rate of suction/pressure drop) allowing higher thermodynamic work output and higher torque, but after a certain speed the engine breathing efficiency (volumetric efficiency) goes down, then the cylinder is gasping for air (usually the flow chokes in the intake valve I think). So at high speed the volumetric efficiency goes down,there is less air, you can burn less fuel (even if press full throttle) to keep emissions within limits, and torque from the engine goes down. The power on the other hand keeps going up because it is a product of speed and torque. The increase in speed means more power strokes per unit time even if each stroke gives less torque. So the power peaks almost at max engine RPM.
Now the problem Lets study what happens when you are trying to climb a hill. Let's say you started climbing the slope and the desired engine speed (for a fixed vehicle velocity you want to hold constant and fixed gear) corresponds to red circle A. You press the gas pedal the engine gives torque $\tau_A$. If the load is such that $\tau_A<\tau_{desired}$ the vehicle will decelerate and $N_e$ will go down. This will make the $\tau_A$ to go down further (you move left on the torque curve) and you will not be able to keep speeds up. In this situation usually you will gear shift to allow engine to rev up more relative to the wheels, but if you are at constant gear you will not be able to accelerate.
Instead if you were at red circle $B$ and $\tau_B<\tau_{desired}$ at first the engine will try to slow down but its torque output will go up now the engine will stabilize to your desired acceleration. So if you started at the correct side of the torque curve speed-wise you can accelerate to your desired velocity.
Gear changes only allow us to jump to the right engine speed to be able to pull this off. Of course this is assuming that your engine is sufficiently powered in the first place. Otherwise you will go over the hump backwards and then decelerate further. So the power has to work out!
Non-issues Most cars of today have a fairly powerful engine and are heavy enough that minor things like fuel tank weight, whether the engine is in the front or back, etc are not an issue. A typical sedan is like 1250-1500 kg, only five heavy ppl and a full trunk of luggage can seriously load the car, not its own peripherals. Furthermore engine electronics (with electronic fuel injection, high pressure fuel rail etc) precision solenoid injectors, etc., are robust enough that fuel supply system is never a limiting factor. Engine peripherals are not that underpowered or lossy. Of course you never have a vapor-lock in fuel lines.
It is just a torque-vs-rpm issue, that comes from how well we can breathe and do combustion. Even throttling loses in gasoline engines have been reduced with better acoustically-tuned intake manifolds, refined valve timing, etc. Torque-curves are becoming flatter and flatter. Also one rarely ever goes to max-power situations (around 6000 rpm for typical sedan engine).
I hope I have addressed all issues still unanswered. Like I said I am not saying anything new just explaining it in more detail. There is something about invariance to inverting gravity. That is just a matter of decreasing load. If you come down a hill very fast and you don't brake, well you will gain speed. You will step off the gas, now your engine torque will come down from the max torque curve. Then it depends on how the engine speed torque stabilizes on a lower torque curve (i.e., if you in highest gear and your load requirement is very low) technically your consuming very little fuel in the engine but it is so fast that it will try to run away (highest rpm) but I am sure there are ways to prevent that, and lose power in engine friction (essentially high speed engine braking) but in most cases you will probably brake much much earlier!
-
I was looking at my hand-drawn power curve and I realized it looks very linear at low speeds. This not true since both torque is not constant with speed. So use the figure only to understand what goes up and what goes and don't worry about the slopes being right! – Sankaran Feb 13 at 18:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9555856585502625, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/50023/independence-of-p-np/50047
|
## Independence of P = NP?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let's suppose P = NP is independent (of ZFC). Then there is a model of ZFC in which there is a polynomial time algorithm for SAT. But suppose this algorithm is correct, wouldn't this algorithm exist in the standard model? In the end an algorithm is a number. My question is: How can it be that there exists a polynomial time algorithm for SAT in a model of ZFC and yet P = NP is unprovable? In other words, how can P = NP be independent?
-
10
Zirui: It is my impression from what you write that you do not understand well the difference between sets and their representation in models of set theory. If your model is not an $\omega$-model, (meaning, if its version of the natural numbers is not isomorphic to the true natural numbers), I am not sure I understand what you mean by "the algorithm is correct", unless you mean that the algorithm exists in $V$ (so P=NP), and the model is correct about the algorithm being polynomial time and solving SAT. Why don't you begin by explaining this part better? – Andres Caicedo Dec 21 2010 at 4:03
5
@Zirui: Your confusion in this question is the same as your earlier question regarding CH: mathoverflow.net/questions/28806/… Andres' comment and Jason's answer below, for instance, both echo the very thoughtful answer JDH gave you earlier. Things said then about a counterexample to CH in a model apply mutatis mutandis to algorithms in a model witnessing P=NP. So in addition to digesting what's here, it might pay to revisit the answers to that earlier question. – Ed Dean Dec 21 2010 at 7:15
4
If you ask a question that confuses holding in a model with being true, then MathOverflow tries to explain the difference. A pointed observation, that is. Not even hints would help. – Ricky Demer Dec 21 2010 at 9:46
5
unable to parse: [[good enough to prove P = NP] in this model] or [good enough to prove [P = NP in this model]] – Ricky Demer Dec 21 2010 at 10:06
7
I strongly recommend that people reading here take a quick trip through Zirui Wang's previous questions -- they have a characteristic flavor, and it's good to know what you're getting in to. – JBL Dec 21 2010 at 14:26
show 17 more comments
## 8 Answers
There are examples such as the one due to Levin mentioned here which you can write down explicitly, but whose running time is polynomial if and only if P=NP. Thus in some [admittedly rather trivial] sense it's not finding an algorithm which is hard, but proving that it runs in polynomial time. This is the part which could conceivably be independent.
-
This algorithm is not even recursive, meaning it halts on all inputs. A polynomial time algorithm must be recursive in the first place. – Zirui Wang Dec 21 2010 at 5:12
5
A minor clarification: Levin-type algorithms (which can approximately describe as just running all possible algorithm in parallel) rely on a sub-algorithm producing a polynomial size certificate that can be verified polynomial time. Thus it gives an explicit polynomial time algorithm for NP \cap CoNP (if and only if NP \cap CoNP = P) but not for all of NP. – Mark Lewko Dec 21 2010 at 5:51
@Zirui: This is easily fixed by running any old (not necessarily polytime) algorithm in parallel with the rest and returning the result of that if it completes. – Noah Stein Dec 21 2010 at 15:31
@Mark: Wow, good point. I had somehow failed to notice that. Thanks! – Noah Stein Dec 21 2010 at 15:32
Suppose P = NP. Then this algorithm has a polynomial upper bound. But you may not be able to tell this bound in advance. So it still does not prove the algorithm is polynomial time. It boils down to the definition of polynomial time. In fact it may take infinite time to find the bound. – Zirui Wang Dec 23 2010 at 12:28
show 3 more comments
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
This answer is basically an elaboration of Jason's answer, but it's too big to be a comment.
It's a little hard to speculate on a proof that doesn't exist, but most likely if someone constructed a model of ZFC where P=NP, then the "algorithm" in the model wouldn't be a real algorithm. You can formalize the notion of algorithm in first-order logic over the natural numbers, but the formalization is incomplete in the sense that there will be non-standard models that satisfy the same axioms. It's possible that someone will write down a model of ZFC where the set of natural numbers is "too big" — it contains natural numbers other than $0, 1, 2, 3, \ldots$ — such that for "algorithms" defined using these non-natural natural numbers, P=NP.
For example, the "algorithms" in the model could correspond to algorithms where you can take infinitely many steps. A non-standard natural number (by necessity) must be strictly bigger than every standard natural number, so Turing machines in this model would have extra states at infinity. Since time itself is a natural number, the running time of algorithms can take on infinite values. So now you have algorithms that have infinite steps, and can take infinitely long to run. From this, we can't learn much about whether P=NP in the real world.
I know this is all counterintuitive, but the reason is that once you are cooking up a model of ZFC, the only thing you have to do is formally satisfy the axioms, and the axioms don't constrain you enough to prevent you from creating non-standard models. If you want to understand this better, I suggest reading up on Skolem's construction of non-standard models of the natural numbers over Peano Arithmetic.
-
I think the following is implicit in earlier answers, but let me state it briefly. I'm addressing only the original question, not CH or other matters that gradually entered this discussion.
The main point is that a model of ZFC can have non-standard natural numbers; they're larger than all the standard ones, so we sometimes call them "infinite" even though from the point of view of the model they're finite. (That is, they satisfy, in the model, the formula that defines "finite ordinal number".) Now suppose such a model satisfies "There is PTime algorithm for SAT." Fix such an algorithm in the model, say a Turing machine program. It is true in the model that this algorithm has a finite number of control states (because that's part of the definition of "Turing machine") and that its running time is bounded by a polynomial, of some finite degree, of the input length (because that's part of the definition of "PTime"). Unfortunately, both of the occurrences of "finite" in the preceding sentence are (like the whole sentence) to be understood in the sense of the model. Neither the number of states nor the degree of the polynomial will necessarily be an actual natural number (in the sense of the real world rather than the model); they can be non-standard numbers. So the model's Turing machine might not be an actual Turing machine, and, even if it is, its running time might not be bounded by an actual polynomial.
-
Excellent answer! But I think your definition of "finite", which is relative, is not what computer scientists have in mind. Can you refute this? This nonstandard number does not have the exactness property that natural numbers have and hence it does not correspond to the Godel encoding of a Turing machine. – Zirui Wang Dec 22 2010 at 12:55
I was just reading a paper "communicated by Andreas R. Blass" and then I saw this answer by Andreas Blass. I said "Wow!" – Zirui Wang Dec 22 2010 at 13:01
2
@Zirui Wang: I'm not sure what you mean by "exactness property," but indeed a nonstandard number would not be the Gödel encoding of a Turing Machine in the real world. It might nevertheless be such an encoding in the model; i.e., it might satisfy, in the model, the formula expressing "Gödel encoding of a Turing machine," and that is why it might (if it satisfies appropriate additional formulas about computing SAT and taking only polynomial time) witness the truth of P=NP in the model, without witnessing P=NP in the real world. – Andreas Blass Dec 22 2010 at 16:11
Let me ask you a question instead. The consistency of PA is known to be independent of PA so we have a model of PA that thinks there is a proof of something contradictory like $0 \neq 0$. Therefore, this "proof" that $0 \neq 0$ is in the set-theoretic universe $V$. So does $V$ think that Peano Arithmetic is inconsistent?
The answer is no because $V$ realizes that this is not a true proof but rather a proof involving nonstandard numbers either with formulas or length. The same type of idea is happening here. Even if we have a nonstandard model thinking that it has a polynomial time algorithm for SAT, a standard model looking at this algorithm may see things differently. For an even more extreme example, consider the fact that if we take a total computable function $f$ with any given running time, a nonstandard model computes the standard portion of it in a time amount that it views as constant because it has a fixed $c$ that's greater than every Natural number. But does this mean that the function can actually be computed in constant time? Of course not, because $c$ is not a true finite number.
I should also make mention that the last thing I said is of a slightly different nature since even the nonstandard model will not view itself as computing the function in constant time. Mainly, it does not know where the standard portion ends and the nonstandard part begins.
Edit (addition to address comments at top of thread):
If P = NP turned out to be independent of ZFC, then we'd have a model of ZFC that would think that P = NP since by the definition of independence, P $\neq$ NP would not be provable from the axioms of ZFC. However, this would not be sufficient for generalizing the result to all models as you conjectured since there would also have to be a model of ZFC thinking that P $\neq$ NP by virtue of ZFC not proving P = NP. These results follow directly from Gödel's completeness theorem. On the other hand, if P = NP were provable in ZFC, then all models of ZFC would think that P = NP.
-
Who says Con(PA) is independent of PA? Godel's second incompleteness theorem only asserts Con(PA) is not provable. ~Con(PA) might be provable in PA. – Zirui Wang Dec 21 2010 at 5:08
5
If Con(PA) is not provable from PA, then PA is consistent because if a theory is inconsistent, then every statement is provable from that theory. – Jason Dec 21 2010 at 5:11
2
Godel's second incompleteness theorem assumes Con(PA). It states: If Con(PA), then Con(PA) is not provable. – Zirui Wang Dec 21 2010 at 5:18
Let me restate what I'm saying because you're right, the logic would be circular with this assumption. If PA is not consistent, then every statement is provable in PA. This is because everything follows from a contradiction. So if this were the case, then we'd have a proof of both CON(PA) and ~CON(PA). And I guess technically then CON(PA) would not be independent of ZFC, but then we'd also have a proof of both P = NP and P $\neq$ NP and no set-theoretic universe. – Jason Dec 21 2010 at 5:36
We discussed this issue over at math.SE, maybe reading math.stackexchange.com/questions/5377 would help you. – David Speyer Dec 21 2010 at 13:34
I think the source of the confusion here is the idea that all models of ZFC have the same notion of what a "natural number" (and hence, by an appropriate encoding, an "algorithm") is. Unfortunately, Godel's incompleteness theorem tells us that no recursively enumerable axiom system (of which ZFC is an example) can precisely pin down the theory of the true natural numbers (i.e. true arithmetic), which can thus only be fully described in the metatheory rather than in any formal system. As such, there exist statements G about natural numbers which are true in some models of ZFC and false in others, because these two models have genuinely different interpretations of the natural number system.
It is a proiri conceivable (though, in my opinion, unlikely), that P=NP is one of these statements. Specifically, it is conceivable that SAT is not solvable in polynomial time in the standard model of the natural numbers, but is solvable in polynomial time in an exotic model of the natural numbers, even if both models of the natural numbers are part of respective models of set theory obeying ZFC. The point here is that the exotic algorithm could have a length which is an exotic natural number, which could be larger than every standard natural number; similarly, the constants in the polynomial run time for this exotic algorithm could also be larger than every standard number. So there is no obvious way to convert the exotic polynomial time SAT solver into a standardly polynomial time SAT solver; it may even be that the exotic algorithm cannot be described at all in the standard model, let alone have a polynomial run time.
[Edit: actually, with Levin's trick, if SAT is solvable, it is always solvable with a bounded-length algorithm (namely, "run all possible algorithms in parallel in a carefully chosen manner"), so exotic length is not a genuine issue. However, this still does not exclude the possibility of exotic run time constants.]
It is even conceivable (though, again, I believe it to be unlikely) that the reverse is true: SAT is solvable in polynomial time in the standard model, but not in a exotic model. Here, the standard algorithm has a length which is a standard natural number, so the algorithm can at least be described in the exotic world. But just because it has a polynomial run time in the standard model, this does not necessarily imply a polynomial run time in the exotic model (unless one has a transfer principle, as is the case in the models coming from nonstandard analysis, but not all exotic models are of this type); the algorithm may solve all standard SAT problems in a polynomial amount of time, but require super-polynomial time to solve an exotic SAT problem. [In this scenario, ZFC + P!=NP would be $\omega$-inconsistent, but could still be consistent.]
-
It could also be that not all instances of SAT are in the universe. – Zirui Wang Apr 14 2011 at 17:28
This answer started life as a nascent comment intended for the back-and-forth above, but it ballooned into what follows.
ZW, as I pointed out above, your current question does parallel your earlier question about CH, as do the (very good) answers in each case. From your further comments, though, I think I now have some idea why the answers haven't satisfied you; I'll take a stab at answering what I think's bothering you. (If I'm right, then it's a fairly simple matter, but just one that wouldn't be the initial guess as the issue on MO. And if I'm wrong about what you don't like, oh well; but I've genuinely tried to figure out why you're unhappy with the answers so far given.)
The answers given try to clarify a (very common and understandable) mathematical confusion that people can have about independence results, but your further comment:
My confusion is, people take V as the standard model. But why so?
suggests something else is at the heart of what's bothering you personally. And now looking at your original question about $CH$, it seems clear there as well:
OK, Cohen has constructed a model in which both ZFC and ~CH are true. Isn't this model an answer to the continuum problem? Hasn't he showed that it is indeed possible to construct a set with cardinality between that of the integers and that of the reals? Why is it still not considered sufficient to settle CH? Why is one model not enough? Why for all models? In other words, why do we have to answer whether "ZFC |- CH" instead of just "CH" itself?
So it seems that part of what you're not happy with is simply the (extra-mathematical, somewhat conventional) privileged position of $ZFC$ as a foundational theory for mathematics. (Again, if I'm wrong in ascribing such thoughts to you, my apologies.) And that's perfectly fair; plenty of people have taken issue with that status for myriad reasons.
So maybe you're really thinking: "Hey, Cohen constructed this model $\mathcal{M}\models ZFC + \neg CH$, and I think this $\mathcal{M}$ can be (or should be, or is) the mathematical universe we all work in." Well that's a perfectly acceptable way to think, but now you no longer have a purely mathematical pursuit on your hands (one reason, by the way, why myself and others generally would be expecting to answer the question the way they did), thanks to the privileged position $ZFC$ enjoys. Now you've also got a sociological (and dare I say philosophical) endeavor, namely that of convincing fellow mathematicians of the truth/efficacy/beauty/... of your favored universe.
Those who answered you were working under the accepted convention that "settling" a problem means either proving it in $ZFC$, or refuting it there, or establishing its independence from $ZFC$, and answered your initial queries accordingly (and accurately). If I'm right about what you're finding unsatisfactory here, then you now get to immerse yourself in the delights of the philosophy of mathematics. Enjoy! (And if I'm wrong, at least I've only wasted my own time.)
-
Why do you consider a model in which ZFC hold outside the universe in which mathematicians work? (Model theory.) – Zirui Wang Dec 21 2010 at 11:34
1
I don't. My point is only that any statement which "merely" holds in some particular model of ZFC, rather than all of them, holds a less privileged pedigree than statements which hold in all of them (i.e. are provable in ZFC). Most non-logician mathematicians don't give much thought to foundations, nor do they need to. If you demanded an answer as to their foundations, they'll most likely say something like ZFC (and hope you go away :). If you point at a model of ZFC + -CH and argue that settles CH in the negative, anyone can point at a model of ZFC + CH and say "What about that?" .... – Ed Dean Dec 21 2010 at 11:48
As long as ZFC is the arbiter of such matters, there's nothing more to say than CH is independent of our axioms. So if you want to make a definite case one way or the other, you're asking people to change what is currently a pretty well entrenched convention. You'd need to offer some good reasons, of some sort or another. I recommend looking up "intrinsic" and "extrinsic" justification of axioms in relation to Godel, and perhaps some of the writings of Peter Koellner if you're interested in such matters. – Ed Dean Dec 21 2010 at 11:53
What more that can be said is that CH is 'more canonical' than -CH, because (I remember reading somewhere that) you can go from either to the other by forcing extensions, while minimal models satisfy CH. – Ricky Demer Dec 21 2010 at 11:56
Sure, you can say that CH is "more canonical" in some sense, or you can, say, argue for -CH along the lines of Koellner, working from a network of results by Woodin. In each of these cases, you are no longer letting ZFC and what it proves be the sole arbiter. All I wrote before is that " As long as ZFC is the arbiter of such matters, there's nothing more to say ..." – Ed Dean Dec 21 2010 at 12:05
show 1 more comment
As an addition to arsmath's answer, to make it clearer how these "additional" numbers may look like:
Lets say you have defined the class of ordinals $\mathbb{O}$, the successor-functor $S$, and the set of finite ordinals ("naturals") $\omega$. Then $a < b$ can be defined by $a\in b$ for $a,b\in\mathbb{O}$. Now you can create the formulas $A_0=n\in\omega\wedge n>\emptyset$, $A_1=n\in\omega\wedge n>S\emptyset$, $A_2=n\in\omega\wedge n>SS\emptyset$, ..., i.e. $A_i$ states that that there is a natural number $n$ that is larger than $i$.
Trivially, $ZFC\models A_i$. Hence, assuming $ZFC$ was consistent, by the Compactness Theorem, also $ZFC\cup \{ A_i|i\in\omega \}$ is consistent, and has a model. In this model, there is a number $n$ of which the model "thinks" it was finite (since it is in the $\omega$ of this model), which is not in "our" $\omega$ of the metatheory, since all the $A_i$ force this additional element to be larger than everything we consider finite.
Especially, there may be algorithms that - if they even make sense - will not terminate in "finitely" many steps, since this model has another understanding of finity. Since $X^n$ for this $n$ would be a polynomial for this model, such an algorithm could as well be in $P$ for this model.
-
This doesn't seem like much of a research-level question or discussion, but anyway I'm surprised to see no mention of Scott Aaronson's article: "Is P Versus NP Formally Independent?". It explains a lot of these basic issues in logic and would probably be helpful.
See: http://www.scottaaronson.com/papers/pnp.pdf
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567075967788696, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/85094/showing-a-ring-is-artinian
|
# Showing a ring is artinian?
We have a ring
$R=\begin{pmatrix} \mathbb{Z} & 0 \\ \mathbb{Z} & \mathbb{Z} \end{pmatrix}$
Let $I=\begin{pmatrix} 12\mathbb{Z} & 0 \\ 3\mathbb{Z} & 3\mathbb{Z} \end{pmatrix}$
How do I show that $R/I$ is artinian ring? $R/I=I$, the only ideal that isn't equal to I is $\begin{pmatrix} 0 & 0 \\ 3\mathbb{Z} & 3\mathbb{Z} \end{pmatrix}$ and $\begin{pmatrix} 12\mathbb{Z} & 0 \\ 3\mathbb{Z} & 0 \end{pmatrix}$. These are both DCC condition is satisfied with them.
So then you just conclude it's aritian. Also, the niradical elements are just the bottom left corner of the matrix.
-
## 1 Answer
I think you are trying too hard. This ring is Artinian because it is finite: it has only $12*3*3=72$ elements.
Let $\mathbb{Z}_n$ denote the ring of integers mod $n$. The quotient is $$R/I=\begin{pmatrix} \mathbb{Z}_{12} & 0 \\ \mathbb{Z}_3 & \mathbb{Z}_3 \end{pmatrix}$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609310030937195, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/50339/summations-in-tan2
|
## Summations in $\tan^2$
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hey all,
I was just wondering if anyone had come across the following identities, valid for $m\in\mathbb{N}$. I've used Abramowitz and Stegun, Maple, Mathematica etc but can't find them anywhere. I can prove these, though they happen 'accidentally' from a method which I am already looking at. Anyway the identities are
$$\sum_{k=1}^m \left(\tan\left(\frac{\pi(2k-1)}{4m}\right)\right)^2=m(2m-1) \hspace{4mm} \textrm{and} \hspace{4mm} \sum_{k=1}^m \left(\tan\left(\frac{\pi k}{2m+1}\right)\right)^2=m(2m+1)$$
and
$$\sum_{k=1}^m \left(\tan\left(\frac{\pi(2k-1)}{4m}\right)\right)^4=\frac{1}{3}m(2m-1)(4m^2+2m-3) \hspace{4mm} etc$$
There are other identities for all even powers but I haven't worked them out yet as I thought that there might not be any point if there are known results for these summations. It would be cool if there were lists of such identities, or even a general formula, as this would provide me with many useful references indeed!
Many thanks on Christmas!
-
How do you derive these identities? – Anixx Dec 25 2010 at 15:19
seems you confused m and n in the last identity – Anixx Dec 25 2010 at 15:20
Not much of a simplification, but your first sum can be "cut in half": for even $m$, your sum is the same as $$\sum_{k=1}^{\lfloor m/2\rfloor}\left(\tan^2\left(\frac{\pi}{4m}(2k-1)\right)+\cot^2\left(\frac{\pi}{4m}(2k-1)\right)\right)$$ ; for odd $m$, add 1. – J. M. Dec 25 2010 at 15:22
(and something similar can be done for the other sums) – J. M. Dec 25 2010 at 15:24
5
You'll find this and a list of further identities here emis.de/journals/HOA/IJMMS/30/3185.pdf – dke Dec 25 2010 at 15:48
show 9 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537570476531982, "perplexity_flag": "middle"}
|
http://terrytao.wordpress.com/tag/exponential-sums/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tag Archive
You are currently browsing the tag archive for the ‘exponential sums’ tag.
## Heuristic limitations of the circle method
20 May, 2012 in expository, math.NT | Tags: circle method, exponential sums, Goldbach conjecture, major arcs, minor arcs, parity problem, prime number theorem, prime numbers, twin prime conjecture | by Terence Tao | 16 comments
One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations ${f_3(x)}$ of an integer ${x}$ as the sum of three primes ${x = p_1+p_2+p_3}$, as a Fourier-analytic integral over the unit circle ${{\bf R}/{\bf Z}}$ involving exponential sums such as
$\displaystyle S(x,\alpha) := \sum_{p \leq x} e( \alpha p) \ \ \ \ \ (1)$
where the sum here ranges over all primes up to ${x}$, and ${e(x) := e^{2\pi i x}}$. For instance, the expression ${f(x)}$ mentioned earlier can be written as
$\displaystyle f_3(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha. \ \ \ \ \ (2)$
The strategy is then to obtain sufficiently accurate bounds on exponential sums such as ${S(x,\alpha)}$ in order to obtain non-trivial bounds on quantities such as ${f_3(x)}$. For instance, if one can show that ${f_3(x)>0}$ for all odd integers ${x}$ greater than some given threshold ${x_0}$, this implies that all odd integers greater than ${x_0}$ are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff ${p \leq x}$ with a smoother cutoff ${\chi(p/x)}$ for a suitable chocie of cutoff function ${\chi}$, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function ${\Lambda(n)}$. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers from other sets than the set of primes, but we will restrict attention to additive combinations of primes for sake of discussion, as it is historically one of the most studied sets in additive number theory.
In many cases, it turns out that one can get fairly precise evaluations on sums such as ${S(x,\alpha)}$ in the major arc case, when ${\alpha}$ is close to a rational number ${a/q}$ with small denominator ${q}$, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that
$\displaystyle S(x,0) \approx \frac{x}{\log x}$
and the prime number theorem in residue classes modulo ${q}$ suggests more generally that
$\displaystyle S(x,\frac{a}{q}) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}$
when ${q}$ is small and ${a}$ is close to ${q}$, basically thanks to the elementary calculation that the phase ${e(an/q)}$ has an average value of ${\mu(q)/\phi(q)}$ when ${n}$ is uniformly distributed amongst the residue classes modulo ${q}$ that are coprime to ${q}$. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
In the minor arc case when ${\alpha}$ is not close to a rational ${a/q}$ with small denominator, one no longer expects to have such precise control on the value of ${S(x,\alpha)}$, due to the “pseudorandom” fluctuations of the quantity ${e(\alpha p)}$. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of ${k}$ “pseudorandom” phases should fluctuate randomly and be of typical magnitude ${\sim \sqrt{k}}$, one expects upper bounds of the shape
$\displaystyle |S(x,\alpha)| \lessapprox \sqrt{\frac{x}{\log x}} \ \ \ \ \ (3)$
for “typical” minor arc ${\alpha}$. Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that
$\displaystyle \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \sim \frac{x}{\log x} \ \ \ \ \ (4)$
which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as ${x^{4/5+o(1)}}$ are far more typical.
Because one only expects to have upper bounds on ${|S(x,\alpha)|}$, rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as ${e(-x\alpha)}$ for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of ${x}$, so that averaging in ${x}$ is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude ${|S(x,\alpha)|^3}$ oscillates in sympathetic resonance with the phase ${e(-x\alpha)}$, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region ${\Omega_{minor}}$:
$\displaystyle |\int_{\Omega_{minor}} |S(x,\alpha)|^3 e(-x\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^3\ d\alpha.$
Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as ${f(x)}$, at least when ${x}$ is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer ${x}$ is the sum of three primes; my own result that all odd numbers greater than ${1}$ can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis ). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions for the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or numerical verification to work unconditionally for the three-primes problem at medium-sized ranges of ${x}$, such as ${x \sim 10^{50}}$. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)
However, I (and many other analytic number theorists) are considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number ${x}$ as the sum ${x = p_1 + p_2}$ of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations ${2 = p_1 - p_2}$ of ${2}$ as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large ${x}$, one has to find a non-trivial lower bound for the quantity
$\displaystyle f_2(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^2 e(-x\alpha)\ d\alpha \ \ \ \ \ (5)$
for sufficiently large ${x}$, as this quantity ${f_2(x)}$ is also the number of ways to represent ${x}$ as the sum ${x=p_1+p_2}$ of two primes ${p_1,p_2}$. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity
$\displaystyle \tilde f_2(x) = \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha \ \ \ \ \ (6)$
that goes to infinity as ${x \rightarrow \infty}$, as this quantity ${\tilde f_2(x)}$ is also the number of ways to represent ${2}$ as the difference ${2 = p_1-p_2}$ of two primes less than or equal to ${x}$.
In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums ${S(x,\alpha)}$. Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity ${f_2(x)}$ or ${\tilde f_2(x)}$, expresses it in terms of ${S(x,\alpha)}$ using (5) or (6), and then uses (5) or (6) again to convert these integrals back into a the combinatorial expression of counting solutions to ${x=p_1+p_2}$ or ${2=p_1-p_2}$, and then uses the hypothetical solution to the given problem to obtain the required lower bounds on ${f_2(x)}$ or ${\tilde f_2(x)}$.
Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on ${f_2(x)}$ or ${\tilde f_2(x)}$ (or similar quantities) purely from the upper and lower bounds on ${S(x,\alpha)}$ or similar quantities (and of various ${L^p}$ type norms on such quantities, such as the ${L^2}$ bound (4)). Of course, we do not yet know what the strongest possible upper and lower bounds in ${S(x,\alpha)}$ are yet (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:
• (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
• (ii) Upper and lower bounds on the magnitude of ${S(x,\alpha)}$ are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of ${O(1/\log x)}$ or better); but
• (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.
I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.
In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which lets one make very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artifically insert the circle method into any viable approach on these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.
Read the rest of this entry »
## Every odd integer larger than 1 is the sum of at most five primes
1 February, 2012 in math.NT, paper | Tags: circle method, exponential sums, Goldbach conjecture | by Terence Tao | 130 comments
I’ve just uploaded to the arXiv my paper “Every odd number greater than 1 is the sum of at most five primes“, submitted to Mathematics of Computation. The main result of the paper is as stated in the title, and is in the spirit of (though significantly weaker than) the even Goldbach conjecture (every even natural number is the sum of at most two primes) and odd Goldbach conjecture (every odd natural number greater than 1 is the sum of at most three primes). It also improves on a result of Ramaré that every even natural number is the sum of at most six primes. This result had previously also been established by Kaniecki under the additional assumption of the Riemann hypothesis, so one can view the main result here as an unconditional version of Kaniecki’s result.
The method used is the Hardy-Littlewood circle method, which was for instance also used to prove Vinogradov’s theorem that every sufficiently large odd number is the sum of three primes. Let’s quickly recall how this argument works. It is convenient to use a proxy for the primes, such as the von Mangoldt function ${\Lambda}$, which is mostly supported on the primes. To represent a large number ${x}$ as the sum of three primes, it suffices to obtain a good lower bound for the sum
$\displaystyle \sum_{n_1,n_2,n_3: n_1+n_2+n_3=x} \Lambda(n_1) \Lambda(n_2) \Lambda(n_3).$
By Fourier analysis, one can rewrite this sum as an integral
$\displaystyle \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha$
where
$\displaystyle S(x,\alpha) := \sum_{n \leq x} \Lambda(n) e(n\alpha)$
and ${e(\theta) :=e^{2\pi i \theta}}$. To control this integral, one then needs good bounds on ${S(x,\alpha)}$ for various values of ${\alpha}$. To do this, one first approximates ${\alpha}$ by a rational ${a/q}$ with controlled denominator (using a tool such as the Dirichlet approximation theorem) ${q}$. The analysis then broadly bifurcates into the major arc case when ${q}$ is small, and the minor arc case when ${q}$ is large. In the major arc case, the problem more or less boils down to understanding sums such as
$\displaystyle \sum_{n\leq x} \Lambda(n) e(an/q),$
which in turn is almost equivalent to understanding the prime number theorem in arithmetic progressions modulo ${q}$. In the minor arc case, the prime number theorem is not strong enough to give good bounds (unless one is using some extremely strong hypotheses, such as the generalised Riemann hypothesis), so instead one uses a rather different method, using truncated versions of divisor sum identities such as ${\Lambda(n) =\sum_{d|n} \mu(d) \log\frac{n}{d}}$ to split ${S(x,\alpha)}$ into a collection of linear and bilinear sums that are more tractable to bound, typical examples of which (after using a particularly simple truncated divisor sum identity known as Vaughan’s identity) include the “Type I sum”
$\displaystyle \sum_{d \leq U} \mu(d) \sum_{n \leq x/d} \log(n) e(\alpha dn)$
and the “Type II sum”
$\displaystyle \sum_{d > U} \sum_{w > V} \mu(d) (\sum_{b|w: b > V} \Lambda(b)) e(\alpha dw) 1_{dw \leq x}.$
After using tools such as the triangle inequality or Cauchy-Schwarz inequality to eliminate arithmetic functions such as ${\mu(d)}$ or ${\sum_{b|w: b>V}\Lambda(b)}$, one ends up controlling plain exponential sums such as ${\sum_{V < w < x/d} e(\alpha dw)}$, which can be efficiently controlled in the minor arc case.
This argument works well when ${x}$ is extremely large, but starts running into problems for moderate sized ${x}$, e.g. ${x \sim 10^{30}}$. The first issue is that of logarithmic losses in the minor arc estimates. A typical minor arc estimate takes the shape
$\displaystyle |S(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^3 x \ \ \ \ \ (1)$
when ${\alpha}$ is close to ${a/q}$ for some ${1\leq q\leq x}$. This only improves upon the trivial estimate ${|S(x,\alpha)| \ll x}$ from the prime number theorem when ${\log^6 x \ll q \ll x/\log^6 x}$. As a consequence, it becomes necessary to obtain an accurate prime number theorem in arithmetic progressions with modulus as large as ${\log^6 x}$. However, with current technology, the error term in such theorems are quite poor (terms such as ${O(\exp(-c\sqrt{\log x}) x)}$ for some small ${c>0}$ are typical, and there is also a notorious “Siegel zero” problem), and as a consequence, the method is generally only applicable for very large ${x}$. For instance, the best explicit result of Vinogradov type known currently is due to Liu and Wang, who established that all odd numbers larger than ${10^{1340}}$ are the sum of three odd primes. (However, on the assumption of the GRH, the full odd Goldbach conjecture is known to be true; this is a result of Deshouillers, Effinger, te Riele, and Zinoviev.)
In this paper, we make a number of refinements to the general scheme, each one of which is individually rather modest and not all that novel, but which when added together turn out to be enough to resolve the five primes problem (though many more ideas would still be needed to tackle the three primes problem, and as is well known the circle method is very unlikely to be the route to make progress on the two primes problem). The first refinement, which is only available in the five primes case, is to take advantage of the numerical verification of the even Goldbach conjecture up to some large ${N_0}$ (we take ${N_0=4\times 10^{14}}$, using a verification of Richstein, although there are now much larger values of ${N_0}$ – as high as ${2.6 \times 10^{18}}$ – for which the conjecture has been verified). As such, instead of trying to represent an odd number ${x}$ as the sum of five primes, we can represent it as the sum of three odd primes and a natural number between ${2}$ and ${N_0}$. This effectively brings us back to the three primes problem, but with the significant additional boost that one can essentially restrict the frequency variable ${\alpha}$ to be of size ${O(1/N_0)}$. In practice, this eliminates all of the major arcs except for the principal arc around ${0}$. This is a significant simplification, in particular avoiding the need to deal with the prime number theorem in arithmetic progressions (and all the attendant theory of L-functions, Siegel zeroes, etc.).
In a similar spirit, by taking advantage of the numerical verification of the Riemann hypothesis up to some height ${T_0}$, and using the explicit formula relating the von Mangoldt function with the zeroes of the zeta function, one can safely deal with the principal major arc ${\{ \alpha = O( T_0 / x ) \}}$. For our specific application, we use the value ${T_0= 3.29 \times 10^9}$, arising from the verification of the Riemann hypothesis of the first ${10^{10}}$ zeroes by van de Lune (unpublished) and Wedeniswki. (Such verifications have since been extended further, the latest being that the first ${10^{13}}$ zeroes lie on the line.)
To make the contribution of the major arc as efficient as possible, we borrow an idea from a paper of Bourgain, and restrict one of the three primes in the three-primes problem to a somewhat shorter range than the other two (of size ${O(x/K)}$ instead of ${O(x)}$, where we take ${K}$ to be something like ${10^3}$), as this largely eliminates the “Archimedean” losses coming from trying to use Fourier methods to control convolutions on ${{\bf R}}$. In our paper, we set the scale parameter ${K}$ to be ${10^3}$ (basically, anything that is much larger than ${1}$ but much less than ${T_0}$ will work), but we found that an additional gain (which we ended up not using) could be obtained by averaging ${K}$ over a range of scales, say between ${10^3}$ and ${10^6}$. This sort of averaging could be a useful trick in future work on Goldbach-type problems.
It remains to treat the contribution of the “minor arc” ${T_0/x \ll |\alpha| \ll 1/N_0}$. To do this, one needs good ${L^2}$ and ${L^\infty}$ type estimates on the exponential sum ${S(x,\alpha)}$. Plancherel’s theorem gives an ${L^2}$ estimate which loses a logarithmic factor, but it turns out that on this particular minor arc one can use tools from the theory of the large sieve (such as Montgomery’s uncertainty principle) to eliminate this logarithmic loss almost completely; it turns out that the most efficient way to do this is use an effective upper bound of Siebert on the number of prime pairs ${(p,p+h)}$ less than ${x}$ to obtain an ${L^2}$ bound that only loses a factor of ${8}$ (or of ${7}$, once one cuts out the major arc).
For ${L^\infty}$ estimates, it turns out that existing effective versions of (1) (in particular, the bound given by Chen and Wang) are insufficient, due to the three logarithmic factors of ${\log x}$ in the bound. By using a smoothed out version ${S_\eta(x,\alpha) :=\sum_{n}\Lambda(n) e(n\alpha) \eta(n/x)}$ of the sum ${S(\alpha,x)}$, for some suitable cutoff function ${\eta}$, one can save one factor of a logarithm, obtaining a bound of the form
$\displaystyle |S_\eta(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^2 x$
with effective constants. One can improve the constants further by restricting all summations to odd integers (which barely affects ${S_\eta(x,\alpha)}$, since ${\Lambda}$ was mostly supported on odd numbers anyway), which in practice reduces the effective constants by a factor of two or so. One can also make further improvements in the constants by using the very sharp large sieve inequality to control the “Type II” sums that arise from Vaughan’s identity, and by using integration by parts to improve the bounds on the “Type I” sums. A final gain can then be extracted by optimising the cutoff parameters ${U, V}$ appearing in Vaughan’s identity to minimise the contribution of the Type II sums (which, in practice, are the dominant term). Combining all these improvements, one ends up with bounds of the shape
$\displaystyle |S_\eta(x,\alpha)| \ll \frac{x}{q} \log^2 x + \frac{x}{\sqrt{q}} \log^2 q$
when ${q}$ is small (say ${1 < q < x^{1/3}}$) and
$\displaystyle |S_\eta(x,\alpha)| \ll \frac{x}{(x/q)^2} \log^2 x + \frac{x}{\sqrt{x/q}} \log^2(x/q)$
when ${q}$ is large (say ${x^{2/3} < q < x}$). (See the paper for more explicit versions of these estimates.) The point here is that the ${\log x}$ factors have been partially replaced by smaller logarithmic factors such as ${\log q}$ or ${\log x/q}$. Putting together all of these improvements, one can finally obtain a satisfactory bound on the minor arc. (There are still some terms with a ${\log x}$ factor in them, but we use the effective Vinogradov theorem of Liu and Wang to upper bound ${\log x}$ by ${3100}$, which ends up making the remaining terms involving ${\log x}$ manageable.)
### Recent Comments
Sandeep Murthy on An elementary non-commutative…
Luqing Ye on 245A, Notes 2: The Lebesgue…
Frank on Soft analysis, hard analysis,…
andrescaicedo on Soft analysis, hard analysis,…
Richard Palais on Pythagoras’ theorem
The Coffee Stains in… on Does one have to be a genius t…
Benoît Régent-Kloeck… on (Ingrid Daubechies) Planning f…
Luqing Ye on 245B, Notes 7: Well-ordered se…
Luqing Ye on 245B, Notes 7: Well-ordered se…
Arjun Jain on 245B, Notes 7: Well-ordered se…
%anchor_text% on Books
Luqing Ye on 245B, Notes 7: Well-ordered se…
Arjun Jain on 245B, Notes 7: Well-ordered se…
Luqing Ye on 245A, Notes 2: The Lebesgue…
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 186, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942426860332489, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/46844?sort=oldest
|
## Does the Hodge star operator respect complex structure?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The Hodge star operator $\ast$ acts on the differential forms of a differential manifold sending $\Omega^{k}$ to $\Omega^{N-k}$. If the manifold is complex, then for $p+q=k$, does $\ast$ map $\Omega^{p,q}$ into some $\Omega^{a,b}$, where $a+b=N-k$.
-
The last question is ambiguous, but I think the 2nd sentence is also a question, so you should put a question mark after it. You can find the answer in many places, e.g. Griffiths-Harris page 66. – Donu Arapura Nov 21 2010 at 19:14
So yes in the sense you mean. – Donu Arapura Nov 21 2010 at 19:18
The last question is just a rewording of the previous line. Since it is non-essential and apparently confusing I'll delete it. – Abtan Massini Nov 21 2010 at 19:24
1
Since $∗$ is a real operator, to be more precise, you should say that after complexifying the space of forms, and extending $∗$ to be complex linear, then indeed $∗$ maps $\Omega^{p,q}$ to $\Omega^{N-q,N-p}$. Frequently, it is preferable to use $\bar *$, which is the composition of $∗$ with complex conjugation, to map $\Omega{p,q}$ to $\Omega{N−p,N−q}$. This way, we have $\alpha \wedge \bar * \beta = g(\alpha, \beta) vol$, where $g$ is the Hermitian metric. – Spiro Karigiannis Nov 21 2010 at 19:42
1
Our comments seemed to have crossed. Yes, p 82 not 66, and yes. (Be warned that some authors use a different convention, where $N-p,N-q$ get switched. It's a question of whether $*$ is linear or antilinear.) – Donu Arapura Nov 21 2010 at 19:42
show 3 more comments
## 1 Answer
As Abtan requested, I'm converting my comments to an answer:
Suppose that $X$ is an $N$ (complex) dimensional complex manifold endowed with a Hermitean metric, or equivalently a Riemannian metric g satisfying $g(JX,JY)=g(X,Y)$, where $J$ is the complex structure. Let `$*$` denote the `$\mathbb{C}$`-antilinear extension of the Hodge star operator to complex valued forms (some people -- including me -- prefer to write this as `$\overline{*}$` as Spiro points out in the comments). Then as one finds on page 82 of Griffiths and Harris, `$$*\Omega^{pq}\subset \Omega^{N-q,N-p}$$` where I'm following the notation in the question and writing $\Omega^{pq}$ for the space of $C^\infty$ forms of type $(p,q)$.
-
@Donu: there was a display problem with a math expression. I hope you don't mind that I fixed it. – Willie Wong Nov 21 2010 at 20:03
Willie, I noticed that too. We might have been editing at the same time (?). – Donu Arapura Nov 21 2010 at 20:05
Oh and thanks. Looks better. – Donu Arapura Nov 21 2010 at 20:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336385726928711, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/144605-finding-area.html
|
# Thread:
1. ## Finding the area..
Ok I'm studying for my final looking over my old tests and I can't remember how I got this one right. I tried going from 0, to pi/2, to pi/4, to 3pi/4, to pi using sinx to plug in each but I'm not getting the right answer. Help would be appreciated.
Find the area under the graph of f(x)=sinx from x=0 to x=pi, using the four rectangles with the sample points to be the left endpoints. The answer = 1.896
2. Originally Posted by tbenne3
Ok I'm studying for my final looking over my old tests and I can't remember how I got this one right. I tried going from 0, to pi/2, to pi/4, to 3pi/4, to pi using sinx to plug in each but I'm not getting the right answer. Help would be appreciated.
Find the area under the graph of f(x)=sinx from x=0 to x=pi, using the four rectangles with the sample points to be the left endpoints. The answer = 1.896
$\frac{\pi - 0}{4} \left[\sin(0) + \sin\left(\frac{\pi}{4}\right) + \sin\left(\frac{\pi}{2}\right) + \sin\left(\frac{3\pi}{4}\right) \right]$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432893991470337, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/3247/using-additional-authenticated-data-as-a-secondary-key?answertab=votes
|
# Using “Additional Authenticated Data” as a secondary key
In implementing a cipher in GCM or CCM mode, you are provided the option to add "Additional Authenticated Data" (AAD). This AAD is required for decrypting the cipher text, and seems to be used when data is crucially specific to a label.
EDIT: to clarify, an example of when it might be used that I have seen is if you want to encrypt the number of shares of a stock that has been purchased, but need to make sure that that number corresponds to the correct stock you would use an AAD containing the name of the stock.
What I am wondering is, could you use this AAD as a secondary key and simply keep it secret?
Thanks
-
## 1 Answer
You certainly could keep the AAD secret; however, for GCM, it wouldn't provide any additional security beyond what the secret key already provides; for CCM, it does still provide some limited authentication protection (but probably not enough).
The bottom line: if you can't trust that your key is secret, well, keeping the AAD secret (or have a secret portion) won't help you.
Details:
• If the attacker has a guess for the secret key, he can still verify it, by using that key to decrypt the data, and seeing if the decryption makes sense. He can do this because for both GCM and CCM, the AAD affects only the tag, it does not modify how the ciphertext is converted into plaintext.
• If the attacker has the secret key, he can decrypt ciphertext (just as above).
• If the attacker has the GCM secret key, and has seen one valid encrypted message, he can encrypt any plaintext of his own choosing. This is because the GCM tag can be expressed as $F( Key, Nonce, Plaintext ) \oplus (G( Key, AAD ) \otimes H( Key, Plaintext))$ (where $\otimes$ is multiplication in $GF(2^{128})$). If he has the key, he can compute $F$ and $H$; if he has seen a valid packet, he can solve the above for the value of $G( Key, AAD )$, even if he doesn't know the value of $AAD$.
• If the attacker has the CCM secret key, and has seen one valid encrypted message with a specific nonce, he can encrypt any plaintext of the same length of his choosing with the same nonce. That's because the CCM tag can be expressed as $F_{K, Plaintext}( G(Key, Nonce, AAD, PlaintextLen) )$, where $F_{K, Plaintext}$ is an invertible function; hence, given a valid message, the attacker can recover $G(Key, Nonce, AAD, PlaintextLen)$, and use that to generate a tag for another message with the same length.
The above assumes you transmit the entire 16-byte tag in both cases; if you truncate the tag to $n$ bits, this actually adds some additional protection to CCM (as inverting the $F_{K, Plaintext}$ function does require all 128 bits; if the attacker isn't given all 128 bits, he'll end up having to search for them).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9464682340621948, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/35177/what-happens-when-a-photon-hits-a-mirror/35192
|
# What happens when a photon hits a mirror?
When a photon of light hits a mirror does the exact same photon of light bounce back or is it absorbed then one with the same properties emitted? If the same one is bounced back does it's velocity take all values on $[-c,c]$ or does it just jump from $c$ to $-c$ when it hits the mirror?
Or, is the phenomenon of a mirror better explained using a wave analogy? If so, what is this explanation?
-
1
– Ϛѓăʑɏ βµԂԃϔ Aug 30 '12 at 1:27
## 4 Answers
If you think of this in terms of quantum field theory, which is really required to give meaning to the photon, then all you are able to say is that the photon can take any of all possible paths from where it is emitted to where it is absorbed. These paths will contain paths where the photon momentarily splits into an electron positron pair, where the interactions with the electrons in the mirror involve all sorts of virtual particles, where the photon travels in directions which are far from the classical trajectory etc. The total amplitude is given by the sum of all these possibilities and they can all occur. In the classical limit this sum over all paths gets dominated by the contributions closest to the classical straight line path of the photon with velocity $c$, so classically we see light travel in a straight line at velocity $c$, and obey the laws of optics. However if you really wanted to follow the path of an individual photon you would see that it could do any of a spectacular number of things (and unfortunately our attempts to observe the photon would interfere with its path). If you want to understand this better, I highly recommend Feynman's description of it all in his lectures here or in his book taken from the lectures: "QED, the strange theory of light and matter".
-
How do mirrors work? is closely related to your question, if not a precise duplicate.
We normally think of photon scattering as absorbing the original photon and emitting a new one with a different momentum, so in your example of the mirror the incoming photon interacts with the free electrons in the metal and is absorbed. The oscillations of the free electrons then emit a new photon headed out from the mirror. Unlike e.g. electrons, photon number isn't conserved and photons can be created and destroyed whenever they interact.
-
But how does the emitter know the direction in which to emit the photons of an incoming beam, so that the reflection angle is correct? – Arnold Neumaier Aug 30 '12 at 10:11
1
The reflected photon interacts with the total fields it sees particularly at optical frequencies which are so low in energy. When absorption and re emission happens the phase of the oritinal photon is lost – anna v Aug 30 '12 at 13:10
Ray optics describe the best the path of a photon. As a particle, when hitting matter solid state it behaves like a billiard ball scattering elastically with the collective electric field of the medium it hits, when it is reflected.
It will be absorbed if its energy, given by $E=h\nu$, fits some energy level of the atoms, molecules, system it hits and then a re-emitted photon can change both direction and energy with respect to the originating one, and the originating one loses energy, i.e. changes frequency. If it is reflected, it of course goes with velocity $c$ (as all photons) whatever its direction (elastic scattering means only change of direction and not energy).
-
Your first paragraph seems to answer in the affirmative that the photon's velocity can take on all values on [-c,c] as it is scattering elastically. This is misleading at best. A photon is not a classical object with "primitive this-ness", it is a vibration in a field. It makes no sense to talk about a photon as though it is slowing down and changing direction. The group velocity is what changes. – user1247 Aug 30 '12 at 8:47
@user1247 !!! elastic scatter means change only in direction not value of momentum, in classical physics also. Elastic scattering crossections exist for all scatterings of elementary particles including photons. When there is a slow down, of course in particles with mass, it is called inelastic. – anna v Aug 30 '12 at 13:06
you gave the billiard ball as an example. Lets consider it scattering in one dimension. Its velocity changes continuously because the acceleration is not infinite. Infinite acceleration is not only non physical, it is also wrong and misleading in the case of a photon. The group velocity can do such things, but a single-photon description breaks down and does not correspond to any physical reality. – user1247 Aug 31 '12 at 16:20
@user1247 what do you mean "scattering in one dimension. The photon is not one dimensional, it is four dimensional. – anna v Aug 31 '12 at 17:48
You gave the billiard ball as an example. When a billiard ball collides with another billiard ball, and scatters elastically, its velocity changes during the collision. This is a simple fact. The number of dimensions doesn't matter, but of course it is simplest to consider a 1d collision. – user1247 Aug 31 '12 at 20:38
show 4 more comments
I think it can probably be misleading to think of the matter as "knowing" which way to emit the reflected photon. In order to fully describe this process it seems necessary to combine the mechanism of the interaction of light with matter, which allows for the possibility of absorption and radiation by electrons within the lattice of a material, with the Feynman path integral formulation as mentioned already in order to sum the amplitudes for an event to occur. The observed fact of equal angles of incidence and reflection is due to it being the route with the greatest coherence of phases. Reflection at a different points on the mirror will tend to cancel out rapidly as you depart from the point of equal angles. (this observed path is also the shortest path by Fermat). All that is then left to do is to explain how it is exactly that photons induce movement in charges, in a way which will clearly depend on the detailed structure of the material.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344555139541626, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/2828/how-hard-is-to-find-the-operators-of-an-addition-knowing-the-sum-of-them?answertab=votes
|
How hard is to find the operators of an addition knowing the sum of them?
I want to learn whether or no there is a cryptographic primitive,scheme assumption that is based on the following hard problem if it is hard . By hard we mean that we have a polynomial adversary: The attacker obtains a number $\sigma$ . In order to reverse engineer that or to go one step further for the cryptanalysis she needs to break that number in a set of numbers $m_{1}, m_{2}, m_{3}, \ldots, m_{n} : \sum_{i=1}^{n}m_{i}=\sigma$ . How hard is that problem?
-
The question makes little sense. An infinite amount of such $m$'s exists, and it's trivial to find any one that works. Did you have an additional condition on the $m$'s? For instance if $\sigma = 1059458$ you can let $m_1 = 1059457$ and $m_2 = 1$, problem solved. Multiplication would be more interesting, but then it's just the factorization problem. – Thomas Jun 8 '12 at 10:15
yes i have something else: let $n$ be chosen randomly by a PRG which is indistinguishable from a real random generator – curious Jun 8 '12 at 10:25
$n$? The number of integers in which to partition $\sigma$? How large can $n$ be? The problem seems trivial, I have trouble following your train of thought - but you may have omitted an important piece of information (so before I post an answer I just want to make sure I'm answering the right question). – Thomas Jun 8 '12 at 10:27
yes it is the number of integers in which $\sigma$ is partitioned. So the hardness of this lies to how big $n$ can be? The bigger the harder. Is there a formulation in mathematics for this? Could this be a subset problem: given a set of elements $X$ is there a subset $N$ whose sum equals 0 ? In my case i do not even know the set $X$ and the sum equals something else. So it's even harder than subset sum? Or i am wrong? – curious Jun 8 '12 at 10:33
The problem is that the addition operator is far too "malleable" (for lack of a better word) for this purpose. It is effortless to partition any integer, no matter how big, into any $n$ "parts" which sum up to the initial integer, with no restrictions. As the problem is formulated now, you can make all $m$'s except the first one zero, and let the first one be $\sigma$ (or if zero is disallowed, use 1 instead and adjust the first one accordingly). – Thomas Jun 8 '12 at 10:38
show 3 more comments
2 Answers
From what could gather from the chat yesterday, you are looking into a scheme that implements some kind of locality preserving hashing. Let me first explain how I understood the scheme you are describing:
Given a feature extractor $E : \{0,1\}^* \rightarrow {\{0,1\}^k}^n$ (i.e. given a message $m$ it extracts $n$ features of lenth $k$) and a cryptographic hash function $\mathcal{H} : \{0,1\}^* \rightarrow \{0,1\}^{8l}$ the scheme proceeds as follows:
1. On input message $m$, extract features $f_1,\ldots,f_n \gets E(m)$.
2. For each feature $f_i$ compute the hash $h_{i,1}||\ldots||h_{i,l} \gets \mathcal{H}(f_i)$.
3. For each $j \in \{1,\ldots,l\}$ compute $T_j := \sum\limits_{i=1}^n f_{i,j}$. (For some reson, from here on, we view bytes as signed.)
4. Compute $F_j = \left\{\begin{array}{ll} 0&\text{if }T_j < 0\\1&\text{otherwise}\end{array}\right.$.
5. Output $F=F_1||\ldots||F_l$.
Now your main question was, how hard is it, given only $F$ (or $T$) to reconstruct the original hash values. And, does this have anything to do with the subset sum problem.
The answer is, it has absolutely nothing to do with the subset sum problem but it is statistically infeasible to reconstruct the original hash values.
In step 4 bitstrings of length $8$ are compressed to single bits. Now this step in not one way (in a cryptographic sense) because it is trivial to find a preimage (if the bit is 1, choose a random positive number). However, if you want the original input, as there are $2^7=128$ possible preimages, you would have to guess, which one the input was. Considering that you've got $l$ such blocks (and assuming that the $T_i$ are uniformly distributed) you have a chance of about $2^{-7l}$ to guess all the $T_i$s.
Now suppose you had the $T_i$. This is where your question about the subset sum problem comes in. The thing is, if you only give the algorithm the $T_i$ and not the set of numbers, there are two possibilities. Either, you have not fixed the set, then the problem is trivial (choose $n-1$ random numbers and compute the last number by subtracting all those from $T_i$), or you do fix the set, but do not tell the algorithm what it is. In this case, the problem is basically "Guess the set of size $n$ I'm thinking about.", which is statistically infeasible, if your cannot trivially be guessed.
So in summary, given $F$ or $T$ it is infeasible to reconstruct the original hash values (and therefore also the original message) given that the entropy of the messages is not ridiculously small. However it might very well be possible to find a message $m'$ for which the resulting hash $F'$ will collide with $F$.
-
I guess you wanted to say that there are $2^8$ possible preimages for a bruteforce from step 5 to step 4 – curious Jun 9 '12 at 9:53
1
$2^7$ actually, as there are $2^8$ input values divided between two outputs – Maeher Jun 9 '12 at 10:18
Well, the problem "given a finite set $A$ of integers, is there a subset that sums to a target value $B$" is known as the Subset Sum problem; it is known to be hard. Specifically, the decisional problem (is there such a sum) is NP-complete, and the computational problem (find the subset) is NP-hard.
That means that if you could solve large instances of this problem quickly, you could use that to solve a lot of interesting problems quickly, including just about any problem in crypto (finding AES keys given plaintext/ciphertext, factoring numbers, etc).
Because of this, assuming this is a hard problem would appear to be a fairly safe assumption.
On the other hand, it's not at all clear problem your problem statement that this is the problem you're relying on. You are you "add each index of them and construct a table"; what do you mean $n_1[0]$? If this is bit 0 of message digest $n_0$ and you're adding all the bits 0's from the message digests to form bit 0 of $T$, then you're not relying on subset problem at all; you're relying on the related problem that uses bitwise exclusive or -- that problem is known to be easy.
-
2
It should also be noted that being NP-hard only means that some cases of the problem are hard to solve, not that all or even most of them are. In particular, there are plenty of easy instances of the subset sum problem (such as any instance where $A$ contains both $X$ and $B-X$). For crypto, it's not enough to have a problem that is known to be sometimes hard; we also need a way to generate instances of it that are each extremely likely to be among the hard ones. – Ilmari Karonen Jun 9 '12 at 13:31
@poncho yes exactly.I add not exactly bit 0 but whatever it is at position 0 at all the digests.This could be byte, 2bytes, ans so on.Why this is not subset sum?Knowing the sum give me from which number this is has been made – curious Jun 10 '12 at 16:56
1
@curious: this is not formally the subset sum problem, and there is no blatantly obvious way to solve an instance of the subset-sum problem with an oracle that can solve your problem. On the other hand, if you specify the full sum of each bit (and not just the lsbit of the sum), then it turns out (I have a marvelous proof that won't fit in the margin of this comment) that that problem is NP-complete as well (which means that you can solve the subset sum problem with this; it's just that the transform is nonobvious). – poncho Jun 10 '12 at 22:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483036398887634, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/38838/will-the-positive-ions-in-an-aqueous-solution-be-attracted-to-a-charged-body?answertab=oldest
|
# Will the positive ions in an aqueous solution be attracted to a charged body ?
If I had a negatively charged body , say an electret , and i put it in a container of NaCl solution . Will the positive ions of sodium be attracted to it ? and why? If no, why do the positive ions attract to the cathode during electrolysis?
-
– Ϛѓăʑɏ βµԂԃϔ Oct 2 '12 at 14:37
## 2 Answers
Yes, and this is a very important effect in synthetic polyelectrolytes and in biological macromolecules.
The usual model is to treat the added charge as an imobile object with a given charge. The dissolved ions are then treated as mobile charges. The mobile ions with the opposite charge will be attracted to the fixed charge, and repelled from each other. The mobile ions with the same charge as the fixed charge will be repelled from it, and each other. The end result of that is a continuous charge distribution throughout the liquid. The electrostatic potential is the sum of the potential from the fixed ion with the potential due to the sea of mobile ions. You can treat the mobile ions statistical mechanically: they follow a Boltzmann distribution with energies equal to their charge times the electrostatic potential at their location in the liquid. Put those two terms together to get the Poisson-Boltzmann equation, a partial differential equation for the electrostatic potential in the liquid. In general that is difficult to solve, and it's usually solved numerically. For some special problems, it is possible to solve the PB equation analytically. The usual approximation is that the electrostatic energy is much less than the thermal energy $kT$, in which case you can linearize the PB equation to get the Debye-Huckel equation.
The Debye-Huckel equation is actually a bit instructive qualitatively. In the absence of anything else, the electrostatic potential falls off as $V(r) \propto 1/r$. In the presence of the mobile ions, the electrostatic falls off as $V(r) \propto e^{-\kappa r}/r$. $\kappa^{-1}$ is caled the Debye screening length, and is proportional to (among other things) the square root of the ionic strength of the solution.
-
Yes, this is the basis of electroplating.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422135353088379, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/139482/frac1ex-1-gammas-zetas-and-xs-1?answertab=active
|
# $\frac{1}{e^x-1}$, $\Gamma(s)$, $\zeta(s)$, and $x^{s-1}$
Just to give a few examples, we have that
$$\eqalign{ & \int\limits_0^\infty {\frac{{{x^{s - 1}}}}{{{e^x} - 1}}dx} = \Gamma \left( s \right)\zeta \left( s \right) \cr & \int\limits_0^\infty {\frac{{{x^{s - 1}}}}{{{e^x} + 1}}dx} = \left( {1 - {2^{1 - s}}} \right)\Gamma \left( s \right)\zeta \left( s \right) =\eta(s)\Gamma(s) \cr & \int\limits_0^\infty {\frac{{{x^{s - 1}}}}{{{e^x}\left( {{e^x} - 1} \right)}}dx} = \Gamma \left( s \right)\left( {\zeta \left( s \right) - 1} \right) \cr & \int\limits_0^\infty {\frac{{{x^{s-1}}{e^x}}}{{{{\left( {{e^x} - 1} \right)}^2}}}dx} = \Gamma \left( {s } \right)\zeta \left( s-1 \right)\cr & \int\limits_0^\infty {\frac{{{e^x}{x^{s - 1}}}}{{{{\left( {1 + {e^x}} \right)}^2}}}} = \Gamma \left( s \right)\eta \left( s-1 \right) \cr}$$
Is there any theory that enables us to state that any integral of the form
$$\int\limits_0^\infty {F\left( {\frac{1}{{{e^x} - 1}},{e^x},{x^s}} \right)dx}$$
will necessarily be evaluated in terms of $\zeta$ and $\Gamma$?
-
How about the integral representation of Dirichlet L-functions? – sos440 May 1 '12 at 17:44
@sos440 I'm not really into that theory, but if you want to give an answer in terms of that, I guess I can manage. I will only need you to think that it is absolutely new theory for me. – Peter Tamaroff May 1 '12 at 17:46
What about $\int_0^\infty \frac{\sqrt{x}\exp(-x)}{\sqrt[3]{\exp(x)-1}}\mathrm dx$? – J. M. May 1 '12 at 17:48
I'm also unfamiliar to Dirichlet $L$-functions; All my relevant knowledge on this subject just amounts a one-semester-course lecture on analytic number theory. So I cannot explain a possible deep linkage between the integral representation and the corresponding number-theoretic facts. As I remember, these integrals can be generalized to represent the Dirichlet series of (completely) multiplicative arithmetic functions. – sos440 May 1 '12 at 17:57
@J.M. What about it? – Peter Tamaroff May 5 '12 at 2:17
show 3 more comments
## 1 Answer
I can offer the following observation. Consider the following identities \begin{align} \frac{1}{e^{x} - 1} = \sum_{k \geqslant 0} e^{-kx}, \quad \frac{1}{e^{x} + 1} = \sum_{k \geqslant 1} (-1)^{k + 1} e^{-kx} \quad \text{and} \quad \int_{0}^{\infty} x^{s} e^{kx} \ d^{\times} x = \Gamma(s) \, k^{-s}, \end{align} where $d^{\times} x = \frac{dx}{x}$ is the Haar invariant measure on $\mathbb{R}$. For $\mathsf{Re}(s) > 1$, $\mathsf{Re}(a) < 1$ and $a \not \in \mathbb{Z}$, one has \begin{align} \int_{0}^{\infty} \frac{x^{s} e^{a x}}{e^{x} - 1} \ d^{\times} x & = \int_{0}^{\infty} \sum_{k \geqslant 1} x^{s} e^{(a - k) x} \ d^{\times} x \\ & = \Gamma(s) \sum_{k \geqslant 1} (k - a)^{-s} \\ & = \Gamma(s) \zeta(s,1-a), \end{align} where $\zeta(s,1-a)$ denotes Hurwitz zeta function. Similarly, one has \begin{align} \int_{0}^{\infty} \frac{x^{s} e^{a x}}{e^{x} + 1} \ d^{\times} x & = \int_{0}^{\infty} \sum_{k \geqslant 1} (-1)^{k+1} x^{s} e^{(a - k) x} \ d^{\times} x \\ & = \Gamma(s) \sum_{k \geqslant 1} (-1)^{k+1} (k - a)^{-s} \\ & = \Gamma(s) 2^{-s} \left( \zeta(s,\tfrac{1-a}{2}) - \zeta(s,1 - \tfrac{a}{2}) \right). \end{align} The interchange of the summations and the integral sign is allowable since the summations are absolutely and uniformly convergent on their domains. These two formulas generalize the first two of your quoted formulas. The third one follows by splitting the integral into two using partial fractions and the integral formula of the Gamma function.
-
Thank you. This is good. If you can, try considering powers of $(e^x \pm 1)^{-1}$. – Peter Tamaroff May 1 '12 at 17:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7705098986625671, "perplexity_flag": "middle"}
|
http://programarcadegames.com/index.php?chapter=python_as_calculator&lang=en
|
# Program Arcade GamesWith Python And Pygame
< Previous Home Next >
# Chapter 1: Create a Custom Calculator
(Hi! If you don't already have a machine with Python and Pygame installed, then hop back to the “forward” section to download and install them so you can get started.)
## 1.1 Introduction
One of the simplest things that can be done with Python is to use it as a fancy calculator. Wait, a calculator isn't a game. Why are we talking about calculators? Boring....
Hey, to calculate objects dropping, bullets flying, and high scores, we need calculations. Plus, any true geek will consider a calculator as a toy rather than a torture device! Let's start our game education with calculators. Don't worry, we'll start graphics by Chapter 5.
A simple calculator program can be used to ask the user for information and then calculate boring things like mortgage payments, or more exciting things like the trajectory of mud balls as they are flung through the air.
As our first example we will calculate kinetic energy, something we might need to do as part of a game physics engine.
The best thing about doing this as a program is the ability to hide the complexities of an equation. All the user needs to do is supply the information and he or she can get the result in an easy-to-understand format. Any similar custom calculator could run on a smart phone, allowing a person to easily perform the calculation on the go.
## 1.2 Printing
### 1.2.1 Printing Text
How does a program print something to the screen?
```print("Hello World.")
```
This program prints out “Hello World” to the screen. Go ahead and enter it into IDLE prompt and see how it works. Try printing other words and phrases as well. The computer will happily print out just about anything you like, true or not.
What does the “Hello World” program look like in other computer programming languages? Check out Wikipedia. They keep a nice set of “Hello World” programs written in many different computer programming languages:
http://en.wikipedia.org/wiki/Hello_world_program_examples
It is interesting to see how many different computer languages there are. You can get an idea how complex a language is by how easy the “Hello World” program is.
Remember, the command for printing in Python is easy. Just use print. After the print command are a set of parentheses ( ). Inside these parentheses is what should be printed to the screen. Using parentheses to pass information to a function is standard practice in math, and computer languages.
Math students learn to use parentheses evaluating expressions like $sin(\theta)=cos(\frac{\pi}{2}-\theta)$. $sin$ and $cos$ are functions. Data passed to these functions is inside the parenthesis. What is different in our case is that the information being passed is text.
Notice that there are double quotes around the text to be printed. If a print statement has quotes around text, the computer will print it out just as it is written. For example, this program will print 2+3:
```print("2+3")
```
### 1.2.2 Printing Results of Expressions
This next program does not have quotes around $2+3$, and the computer will evaluate it as a mathematical expression. It will print 5 rather than 2+3.
```print(2+3)
```
The code below will generate an error because the computer will try to evaluate “Hello World” as a mathematical expression, and that doesn't work at all:
```print(Hello World)
```
The code above will print out an error SyntaxError: invalid syntax which is computer-speak for not knowing what “Hello” and “World” mean.
Also, please keep in mind that this is a single-quote: ' and this is a double-quote: " If I ask for a double-quote, it is a common mistake to write "" which is really a double double-quote.
### 1.2.3 Printing Multiple Items
A print statement can output multiple things at once, each item separated by a comma. For example this code will print out Your new score is 1040
```print("Your new score is", 1030+10)
```
The next line of code will print out Your new score is 1030+10. The numbers are not added together because they are inside the quotes. Anything inside quotes, the computer treats as text. Anything outside the computer thinks is a mathematical statement or computer code.
```print("Your new score is", "1030+10")
```
Does a comma go inside or outside the quotes?
This next code example doesn't work at all. This is because there is no comma separating the text between the quotes, and the 1030+10. At first, it may appear that there is a comma, but the comma is inside the quotes. The comma that separates the terms to be printed must be outside the quotes. If the programmer wants a comma to be printed, then it must be inside the quotes:
```print("Your new score is," 1030+10)
```
This next example does work, because there is a comma separating the terms. It prints:
Note that only one comma prints out. Commas outside the quotes separate terms, commas inside the quotes are printed. The first comma is printed, the second is used to separate terms.
```print("Your new score is,", 1030+10)
```
## 1.3 Escape Codes
If quotes are used to tell the computer the start and end of the string of text you wish to print, how does a program print out a set of double quotes? For example:
print("I want to print a double quote " for some reason.")
This code doesn't work. The computer looks at the quote in the middle of the string and thinks that is the end of the text. Then it has no idea what to do with the commands for some reason and the quote and the end of the string confuses the computer even further.
It is necessary to tell the computer that we want to treat that middle double quote as text, not as a quote ending the string. This is easy, just prepend a baskslash in front of quotes to tell the computer it is part of a string, not a character that terminates a string. For example:
```print("I want to print a double quote \" for some reason.")
```
This combination of the two characters \" is called an escape code. Almost every language has them. Because the backslash is used as part of an escape code, the backslash itself must be escapted. For example, this code does not work correctly:
```print("The file is stored in C:\new folder")
```
Why? Because \n is an escape code. To print the backslash it is necessary to escape it like so:
```print("The file is stored in C:\\new folder")
```
There are a few other important escape codes to know. Here is a table of the important escape codes:
Escape codeDescription
\'Single Quote
\"Double Quote
\tTab
\rCR: Carriage Return (move to the left)
\nLF: Linefeed (move down)
What is a “Carriage Return” and a “Linefeed”? Try this example:
```print("This\nis\nmy\nsample.")
```
The output from this command is:
```This
is
my
sample.
```
The \n is a linefeed. It moves “cursor” where the computer will print text down one line. The computer stores all text in one big long line. It knows to display the text on different lines because of the placement of \n characters.
To make matters more complex, different operating systems have different standards on what makes a line ending.
Escape codesDescription
\r\nCR+LF: Microsoft Windows
\nLF: UNIX based systems, and newer Macs.
\rCR: Older Mac based systems
Usually your text editor will take care of this for you. Microsoft Notepad doesn't though, and UNIX files opened in notepad look terrible because the line endings don't show up at all, or show up as black boxes.
## 1.4 Comments
Comments are important (even if the computer ignores them)
Sometimes code needs some extra explanation to the person reading it. To do this, we add “comments” to the code. The comments are meant for the human reading the code, and not for the computer.
There are two ways to create a comment. The first is to use the # symbol. The computer will ignore any text in a Python program that occurs after the #. For example:
```# This is a comment, it begins with a # sign
# and the computer will ignore it.
print("This is not a comment, the computer will")
print("run this and print it out.")
```
The # sign between quotes is not treated as a comment. A programmer can disable a line of code by putting a # sign in front of it. It is also possible to put a comment in at the end of a line.
```print("A # sign between quotes is not a comment.")
# print("This is a comment, even if it is computer code.")
print("Hi") # This is an end-of-line comment
```
It is possible to comment out multiple lines of code using three single quotes in a row to delimit the comments.
```print("Hi")
'''
This is
a
multi
line
comment. Nothing
Will run in between these quotes.
print("There")
'''
print("Done")
```
Most professional Python programmers will only use this type of multi-line comment for something called docstrings. Docstrings allow documentation to be written along side the code and later be automatically pulled out into printed documentation, websites, and Integrated Development Environments (IDEs). For general comments, the # tag works best.
Even if you are going to be the only one reading the code that you write, comments can help save time. Adding a comment that says “Handle alien bombs” will allow you to quickly remember what that section of code does without having to read and decipher it.
## 1.5 Assignment Operators
How do we store the score in our game? Or keep track of the health of the enemy? What we need to do this is the assignment operator. (An operator is a symbol like + or -.) This stores a value into a variable to be used later on. The code below will assign 10 to the variable x, and then print the value stored in x.
Look at the example below. Click on the “Step” button to see how the code operates.
```# Create a variable x
# Store the value 10 into it.
x = 10
# This prints the value stored in x.
print(x)
# This prints the letter x, but not the value in x
print("x")
# This prints "x= 10"
print("x=",x)
``` Step Variables: `x=` Output: ```10
x
x= 10
```
Variables go outside the quotes.
Note: The listing above also demonstrates the difference between printing an x inside quotes and an x outside quotes. If an x is inside quotation marks, then the computer prints x. If an x is outside the quotation marks then the computer will print the value of x. Getting confused on the “inside or outside of quotes” question is very common for those learning to program.
An assignment statement (a line of code using the = operator) is different than the algebraic equality your learned about in math. Do not think of them as the same. On the left side of an assignment operator must be exactly one variable. Nothing else may be there.
On the right of the equals sign/assignment operator is an expression. An expression is anything that evaluates to a value. Examine the code below.
```x = x + 1
```
The code above obviously can't be an algebraic equality. But it is valid to the computer because it is an assignment statement. Mathematical equations are different than assignment statements even if they have variables, numbers, and an equals sign.
The code above statement takes the current value of x, adds one to it, and stores the result back into x.
Expanding our example, the statement below will print the number 6.
```x = 5
x = x + 1
print(x)
```
Statements are run sequentially. The computer does not “look ahead.” In the code below, the computer will print out 5 on line 2, and then line 4 will print out a 6. This is because on line 2, the code to add one to x has not been run yet.
```x = 5
print(x) # Prints 5
x = x + 1
print(x) # Prints 6
```
The next statement is valid and will run, but it is pointless. The computer will add one to x, but the result is never stored or printed.
```x + 1
```
The code below will print 5 rather than 6 because the programmer forgot to store the result of x + 1 back into the variable x.
```x = 5
x + 1
print(x)
```
The statement below is not valid because on the left of the equals sign is more than just a variable:
```x + 1 = x
```
Python has other types of assignment operators. They allows a programmer to modify a variable easily. For example:
```x += 1
```
The above statement is equivalent to writing the code below:
```x = x + 1
```
There are also assignment operators for addition, subtraction, multiplication and division.
## 1.6 Variables
Variables should start with a lower case letter. Variables can start with an upper case letter or an underscore, but those are special cases and should not be done on a normal basis. After the first lower case letter, the variable may include uppercase and lowercase letters, along with numbers and underscores. Variables may not include spaces.
Variables are case sensitive. This can be confusing if a programmer is not expecting it. In the code below, the output will be 6 rather than 5 because there are two different variables, x and X.
```x = 6
X = 5
print(x)
```
The official style guide for Python (yes, programmers really wrote a book on style) says that multi-word variable names in Python should be separated by underscores. For example, use hair_style and not hairStyle. Personally, if you are one of my students, I don't care about this rule too much because the next language we introduce, Java, has the exact opposite style rule. I used to try teaching Java style rules while in this class, but then I started getting hate-mail from Python lovers. These people came by my website and were shocked, shocked I tell you, about my poor style.
Joan Rivers has nothing on these people, so I gave up and try to use proper style guides now.
Here are some example variable names that are ok, and not ok to use:
Legal variable names Illegal variable names Legal, but not proper
first_name first name FirstName
distance 9ds firstName
ds9 %correct X
All upper-case variable names like MAX_SPEED are allowed only in circumstances where the variable's value should never change. A variable that isn't variable is called a constant.
## 1.7 Operators
For more complex mathematical operations, common mathematical operators are available. Along with some not-so-common ones:
operator operation example equation example code
+ addition $3 + 2$ a = 3 + 2
- subtraction $3 - 2$ a = 3 - 2
* multiplication $3 \cdot 2$ a = 3 * 2
/ division $\frac{10}{2}$ a = 10 / 2
// floor division N/A a = 10 // 3
** power $2^3$ a = 2 ** 3
% modulus N/A a = 8 % 3
“Floor division” will always round the answer down to the nearest integer. For example, 11//2 will be 5, not 5.5, and 99//100 will equal 0.
Multiplication by juxtaposition does not work in Python. The following two lines of code will not work:
```# This does not work
x = 5y
x = 5(3/2)
```
It is necessary to use the multiplication operator to get these lines of code to work:
```# This does work
x = 5 * y
x = 5 * (3 / 2)
```
### 1.7.1 Operator Spacing
There can be any number of spaces before and after an operator, and the computer will understand it just fine. For example each of these three lines are equivilant:
```x=5*(3/2)
x = 5 * ( 3 / 2 )
x =5 5*( 3/ 2)
```
The official style guide for Python says that there should be a space before and after each operator. (You've been dying to know, right? Ok, the official style guide for python code is here: PEP-8.) Of the three lines of code above, the most “stylish” one would be line 2.
## 1.8 Order of Operations
Python will evaluate expressions using the same order of operations that are expected in standard mathematical expressions. For example this equation does not correctly calculate the average:
```average=90+86+71+100+98/5
```
The first operation done is 98/5. The computer calculates:
$90+86+71+100+\frac{98}{5}$
rather than the desired:
$\dfrac{90+86+71+100+98}{5}$
By using parentheses this problem can be fixed:
```average=(90+86+71+100+98)/5
```
## 1.9 Trig Functions
Trigonometric functions are used to calculate sine and cosine in equations. By default, Python does not know how to calculate sine and cosine, but it can once the proper library has been imported. Units are in radians.
```# Import the math library
# This line is done only once, and at the very top
# of the program.
from math import *
# Calculate x using sine and cosine
x = sin(0) + cos(0)
```
## 1.10 Custom Equation Calculators
A program can use Python to calculate the mileage of a car that drove 294 miles on 10.5 gallons of gas.
```m = 294 / 10.5
print(m)
```
This program can be improved by using variables. This allows the values to easily be changed in the code without modifying the equation.
```m = 294
g = 10.5
m2 = m / g # This uses variables instead
print(m2)
```
Good variable names are important
By itself, this program is actually difficult to understand. The variables m and g don't mean a lot without some context. The program can be made easier to understand by using appropriately named variables:
```milesDriven = 294
gallonsUsed = 10.5
mpg = milesDriven / gallonsUsed
print(mpg)
```
Now, even a non-programmer can probably look at the program and have a good idea of what it does. Another example of good versus bad variable naming:
```# Hard to understand
ir = 0.12
b = 12123.34
i = ir * b
# Easy to understand
interestRate = 0.12
accountBalance = 12123.34
interestAmount = interestRate * accountBalance
```
In the IDLE editor it is possible to edit a prior line without retyping it. Do this by moving the cursor to that line and hitting the “enter” key. It will be copied to the current line.
Entering Python code at the >>> prompt is slow and can only be done one line at a time. It is also not possible to save the code so that another person can run it. Thankfully, there is an even better way to enter Python code.
Python code can be entered using a script. A script is a series of lines of Python code that will be executed all at once. To create a script, open up a new window as shown in Figure 1.2.
Enter the Python program for calculating gas mileage, and then save the file. Save the file to a flash drive, network drive, or some other location of your choice. Python programs should always end with .py. See Figure 1.3.
Run the program typed in by clicking on the “Run” menu and selecting “Run Module”. Try updating the program to different values for miles driven and gallons used.
Caution, common mistake!
From this point forward, almost all code entered should be in a script/module. Do not type your program out on the IDLE >>> prompt. Code typed here is not saved. If this happens, it will be necessary to start over. This is a very common mistake for new programmers.
This program would be even more useful if it would interact with the user and ask the user for the miles driven and gallons used. This can be done with the input statement. See the code below:
```# This code almost works
milesDriven = input("Enter miles driven:")
gallonsUsed = input("Enter gallons used:")
mpg = milesDriven / gallonsUsed
print("Miles per gallon:", mpg)
```
Running this program will ask the user for miles and gallons, but it generates a strange error as shown in Figure 1.4.
The reason for this error can be demonstrated by changing the program a bit:
```milesDriven = input("Enter miles driven:")
gallonsUsed = input("Enter gallons used:")
x = milesDriven + gallonsUsed
print("Sum of m+g:",x)
```
Running the program above results in the output shown in Figure 1.5.
The program doesn't add the two numbers together, it just puts one right after the other. This is because the program does not know the user will be entering numbers. The user might enter “Bob” and “Mary”, and adding those two variables together would be “BobMary”; which would make more sense.
Input must be converted to numbers
To tell the computer these are numbers, it is necessary to surround the input function with an int( ) or a float( ). Use the former for integers, and the latter for floating point numbers.
The final working program:
```# Sample Python/Pygame Programs
# Simpson College Computer Science
# http://programarcadegames.com/
# http://simpson.edu/computer-science/
# Explanation video: http://youtu.be/JK5ht5_m6Mk
# Calculate Miles Per Gallon
print("This program calculates mpg.")
# Get miles driven from the user
milesDriven=input("Enter miles driven:")
# Convert text entered to a
# floating point number
milesDriven=float(milesDriven)
#Get gallons used from the user
gallonsUsed=input("Enter gallons used:")
# Convert text entered to a
# floating point number
gallonsUsed=float(gallonsUsed)
# Calculate and print the answer
mpg=milesDriven/gallonsUsed
print ("Miles per gallon:",mpg)
``` Step Variables: `milesDriven=` `gallonsUsed=` `mpg=` Output: ```This program calculates mpg.
Enter miles driven:288
Enter gallons used:15
Miles per gallon: 19.2
```
And another example, calculating the kinetic energy of an object:
```# Sample Python/Pygame Programs
# Simpson College Computer Science
# http://programarcadegames.com/
# http://simpson.edu/computer-science/
# Calculate Kinetic Energy
print("This program calculates the kinetic energy of a moving object.")
m_string=input("Enter the object's mass in kilograms: ")
m=float(m_string)
v_string=input("Enter the object's speed in meters per second: ")
v=float(v_string)
e=0.5*m*v*v
print("The object has "+str(e)+" joules of energy.")
```
To shorten a program, it is possible to nest the input statement into the float statement. For example these lines of code:
```
milesDriven=input("Enter miles driven:")
milesDriven=float(milesDriven)
```
Perform the same as this line:
```milesDriven=float(input("Enter miles driven:"))
```
In this case, the output of the input function is directly fed into the float function. Either one works, and it is a matter of programmer's preference which to choose. It is important, however, to be able to understand both forms.
## 1.11 Review Questions
1. Write a line of code that will print your name.
2. How do you enter a comment in a program?
3. What do the following lines of code output?
```print(2 / 3)
print(2 // 3)
```
4. Write a line of code that creates a variable called pi and sets it to an appropriate value.
5. Why does this code not work?
```A = 22
print(a)
```
6. All of the variable names below can be used. But which of these is the better variable name to use?
```a
A
Area
AREA
area
areaOfRectangle
AreaOfRectangle
```
7. Which of these variables names are not allowed in Python? (More than one might be wrong.)
```apple
Apple
APPLE
Apple2
1Apple
account number
account_number
account.number
accountNumber
account#
```
8. Why does this code not work?
```print(a)
a=45
```
9. Explain the mistake in this code:
```pi = float(3.14)
```
10. Explain the mistake in the following code:
```radius = input("Radius:")
x = 3.14
pi = x
area = pi * radius ** 2
```
11. Explain the mistake in the following code:
```a = ((x)*(y))
```
12. Explain the mistake in the following code:
```radius = input(float("Enter the radius:"))
```
13. Explain the mistake in the following code:
area = π*radius**2
14. Write a line of code that will ask the user for the length of a square's side and store the result in a variable. Make sure to convert the value to an integer.
15. Write a line of code that prints the area of the square, using the number the user typed in that you stored in question 9.
16. Do the same as in questions 14 and 15, but with the formula for the area of an ellipse.
$s=\pi ab$
where $a$ and $b$ are the lengths of the major radii.
17. Do the same as in questions 14 and 15, but with a formula to find the pressure of a gas.
$P=\dfrac{nRT}{V}$
where $n$ is the number of moles, $T$ is the absolute temperature, $V$ is the volume, and $R$ is the gas constant 8.3144.
## 1.12 Lab
Complete Lab 1 before continuing. This lab covers the material in this chapter and has you apply what you've learned.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9132468700408936, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2009/08/07/
|
# The Unapologetic Mathematician
## Unitary and Orthogonal Matrices and Orthonormal Bases
I almost forgot to throw in this little observation about unitary and orthogonal matrices that will come in handy.
Let’s say we’ve got a unitary transformation $U$ and an orthonormal basis $\left\{e_i\right\}_{i=1}^n$. We can write down the matrix as before
$\displaystyle\begin{pmatrix}u_{1,1}&\cdots&u_{1,n}\\\vdots&\ddots&\vdots\\u_{n,1}&\cdots&u_{n,n}\end{pmatrix}$
Now, each column is a vector. In particular, it’s the result of transforming a basis vector $e_i$ by $U$.
$\displaystyle U(e_i)=u_{1,i}e_1+\dots+u_{n,i}e_n$
What do these vectors have to do with each other? Well, let’s take their inner products and find out.
$\displaystyle\langle U(e_i),U(e_j)\rangle=\langle e_i,e_j\rangle=\delta_{i,j}$
since $U$ preserves the inner product. That is the collection of columns of the matrix of $U$ form another orthonormal basis.
On the other hand, what if we have in mind some other orthonormal basis $\left\{f_j\right\}_{j=1}^n$. We can write each of these vectors out in terms of the original basis
$\displaystyle f_j=a_{1,j}e_1+\dots+a_{n,j}e_n$
and even get a change-of-basis transformation (like we did for general linear transformations) $A$ defined by
$\displaystyle A(e_j)=f_j=a_{1,j}e_1+\dots+a_{n,j}e_n$
so the $a_{i,j}$ are the matrix entries for $A$ with respect to the basis $\left\{e_i\right\}$. This transformation $A$ will then be unitary.
Indeed, take arbitrary vectors $v=v^ie_i$ and $w=w^je_j$. Their inner product is
$\displaystyle\langle v,w\rangle=\langle v^ie_i,w^je_j\rangle=\overline{v^i}w^j\langle e_i,e_j\rangle=\overline{v^i}w^j\delta_{i,j}$
On the other hand, after acting by $A$ we find
$\displaystyle\langle A(v),A(w)\rangle=\langle v^iA(e_i),w^jA(e_j)\rangle=\overline{v^i}w^j\langle f_i,f_j\rangle=\overline{v^i}w^j\delta_{i,j}$
since the basis $\left\{f_j\right\}$ is orthonormal as well.
To sum up: with respect to an orthonormal basis, the columns of a unitary matrix form another orthonormal basis. Conversely, writing any other orthonormal basis in terms of the original basis and using these coefficients as the columns of a matrix gives a unitary matrix. The same holds true for orthogonal matrices, with similar reasoning all the way through. And both of these are parallel to the situation for general linear transformations: the columns of an invertible matrix with respect to any basis form another basis, and conversely.
Posted by John Armstrong | Algebra, Linear Algebra | 3 Comments
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8954669237136841, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/32830/is-it-theoretically-possible-to-reach-0-kelvin/32837
|
# Is it theoretically possible to reach 0 kelvin?
I'm having a discussion with someone. I said that it is -even theoretically- impossible to reach 0K, because that would imply that all molecules in the substance would stand perfectly still.
He said that this isn't true, because my theory violates energy-time uncertainty principle. He also told me to look up the Schrodinger equation and solve it for an oscillator approximating a molecule. See that it's lowest energy state is still non-zero.
Is he right in saying this and if so, can you explain me a bit better what he is talking about.
-
1
Claiming an edit to fix the capital letter in Kelvin.. Really? – Edward Stumperd Jul 25 '12 at 12:11
– Christoph Jul 25 '12 at 12:33
@Christoph: Doesn't your reference say that proper names are upper-case? – Nick Kidman Jul 25 '12 at 13:19
1
That is so contrary to common usage to be ridiculous. I have never seen anyone write "5 newtons" or "10 joules" or "5 kelvin", though the abbreviations are much more common anyway. – Jerry Schirmer Jul 25 '12 at 17:34
1
@JerrySchirmer: No, it is so not contrary to common usage. Just because you don't know the correct usage, doesn't mean you get to hide behind "common usage". The Wikipedia articles on "kelvin" and "newton", among others, clearly know the right usage. The article on "kelvin" even adds "When reference is made to the unit kelvin (either a specific temperature or a temperature interval), kelvin is always spelled with a lowercase k..." Let me ask you this: How do you spell "kilonewton" and "millikelvin"? milliKelvin? Now if anything is rediculous, it's that! – ThePopMachine Jul 26 '12 at 5:35
show 5 more comments
## 6 Answers
By the third law of thermodynamics, a quantum system has temperature absolute zero if and only if its entropy is zero, i.e., if it is in a pure state.
Because of the unavoidable interaction with the environment this is impossible to achieve.
But it has nothing to do with all molecules standing still, which is impossible for a quantum system as the mean square velocity in any normalized state is positive.
-
Just like Steve B, your giving another reason why it's impossible but not explaining why my reasoning is wrong. In fact you're even admitting that all molecules standing still is impossible. So by the definition of temperature, you're saying my reasoning is right. – Edward Stumperd Jul 25 '12 at 14:12
2
@EdwardStumperd: Your definition of temperature is valid only in classical statistical mechanics. But you are not allowed to use it in the quantum realm. – Arnold Neumaier Jul 25 '12 at 14:15
Oh, I was unaware of that. What would be the quantum definition of temperature? – Edward Stumperd Jul 25 '12 at 14:17
1
In thermodynamics it is the quantity conjugate to entropy, in other words, the integrating factor for changes of entropy. This is valid both in classical and quantum mechanics. To work out what it means microscopically, one must look at specific models, and then CM and QM differ. In a grand canonical ensemble, it turns out that $k_B$ times the temperature is the inverse of the factor $\beta$ that multiplies the Hamiltonian in the expression $\rho=Z^{-1}e^{-\beta (H+\mu N)}$ for the density matrix. – Arnold Neumaier Jul 25 '12 at 14:23
This works if the temperature nonzero. But the density has a well-defined limit for $\beta\to\infty$ which defines the zero temperature case. The limit is the orthogonal projector to the eigenspace of the ground state energy. in the usual case that the ground state is nondegenerate, the result is a pure state. – Arnold Neumaier Jul 25 '12 at 14:32
show 4 more comments
I think you are both wrong.
"The lowest energy state still has non-zero energy" does not mean that the temperature cannot be zero. If the system is in the ground state with 100% probability, then the temperature is zero. It doesn't matter what the ground state energy is.
It's true that all molecules in the substance would stand perfectly still at absolute zero [well, they don't have exact positions by the uncertainty principle, but the probability distribution of position would be perfectly stationary]. But so what? Why would that make absolute zero impossible? [see update below]
Nevertheless, there is no process that can get a system all the way to absolute zero in a finite amount of time or a finite number of steps. There's just no way to get that last little bit of energy out. This is one aspect of the third law of thermodynamics, as discussed in some (but not all) thermodynamics textbooks.
-- UPDATE --
It seems likely that I misunderstood. By "stand perfectly still", I guess you meant "have a fixed and definite position, and a fixed and definite velocity equal to 0". If that's what you meant, then "standing perfectly still" is indeed impossible (because of the Heisenberg Uncertainty Principle). But "standing perfectly still" is not expected or required to happen at absolute zero. Again, a harmonic oscillator which is in the ground state with 100% probability is at absolute zero, but does not have fixed and definite position or velocity.
-
You explained why you think he is wrong, but -while giving a vague direction of what may be another reason why it's theoretically impossible to reach 0K- you didn't explain why you think my reasoning is wrong. – Edward Stumperd Jul 25 '12 at 13:03
You didn't tell us your reasoning. Why do you think it is impossible for all molecules in the substance to be perfectly still? In your question you gave no reason whatsoever for this belief. I can't explain why your reasoning is wrong if I don't know your reasoning. – Steve B Jul 25 '12 at 15:17
I know it's not a full explanation as I don't give a reason why it would be impossible for all molecules in the substance to be perfectly still (1), but that wouldn't matter for the question as long as (1) is true and we use the classical definition of temperature. However, all of this is irrelevant now after Arnold's answer. – Edward Stumperd Jul 25 '12 at 15:57
the problem is that the words "stand perfectly still" is ambiguous. If you had said WHY it is impossible to "stand perfectly still" then it would have helped us understand what you meant by "stand perfectly still". It seems my first guess at what you meant was wrong. I updated my answer. – Steve B Jul 25 '12 at 19:17
This answer is fine, but there is a small issue--- how do you determine an (isolated) system is in it's ground state with certainty? The more degrees of freedom you have, the harder it is. If you have an atom, you can determine it is in its ground state, perhaps with something approaching certainty, but there will be radiation surrounding the atom. If you make a cavity to cool the radiation to nothing, you will have to cool the cavity, and so on, so that the third law tells you that there will never quite be certain its in the ground state. – Ron Maimon Jul 26 '12 at 1:43
For a temperature to be definable and measurable the distribution of the kinetic energies of the molecules in the medium under discussion should be known.
The process of cooling involves removing thermal energy from a system. When no more energy can be removed, the system is at absolute zero, which cannot be achieved experimentally. Absolute zero is the null point of the thermodynamic temperature scale, also called absolute temperature. If it were possible to cool a system to absolute zero, all motion of the particles comprising matter would cease and they would be at complete rest in this classical sense. Microscopically in the description of quantum mechanics, however, matter still has zero-point energy even at absolute zero, because of the uncertainty principle.
The uncertainty principle assures that molecules cannot stay perfectly still and continue being in a certain position , i.e. in the material under study. Certainly not all molecules of the material, this would be necessary to define a 0K temperature.
The solution with the vibrational degrees of freedom that molecules may have is not conclusive , though sufficient as proof for that the specific material that displays these vibrational modes cannot go to 0K. It is the HUP that is general for all materials.
-
So I was right. What was he talking about then? – Edward Stumperd Jul 25 '12 at 12:21
he was talking about molecules where the temperature is also determined by the rotational and vibrational degrees of freedom, missing in single atom gases for example. So although it is correct that a molecule will always have some vibrational energy and thus contribute to temperature that way, and thus not reach 0k, it is not a universal argument, since there exist single molecule materials, with no vibrational rotational degrees of freedom. ( I gave a link for degrees of freedom) – anna v Jul 25 '12 at 13:08
Ah ok, I know about degrees of freedom and what not, but I didn't realize that that was what he was talking about. – Edward Stumperd Jul 25 '12 at 13:21
Also: if that is true isn't he just trying to prove the exact thing that I am saying: "it is impossible for all molecules to stand perfectly still, which would be necessary to achieve 0K, so reaching 0K is impossible". – Edward Stumperd Jul 25 '12 at 13:26
yes, it seems so to me too. – anna v Jul 25 '12 at 13:30
I wonder why the measurement postulate has not been mentioned so far. Consider a cubical microcrystal of sodium chloride containing 64 atoms (4 on each side). If we cool it off so it is as close to absolute zero as possible, then we can represent its state as a superposition of pure states. One of those states is the ground state. If we then measure its energy, is there not some finite probability that it will be found in its ground state?
The atoms will not be stationary. They still have their zero-point energy. But in the ground state the temperature of the crystal is absolute zero.
-
You need a system to be undisturbed for a long time to have a well defined energy. – Ron Maimon Jul 26 '12 at 1:44
Ron, you haven't addressed my point about the measurement postulate. Isn't the act of measurement supposed to force the system into a definite eigenstate? – Marty Green Jul 26 '12 at 1:59
How long to prepare the system and do the measurement? If you want to be sure it's in the ground state, you have to leave the system alone forever. The third law is asymptotic, the longer you are willing to wait, the closer you can come. – Ron Maimon Jul 26 '12 at 2:08
from WP-negative temperature
In physics, certain systems can achieve negative temperature; that is, their thermodynamic temperature can be expressed as a negative quantity on the kelvin scale.
A substance with a negative temperature is not colder than absolute zero, but rather it is hotter than infinite temperature. As Kittel and Kroemer (p. 462) put it, "The temperature scale from cold to hot runs: +0 K, . . . , +300 K, . . . , +∞ K, −∞ K, . . . , −300 K, . . . , −0 K."
. The inverse temperature β = 1/kT (where k is Boltzmann's constant) scale runs continuously from low energy to high as +∞, . . . , −∞.
from Positive and negative picokelvin temperatures :
... of the procedure for cooling an assembly of silver or rhodium nuclei to negative nanokelvin temperatures.
-
This is what my Science teacher said on the matter. Nothing can reach absolute zero because Energy is linked to Mass, in the sense that if there is no energy, there is no mass. It would disappear. That can't happen due to other laws, so 0K can't be reached.
-
1
This is not quite true. 0K is not reached when there is absolutely no energy, but rather when the system in question is in its lowest energy state as allowed by the physical laws it has to obey. Thus for a gas in a box, if all the particles are stationary, then their motional degrees of freedom are at 0K even though they have energy in the form of mass - they just don't have kinetic energy. – Emilio Pisanty Nov 8 '12 at 15:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478080868721008, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/899/cracking-pre-paid-cards
|
# Cracking Pre-Paid Cards
Is it possible to deduce the original function that used to generate those pre-paid cards number that are used for charging your mobile phone credits? For Example, If I've collected about 1000 of those cards, how to analyse those numbers to that I can re-generate the generated numbers again using my own software?
-
1
I suppose this will depend on the algorithm used by your mobile phone provider. Without any details, this will give only speculation. – Paŭlo Ebermann♦ Oct 3 '11 at 16:56
## 3 Answers
If the people who generated the pre-paid card numbers did their job properly, then no, it is not possible to reconstruct these numbers.
The simplest pre-paid card number generation system is to use purely random numbers, and store generated numbers in a database. To verify a generated number, simply look it up in the database. The function which can then rebuild the numbers is the RNG -- if that is `/dev/urandom` from a Linux/*BSD/MacOS/Solaris system, or `CryptGenRandom()` from a Windows system, then there is no known weakness which would allow an attacker to merely recognize the RNG as being that specific system (as opposed to, say, someone flipping a coin repeatedly), let alone predict the next number with a non-trivial probability of success.
The method with random numbers and a database has two main drawbacks:
• You can get collisions. E.g., if you use 16-digit numbers, you can expect to reach your first collision after 108 (100 millions) generated numbers. You can deal with that by checking your generated numbers against the database, but this can make the industrial process more complex (i.e. you have to talk to the database before deciding whether the card is worth printing).
• Each number verification entails a database lookup. With millions of customers, this may imply an heavy load, especially if there are people who routinely try random numbers, just in case they get lucky (apparently, people do that). Database access is unavoidable at some point, if the card is genuine (if only to update the customer account), but one would prefer to have a lighter way to filter out most invalid card numbers.
So a slightly more complex method involves symmetric encryption. Namely, you use a block cipher operating over the space of $n$-digit integers. So you have a secret key $K$; let $E_K$ be the encryption function, and $D_K$ the decryption. You keep a counter $c$ of generated card numbers. To get a new card number:
1. Increment $c$.
2. Encode $c$ into a sequence of $m$ digits and pad it with $n-m$ zeros.
3. Encrypt the whole thing with $E_K$.
Then, when it comes to verify a number, try first decrypting it with $D_K$: if the result does not end with $n-m$ zeros, then you know that it is not a genuine number, and can be rejected without involving the database. This method also guarantees the total absence of collisions.
The tricky part in this method is having the appropriate $E_K$ / $D_K$ function. It must be a pseudo-random permutation; anything like a stream cipher is inadequate (including AES in CTR mode). If the number length ($n$) is 19 or less (but close to 19), then this is easily done with a 64-bit block cipher $B$ such as IDEA or the venerable 3DES (apparently, the IDEA patents expired, which is why it becomes recommendable again). Then, to encrypt a sequence $d$ of $n$ digits:
1. Encode $d$ into 64 bits (interpret $d$ as an integer, write down the value in binary); call $x$ the resulting sequence.
2. Set $x \leftarrow B_K(x)$ (encryption with the 64-bit block cipher).
3. Interpret $x$ as an integer value $e$.
4. If $e ≥ 10^n$, loop to step 2. Otherwise, define $E_K(d) = e$.
In plain words, we use the block cipher repeatedly until it gets us back to the appropriate space of integers between $0$ and $10^n-1$. If the block cipher is a secure pseudo-random permutation, the this process is also a secure pseudo-random permutation. The average number of iterations will be $2^{64}/10^n$; for instance, with $n = 16$, this will require on average about 1845 invocations of the underlying block cipher: a very small amount, since a cheap PC can do millions of those per second, with a single core.
With this method, predicting valid numbers with success probability higher than $10^{m-n}$ would require breaking the 64-bit block cipher (that is, there mere existence of such a predictor would be viewed as an unredeemable weakness of the block cipher). So, with IDEA or 3DES, no worry. With practical figures: with $n = 16$ (16-digit card numbers) and $m = 11$ (you have room for one hundred billions of valid numbers), then only one random number in 100000 will require actual database lookup.
If $n$ is much smaller (e.g. $n = 12$), the computational cost can become excessive, but the core principle can still be applied, provided that you design a custom block cipher which runs over sequences of digits -- this is a difficult task (don't do this at home ! I.e., do not deploy such a system in production without getting some professional advice from a trained cryptographer), but there are existing tools (see this question for details).
-
I don't believe there's any requirement to use the whole space of $n$-digit strings: knowing that numbers greater than $2^k$ are never used will only help an attacker by a factor of $10^n/2^k<2$, assuming that $k=\lfloor\log_210^n\rfloor$. That way, we can just work in the space of $k$-bit binary strings, and won't need to resort to any "hasty pudding tricks". (In any case, a bigger issue with this design is that, because it's based on symmetric encryption, any device that can validate a number can also generate one. This is obviously a problem if such a device might be cracked.) – Ilmari Karonen Oct 7 '11 at 17:48
If you just want to decrease database lookups, a simple solution is to add a Verhoeff digit. – Diego Oct 11 '11 at 3:52
Probably not; But the history of cryptography is littered with examples of broken systems that were once thought to be secure.
-
It can't be done. The generation algorithms are carefully chosen specifically to make this impossible.
-
1
Just out of curiosity (and partially playing devil's advocate), how do you know this? Are the algorithms publicly available or do you have insider knowledge? – mikeazo♦ Oct 3 '11 at 16:23
You don't need to be an insider to know that. It's like knowing that bridges are designed to support the weight of trucks. (If they weren't, "bridge collapses" would be the headline every day.) – David Schwartz Oct 3 '11 at 16:42
2
Actually, if bridges would collapse every day, this wouldn't be in the headlines anymore :-) – Paŭlo Ebermann♦ Oct 3 '11 at 16:57
@David Schwartz: People thought Vigenere was "unbreakable" for a long time, until someone broke it. The Germans thought Enigma was unbreakable too. – mikeazo♦ Oct 3 '11 at 17:11
@mikeazo: These are interesting stories precisely because they're so rare. In any event, it is well-known and widely discussed how to make this impossible. The best you can do is get the series and check digits right and guess the rest. – David Schwartz Oct 3 '11 at 17:31
show 2 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9212065935134888, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/92945/is-there-a-discrete-version-of-de-lhopitals-rule/92949
|
# Is there a discrete version of de l'Hôpital's rule?
When considering asymptotics of runtime functions, you often have to find limits of quotients of discrete functions, e.g.
$\displaystyle\qquad \lim\limits_{n \to \infty} \frac{4^n}{\binom{2n}{n}\sqrt{n}}.$
While this particular case can easily be dealt with by Stirling's formula, I have been wondering. Mathematicians often like to use de l'Hôpital's rule, but it can obviously not be applied to the discrete case immediately (no mean value theorem). If---as in this case---you are lucky, you might find nice and well-studied continuations on the reals.
What to do in general, though? Is there a discrete version/relative of de l'Hôpital's rule, maybe using difference quotients?
-
7
– J. M. Dec 20 '11 at 13:41
@J.M. answer? (I'm almost sure that one link answers the question posed.) – Willie Wong♦ Dec 20 '11 at 13:56
Indeed. This is what I had in mind but did not quite see a proof for. – Raphael Dec 20 '11 at 14:02
## 2 Answers
Stolz–Cesàro seems to be what you're looking for. There are two forms:
1.
Let $a_n$ and $b_n$ be two sequences approaching $0$ as $n\to\infty$, with $b_n$ decreasing. Then,
$$\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$
if the second limit exists.
2.
Let $a_n$ and $b_n$ be two sequences, with $b_n$ unbounded and increasing. Then,
$$\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$
if the second limit exists.
-
Thanks! Too bad it does not seem to help dealing with the example I have, though (but I did not ask for that, of course). – Raphael Dec 20 '11 at 14:11
@Raphael: Well, that square root you added certainly throws a wrench on things... :) – J. M. Dec 20 '11 at 14:12
The limit is still nice, though. The original problem is to check wether $4^n \cdot \binom{2n}{n}^{-1} \sim \sqrt{n}$, therefore my edit. Don't see a way to use the theorem even without $\sqrt{n}$, though. – Raphael Dec 20 '11 at 14:14
5
Right, but any sort of L'Hopital rule won't work anyway, since both of your sequences grow exponentially, so taking the derivative will give you back essentially the same ratio. – Willie Wong♦ Dec 20 '11 at 14:31
The discrete version of L'Hôspital's rule, in my opinion, is Abelian theorems, including the L'Hôspital's rule, Silverman-Toeplitz theorem and its sepcial case, Stolz-Cesàro theorem.
On de Bruijn's Asymptotic methods in analysis, it's said that
A theorem which derives asymptotic information about some kind of average of a function from asymptotic information about the function itself, is called an Abelian theorem. If one can find a supplementary condition under which the converse of an Abelian theorem holds, then this condition is called a Tauberian condition, and the converse theorem is called a Tauberian theorem.
The amount you gave, as far as I've tried, couldn't be easily solved with these theorems.
Let $$a_n=\frac{4^n}{\binom{2n}n\sqrt n}$$ we have $$\ln a_{n+1}-\ln a_n=\frac12\ln(n+1)-\ln(n+\frac12)+\frac12\ln n=-\frac1{4n^2}+O\left(\frac1{n^3}\right)\tag1$$ Therefore $\ln a_n$ converges as $n\to\infty$. However, the preceding equation, the asymptotic behavior of the difference, is not enough to determine the limit value, even if the result is refined. Such efforts are generally unsuccessful.
However, if $S=\lim_{n\to\infty}\ln a_n$, we could determine the asymptotic behavior of $a_n-S$ through (1) easily, since $\ln a_n=S-\sum_{k\ge n}(\ln a_k-\ln a_{k+1})$.
Remark One could determine $S$ through Stirling's formula. There's another approach, more elementary, I think:
$$a_n^2=\left(\frac{(2n-1)!!}{(2n)!!}\right)^2(2n+1)\cdot\frac n{2n+1}\to\frac1\pi$$
by Wallis product, therefore $S=1/\sqrt\pi$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519051313400269, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/65772?sort=oldest
|
## An extension of Gaussian Isoperimetry
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The Gaussian isoperimetric inequality (Tsirelson,Sudakov, Borell) states that among all sets of given Gaussian measure in the n-dimensional Euclidean space, half-spaces have the minimal Gaussian boundary measure. Suppose we put an additional restriction on the set, that it should be symmetric about the origin. Then can we conclude that quarter-spaces (intuitively the first and third quadrant in 2-dimensions, say) have the minimal Gaussian boundary measure?
-
1
Also, if possible, could someone please suggest a more readable version of the Gaussian isoperimetry proofs, and possibly other related references on Gaussian measures (like lecture notes or surveys available free on the internet)? – Bratt May 23 2011 at 17:10
1
A bit off topic, but you could find this helpful: a nice review connecting concentration (and isoperimetric inequalities) to Markov chains. This allows to study discrete analogues of the picture. Yann Ollivier, A survey of Ricci curvature for metric spaces and Markov chains (pdf) yann-ollivier.org/rech/publs/surveycurvmarkov.pdf – Leonid Petrov May 24 2011 at 5:28
1
A little bit of computation shows that 'quarter-spaces' (or a symmetrization of halfspaces around the origin) is clearly not the best we can do. For example in two dimensions, just a circle of measure 1/2 has smaller boundary measure than the above set. But the question remains open. Also, thanks everyone, for the references. – Bratt May 24 2011 at 12:39
2
Following up on Ryan's suggestions, here's a paper by Barthe (subscription probably required) whose introduction suggests that the problem as you stated, and the analogous problem on the sphere, are open and difficult: journals.cambridge.org/action/… – Mark Meckes May 24 2011 at 15:01
Nice find, Mark. – Ryan O'Donnell May 25 2011 at 2:59
show 1 more comment
## 2 Answers
Answering more @Bratt's comment than the original question: Talagrand's book
people.math.jussieu.fr/~talagran/book.ps.gz
Seems quite nice.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
My guess is that the optimizer is actually a "strip"; i.e., a set of the form {$x : -t \leq x_1 \leq t$}. But I'm somewhat sure that the solution to this problem is not known. You might take a look at the discussion surrounding after Corollary 3.6 in this paper by Klartag and Regev:
http://eccc.hpi-web.de/report/2010/140/
Barthe may also have some relevant papers.
-
Suppose we are looking at sets of measure 1/2 say. Then t as above is root(2) erfinverse(.5) which is roughly .67449 The boundary measure in this case is .635553 Compared to a boundary measure of .588 in the case of a circle of measure 1/2. Assuming my calculations are correct. – Bratt May 24 2011 at 13:11
Yeah, that's very possible. So maybe a sphere is best in general? Another good question (which might suggest the answer) is whether the analogous isoperimetric problem on the surface of the sphere is solved. – Ryan O'Donnell May 24 2011 at 13:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9179092049598694, "perplexity_flag": "middle"}
|
http://martin-thoma.com/how-to-check-if-a-point-is-inside-a-rectangle/
|
# Martin Thoma
## How to check if a point is inside a rectangle
September 7th, 2012
A rectangle
I’ve just found this interesting question on StackExchange:
If you have a rectangle ABCD and point P. Is P inside ABCD?
## The idea
The idea how to solve this problem is simply beautiful.
If the point is in the rectangle, it divides it into four rectangles:
Divided rectangle
If P is not inside of ABCD, you end up with somethink like this:
Point is outside of rectangle
You might note that the area of the four triangles in is bigger than the area of the rectangle. So if the area is bigger, you know that the point is outside of the rectangle.
## Formulae
If you know the coordinates of the points, you can calculate the area of the rectangle like this:
$$A_\text{rectangle} = \frac{1}{2} \left| (y_{A}-y_{C})\cdot(x_{D}-x_{B}) + (y_{B}-y_{D})\cdot(x_{A}-x_{C})\right|$$
The area of a triangle is:
$$A_\text{triangle} = \frac{1}{2} (x_1(y_2-y_3) + x_2(y_3-y_1) + x_3(y_1-y_2))$$
## Python
```def isPinRectangle(r, P):
"""
r: A list of four points, each has a x- and a y- coordinate
P: A point
"""
areaRectangle = 0.5*abs(
# y_A y_C x_D x_B
(r[0][1]-r[2][1])*(r[3][0]-r[1][0])
# y_B y_D x_A x_C
+ (r[1][1]-r[3][1])*(r[0][0]-r[2][0])
)
ABP = 0.5*(
r[0][0]*(r[1][1]-r[2][1])
+r[1][0]*(r[2][1]-r[0][1])
+r[2][0]*(r[0][1]-r[1][1])
)
BCP = 0.5*(
r[1][0]*(r[2][1]-r[3][1])
+r[2][0]*(r[3][1]-r[1][1])
+r[3][0]*(r[1][1]-r[2][1])
)
CDP = 0.5*(
r[2][0]*(r[3][1]-r[0][1])
+r[3][0]*(r[0][1]-r[2][1])
+r[0][0]*(r[2][1]-r[3][1])
)
DAP = 0.5*(
r[3][0]*(r[0][1]-r[1][1])
+r[0][0]*(r[1][1]-r[3][1])
+r[1][0]*(r[3][1]-r[0][1])
)
return areaRectangle == (ABP+BCP+CDP+DAP)```
Posted in Code | Tags: Geometry, Python by Martin Thoma on September 7th, 2012
You can leave a response, or trackback from your own site.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8874073028564453, "perplexity_flag": "middle"}
|
http://wiki.math.toronto.edu/TorontoMathWiki/index.php/2011-2012_Analysis_Applied_Math_Seminar
|
# 2011-2012 Analysis Applied Math Seminar
### From TorontoMathWiki
This page contains information about the Analysis and applied Math Seminar at the University of Toronto. The seminar meets regularly on Fridays, 1:10-2pm, 6183 Bahen Center.
The current (interim) organizers are Bob Jerrard (rjerrard [at] math) and Amir Moradifam (amir [at] math).
If you will be speaking, details on how to use the equipment in BA 6183 can be found here: BA6183_Video_Instructions.
Of related interest in Toronto (and sometimes cross-listed):
Previous Year's Seminars: 2010-11, 2009-10, 2008-09, 2007-08, 2006-07, 2005-06, 2004-05, 2003-04, 2002-03.
## April 13, Choksi, 13:10-14:00 @BA6183
| | | | | | | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|----------|-------------|--------|-----------|
| Rustum Choksi (McGill University) | 2011-2012 Analysis Applied Math Seminar | Friday | April 13 | 13:10-14:00 | BA6183 | |
| Title: Global Minimization and the Energy Landscape for a Variational Problem with Long-Range Interactions | | | | | | |
| Abstract: Energy-driven pattern formation induced by competing short and long-range interactions is common in many physical systems. A nonlocal perturbation (of Coulombic-type) to the well-known Ginzburg-Landau/Cahn-Hilliard free energy gives rise to a mathematical paradigm with a rich and complex energy landscape. In the first half of this talk, we discuss rigorous asymptotic results concerning global minimizers. In the second part, we discuss a few hybrid numerical methods for accessing ground states. | | | | | | |
| [ arXiv] | 2012_4_13_Choksi_Notes | | | | | 2012_4_13 |
## March 23, Deegan, 13:10-14:00 @BA6183
| | | | | | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|----------|-------------|--------|-----------|
| Robert Deegan (University of Bristol) | 2011-2012 Analysis Applied Math Seminar | Friday | March 23 | 13:10-14:00 | BA6183 | |
| Title: Wet drop impact | | | | | | |
| Abstract: When a fast moving drop collides with a layer of fluid it a produces a splash, a spray of secondary droplets. There is a bewildering variety of splash morphologies and droplet distributions which manifest as the system parameters (droplet size and speed, layer depth, fluid properties) are varied. Despite this complexity, a splash begins with the formation of a sheet-like jet. There are at least two varieties of jets: the large and slow lamella jet and the small and quick ejecta jet. In this talk I will present our progress towards understanding the simplest of splashes, the so-called crown splash, which results from the disintegration of the lamella. I will also discuss our experimental results on the ejecta jet and the role of the surrounding gas on its evolution. | | | | | | |
| [ arXiv] | 2012_3_23_Deegan_Notes | | | | | 2012_3_23 |
## March 16, Lan, 13:10-14:00 @BA6183
| | | | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|----------|-------------|--------|-----------|
| Kunquan Lan (Ryerson University) | 2011-2012 Analysis Applied Math Seminar | Friday | March 16 | 13:10-14:00 | BA6183 | |
| Title: Nonzero positive solutions of systems of elliptic boundary value problems | | | | | | |
| Abstract: In this talk, I shall present some results on existence of nonzero positive (classic) solutions of systems of second order elliptic boundary value problems under some sublinear conditions involving the principle eigenvalues of the corresponding linear systems. Some results on eigenvalue problems of such elliptic systems are derived and generalize some previous results on the eigenvalue problems of systems of Laplacian elliptic equations. | | | | | | |
| [ arXiv] | 2012_3_16_Lan_Notes | | | | | 2012_3_16 |
## March 2, Francis, 13:10-14:00 @BA6183
| | | | | | | |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|---------|-------------|--------|----------|
| Bruce Francis (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | March 2 | 13:10-14:00 | BA6183 | |
| Title: Pursuit Laws for Mobile Robots | | | | | | |
| Abstract: This talk describes some recent research in control theory, in particular, the control of autonomous robots. The talk has three parts: 1) A brief motivation for the research and some experimental results. 2) The rendezvous problem with limited vision: Design a motion control law so that a group of mobile robots gather at a single location even though their on-board cameras are near-sighted. The proposed control law leads to coupled differential equations with non-Lipschitz right-hand sides. 3) Infinite chains of robots. In studying the formation of a very large number of robots, one approach is instead to model an infinite number of robots. The relevant question is what mathematical framework to take so that the infinite-chain model correctly describes the behaviour of the large-but-finite chain model. Studies to date take the state space to be the Hilbert space of square-summable sequences. The advantage is that there is a rich Fourier theory available if the formation is spatially invariant. But this Hilbert space formulation leads to anomalous behaviour. For example, an infinite chain of vehicles when displaced will return to their starting points even though the vehicles do not have global sensing capability and therefore could not in reality do so. This talk proposes a different mathematical framework and describes the progress made so far. The problem turns out to be related to some Tauberian theory of Diaconis and Stein. This is joint work with Avraham Feintuch, Math Department, Ben Gurion University of the Negev. | | | | | | |
| [ arXiv] | 2012_3_2_Francis_Notes | | | | | 2012_3_2 |
## February 17, Athavale, 13:10-14:00 @BA6183
| | | | | | | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-------------|-------------|--------|------------|
| Prashant Athavale (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | February 17 | 13:10-14:00 | BA6183 | |
| Title: Integro-differential equations and multiscale representations | | | | | | |
| Abstract: In this talk we will discuss various aniosotropic PDEs methods for applications to image processing. We will then discuss integro-differential equations inspired from (BV, L^2) and (BV, L^1) decompositions. Although, the original motivation came from a variational approach, the resulting IDEs can be extended using standard techniques from PDE-based image processing. We use filtering, edge preserving and tangential smoothing to yield a family of modified IDE models with applications to image denoising and image deblurring problems. | | | | | | |
| [ arXiv] | 2012_02_17_Athavale_Notes | | | | | 2012_02_17 |
## February 10, Zhou, 13:10-14:00 @BA6183
| | | | | | | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-------------|-------------|--------|-----------|
| Gang Zhou (ETH-Zurich) | 2011-2012 Analysis Applied Math Seminar | Friday | February 10 | 13:10-14:00 | BA6183 | |
| Title: On Singularity Formation Under Mean Curvature Flow | | | | | | |
| Abstract: In this talk I present our recent works, jointly with D.Knopf and I.M.Sigal, on singularity formation under mean curvature flow. By methods borrowed from dispersive equations and mathematical physics we present a very different way of studying it, and moreover obtain asymptotic of singularity formation on asymmetric surfaces for the first time. After reviewing known results I will compare our approaches to the old ones. Some key elements will be discussed. A few problems, which might be tackled by our techniques, will be formulated. | | | | | | |
| [ arXiv] | 2012_2_10_Zhou_Notes | | | | | 2012_2_10 |
## February 3, Wang, 13:10-14:00 @BA6183
| | | | | | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|------------|-------------|--------|----------|
| Yun Wang (McMaster University) | 2011-2012 Analysis Applied Math Seminar | Friday | February 3 | 13:10-14:00 | BA6183 | |
| Title: Critical Sobolev Inequalities and Navier-Stokes Equations | | | | | | |
| Abstract: In this talk, some critical Sobolev inequalities are introduced. These inequalities are generalizations of Brezis-Gallouet-Wainger inequality. We apply such inequalities to the two-dimensional non-homogeneous incompressible Navier-Stokes problem and prove global existence of strong solutions. | | | | | | |
| [ arXiv] | 2012_2_3_Wang_Notes | | | | | 2012_2_3 |
## January 27, Berlyand, 13:10-14:00 @BA6183
| | | | | | | |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|------------|-------------|--------|-----------|
| Leonid Berlyand (Pennsylvania State University) | 2011-2012 Analysis Applied Math Seminar | Friday | January 27 | 13:10-14:00 | BA6183 | |
| Title: Flux norm approach to finite-dimensional homogenization approximation with non-separated scales and high contrast | | | | | | |
| Abstract: Classical homogenization theory deals with mathematical models of strongly inhomogeneous media described by PDEs with rapidly oscillating coefficients of the form $A(x/\epsilon)$, $\epsilon \to 0$. The goal is to approximate this problem by a homogenized (simpler) PDE with slowly varying coefficients that do not depend on the small parameter $\epsilon$. The original problem has two scales: fine $O(\epsilon)$ and coarse $O(1)$, whereas the homogenized problem has only a coarse scale. The homogenization of PDEs with periodic or random ergodic coefficients and well-separated scales is well understood. In a joint work with H. Owhadi (ARMA 2010) we consider the most general case of arbitrary $L^\infty$ coefficients, which may contain infinitely many scales that are not necessarily well-separated. Specifically, we study scalar and vectorial divergence-form elliptic PDEs with such coefficients. We establish two finite-dimensional homogenization approximations that generalize the {\it correctors} in classical homogenization. We introduce a flux norm and establish the error estimate in this norm with an explicit and {\it optimal} error constant {\it independent of the contrast} and regularity of the coefficients. A proper generalization of the notion of a cell problem in classical homogenization is the key issue in our consideration. Next we discuss most recent results (L. Zhang, Owhadi) on localized multiscale basis that allows for numerical implementation of our theoretical results and work in progress (with Owhadi and Zhang) on compactness of the solution space and new corrector results in classical periodic homogenization problem. | | | | | | |
| [{{{arxiv}}} arXiv] | 2012_1_27_Berlyand_Notes | | | | | 2012_1_27 |
## January 20, Serea, 13:10-14:00 @BA6183
| | | | | | | |
|----------------------------------------------|-----------------------------------------|--------|------------|-------------|--------|-----------|
| Oana Silvia Serea (Universite de Perpignan) | 2011-2012 Analysis Applied Math Seminar | Friday | January 20 | 13:10-14:00 | BA6183 | |
| Title: CANCELLED | | | | | | |
| Abstract: CANCELLED | | | | | | |
| [ arXiv] | 2012_1_20_Serea_Notes | | | | | 2012_1_20 |
## December 02, Krivodonova, 13:10-14:00 @BA6183
| | | | | | | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-------------|-------------|--------|------------|
| Lilia Krivodonova (University of Waterloo) | 2011-2012 Analysis Applied Math Seminar | Friday | December 02 | 13:10-14:00 | BA6183 | |
| Title: Developments in High-Order Discontinuous Galerkin Methods for Hyperbolic Conservation Laws | | | | | | |
| Abstract: A variety of physical phenomena in fluid mechanics, groundwaterflow, electromagnetics and other areas can be described by hyperbolicconservation laws. The discontinuous Galerkin methods (DGM) have become very popular in recent years due to their ability to accurately capture discontinuities often present in such problems. Their low dispersion and dissipation errors make them very suitable for long time computations. We describe our approach to computing high-order accurate solutions for time dependent problems on structured and unstructured meshes. We review several aspects of our recent work including a connection between high accuracy of the DGM methods, their restrictive CFL condition and the classical Pade approximants. Applications include examples from compressible fluid dynamics and electromagnetics | | | | | | |
| [ arXiv] | 2011_12_02_Krivodonova_Notes | | | | | 2011_12_02 |
## November 25, Hoell, 13:10-14:00 @BA6183
| | | | | | | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-------------|-------------|--------|------------|
| Nicholas Hoell (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | November 25 | 13:10-14:00 | BA6183 | |
| Title: Inverting the Attenuated X-Ray Transform | | | | | | |
| Abstract: In this talk we present methods for analytically inverting the attenuated ray transform in 2-dimensional settings. The method is based of a study of the transport equation generating the integral curves over which the unknown function is averaged. This problem first arose in the medical imaging modality SPECT and has recently been useful in the unique determination of interior permittivity and permeability parameters of a conductive body from external measurements. | | | | | | |
| [ arXiv] | 2011_11_25_Hoell_Notes | | | | | 2011_11_25 |
## November 18, Maggi, 13:10-14:00 @BA6183
| | | | | | | |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-------------|-------------|--------|------------|
| Francesco Maggi (Universita di Firenze) | 2011-2012 Analysis Applied Math Seminar | Friday | November 18 | 13:10-14:00 | BA6183 | |
| Title: Sharp Stability Estimates in Geometric Variational Problems | | | | | | |
| Abstract: Optimal stability estimates for isoperimetric and Plateau type problems are presented, together with some improvable results and open problems. | | | | | | |
| [ arXiv] | 2011_11_18_Maggi_Notes | | | | | 2011_11_18 |
## November 11, Westrich, 13:10-14:00 @BA6183
| | | | | | | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-------------|-------------|--------|------------|
| Matthias Westrich (McGill University) | 2011-2012 Analysis Applied Math Seminar | Friday | November 11 | 13:10-14:00 | BA6183 | |
| Title: Regularity of Eigenstates in Regular Mourre Theory | | | | | | |
| Abstract: We discuss an abstract method to prove that eigenstates, associated with possibly embedded eignvalues, of a self-adjoint operator $H$ are in the domain of the k'th power of a conjugate operator $A$. Conjugate means here that $A$ and $H$ have a positive commutator locally near the relevant eigenvalue in the sense of Mourre. The only requirement is $C^{k+1} (A )$ regularity of $H$. Regarding integer $k$, our result is optimal. Under a natural boundedness assumption on the multiple commutators, we prove that $e^{i\theta A}$ (the eigenstate) is analytic in a strip around the real axis. Natural applications are 'dilation analytic' systems satisfying a Mourre estimate, where our result can be viewed as an abstract version of a theorem due to Balsev and Combes (1971). As a new application we discuss the massive Spin-Boson model. | | | | | | |
| [ arXiv] | 2011_11_11_Westrich_Notes | | | | | 2011_11_11 |
## October 28, Bierri, 13:10-14:00 @BA6183
| | | | | | | |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|------------|-------------|--------|------------|
| Lydia Bierri [1] (University of Michigan) | 2011-2012 Analysis Applied Math Seminar | Friday | October 28 | 13:10-14:00 | BA6183 | |
| Title: From the Analysis of Einstein-Maxwell Spacetimes in General Relativity to Gravitational Radiation | | | | | | |
| Abstract: A major goal of mathematical General Relativity (GR) and astrophysics is to precisely describe and finally observe gravitational radiation, one of the predictions of GR. In order to do so, one has to study the null asymptotical limits of the spacetimes for typical sources such as binary neutron stars and binary black hole mergers. D. Christodoulou showed that every gravitational-wave burst has a nonlinear memory, displacing test masses permanently. In joint work with P. Chen and S.-T. Yau we investigated the Einstein-Maxwell (EM) equations in GR and proved that the electromagnetic field contributes at highest order to the memory effect. In this talk, we discuss the null asymptotics for spacetimes solving EM equations, compute the radiated energy and derive limits at null infinity and compare them with the Einstein vacuum (EV) case. The physical insights are based on geometric-analytic investigations of the solution spacetimes. | | | | | | |
| [ arXiv] | 2011_10_28_Bierri_Notes | | | | | 2011_10_28 |
## October 21, Egli, 13:10-14:00 @BA6183
| | | | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|------------|-------------|--------|------------|
| Daniel Egli [2] (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | October 21 | 13:10-14:00 | BA6183 | |
| Title: Anderson localization triggered by spin disorder | | | | | | |
| Abstract: The phenomenon of Anderson localization is studied for a class of one-particle Schrödinger operators with random Zeeman interactions. These operators arise as follows: Static spins are placed randomly on the sites of a simple cubic lattice according to a site percolation process with density $x$ and coupled to one another ferromagnetically. Scattering of an electron in a conduction band at these spins is described by a random Zeeman interaction term that originates from indirect exchange. It is shown rigorously that, for positive values of $x$ below the percolation threshold, the spectrum of the one-electron Schrödinger operator near the band edges is dense pure-point, and the corresponding eigenfunctions are exponentially localized. Localization near the band edges persists in a weak external magnetic field, $H$, but disappears gradually, as $H$ is increased. Our results lead us to predict the phenomenon of colossal (negative) magnetoresistance and the existence of a Mott transition, as $H$ and/or $x$ are increased. Our analysis is motivated directly by experimental results concerning the magnetic alloy $\mathsf{Eu}_x\mathsf{Ca}_{1-x}\mathsf{B}_6$. | | | | | | |
| [ arXiv] | 2011_10_21_Egli_Notes | | | | | 2011_10_21 |
## October 14, Tsugawa, 13:10-14:00 @BA6183
| | | | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|------------|-------------|--------|------------|
| Kotaro Tsugawa [3] (Nagoya University/University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | October 14 | 13:10-14:00 | BA6183 | |
| Title: Local well-posedness of the KdV equation with almost periodic initial data | | | | | | |
| Abstract: We consider the Cauchy problem of the KdV equation. The well-posedness in the Sobolev space of periodic functions has been intensively studied by many people. In this talk, we prove the local well-posedness in an almost periodic function space. The function space contains functions satisfying $f=f_1+f_2+...+f_N$ where $f_j$ is inthe Sobolev space of order $s>-1/2N$ of $a_j$ periodic functions. Note that $f$ is not periodic when the ratio of periods $a_i/a_j$ is irrational. The main tool of the proof is the Fourier restriction norm method introduced by Bourgain. | | | | | | |
| arXiv | 2011_10_14_Tsugawa_Notes | | | | | 2011_10_14 |
## October 7, Shao, 13:10-14:00 @BA6183
| | | | | | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|-----------|-------------|--------|-----------|
| Arick Shao (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | October 7 | 13:10-14:00 | BA6183 | |
| Title: Breakdown Criteria for Nonvacuum Einstein Equations | | | | | | |
| Abstract: We extend a recent breakdown/continuation result of S. Klainerman and I. Rodnianski for the Einstein-vacuum equations to the Einstein-scalar and the Einstein-Maxwell equations. Roughly, the main theorem states that if an existing local solution of these equations satisfy certain uniform bounds for the second fundamental form, lapse, and matter field, then it can be further extended in time. This can also be reformulated as conditions that must be satisfied when such a solution blows up. In particular, in these nonvacuum settings, we encounter additional difficulties resulting from the nontrivial Ricci curvature and from the coupling between the Einstein and the matter field equations. | | | | | | |
| [ arXiv] | 2011_10_7_Shao_Notes | | | | | 2011_10_7 |
## September 30, Albin, 13:10-14:00 @BA6183
| | | | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|--------------|-------------|--------|------------|
| Pierre Albin [4] (University of Illinois at Urbana-Champaign) | 2011-2012 Analysis Applied Math Seminar | Friday | September 30 | 13:10-14:00 | BA6183 | |
| Title: The signature operator on stratified pseudomanifolds | | | | | | |
| Abstract: The signature operator of a Riemannian metric is an important tool for studying topological questions with analytic machinery. Though well-understood for smooth metrics on compact manifolds, there are many open questions when the metric is allowed to have singularities. I will report on joint work with Eric Leichtnam, Rafe Mazzeo, and Paolo Piazza on the signature operator on stratified pseudomanifolds and some of its topological applications. | | | | | | |
| [ arXiv] | 2011_09_30_Albin_Notes | | | | | 2011_09_30 |
## September 23, Galvao-Sousa, 13:10-14:00 @BA6183
| | | | | | | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|--------------|-------------|--------|------------|
| Bernardo Galvao-Sousa [5] (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | September 23 | 13:10-14:00 | BA6183 | |
| Title: Thin films for the Ginzburg-Landau model | | | | | | |
| Abstract: I will present recent results in collaboration with Stan Alama and Lia Bronsard on thin film London limits of the Ginzburg--Landau model. We obtain $\Gamma$--convergence results for the first and second critical fields under particular asymptotic ratios between the magnitude of the parallel applied magnetic field and the thickness of the film. For the first critical field, we study the optimal density of vortices via an obstacle problem for some examples to illustrate how the geometry of the domain will affect the position of vortices. | | | | | | |
| [ arXiv] | 2011_09_23_Galvao-Sousa_Notes | | | | | 2011_09_23 |
## September 16, Sutherland, 14:10-15:00 @BA6180
| | | | | | | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|--------------|-------------|--------|------------|
| Scott Sutherland [6] (SUNY Stonybrook) | 2011-2012 Analysis Applied Math Seminar | Friday | September 16 | 14:10-15:00 | BA6180 | |
| Title: Bounds on the cost of root finding | | | | | | |
| Abstract: We discuss a path-lifting method for finding an approximate zero of f(z)=0, where f is a complex polynomial, and show that the number of function evaluations required to locate a solution depends only on the geometry of the polynomial, not on the degree. | | | | | | |
| [ arXiv] | 2011_09_16_Sutherland_Notes | | | | | 2011_09_16 |
## September 16, Seis, 13:10-14:00 @BA6183
| | | | | | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|--------------|-------------|--------|------------|
| Christian Seis [7] (University of Toronto) | 2011-2012 Analysis Applied Math Seminar | Friday | September 16 | 13:10-14:00 | BA6183 | |
| Title: Rayleigh-Bénard convection: Bounds on the Nusselt number | | | | | | |
| Abstract: We consider Rayleigh--Bénard convection as modelled by the Boussinesq equations in the infinite-Prandtl-number limit. We are interested in the scaling of the average upward heat transport, the Nusselt number $Nu$, in terms of the non-dimensionalized temperature forcing, the Rayleigh number $Ra$. Experiments, asymptotics and heuristics suggest that $Nu \sim Ra^{1/3}$. This work is mostly inspired by two earlier rigorous work on upper bounds of $Nu$ in terms of $Ra$: 1.) The work of Constantin and Doering establishing $Nu \lesssim Ra^{1/3} \ln^{2/3}Ra$ with help of a (logarithmically failing) maximal regularity estimate in $L^{\infty}$ on the level of the Stokes equation. 2.) The work of Doering, Reznikoff and Otto establishing $Nu\lesssim Ra^{1/3}\ln^{1/3}Ra$ with help of the background field method. We present two results: 1.) The background field method can be slightly modified to yield $Nu\lesssim Ra^{1/3}\ln^{1/15}Ra$ --- which is optimal for the background flield method. 2.) The estimates behind the background field method can be combined with the maximal regularity in $L^{\infty}$ to yield $Nu\lesssim Ra^{1/3}\ln^{1/3}\ln Ra$ --- an estimate that is only a double logarithm away from the supposed optimal scaling. This is joint work with Felix Otto. | | | | | | |
| [ arXiv] | 2011_09_16_Seis_Notes | | | | | 2011_09_16 |
## September 09, Zworski, 13:10-14:00 @BA6183
| | | | | | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------|--------------|-------------|--------|------------|
| Maciej Zworski [8] (University of California at Berkeley) | 2011-2012 Analysis Applied Math Seminar | Friday | September 09 | 13:10-14:00 | BA6183 | |
| Title: A semiclassical proof of Quillen's theorem | | | | | | |
| Abstract: In this expository talk I give a semiclassical interpretation of Quillen's original proof of his 1968 theorem on decomposition of positive complex bi-homogeneous forms into sums of Hermitian squares. The result can be interpreted as a hermitian analogue of Hilbert's 17th problem and it was rediscovered by Catlin and D'Angelo in 1996. Both proofs are related to the Fourier–Bros–Iagolnitzer transform (FBI) transform, quantum harmonic oscillator and the calculus of Toeplitz operators. | | | | | | |
| [ arXiv] | 2011_09_09_Zworski_Notes | | | | | 2011_09_09 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8602732419967651, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/79408/evaluating-integral-of-greens-theorem?answertab=oldest
|
# Evaluating integral of Green's theorem
Applying Green's theorem, I've obtained a double integral of $$\iint_c 4ye^{-x^2 - y^2} \cos (2xy) dx dy = 0$$ along the curve $x^2 + y^2 \le R^2$.
Why is it equal to $0$?
The explanation I got was because "the integral in anti-symmetric (odd) in $y$ and the area of integration is symmetric in $y$."
Will anyone please tell me what does the above sentence means exactly? Thanks.
-
"Antisymmetric in $y$" means that $f(x,-y)=-f(x,y)$. Imagine cutting up the region of integration into a "left" and "right" portion. Note that the integral on the "left" is precisely the negative of the integral on the "right" due to the oddness. – J. M. Nov 6 '11 at 4:38
– J. M. Nov 6 '11 at 4:43
## 1 Answer
Maybe a few pics might be of use. Here's the surface whose volume you're trying to get:
Here are the "left" and "half" bits I was talking about in the comments:
This is the antisymmetry that was being referred to. The integrand on the left is precisely the negative of the integrand of the right. Due to this, the integral for the left section would be the negative of the integral for the right section, and adding those two integrals will yield the value of zero.
-
Hey! Thanks for the great graphics! really helped in my understanding. – adsisco Nov 6 '11 at 11:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951448380947113, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/interpolation?page=1&sort=votes&pagesize=15
|
# Tagged Questions
Questions on interpolation, the estimation of the value of a function from given input, based on the values of the function at known points.
2answers
163 views
### Generalization of $\frac{x^n - y^n}{x - y} = x^{n - 1} + yx^{n - 2} + \ldots + y^{n - 1}$
I thought about a generalization for the formula $$\frac{x^n - y^n}{x - y} = x^{n - 1} + yx^{n - 2} + \ldots + y^{n - 1}$$ It can be written as \frac{x^n - y^n}{x - y} = x^{n - 1} + yx^{n - 2} + ...
3answers
534 views
### Is it possible to have a rule which generates: 2, 4, 6, 8, 10, 12, 14, 16, -23?
This is on Lagrange Interpolations . . . Is it possible to have a rule which generates the sequence: 2, 4, 6, 8, 10, 12, 14, 16, -23? The hint that he gave us is to use Summation Products, the only ...
1answer
224 views
### interpolating the primorial $p_{n}\#$
The primorial $p_{n}\#$ is given by the product $p_n\# = \prod_{k=1}^n p_k$ (where $p_{k}$ is the $k$th prime) -- is there a natural (a la the gamma function $\Gamma(z)$) way of interpolating it for ...
2answers
164 views
### Negative value of $\sqrt[3]{20}$
Given $f(x)=\sqrt[3] x$, find an approximation of $\sqrt[3]{20}$ using Lagrange interpolation method. $x_0=0$, $x_1=1$, $x_2=8$, $x_3=27$ and $x_4=64$ $f(x_0)=0$, $f(x_1)=1$, $f(x_2)=2$, ...
1answer
260 views
### A Curious Binomial Sum Identity without Calculus of Finite Differences
Let $f$ be a polynomial of degree $m$ in $t$. The following curious identity holds for $n \geq m$, \begin{align} \binom{t}{n+1} \sum_{j = 0}^{n} (-1)^{j} \binom{n}{j} \frac{f(j)}{t - j} = (-1)^{n} ...
1answer
65 views
### Cauchy Integral Formula for Matrices
How do I evaluate the Cauchy Integral Formula $f(A)=\frac{1}{2\pi i}\int\limits_Cf(z)(zI-A)^{-1}dz$ for a matrix ...
1answer
185 views
### Is there a name for these polynomials?
Given $t \in \mathbb{R}[0,1]$, consider the following set of polynomials: \left[-{\left(t - 1\right)}^{2} t, {\left(t - 1\right)} {\left(t^{2} - t - 1\right)}, -{\left(t^{2} - t - ...
1answer
274 views
### Determining Coefficients of a Finite Degree Polynomial $f$ from the Sequence $\{f(k)\}_{k \in \mathbb{N}}$
Suppose $f$ is an unknown polynomial of degree $n$ (in one indeterminate) but the sequence $\{ f(k) \}_{k \in \mathbb{N}}$ is given. It is a nice exercise to show that one needs only the first $n+1$ ...
1answer
556 views
### Lyapunov's Inequality for Weak-Lp Spaces
Let $(X,\mu)$ be a measure space. Suppose that $0 < p_{0} < p < p_{1} < \infty$ and $\frac{1}{p} = \frac{1-\theta}{p_{0}} + \frac{\theta}{p_{1}}$ for some $\theta \in (0,1)$. If \$f \in ...
1answer
178 views
### What is the math used in Excel's GROWTH function?
I am trying to implement Microsoft Excel's GROWTH function in JavaScript. This function calculates predicted exponential growth by using existing data. What makes it tricky is that it must work with ...
0answers
55 views
### Runge's phenomen: interpolation error using Chebyshev nodes oscillates
We're trying to approximate the Runge function $f(x) = \dfrac{1}{1+25x^2}$ using Chebyshev nodes. When calculating the interpolation error, using different degrees ranging from 0 to 50, we get the ...
4answers
185 views
### What are some “natural” interpolations of the sequence $\small 0,1,1+2a,1+2a+3a^2,1+2a+3a^2+4a^3,\ldots$?
(This is a spin-off of a recent question here) In fiddling with the answer to that question I came to the set of sequences \$\qquad \small \begin{array} {llll} ...
3answers
236 views
### Are there smooth analogs to polynomial splines
Is possible to construct infinitely differentiable functions that interpolate through arbitrary points, the way polynomial splines do? If so, do they have a name and is there an algorithm for ...
1answer
157 views
### Polynomial interpolation
Let $P=[a,b]\times (c,d)$. Assume that we have given $n$ points $(x_1,y_1),...,(x_n,y_n)\in P$, such that $x_i\neq x_j$ for $i\neq j$; $i,j=1,...,n$. Does there exist a polynomial $f$ such that ...
1answer
310 views
### Hermite Interpolation of $e^x$. Strange behaviour when increasing the number of derivatives at interpolating points.
I am trying to understand Hermite Interpolation. Here is my pedagogical example. I want to approximate $f(x)=e^x$ on the domain $[-1,1]$ using Hermite interpolation. I choose the Chebyshev zeros ...
4answers
72 views
### Profinite and p-adic interpolation of Fibonacci numbers
On the topic of profinite integers $\hat{\bf Z}$ and Fibonacci numbers $F_n$, Lenstra says (here & here) For each profinite integer $s$, one can in a natural way define the $s$th Fibonacci ...
4answers
816 views
### Polynomial fitting where polynomial must be monotonically increasing
Given a set of monotonically increasing data points (in 2D), I want to fit a polynomial to the data which is monotonically increasing over the domain of the data. If the highest x value is 100, I ...
1answer
1k views
### linear interpolation in 3 dimensions
Say that I have 2 points in 3 dimensional space specified in Euclidean coordinates $p_0(x_0,y_0,z_0)$ and $p_1(x_1,y_1,z_1)$. How would I go about finding the coordinates of an unknown point that ...
1answer
1k views
### Natural cubic splines vs. Piecewise Hermite Splines
Recently, I was reading about a "Natural Piecewise Hermite Spline" in Game Programming Gems 5 (under the Spline-Based Time Control for Animation). This particular spline is used for generating a C2 ...
1answer
541 views
### MATLAB Hermite interpolation
Anyone know where I can find the Hermite interpolation algorithm in MATLAB. Which Hermite interpolation algorithm solves this? I need to calculate a polynomial. Example: ...
1answer
1k views
### 2D array downsampling and upsampling using bilinear interpolation
I am trying to understand how exactly the upsampling and downsampling of a 2D image I have, would happen using Bilinear interpolation. Now I am aware of how bilinear interpolation works using a 2x2 ...
1answer
43 views
### Linear, Bi-linear or better
I have been writing some code to do some interpolation of 2D data on an irregular grid. So far what I have done is: Triangulate the known points using Delaunay. Find the vertices of the triangles ...
1answer
61 views
### Nadirashvili surface
I'm referring to the article of N. Nadirashvili "Hadamard's and Calabi-Yau conjectures on negatively curved and minimal surfaces". In the proof of proposition 4.3 author use a theorem of Walsh. Now ...
1answer
83 views
### Fitting a surface to 2D measurements
I am looking for a way to fit a surface given a set of measured data $(x, y) \mapsto z$. A typical example would consist of anywhere between $10$ and $30$ measurements spread evenly over a disc. ...
1answer
135 views
### Polynomial interpolation of the residues of a rational function
Let $g(z) = a\prod_{i=1}^N (z-\lambda_i) \in \mathbb{Q}[z]$ be square-free. At each root $\lambda_i \in \mathbb{C}$, let $r_i$ denote the residue $\mathrm{Res}_{\lambda_i} 1/g(z)$. Let $I_g(z)$ ...
1answer
1k views
### What is the difference between natural cubic spline, Hermite spline, Bézier spline and B-spline?
I am reading a book about computer graphics. It is confusing about the various splines and their algorithms. What is the difference between natural cubic spline, Hermite spline, Bézier spline and ...
1answer
256 views
### How to minimize this function difference
Sorry about this somewhat lengthy introduction to my question. I thought it might be useful to know what I'm trying to do. I decided that I would like to have sequence of polynomials in \$\mathbb{P}_n ...
1answer
560 views
### What is Hermite data?
Using fairly simple language, what is Hermite data? I encountered it here, http://www.frankpetterson.com/publications/dualcontour/dualcontour.pdf and could not get an answer on standard StackExchange, ...
0answers
72 views
### Weak $L^1$ as real interpolation space between $L^p$-spaces?
Let $\Omega$ be a measure space. We denote $L^{p,q}$ the usual Lorentz space. We use a real interpolation method $(\cdot,\cdot)_{\theta,q}$. Suppose $1\leqslant p,q\leqslant \infty$. I know that if ...
2answers
481 views
### Need a formula for a quadratic spline
I'm trying to reproduce some results from a paper and I need an explicit formula for a specific quadratic spline to do so. The problem is, I've only got a plot of it. The quadratic spline is from ...
0answers
364 views
### Computation of coefficients of Lagrange polynomials
For our homework we should write a program, that creates Lagrange base polynomials $L_k(x)$ based on a few sampling points $x_i$. Now i am eager to develop a formula to be able to compute the ...
3answers
136 views
### Deriving an equation that satisfies many points
Say I have a collection of points, for example the following: (1, 167), (2, 11), (3, 255), etc Is it possible to construct an equation that satisfies all of ...
2answers
402 views
### given $y = a + bx + cx^2$ fits three given points, find and solve the matrix equation for the unknowns $a,b$, and $c$
Given $y = a + bx + cx^2$ fits three given points, find and solve the matrix equation for the unknowns $a$, $b$, and $c$. the equation fits the points $(1,0), (-1, -4),$ and $(2, 11)$ I really ...
1answer
298 views
### Natural Cubic Spline S on [0,2]
A Natural Cubic Spline S on $[0,2]$ is defined by: S(x)= $$S_0(x)=1+2x-x^3 \to 0 \leq x < 1$$ $$S_1(x)=2+b(x-1)+c(x-1)^2+d(x-1)^3 \to 1 \leq x \leq 2$$ Find b,c and d This question ...
2answers
544 views
### Why is Lagrange interpolation numerically unstable?
Here is my understanding of the polynomial interpolation problem: Interpolating by inverting the Vandermonde matrix is unstable because the Vandermonde matrix is ill-conditioned, so "difficult" to ...
1answer
47 views
### Interpolation with degree restriction
(using $f[x_1, ... , x_n]$ to denote the forward difference operator) I have a polynomial $P(x)$ interpolating $5$ points $x_0, ... , x_4$ and $2$ derivative values $x_0, x_3$ across an evenly spaced ...
1answer
128 views
### Why is this a linear interpolation?
Let $J_{k,n}$ be the dyadic partition of $[0,1]$, i.e. $n\in \mathbb{N}_0,k=1,\dots,2^n$, $J_{k,n}:=((k-1)2^{-n},k2^{-n}]$ and we denote with $\phi_{n,k}$ the Schauder functions over $J_{k,n}$, i.e. ...
1answer
203 views
### Lagrange's basis function and Interpolation
Let $x_0,...,x_n$ be distinct real numbers and $l_k(x)$ be the Lagrange's basis function. $δ_n = ∏^n _{k=0}(x-x_k)$. Prove that a - $\sum^n_{k=0}x^j_kl_k(x) ≡ x^j$. for $j = 0,1,...,n$ ...
3answers
170 views
### How can I find out 2 unknowns in a cubic equation?
I need to give a bit of a background first, so please bare with me. I have a set of values that represent servo motor position values. By default I end up with a large set of values and I'd like to ...
1answer
391 views
### Comparing the maximum error between Lagrange, Hermite, and Spline Interpolation Methods
I am reading Numerical Analysis by Atkinson. I am curious how do I choose the appropriate number of data points in each method so that I can make fair error comparisons? Some background: For each of ...
3answers
189 views
### creating smooth curves with f(0) = 0 and f(1) = 1
I would like to create smooth curves, which have f(0) = 0 and f(1) = 1. What I would like to create are curves similar to the gamma curves known from CRT monitors. I don't know any better way to ...
2answers
1k views
### Implementation of Monotone Cubic Interpolation
I'm in need to implement Monotone Cubic Interpolation for interpolate a sequence of points. The information I have about the points are x,y and timestamp. I'm much more an IT guy rather than a ...
1answer
46 views
### Is there a nice way to interpret this matrix equation that comes up in the context of least squares
So I am working on this problem with fitting a second degree polynomial of the form $y=a_1x^2+a_2x+a_3$ to four points using least squares. One of the parts of the problem is to write out the matrix ...
1answer
53 views
### A Question About Linear Interpolation
So lets say I have two points $A=(x_1, y_1, z_1)$ and $B=(x_2, y_2, z_2)$. $A$ and $B$ are each associated with some scalar value $K_1$ and $K_2$. $K_1$ is negative and $K_2$ is positive and all the ...
1answer
114 views
### Automatic calculation of the intersection of discrete curves
first of all, let me apologize for a poor math-english translation, I'll try my very best. I have the following situation: I have over 16.000 data files which I generated from a biometric ...
1answer
67 views
### Original Proof of Riesz-Thorin
Wikipedia says that Riesz proved the Riesz-Thorin theorem in 1926 without using any complex methods. Does anyone know where the original proof can be found? ...
1answer
103 views
### Divided difference coefficient of product of two functions
For any function $f$ and distinct reals $x_1,\ldots,x_n$, denote by $f[x_0,\ldots,x_n]$ the coefficient of $x^n$ of the minimal polynomial interpolating $f$ at $x_0,\ldots,x_n$. Let $f$ and $g$ be ...
1answer
266 views
### Determining whether a function is Piecewise Polynomial
I am trying to determine whether or not a function is piecewise polynomial. The function is as below: Let $\ X$ be a continuous random variable with support on $\ \Omega_x$, and with corresponding ...
3answers
301 views
### Interpolation to a power function
We have an experiment which have the variables $x$ and $y$. $x$ and $y$ can be measured into pair $(x_i,y_i)$. Now I'm finding a way to interpolate it into a power function $y=a+bx^c$. Which $a,b,c$ ...
2answers
81 views
### Interpolating polynomials
So I have this question on a homework and I just can't seem to figure it out. Let $f \in C^4 [0,1]$ and let $p$ be a polynomial of degree $\le 3$ such that $p(0) = f(0)$, $p(1) = f(1)$, \$p'(0) ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 128, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8930990099906921, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/21247/ramified-primes-in-the-chebotarev-density-theorem
|
## Ramified primes in the Chebotarev Density Theorem
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I am trying to use the Chebotarev Density Theorem to say something about the Galois groups of a class of polynomials. To be more precise, by factoring a polynomial mod some prime p, I want to show that there is an element in the Galois group of the polynomial with a certain cycle structure. Unfortunately my knowledge of algebraic number theory is rather thin!
In the statement of the theorem, it is required that p be unramified. My question is this: if I can prove that there are no repeated factors in the factorization of the polynomial mod p, does it matter that I don't know whether p divides the discriminant in general or not?
In other words, is the requirement that p be unramified purely to avoid repeated factors, or is there more to it than that?
-
8
If there are no repeated factors of the polynomial modulo $p$ then certainly $p$ does not divide the discriminant. – Robin Chapman Apr 13 2010 at 19:01
## 3 Answers
Adam, the requirement that $p$ be unramified in the number field is to explain the existence of an element (really, conjugacy class) in the Galois group with a certain cycle structure on the roots of a generator for the number field. The way this element of the Galois group is constructed requires algebraic number theory, but it can be translated into a more elementary-sounding proposition about factoring a polynomial mod $p$ at the expense of giving up on being able to apply the result to a few primes for which the method really does work at a more technical level.
If $K = {\mathbf Q}(\alpha)$ and $\alpha$ is an algebraic integer with minimal polynomial $f(x)$ in ${\mathbf Z}[x]$, the elementary proposition is that if $p$ is a prime number such that $f(x) \bmod p$ is a product of distinct irreducibles with degrees $d_1,\dots,d_r$ then there's an element of the Galois group of the Galois closure of $K/{\mathbf Q}$ whose cycle structure on $\alpha$ and its ${\mathbf Q}$-conjugates consists of disjoint cycles of length $d_1,\dots,d_r$.
The more advanced proposition, which makes no reference to polynomials mod $p$, is that if $p$ is a prime number unramified in $K$, so necessarily $p{\mathcal O}_K = {\mathfrak p}_1\cdots {\mathfrak p}_r$ for some distinct primes ${\mathfrak p}_i$ with residue field degrees $d_i$, then there is an element of the Galois group of the Galois closure of $K/{\mathbf Q}$ whose permutation action on $\alpha$ and its ${\mathbf Q}$-conjugates is a product of disjoint cycles with lengths $d_i$.
The link between the elementary and advanced propositions is: $\text{disc}(f) = [{\mathcal O}_K:{\mathbf Z}[\alpha]]\text{disc}(K)$. This equation implies that if $f(x) \bmod p$ has distinct irreducible factors then $p$ doesn't divide $\text{disc}(f)$ and therefore also doesn't divide the discriminant of $K$, so $p$ is unramified in $K$. Moreover, $p$ doesn't divide that ring index, which implies that the shape of the factorization of $p{\mathcal O}_K$ matches the shape of the factorization of $f(x) \bmod p$. So under the condition that $f(x) \bmod p$ has distinct irreducible factors the elementary and advanced propositions are both applicable (their hypotheses are both satisfied) and lead to the same conclusion: both propositions imply the existence of an element of the Galois group with the same cycle structure as a permutation of the ${\mathbf Q}$-conjugates of $\alpha$. Primes at which the elementary proposition hold are always primes at which the advanced proposition holds, but not conversely: there can be primes $p$ which are unramified in $K$ (that is, $p$ doesn't divide $\text{disc}(K)$) but the reduced polynomial $f(x) \bmod p$ has multiple irreducible factors (that is, $p$ divides $\text{disc}(f)$), so the advanced proposition can be applied to this prime $p$ but the elementary one can not.
Incidentally, for a ramified prime $p$ in $K$ with ramification indices $e_1,\dots,e_r$ and respective residue field degrees $d_1,\dots,d_r$, it's natural to ask if there might be an element of the Galois group of the Galois closure of $K/{\mathbf Q}$ whose permutation action on the ${\mathbf Q}$-conjugates of $\alpha$ is a product of disjoint cycles where there are $e_i$ cycles of length $d_i$ for all $i$. I have a copy of a letter Serre sent to Thomas Hawkins in 2000 which outlines a method to give a counterexample where $[K:{\mathbf Q}] = 6$. That means this naive attempt to extend the Galois group existence technique to ramified primes doesn't generally work.
Here is an explicit example: $K = {\mathbf Q}(a)$ where $a^6 - 35a^4 + 3a^2 - 225 = 0$. This field has degree 6 and Galois group $S_4$ over ${\mathbf Q}$. This Galois group acts on the roots in the way $S_4$ naturally permutes the 6 two-elements subsets of {1,2,3,4}: this is an embedding of $S_4$ into $A_6$, which will be important.
Using PARI, 3 factors in the integers of $K$ as $P^2Q$ where $P$ and $Q$ both have residue field degree 2. Now if there were a "corresponding" element of the Galois group of $K$ over ${\mathbf Q}$ as dreamed above, it would permute the 6 roots as a product of three disjoint 2-cycles. But alas, that is not an even permutation of the roots and I already said the Galois group is $S_4$ acting on the roots as a subgroup of $A_6$, so entirely by even permutations. Thus we have a contradiction so there is no such "dream automorphism" associated to 3 in the Galois group.
By the way, this degree 6 polynomial did not come out of nowhere: it is related to a 3-adic approximation of another polynomial, but that connection would take longer to describe than I wish to write about here, as this answer is already pretty long.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Hi Adam,
I think you will be happy with Lenstra's short and beautiful text on the theorem, see here.
Look at fact 2.1.
-
This is not the answer you are looking for. Go read KConrad's.
-
1
This is a bit subtle. The definition of unramified is that there are no repeated prime ideals in the factorization of (p). If the ring of integers of K is of the form Z[a]/f(a), then this is equivalent to saying that f(x) has no repeated factors mod p. If Z[a]/f(a) is a subring of O_K, then f(x) having no repeated factors mod p implies that p is unrammified, but not vice versa. Fortunately, the direction Adam needs is the true one. – David Speyer Apr 13 2010 at 22:25
In the previous comment, "is a subring of O_K" should read "is a full rank subring of O_K". – David Speyer Apr 13 2010 at 22:28
7
This is not the Conrad you are looking for. – Craig Westerland Apr 14 2010 at 4:13
My comment was to a previous version of Ben's answer. – David Speyer Apr 14 2010 at 10:54
Ack! My brain was clearly not having a good night! – Ben Webster♦ Apr 14 2010 at 14:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417761564254761, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/79030-changing-integral-rectangular-spherical.html
|
# Thread:
1. ## changing an integral from rectangular to spherical
I have a triple integral in terms of x,y and z and need to convert it to spherical coordinates.
it is the integral from 0 to 2 and then from 0 to (4 - x^2)^(1/2) inner bounds from 0 to (4 - x^2 - y^2)^(1/2) of (x^2 + y^2 + z^2)^(1/2) dz dy dx
Since the integral is of roe spherical will work well I think!!!
So, we will integrate roe^3 sin phi d roe d phi d theta
My lower bounds must be all zeros.
Can someone please go through the thinking process to determine my upper bounds?
Once I have these I can do the integration no problem. Without them I am stuck! Frostking
2. $\int_0^2 \int_0^{\sqrt {4-x^2}} \int_0^{\sqrt{4-x^2-y^2}} \sqrt{x^2+y^2+z^2} dz dy dx$
$\phi$ varies from 0 to $\pi.$.
$\theta$ varies from 0 to $2 \pi$.
$\rho$ varies from 0 to 2.
Hence the integral is:
$\int_0^{2\pi} \int_0^{\pi} \int_0^{2} \rho^3 sin \phi \ d \rho d \phi d \theta$
EDIT: I've changed the $\rho$ limits to how they should be.
3. Shouldn't you have $\rho$ ranging from 0 to 2?
--Kevin C.
4. Ah yes, it isn't a unit sphere is it!!
*doi*
5. ## limits of integral
Yes 0 to 2 pi is what the key has for roe but it has 0 to pi /2 for both the others and I still do not understand how to get any of these??? Any explanation would be very much appreciated. Frostking
6. I've only started doing these myself, hence why I made a mistake!
The limits of $\rho$, $\theta$ and $\phi$ are found from the definition of spherical polar co-ordinates (see the video below):
7. Your only dealing with 1/8 of a sphere (first octant only) and in the xy plane the region is a quarter of a circle so the limits should be
$\int_0^{\pi/2} \int_0^{\pi/2} \int_0^{2} \rho^3 sin \phi \ d \rho d \phi d \theta$
8. Sorry, but how is it $\frac{1}{4}$ of a circle?
9. Originally Posted by Showcase_22
Sorry, but how is it $\frac{1}{4}$ of a circle?
$\int_0^2 \int_0^{\sqrt {4-x^2}} \int_0^{\sqrt{4-x^2-y^2}} \sqrt{x^2+y^2+z^2} dz dy dx$ so from your limits
$0 \le x \le 2,\; 0 \le y \le \sqrt{4-x^2}$, a circle in the first quadrant, i.e. a 1/4 of a circle.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9346178770065308, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/59295/list
|
## Return to Answer
3 added 7 characters in body
Yes, it's a general construction which is related to so-called Isbell conjugation.
Let $C$ be a small category. It is well-known that the free colimit cocompletion is given by the Yoneda embedding into presheaves on $C$, $y: C \to Set^{C^{op}}$. The presheaf category is also complete. Dually, the free limit-completion is given by the dual Yoneda embedding $y^{op}: C \to (Set^C)^{op}$. The co-presheaf category is also cocomplete.
Therefore there is a cocontinuous functor $L: Set^{C^{op}} \to (Set^C)^{op}$ which extends $y^{op}$ along $y$. This is a left adjoint; its right adjoint is the (unique up to isomorphism) functor $R: (Set^C)^{op} \to Set^{C^{op}}$ which extends $y$ continuously along $y^{op}$. This adjoint pair is called Isbell conjugation.
As is the case for any adjoint pair, this restricts to an adjoint equivalence between the full subcategories consisting, on one side, of objects $F$ of $Set^{C^{op}}$ such that the unit component $F \to R L F$ is an iso, and on the other side of objects $G$ of $(Set^C)^{op}$ such that the counit $L R G \to G$ is an iso. Either side of this equivalence gives the Dedekind-MacNeille completion of $C$. By the Yoneda lemma, $y: C \to Set^{C^{op}}$ factors through the full subcategory of DM objects as a functor $C \to DM(C)$ which preserves any limits that exist in $C$, and dually $y^{op}: C \to (Set^C)^{op}$ factors as the same functor $C \to DM(C)$ which preserves any colimits that exist in $C$.
Edit: Perhaps it might help to spell this out a little more. The classical Dedekind-MacNeille completion is obtained by taking fixed points of a Galois connection between upward-closed sets and downward-closed sets of a poset $P$. So, if $A$ is downward-closed (i.e., a functor $A: P^{op} \to \mathbf{2}$), and $B: P \to \mathbf{2}$ is upward-closed, we define
$$A^u = \{p \in P: \forall_{x \in P} x \in A \Rightarrow x \leq p\}$$
$$B^d = \{q \in P: \forall_{y \in P} y \in B \Rightarrow q \leq y\}$$
and one has
$$A \subseteq B^d \qquad \text{iff} \qquad A \times B \subseteq (\leq) \qquad \text{iff} \qquad B \subseteq A^u$$
We thus have an adjunction
$$(L = (-)^u: \mathbf{2}^{P^{op}} \to (\mathbf{2}^P)^{op}) \qquad \dashv \qquad (R = (-)^d: (\mathbf{2}^P)^{op} \to \mathbf{2}^{P^{op}})$$
and the poset of downward-closed sets $A$ for which $A = (A^u)^d$ is isomorphic to the poset of upward-closed sets $B$ for which $(B^d)^u = B$.
All of this can be "categorified" so as to hold in a general enriched setting, where the base of enrichment is a complete, cocomplete, symmetric monoidal closed category $V$. We may take for example $V = Set$. Analogous to the formation of $B^d$, we may define half of the Isbell conjugation $R: (Set^C)^{op} \to Set^{C^{op}}$ by the formula
$$R(G) = \int_{d \in C} \hom(-, d)^{G(d)}$$
where $\hom$ plays the role of the poset relation $\leq$, exponentiation or cotensor plays the role of the implication operator, and the end plays the role of the universal quantifier. The other half $L: Set^{C^{op}} \to (Set^C)^{op}$ is also defined, at the object level, by
$$L(F) = \int_{c \in C} \hom(c, -)^{F(c)}$$
(the right-hand side is a set-valued functor $C \to Set$; when we interpret this in $(Set^C)^{op}$, the end is interpreted as a coend, and the cotensor is interpreted as a tensor). In any event, given $F: C^{op} \to Set$ and $G: C \to Set$, we have natural bijections between morphisms
$$\{F \to R(G)\} \qquad \cong \qquad \{F \times G \to \hom\} \qquad \cong \qquad \{G \to L(F)\}$$
and the analogue of the MacNeille completion is obtained by taking "fixed points" of the adjunction $L \dashv R$, as described above by full subcategories where the unit and counit $F \to RLF$ and $LRG \to G$ become isomorphisms. These full subcategories are equivalent; one side of the equivalence is complete because it is the category of algebras for an idempotent monad associated with $RL$, and the other side is cocomplete because it is the category of coalgebras for an idempotent comonad associated with $LR$, and thus both sides are complete and cocomplete.
2 added 2676 characters in body; deleted 6 characters in body
Edit: Perhaps it might help to spell this out a little more. The classical Dedekind-MacNeille completion is obtained by taking fixed points of a Galois connection between upward-closed sets and downward-closed sets of a poset $P$. So, if $A$ is downward-closed (i.e., a functor $A: P^{op} \to \mathbf{2}$), and $B: P \to \mathbf{2}$ is upward-closed, we define
$$A^u = \{p \in P: \forall_{x \in P} x \in A \Rightarrow x \leq p\}$$
$$B^d = \{q \in P: \forall_{y \in P} y \in B \Rightarrow q \leq y\}$$
and one has
$$A \subseteq B^d \qquad \text{iff} \qquad A \times B \subseteq (\leq) \qquad \text{iff} \qquad B \subseteq A^u$$
We thus have an adjunction
$$(L = (-)^u: \mathbf{2}^{P^{op}} \to (\mathbf{2}^P)^{op}) \qquad \dashv \qquad (R = (-)^d: (\mathbf{2}^P)^{op} \to \mathbf{2}^{P^{op}})$$
and the poset of downward-closed sets $A$ for which $A = (A^u)^d$ is isomorphic to the poset of upward-closed sets $B$ for which $(B^d)^u = B$.
All of this can be "categorified" so as to hold in a general enriched setting, where the base of enrichment is a complete, cocomplete, symmetric monoidal closed category $V$. We may take for example $V = Set$. Analogous to the formation of $B^d$, we may define half of the Isbell conjugation $R: (Set^C)^{op} \to Set^{C^{op}}$ by the formula
$$R(G) = \int_{d \in C} \hom(-, d)^{G(d)}$$
where $\hom$ plays the role of the poset relation $\leq$, exponentiation or cotensor plays the role of the implication operator, and the end plays the role of the universal quantifier. The other half $L: Set^{C^{op}} \to (Set^C)^{op}$ is also defined, at the object level, by
$$L(F) = \int_{c \in C} \hom(c, -)^{F(c)}$$
(the right-hand side is a set-valued functor $C \to Set$; when we interpret this in $(Set^C)^{op}$, the end is interpreted as a coend, and the cotensor is interpreted as a tensor). In any event, given $F: C^{op} \to Set$ and $G: C \to Set$, we have natural bijections between morphisms
$$\{F \to R(G)\} \qquad \cong \qquad \{F \times G \to \hom\} \qquad \cong \qquad \{G \to L(F)\}$$
and the analogue of the MacNeille completion is obtained by taking "fixed points" of the adjunction $L \dashv R$, as described above by full subcategories where the unit and counit $F \to RLF$ and $LRG \to G$ become isomorphisms. These full subcategories are equivalent; one side of the equivalence is complete because it is the category of algebras for an idempotent monad associated with $RL$, and the other side is cocomplete because it is the category of coalgebras for an idempotent comonad associated with $LR$, and thus both sides are complete and cocomplete.
1
Yes, it's a general construction which is related to so-called Isbell conjugation.
Let $C$ be a small category. It is well-known that the free colimit cocompletion is given by the Yoneda embedding into presheaves on $C$, $y: C \to Set^{C^{op}}$. The presheaf category is also complete. Dually, the free limit-completion is given by the dual Yoneda embedding $y^{op}: C \to (Set^C)^{op}$. The co-presheaf category is also cocomplete.
Therefore there is a cocontinuous functor $L: Set^{C^{op}} \to (Set^C)^{op}$ which extends $y^{op}$ along $y$. This is a left adjoint; its right adjoint is the (unique up to isomorphism) functor $R: (Set^C)^{op} \to Set^{C^{op}}$ which extends $y$ continuously along $y^{op}$. This adjoint pair is called Isbell conjugation.
As is the case for any adjoint pair, this restricts to an adjoint equivalence between the full subcategories consisting, on one side, of objects $F$ of $Set^{C^{op}}$ such that the unit component $F \to R L F$ is an iso, and on the other side of objects $G$ of $(Set^C)^{op}$ such that the counit $L R G \to G$ is an iso. Either side of this equivalence gives the Dedekind-MacNeille completion of $C$. By the Yoneda lemma, $y: C \to Set^{C^{op}}$ factors through the full subcategory of DM objects as a functor $C \to DM(C)$ which preserves any limits that exist in $C$, and dually $y^{op}: C \to (Set^C)^{op}$ factors as the same functor $C \to DM(C)$ which preserves any colimits that exist in $C$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 94, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442651271820068, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/7358/causality-and-quantum-uncertainty/7361
|
# Causality and Quantum uncertainty [duplicate]
Possible Duplicates:
Why quantum entanglement is considered to be active link between particles?
Why can't the outcome of a QM measurement be calculated a-priori?
Why do some (the majority of?) physicists conclude non-determinism from quantum uncertainty?
If we can't measure something, it seems to me like it's just a reflection of our ignorance.
Yet, from what I've read and seen, physicists actually interpret that as a reflection of the underlying non-determinism.
I can't understand why this is. It seems to violate the fundamental axiom of science. Specifically causality.
According to Wikipedia's page on Uncertainty Principle:
(..) Certain pairs of physical properties, such as position and momentum, cannot be simultaneously known to arbitrarily high precision. (...) The principle implies that it is impossible to determine simultaneously both the position and the momentum of an electron or any other particle with any great degree of accuracy or certainty.
This seems like a statement about a limit of our instruments and knowledge.
I see no reason to conclude "the underlying nature of reality is inherently nondeterministic" from "we cannot make precise measurement".
It seems like Einstein was pretty much the only prominent physicist who rejected it.
Again, to quote Wikipedia:
Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality.
I find myself naturally agreeing with Einstein.
I see no logical pathway from "we can't measure nature" to "nature is random".
So why do physicists adopt this view?
Please provide an answer that is clear and simple, and not smothered by formality.
Please note: this question is not for debating. I'm genuinely asking why is this the prevalent point of view? I honestly see no logical pathway to that conclusion at all.
For example, consider the question "Is it day or night in Cairo right now?", and assume we don't know what causes day and night. If we try to find out the answer to this question at any point in time, there's 50% chance that it's day and 50% chance that it's night. This doesn't mean that there's no answer to the question until we check, it simply means we don't really know what causes day and night to occur in Cairo at any given point in time.
So why, oh why, would anyone conclude from this thought experiment, that Cairo has the fundamental property that day and night are non-deterministic in it?
It seems patently clear that there's an awful lot about sub-atomic particle that we don't know. If we knew more, perhaps we could come up with better ways to measure things.
EDIT:
To clarify what I mean by non-determinism:
If a particle has 30% chance of being "here", and 70% chance of being "there", then I would assume that there's some underlying reason that determines where the particle is. But the prevalent view (as I understand it) is that there's no underlying reason, the particle just happens to sort of "choose" to be "here" 30% of the time and "there" 70% of the time with no particular reason. (I find this view absurd)
-
3
– Marek Mar 21 '11 at 14:52
Hasn't this question been asked on this site before? Stackexchange software is set up so we don't have to keep answering the same questions. – Peter Shor Mar 21 '11 at 18:10
@Peter Shor, if you think it has been asked and answered before, give a link. I did search before posting. – hasen j Mar 21 '11 at 20:25
1
@Peter Shor For myself I prefer to think about the details of each question. I'm sure there are other Questions that are similar to this one, but small differences can lead to rather different Answers, or at least to me giving Answers that are different enough that I learn something. I hope and aim to hit the sweet spot where both the Questioner and I learn something, and perhaps even others, but it's good enough if one of us does. We individually have the option of not Answering if we don't like or don't know enough to address a Question. The flow of Questions is fascinating, to me. – Peter Morgan Mar 22 '11 at 2:37
2
Unlike the other stackexchanges I've seen, physics doesn't seem to be in the habit of closing questions because they are duplicates. This may be something to bring up on meta. – Peter Shor Mar 22 '11 at 12:36
show 5 more comments
## marked as duplicate by David Zaslavsky♦Mar 23 '11 at 0:17
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 3 Answers
A short answer (which has just become somewhat longer) is that there is more than just the Uncertainty Principle thought experiment involved in the non-determinism deduction. If it were only that then physicists might just conclude that it was some classical wave phenomenon (which it is in a way) that gave rise to the HUP. The key other factor at work is the mysterious object called $\Psi$ which obeys an equation and is said to describe all of (quantum) physics. That is all the atomic experiments (spectroscopy etc) can be calculated from it and its equation (called the Schrodinger equation).
The factor then is that $\Psi$ is not like other objects in traditional physics - these give a specific answer to a specific question. Thus a clock measures the time, and a calculation based on that can determine whether it is or is not night time in Cairo (the date and other physics input might be needed for this in general). But for physics involving $\Psi$ all that happens is a probability! So there is maybe a 80% probability of result X and 20% result not-X. EDIT: I shall have more to say on Einstein's position on this topic after some references at the end of the answer.
Clearly physicists want to know whether this is some kind of approximation (as Einstein viewed) or whether it is a final result. In some way the maximum possible answer being just a probability. You will find many Stack questions which take this to the next level involving subtle tests of this probability (like Bell's Theorem). Many things are known from these theorems and corresponding experiments, and $\Psi$ clearly gives non-deterministic results.
So the HUP is simply one consequence of the basic properties of $\Psi$ giving somewhat of an explanation of that result. If you want to study this more note that words like "determinism" and "non-determinism" would need to be analysed a little bit carefully.
EDIT: Some references:
The primary topic to study further is Einstein's thought experiment to advance his perspective. This is known as the Einstein-Podolsky-Rosen (EPR) experiment. It has become famous in exactly this area: http://en.wikipedia.org/wiki/EPR_paradox
Derived from this have been further theorems primarily: Bell's Theorem : http://en.wikipedia.org/wiki/Bell_theorem This theorem is subject to "debate" from several directions. It rules out what are called "hidden variables", subject to certain conditions.
This is an old stack question related to yours: Will Determinism be ever possible?
A more recent one related to the Bell Theorem topics is (which I havent studied yet):
Is contextuality required in quantum mechanics?
Another theorem related to this question is the Kocken-Specker (Conway) theorem, also discussed here: What does John Conway and Simon Kochen's "Free Will" Theorem mean?
EDIT (after some clarification on the Question):
So there are two parts to this: (a) Einstein's reaction to the points mentioned above and (b) why "most physicists" dont accept that reaction.
Einstein's reaction was to develop some thought experiments, essentially the EPR thought experiment, which has become famous. This makes the claim that even although QM is accurate and even complete in its own terms, it is incomplete as a physical theory. The general idea was to show that uncertainty in a value (or even the impossibility to obtain a value) was somehow just a consequence of Quantum Mechanics as it then stood (and still stands). The EPR thought-experiment was intended to show that there was other "physical information" out there which was not being captured by $\Psi$.
Initially I think that physicists just ignored this paper, as no experiments were done or further development was provided. There was also a theorem by John Von Neumann which supported the view that there could be no "hidden variables" - which is what the Einstein position seemed to imply. (A hidden variable might be if $\Psi(x) = \Psi(x,h)$ where somehow we only measure $\Psi(x)$ probabilistically, but $\Psi(x,h)$ is more deterministic - but we never see or determine h, maybe. A few options here as one sees.) This wasnt quite what EPR claimed, but Bohm did develop a version of QM which had a hidden field (not quite a variable) which stochastically determined things. This was a counterexample, of a sort, to the von Neumann theorem. This is now called the "Bohmian Interpretation" of QM and showed that "Interpretations" might exist which were different from the ones known in earlier days of QM.
Bell examined this argument in the 1960s with a Theorem to update von Neumann and clarify what was excludable by the maths of QM itself. The Kochen-Specker theorem also derived similar results in a special finite case. When the Bell theorem relations were tested experimentally it showed that the Bell theorem was correct experimentally. This is why most physicists consider this topic closed.
Some philosopher-physicists and some mathematical physicists like to study remaining loopholes in these theorems and their experiments, and so the topic is under subtle discussion from that perspective. Finally it is known that Quantum Mechanics is not Quantum Gravity so it is possible that some modification of QM might be required before it becomes QC. Maybe something in that modification will also add some subtletly to the discussion.
-
Pretty good but you might consider linking to some of those questions (or some other references), I guess. – Marek Mar 21 '11 at 14:47
@Marek, yes there are so many I will take a moment to find the best few and add some links as an Edit. – Roy Simpson Mar 21 '11 at 14:48
"all that happens is a probability" what makes anyone say for sure there's nothing else? It seems patently clear that there's an awful lot we don't know about sub-atomic particles, and future discoveries could bridge this gap. – hasen j Mar 21 '11 at 14:48
@hasen j, that latter is a good question, so I am working on some Stack links for you to study. This question was about the HUP, these other topics are related. – Roy Simpson Mar 21 '11 at 14:51
Your summary about Bell's Theorem is the closest thing to an answer. Maybe you could rewrite your answer and center it around that point? – hasen j Mar 21 '11 at 15:41
show 3 more comments
Hasen j hacker-not-engineer. I suspect the reason is that causality in the classical sense hasn't been found to be useful enough for Physics and Engineering to justify the extra baggage. It's enough for practical purposes to construct good models for the statistics of experimental data, and the most effective mathematics for doing this is the mathematics of Hilbert spaces.
Interpretations of quantum theory exist —such as the de Broglie-Bohm and the Nelson stochastic interpretation, but there are various others— that will more-or-less fulfil what looks rather like a wish for classical causality in your Question, but they add a relatively awkward layer of mathematics that doesn't, for most Physicists, add enough insight or ability to do better Physics, Mathematics, or Engineering. There are other detailed reasons why not many physicists use these interpretations, particularly concerning Special Relativity.
We can model an experiment in a way that is as close to classical as we feel like having, while staying in the quantum theory fold, by introducing increasingly detailed models of every part of an experimental apparatus, instead of just modeling small numbers of electrons or photons, etc. This is, very loosely, called "moving the Heisenberg cut". This is quantum theory's version of "hidden variables"; model refinement happens, it's just not done by introducing classical hidden variables, it's done by introducing quantum mechanical variables that weren't in the less refined models. Consequently, there's no strong need to introduce classical hidden variables, even though it's not impossible to do so. This is loosely tied to a currently popular interpretation of what happens in quantum mechanical measurements, decoherence, which says that classical properties emerge because of the huge numbers of degrees of freedom that surround quantum systems.
You should understand that I've insinuated some unconventional views in this presentation of the conventional view. Also, there are more people in the Physics community than you might think who have committed their whole working lives trying to find good reasons why the Physics community should do Physics more in a way that, crudely, Einstein would approve of, so far without much success. New approaches and arguments emerge fairly regularly, are considered seriously by people who would jump if they could see practical advantage in it, and are found not good enough. The de Broglie-Bohm approach is essentially a product of the 50s, the Nelson approach is essentially a product of the 60s, from the 80s we have the GRW approach (apologies to anyone whose favorites I've missed out); it's more difficult to give a name to an approach from the 90s and 00s that people might cite in 20 years time, but there are many candidates, however none of them is obviously more useful.
EDIT: Try looking at all the interpretations listed on the Wikipedia page http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics, and think whether there's a way in which any of them helps to make the construction of models more practical. The Wikipedia presentations are not necessarily the best available, but the best presentations are probably not in a different ballpark as far as the level of complexity involved is concerned. It's OK to be annoyed that there's not anything better available, but constructing something better is real hard. Not a lot different from the situation for large-scale software, perhaps, where the balance of features and simplicity is also hard to master. Wanna displace Microsoft? We have to go ahead and try.
EDIT(2): In response to your EDIT,
If a particle has 30% chance of being "here", and 70% chance of being "there", then I would assume that there's some underlying reason that determines where the particle is. But the prevalent view (as I understand it) is that there's no underlying reason, the particle just happens to sort of "choose" to be "here" 30% of the time and "there" 70% of the time with no particular reason.
As I see it, this introduces complications, because what the conventional view is depends on whether you put a question in terms of particles or in terms of quantum fields. Consider this restatement, which changes the single reason that you invoke to cause events, a particle, into a confluence of multiple or a continuum of reasons, which is much more appropriate to a field,
If there is a 30% chance of observing an event "here", and a 70% chance of observing an event "there", then I would assume that there are underlying reasons that determine whether we observe an event "here" or "there".
There is rather too little interpretation of quantum field theory, but it's to QFT that Physicists retreat when people press for details. The state of the quantum field describes where we can expect to see events (OK, to stay in the conventional I should say particles, so you can stop reading now, but it's fairly widely understood that the concept of a particle in QFT is theoretically problematic, whereas statistics of recorded events are measured) when we use a particular experimental apparatus. Now, if you want a reason why events happen "here" or "there", you can, if you're careful to understand that a quantum field is an operator-valued distribution, not a classical field, say that the statistics we observe are because of the quantum field, which is everywhere between the preparation apparatus and the events in the measurement apparatus. Art Hobson is the only person I know who has published on this, although some of my papers falteringly hint towards this kind of approach. Art's papers are not perfect, but they're available on his web-site, try "Teaching quantum physics without paradoxes" and try others if you like that. Brigitte Falkenburg's book, Particle Metaphysics may be too much Philosophy for most people, but I find it a very good counterpoint to Art Hobson.
This says only that the statistics of the events are caused by the quantum field. I'm ambivalent about this, I'd prefer to say only that the quantum field describes or models the statistics of the events, but you do what you like. If you want to know what causes individual events, then I'm going to leave you on your own. I think something more satisfactory than deBB-type trajectories for fields can be done in the field context, but I haven't done it. Actually, I've been trolling around here at PhysicsSE for a week or so while I recoup my spirits for a new thrash at issues that halted me a few weeks ago (I've been 20 years at it, so don't hold your breath). My approach is certainly not conventional, and you can find so many other approaches out there that there is no reason at all to read what I have to say about it. If we stay with conventional Physics, however, as you see in some of the other Answers, we can hardly engage with your Question at all.
-
I can see that I shall have to study this Nelson approach after all the marketing it has received recently! – Roy Simpson Mar 21 '11 at 16:41
Roy, haha! It was just at the front of my mind, I'm not marketing it. It's something that is not well-known but I think to be "academically well-rounded" (if one wants to be!) one should understand that it's possible. I think it's fair, however, that it's not that well-known. – Peter Morgan Mar 21 '11 at 16:57
Peter, since you are looking for a criticism (or at least comment) on your answer: basically you are advocating a statistical interpretation of QM and leaving open the interpretation of what happens to individual particles. The OP really wanted to know (I think) whether "all physicists" now believed in a non-deterministic interpretation of QM as the fundamental one, and if so why. So your answer slightly dodges this. Incidentally havent some Answers been deleted from here since we wrote ours? – Roy Simpson Mar 22 '11 at 14:04
@Roy, Yep, a couple have gone. Fine if Isaac doesn't want to go at it, but I'm curious. I'm not (specially) advocating a statistical interpretation, but the OP effectively asks why Physicists don't, for the most part, go for causal, non-statistical interpretations, and I tried to rock and roll with that. Usefulness for engineering, in some broad sense, is key, IMO, although Decoherence is rather causal, and the persistence of the causal phrase "Particle Physics" seems telling. Isaac's EDIT shifts the question somewhat, so I let myself ride that wave. Your Answer is useful, and I like it. – Peter Morgan Mar 22 '11 at 14:34
I would have thought that Engineers would be quite happy with a fully deterministic explanation of the underlying physics were such available. I think that you are really saying that the statistical QM maths is good enough for getting on with the engineering, which seems to be true enough; although another Stack Q on difficulties in Quantum Computing is making me rethink that point. – Roy Simpson Mar 22 '11 at 14:41
show 1 more comment
In order to test the speed and position of a particle, you need to hit it with a photon. In doing so, you change the outcome of where the particle is, where it's going and its speed. Only by not measuring it does the particle maintain its true characteristics. So, in this way, it is not merely the inaccuracy of the detector which causes the uncertainty, but the act of measuring in itself.
-
That doesn't answer the question at all; it doesn't even remotely address it. Surely the particle has a certain speed and a certain momentum, regardless of whether or not you can measure it. – hasen j Mar 21 '11 at 18:58
Ahhhh I see where the problem is now. You think (if I've read you right) that quantum mechanics or the uncertaincy principle says that the particle has no velocity or location or momentum or whatever. I get the confusion now. But you see, the HUP is all about the measuring of (sub-atomic) particles. It's not saying that there's no velocity or position, but that when one measures it, that act changes the condition of it. Also, remember that this only applies to the smallest of particles, not every-day phenomena like "is it daylight in Cairo". – user2694 Mar 21 '11 at 19:22
I don't have a problem with that. My problem is the way people interpret that. It seems the most prevalent interpretation is that particles really act according to a probability function, not anything else. In that sense, their behavior is random. It could be here or there, but there's no reason why. I think it's not random, I think it's determined by factors we don't know. But the prevalent view among physicists seems to be that: no, it's just non-deterministic. (see my "cairo day/night" example in the question). – hasen j Mar 21 '11 at 20:24
@hasenj - probabilistic functions do not imply randomness. And why do you place more reliance on you "thinking it's determined by factors" than on many scientists and mathematicians working on the theory to try and understand the issue? – Rory Alsop Mar 22 '11 at 15:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9678173661231995, "perplexity_flag": "middle"}
|
http://cstheory.stackexchange.com/questions/14370/a-simple-proof-that-decidability-of-typability-in-system-f-lambda-2-implies
|
A simple proof that decidability of typability in System F ($\lambda 2$) implies decidability of type checking?
Suppose we don't know Joe B. Wells's result from 1994 that both typability and type checking are undecidable in System F (AKA $\lambda 2$). In Barendregt's Lambda calculi with types (1992) I found a proof due to Malecki 1989 that type checking implies typability. This is because
exists $\sigma$ such that $M:\sigma$
is equivalent to
$(\lambda xy.y)M : (\alpha\rightarrow\alpha)$
(This is because if a term is typable in System F then all its subterms are.)
Is there a simple proof the other way around? That is, a proof that typability implies type checking in System F?
-
1 Answer
As far as I know, showing that this direction is the hard part of Wells proof! At least this is what Pawel (Urzyczyn) explained to me a few years back.
Apparently it is not too hard to show that type checking is undecidable; the hard part is showing that this implies undecidability of type reconstruction! Indeed there are some cases in which the first is undecidable and the second decidable: see e.g. Dowek 1993.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290876388549805, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/23930/what-happens-to-the-energy-when-waves-perfectly-cancel-each-other/30953
|
# What happens to the energy when waves perfectly cancel each other?
What happens to the energy when waves perfectly cancel each other (destructive interference)? It appears that the energy "disappear" but the law of conservation of energy states that it can't be destructed. My guess is that the kinetic energy is transformed into potential energy. Or maybe it depends on the context of the waves where do the energy goes? Can someone elaborate on that or correct me if I'm wrong?
-
1
– Tobias Kienzler Apr 18 '12 at 7:12
3
You can't make waves perfectly cancel each other everywhere, because then energy isn't conserved. – Ron Maimon Jun 25 '12 at 19:47
This whole question and it's answers are frightening and saddening. – Colin K Jun 29 '12 at 3:34
## 10 Answers
We treated this a while back at University...
First of all, I assume you mean global cancellation, since otherwise the energy that is missing at the cancelled point simply is what is added to points of constructive interference: Conservation of Energy is only global.
The thing is, if multiple waves globally cancel out, there are actually only two possible explanations:
• One (or more) of the sources is actually a drain and converts wave energy into another form of energy, (e.g. whatever is used to generate the waves in sources, like electricity, and also as Anna said, very often heat)
• You are calculating with parts of an mathematical expansion which are only valid when convoluted with a weight function or distribution. For example, plane waves physically don't exist (But when used in the Fourier Transform they are still very useful) because their total energy is infinite
-
Maybe the question can simply be answered by the observation that a wave like
$$\Psi(x,t)=A \cos(x)-A \cos(x+\omega\ t),$$
where the two cosines cancel at periodic times $$t_n=\frac{2\pi}{\omega}n\ \ \longrightarrow\ \ \Psi(x,t_n)=0,$$ still has nonvanishing kinetic energy, if it looks something like $$E=\sum_\mu\left(\frac{\partial \Psi}{\partial x^\mu} \right)^2+\ ...$$
You would really have to construct an example.
Since non-dissipative waves whose equations of motions can be formulated by a Lagrangian will have an energy associated to them, as you say, you'd have to find a sitation/theory without an energy quantity. The energy is related to the wave by its relation to the equation of motion. So if the energy is defined as that which is constant because of time symmetriy and you don't have such a thing, then there is no question.
Also don't make the mistake and talk about about two different waves with different energy. If you have a linear problem, the wave will be "one wave" in the energy expression, whereever its parts may wander around.
edit: See also the other answer(s) for a discussion of a more physical reading of the question.
-
1
@Pygmalion: $\partial_\mu$ is supposed to be just a derivative like $\frac{\partial}{\partial t}$. If $\Phi(t)=\sin(t)$, then at $t=0$ you have $\Phi=0$, but $(\frac{\partial \Phi(t)}{\partial t})^2=1$. And yeah, you can extend my example to $\Psi(x,t)=\cos(k_1x-\omega_1t)+\cos(k_2x-\omega_2t)$ and not much will change but the expression for $t_n$. And as I said in the answer, if you describe the waves by one theory with supperposition, then you usually need to talk only about one field. There are not two different electric field energies. It's a linear theory. PS: You can edit comments btw. – Nick Kidman Apr 17 '12 at 19:43
@Pygmalion: And as I said before, if you take a linear theory like eletrodynamics, you add up all parts of the field and then compute the kinetic equation of that. I.e. $(\partial(\psi_1+\psi_2))^2$, not $(\partial\psi_1)^2+(\partial\psi_2)^2$. (side note: Since the form of the fields, related to the kinetic energy by the equation of motion, are not arbitrary you can't just write down a kinetic energy with a nonlinear additive partition propery and choose an example like $\cos$. These are phases which, in physics typically come from a spectral analysis of a linear differential operator.) – Nick Kidman Apr 17 '12 at 20:05
I understand your answer and notation now, but I had to cleared it up in my head my way. Also, maybe I was too sleepy around midnight yesterday ;) – Pygmalion Apr 18 '12 at 8:01
Waves always travel. Even standing waves can always be interpreted as two traveling waves that are moving in opposite directions (more on that below).
Keeping the idea that waves must travel in mind, here's what happens whenever you figure out a way to build a region in which the energy of such a moving wave cancels out fully: If you look closely, you will find that you have created a mirror, and that the missing energy has simply bounced off the region you created.
Examples include opals, peacock feathers, and ordinary light mirrors. The first two reflect specific frequencies of light because repeating internal structures create a physical regions in which that frequency of light cannot travel - that is, a region in which near-total energy cancellation occurs. An optical mirror uses electrons at the top of their Fermi seas to cancel out light over a much broader range of frequencies. In all three examples the light bounces off the region, with only a little of its energy being absorbed (converted to heat).
A skip rope (or perhaps a garden hose) provides a more accessible example. First, lay out the rope or hose along its length, then give it quick, sharp clockwise motion. You get a helical wave that travels quickly away from you like a moving corkscrew. No standing wave, that!
You put a friend at the other end, but she does not want your wave hitting her. So what does she do? First she tries sending a clockwise wave at you too, but that seems to backfire. Your wave if anything seems to hit harder and faster. So she tries a counterclockwise motion instead. That seems to work much better. It halts the forward progress of the wave you launched at her, converting it instead to a loop. That loop still has lots of energy, but at least now it stays in one place. It has become a standing wave, in this case a classic skip-rope loop, or maybe two or more loops if you are good at skip rope.
What happened is that she used a canceling motion to keep your wave from hitting her. But curiously, her cancelling motion also created a wave, one that is twisted in the opposite way (counterclockwise) and moving towards you, just as your clockwise wave moved towards her. As it turns out, the motion you are already doing cancels her wave too, sending it right back at her. The wave is now trapped between your two cancelling actions. The sum of the two waves, which now looks sinusoidal instead of helical, has the same energy as your two individual helical waves added together.
I should note that you really only need one person driving the wave, since any sufficiently solid anchor for one end of the rope will also prevent the wave from entering it, and so end up reflecting that wave just as your friend did using a more active approach. Physical media such as peacock features and Fermi sea electrons also use a passive approach to reflection, with the same result: The energy is forbidden by cancellation from entering into some region of space.
So, while this is by no means a complete explanation, I hope it provides some "feel" for what complete energy cancellation really means: It's more about keeping waves out. Thinking of cancellation as the art of building wave mirrors provides a different and less paradoxical-sounding perspective on a wide variety of phenomena that alter, cancel, or redirect waves.
-
Wherever the +100 came from (I'm assuming a person, not a bot), thanks! I am deeply appreciative of such a nice bit of positive feedback! – Terry Bollinger Jul 2 '12 at 3:14
Just in case anyone (e.g. student) would be interested in the simple answer for mechanical waves:
CASE 1 (global cancellation): Imagine that you have crest pulse moving right and equally large though pulse moving left. For a moment they "cancel", e.g. there is no net displacement at all, because two opposite displacements cancel out. However, velocities add up and are twice as large, meaning that all the energy in that moment is stored within kinetic energy.
Instructive and opposite situation happens, when crest pulses meet. For a moment, displacements add up and are twice as large, meaning that all the energy in that moment is stored within potential energy, as velocities on the other hand cancel out.
Because wave equation is linear differential equation, you can superpose different waves $\psi_{12} = \psi_1 + \psi_2$. As a consequence, after meeting both crest pulses or pair crest / though pulses keep traveling if nothing has had happened.
It is instructive, that you can add velocities separately of amplitudes, as $\dot{\psi}_{12} = \frac{\partial}{\partial t} (\psi_1 + \psi_2) = \dot{\psi}_1 + \dot{\psi}_2$. So even if amplitudes do cancel out at a given moment ($\psi_1 + \psi_2 = 0$), speeds do not ($\dot{\psi}_1 + \dot{\psi}_2 \ne 0$).
It is just as if you see that oscillator is in a equilibrium position at a given moment. That does not mean that it is not oscillating, as it still might posses velocity.
If we generalize written above: in any wave you have exchange of two types of energy: kinetic vs. potential, magnetic vs. electrical. You can make such two waves that one of the energies cancels, but the other energy will become twice as big.
CASE 2 (local cancellation): In case of spatial interference of two continuous waves there are areas of destructive and areas of constructive interferences. Energy is no longer uniformly distributed in space, but in average it equals added up energies of two waves. E.g. looking at standing waves, there is no energy at nodes of the standing waves, while at crests energy is four times the energy of one wave - giving a space average of twice the energy of one wave.
More engineer-like explanations can be found here: http://van.physics.illinois.edu/qa/listing.php?id=1891
-
When two waves interfere totally destructively the energy turns into heat.
Two sound waves, the temperature of the medium will go up and energy is conserved because it turns into incoherent kinetic energy of the molecules of the medium.
Two water waves, ditto.
I have been trying unsuccessfully to find a reference about two electromagnetic waves with total destructive interference and energy conservation.
If it is an interference pattern, as observed in a detecting medium, screen etc. and one gradually goes to total destructive interference then again the medium will be heated.
If it is two electromagnetic waves in vacuum, I expect the energy to go into soft infrared invisible photons, as there is no ether to take it up, but may be wrong. One would need QED to see how this could happen, imo. Figure 6.8 allows for photon photon scattering and the change in frequency but the probability is small. Or maybe there cannot be complete destructive interference in vacuum, the Heisenberg uncertainty principle ( delta(E)*delta(t) always allowing for constructive ones. Or the truth is in between that the energy propagates and we can only see interference if the beams impinge on a medium, which will take care of energy conservation .
Edit Thanks to @HelderVelez for the link of the antilazer. Saves me thinking of an experiment.
When the alignment was right, the light waves canceled each other out. The silicon absorbed the light and converted it to another form of energy, like heat or electrical current.
I had not thought of current, but heat is right there.
-
2
I guess it's worth pointing out that this sort of thing can only happen in nonlinear theories. – Mark Eichenlaub Apr 17 '12 at 20:27
4
Two EM waves which interfere destructively will also interfere constructively in another region, such that energy is conserved globally. – user2963 Apr 17 '12 at 21:14
@zephyr Would it be fair to say generally that it is impossible to create two particles with exactly opposite wave functions for the whole universe for some, say, exclusion principle? Then it is absolutely necessary that somewhere there is a constructive interference. – Pygmalion Apr 17 '12 at 22:01
Downvote -- As Mark says, wave energy can't just turn into heat just because you want it to, there has to be a mechanism--a nonlinear interaction. There is no reason to expect that nonlinear interactions of exactly the correct strength will magically appear in all situations to transfer exactly the appropriate amount of energy. – Steve B Apr 18 '12 at 0:14
If destructive interference just shifted light to infrared, then all those interferometers I've got wouldn't work very well... – Colin K Apr 18 '12 at 3:36
show 16 more comments
http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-27-11-2468
Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy–momentum conserved? Michelle Wynne C. Sze, Quirino M. Sugon, Jr., and Daniel J. McNamara JOSA A, Vol. 27, Issue 11, pp. 2468-2479 (2010)
We added the two elliptically polarized waves and computed the energy–momentum density of their sum. We showed that energy and momentum are not generally conserved, except when the two waves are moving in opposite directions. We also showed that the momentum of the superposition has an extra component perpendicular to the propagation directions of both waves. But when we took the time-average of the energy and momentum of the superposition, we found that the time-average energy and momentum could also be conserved if both waves are circularly polarized but with opposite handedness, regardless of the directions of the two waves. The non-conservation of energy and momentum of the superposition of two elliptically polarized plane waves is not due to the form of the plane waves themselves, but rather to the accepted definitions of the electromagnetic energy and momentum. Perhaps we may need to modify these definitions in order to preserve the energy–momentum conservation. In our computations, we restricted ourselves to the superposition of two waves with the same frequency.
-
thanks, an interesting calculation behind a paywall – anna v Apr 25 '12 at 4:38
I did some reaserch into this and after a lot of reading I came acrose a site with a good description of what you are talking about so here's a link. http://skullsinthestars.com/2010/04/07/wave-interference-where-does-the-energy-go/ If you go to this site it should answer your question and I hope it helps.
-
From my answer here-PSE-anti-laser-how-sure-we-are-that-energy-is-transported
The Poyinting vectors, and the momenta vectors as the E, B fields are symetric. When we do 'field shaping' with antenae aggregates we simply use Maxwell eqs and go with waves everytime. When we got near a null in energy in some region of space we dont get infrared radiation to 'consume' the canceled field. E,B vectors additive: Light+Light=0
Antenae in sattelites (vaccum) work the same way as the ones at Earth surface to shape the intensity of the field.
Because the "Poyinting vectors" add to null there is no doubt, imo, that energy vanish.
See the antilaser experiment.
We dont have theory? Then we must rethink.
IMO energy is not transported. What is propagating is only an excitation of the medium (we call it photons) and energy is already 'in site' (vacuum, or whatever name we call the medium).
-
This figure shows two common situations.
The top is an example where the waves are coming from different directions--one from "S1", one from "S2". Then there is destructive interference in some areas ("nodes") and constructive interference in others ("hot spots"). The energy has been redistributed but the total amount of energy is the same.
The bottom is an example where the two sources S3 and S4 are highly directional plane-wave emitters, so that they can destructively interfere everywhere they overlap. For that to happen, the source S4 itself has to be sitting in the field of S3. Then actually what is happening is that S4 is absorbing the energy of S3. (You may think that running the laser S4 will drain its battery, but ideally, the battery can even get recharged!)
-
What about momentum conservation? Do you have a link for an experiment on this? – anna v Apr 19 '12 at 14:19
The Poynting_vector
In physics, the Poynting vector represents the directional energy flux density (the rate of energy transfer per unit area, in Watts per square metre, W·m−2) of an electromagnetic field.
If the antilaser antilaser experiment is performed in the vacuum there is no thermal dissipation, and the Poynting vectors are opposed, and cancel, for the same field intensity and with the fields out-of-phase. For plane waves (WP, link above):
"The time-dependent and position magnitude of the Poynting vector is" : $\epsilon_0cE_0^2\cos^2(\omega t-\mathrm{k\cdot r})$ and the average is different of zero for a single propagating wave, but, for two opposing plane waves of equal intensity and 100% out-of-phase the instantaneous Poynting vector, that measures the flux of energy, is the vector $\vec{S}(t)=\mathrm{\vec{0}}$.
If you have one electromagnetic beam at a time then work can be done. If you have two in the above conditions then no work can be extracted. (Energy is canceled, destroyed, ;)
BUT, things can be more complicated then described by the eqs, because a physical emmiter antenna also behaves as a receiving antenna that absorbs and reradiates etc, ... changing and probably trashing my first oppinion.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430378079414368, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/60436-maximum-minimum-values.html
|
# Thread:
1. ## Maximum and minimum values
find the max and min values of the function f(x,y,z,t)= (x+y+z+t) subject to the constraint $x^2+y^2+z^2+t^2=49$
I have never calculated for t before and I'm thrown off as how to do it.
You get no critical points from the original equation so I'm assuming it would all be on the boundary. Please help
thank you
2. Originally Posted by koalamath
find the max and min values of the function f(x,y,z,t)= (x+y+z+t) subject to the constraint $x^2+y^2+z^2+t^2=49$
I have never calculated for t before and I'm thrown off as how to do it.
You get no critical points from the original equation so I'm assuming it would all be on the boundary. Please help
thank you
Have you learned the method of Lagrange multipliers? The fact that t is in it just means you have 4 variables to solve for.
3. no i havent'
4. ## Thank you mr fantastic
ok so i found lambda to be $+or- 1/7$ There fore x,y,z,t would $+or-7/2$
This leads to a max of 14 at $(2/7,2/7,2/7,2/7)$ and a min at $(-2/7,-2/7,-2/7,-2/7)$
correct?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9214754104614258, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/48306/the-poles-of-impurity-systems-greens-function?answertab=oldest
|
# the poles of impurity system's Green's function
Denote the pure system as system 1, with both continuum and discrete eigen energy. $G_0$ is its Green's function.
After introducing some impurities, we call the resultant system system 2 with new Green's function $G$, and $T$ is T matrix.
We have $G=G_0 + G_0 T G_0$
My question is, since the poles of Green's function are eigen energy of the system, and from the above equation, we find that all the poles of $G_0$ will also be the poles of $G$, does this mean the eigen energy of system 2 share the same eigen energy as its pure counterpart system 1? That is to say, both system 1's continuum and discrete eigen energy do not change in the presence of impurity?
Is there any possibility the $T$ matrix can cancel some of $G_0$'s poles?
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.929316520690918, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/21818/the-dual-cloud-chamber-paradox
|
# The Dual Cloud Chamber Paradox
2012-04-07 Addendum: The Dual Cloud Chamber Paradox
Two 10m diameter spheres $A$ and $B$ of very cold, thin gas have average atomic separations of 1nm. Their atoms are neutral, but ionize easily and re-radiate if a relativistic ion passes nearby. Both clouds are seeded with small numbers of positive and negative ions. The clouds collide at a relative speed of $0.994987437c$, or $\gamma=10$. Both are sparse enough to minimize collisions and deceleration. Both show passage of each other's ions. (a) What do distant observers moving parallel to $A$ and $B$ observe during the collision? (b) Are their recordings of this event causally consistent?
The answer to (a) requires only straightforward special relativity, applied from two perspectives. For the traveler moving in parallel to cloud $A$, cloud $B$ should appear as a Lorentz contracted oblate spheroid with one tenth the thickness of cloud $A$, passing through cloud $A$ from right to left. If you sprinkled cloud $A$ with tiny, Einstein-synchronized broadcasting clocks, the $A$-parallel observer would observe a time-stamped and essentially tape-recording like passage of the $B$ spheroid through cloud $A$.
The $B$-parallel observer sees the same sort of scenario, except with cloud $A$ compressed and passing left-to-right through cloud $B$. If $B$ is sprinkled with its own set of Einstein synchronized broadcasting clocks, the $B$p-parallel observer will record a time-stamped passage of the compressed $A$ through $B$.
So, am I the only one finds an a priori assumption causal self-consistency between these two views difficult to accept? That is, while it may very well be that a recording of a flattened $B$ passing through cloud $A$ in $A$-sequential time order can always be made causally self-consistent with a recording of a flattened $A$ passing passing through through cloud $B$ in $B$-sequential order, this to me is a case where a mathematically precise proof of the information relationships between the two views would be seem like a good idea, if only to verify how it works. Both views after all record the same event, in the sense that every time a clock in $A$ or $B$ ticks and broadcasts its results, that result becomes part of history and can no longer be reversed or modified.
It's tempting to wonder whether the Lampa-Terrell-Penrose Effect might be relevant. However, everything I've seen about L-T-P (e.g. see the video link I just gave) describes it as an optical effect in which the spheres are Lorentz contracted at the physical level. Since my question deals with fine-grained, contact-like interactions of two relativistic spheres, rather than optical effects at a distance, I can't easily see how L-T-P would apply. Even if it did, I don't see what it would mean.
So, my real question is (b): Does an information-level proof exist (not just the Lorentz transforms; that part is straightforward) that the $A$-parallel and $B$-parallel recordings of dual cloud chamber collision events will always be causally consistent?
2012-03-03: This was my original version of the question, using muonium clocks
My SR question is how to predict length contraction and time dilation outcomes for two interacting beams of neutral particles. The two beams are:
1. In different inertial (unaccelerated) reference frames for the duration of measurement part of the experiment.
2. Intermixed at a near-atomic level so that frame-to-frame photon exchange times are negligible. An example would be two intersecting beams of relatively dense, internally cold muonium atoms.
3. Clock-like even at the atomic level (e.g., decaying atoms of muonium).
4. Part of a single causal unit. By this I mean that there is a sufficient short-range exchange of photons between the two beams to ensure that state changes within each beam have an entropic effect, however small, on nearby components of the other beam. This makes their interactions irreversible in large-scale time. An example would be decay of an anti-muon in one frame transferring energy to nearby muonium atoms in the other frame. Simply illuminating the intersection region with light frequencies that would interact with muonium in both of the beams would be another option.
5. Observed by someone who is herself isolated from the rest of the external universe.
Muons generated by cosmic rays and traveling through earth's atmosphere provide a helpful approximation the above experiment. The muons provide the first beam, and the atmosphere forms the second one, which in the case of muons is shared by the observer.
Such muons have tremendously extended lifespans, which is explained by invoking time dilation (but not length contraction) for the muon frame as viewed from the atmosphere frame. Conversely, length contraction (but not time dilation) is invoked to describe the view of the atmosphere from the perspective of the muon frame. Since this results in the atmosphere appears greatly compressed in the direction of travel, the muons simply travel a shorter distance, thereby ensuring the same decay result (causal symmetry) for both views of the interacting frames.
My question then is this:
For the thought experiment of intersecting, causally linked beams of muonium, what parameters must the observer take into account in order to predict accurately which of the two intersecting muonium beams will exhibit time dilation, and which one will exhibit length contraction?
2012-03-04 Addendum by Terry Bollinger (moved here from comment section):
Sometimes asking a question carefully is a good way to clarify one's thinking on it. So, I would now like to add a hypothesis provoked by own question. I'll call it the local observer hypothesis: Both beams will be time dilated based only on their velocities relative to the observer Alice; the beam velocities relative to each other are irrelevant. Only this seems consistent with known physics. However, it also implies one can create a controllable time dilation ratio between two beams. I was trying to avoid that. So my second question: Are physical time dilation ratios ever acceptable in SR?
2012-03-06 Addendum by Terry Bollinger:
Some further analysis of my own thought problem:
A $\phi$ set is a local collection of clock-like particles (e.g. muons or muonium) that share a closely similar average velocity, and which have the ability to intermix and exchange data with other $\phi$ sets at near-atomic levels, without undergoing significant acceleration. A causal unit $\chi = \{\phi_0 ... \phi_n\}$ is a local collection of $(n+1)$ such $\phi$ sets. By definition $\phi_0$ contains a primary observer Alice, labeled $\aleph_0$, where $\aleph_0 \subset \phi_0$.
Each $\phi_i$ has an associated XYZ velocity vector $\boldsymbol{v_i} = (\phi_0 \rightarrow \phi_i$) that is defined by the direction and rate of divergence of an initially (nominally) co-located pair of $\phi_0$ and $\phi_i$ particles, with the $\phi_0$ particle interpreted as the origin. The vector has an associated magnitude (scalar speed) of $s_i=|\boldsymbol{v_i}|$.
Theorem: If $\phi_i$ includes a subset of particles $\aleph_i$ capable of observing Alice in $\phi_0$, then $\aleph_i$ will observe $\phi_0$ as length contracted along the axis defined by $\boldsymbol{v_i}$. Conversely, Alice ($\aleph_0$) will observe each $\phi_i$ as time dilated (slowed) based only on its scalar speed $s_i$, without regard to the vector direction. This dependence of time dilation on the scalar $s_i$ means that if $\phi_a$ has velocity $\boldsymbol{v_a}$ and $\phi_b$ has the opposite velocity $\boldsymbol{-v_a}$, both will have the same absolute time dilation within the causal unit (think for example of particle accelerators).
Analysis: This $\chi = \{\phi_i\}$ framework appears to be at least superficially consistent with data from sources such as particle accelerators, where time dilation is observable in the variable lifetimes of particles at different velocities, and where absolute symmetry in all XYZ directions is most certainly observed. Applying observer-relative time dilation to fast particles (e.g. in a simple cloud chamber) is in fact such a commonly accepted practice that I will assume for now that it must be valid.
(It is worth noting that while particle length contraction is typically also assumed to apply to fast-moving particles, serious application of length contraction in a causal unit would cause the particles to see longer travel paths that would shorten their lifespans. This is the same reason why muons are assumed to see a shorter path through a length-contracted atmosphere in order to reach the ground.)
My updated questions are now:
(1) Is my $\chi = \{\phi_i\}$ framework for applying SR to experiments valid? If not, why not?
(2) If $\chi = \{\phi_i\}$ is valid, what property makes the observer $\aleph_0$ in $\phi_0$ unique from all other $\aleph_i$?
-
A reference frame is just a way of assigning coordinates to points in spacetime, nothing more. It seemed like you were mixing up the meanings of "frame" and "beam" a bit so I straightened that out. – David Zaslavsky♦ Mar 4 '12 at 0:56
Good clarification; thanks David! – Terry Bollinger Mar 5 '12 at 0:15
you just don't have intuition for failure of simultaneity--- this is the universal thing that trips up people's intuitive understanding of relativity. The two clouds passing through each other leave a "wake" at different spots in the different frames because the "wake" is spacelike (superluminal) and changes slope and speed depending on your motion. – Ron Maimon Apr 8 '12 at 23:08
So: If all of this is just a matter of good intuition, a mathematically precise proof should be trivial, yes? – Terry Bollinger Apr 9 '12 at 15:23
## 4 Answers
Have a look at the images below. Both sides show cloud A moving through cloud B as you would expect. Now at the left side we are going to collect 5 slices which are at equal time in the restframe of A, and at the right side we collect 5 slices which are at equal time in the restframe of B. This corresponds with the rotation of the time axis in these two reference frames.
As you can see, the left image shows a highly contracted cloud B inside the cloud A while the right image shows a highly contracted cloud A inside cloud B, exactly as you describe. Now note that both are part of the same 4D reality. What you see are two different 3D spaces cut out of the same 4D space-time.
Hans
-
Hans, thanks! (My first attempt to comment on this and another seem to have disappeared; I think I didn't save them right.) Your answer is very clearly presented, and you have understood my setup well. I'll be looking at your answer a lot more closely (not tonight, multiple distractions). Again, thanks. – Terry Bollinger Apr 11 '12 at 3:05
My thanks for the comment and four entries for this competition! All were excellent reading, and I am deeply appreciative of the serious thought everyone put into this one to help me in my head scratching. The winner, as you've likely guessed by where I'm placing this comment, is Hans de Vries. Reading through his answer instantly made me understand my own question in some new and very thought-provoking ways. (One hint: 4D objects rotate around invariant 2D planes, and objects in different frames appear rotated to each other in a plane of motion and time.) There will be more DC2P bounties! – Terry Bollinger Apr 16 '12 at 3:08
– Hans de Vries Apr 16 '12 at 3:34
Hans, thanks! I will enjoy going through it! – Terry Bollinger Apr 16 '12 at 4:52
Naturally there's no "universal" time in the SR, the time difference of two events may be different in different reference frames, and in particular it may be negative (i.e. order of the events may be different as well).
However there are two distinct things:
1. The time of the event, according to a specific reference frame's coordinate + time system.
2. The time at which an observer in a particular reference frame receives the information about the event.
As you probably know, there's a maximal speed at which the information may be passed according to SR, which is the speed of light in vacuum (or any other particles with no rest mass). Hence an observer in a particular reference frame does not register an event the same moment it occurs in its reference frame. There's a delay, which depends on the distance between the event and the observer (in the context of its reference frame). So, when you talk about two events that occur at different locations - this should be taken into account to realize what the observer actually "sees".
Accounting for this one resolves all the causality-related paradoxes, such as a "ladder paradox".
More information here. See "Causality and prohibition of motion faster than light"
-
valdo, thanks, I'll read through this more carefully. The ladder (or barn-pole paradox) is a delightful classic. In fact, I embed Einstein-synchronized clocks within the two clouds precisely to make the time-space rotations of such barn-pole thought experiments more explicit and truly 4D over time (points are too easy to transform). 3D networks of Einstein clocks moving through local times trace out "virtual Euclidean" spaces with ++++ like signatures. Such regions must interact via the deeper and more symmetric -+++ hyperbolic space of Minkowski to give a final and causally consistent answer. – Terry Bollinger Apr 11 '12 at 2:56
I'll make an argument intended to make the Lorentz transformation more natural. The reason this seems so odd is because you're thinking about it from the point of view of matter. This is a natural thing to do because people are made out of matter. But Einstein's work is much simpler if you think about things from the point of view of light. With light, the natural speed is always $c$. And there's an argument in favor of treating matter as if its natural speed were also $c$. If you follow that argument, then the Lorentz transformations follow.
The Standard Model of elementary particles represents the fermions (matter) as chiral fields. These are a Dirac field (which can model, for example, the combination fields of electron and positron) which are split into "chiral" or "handed" halves. These left and right handed fields are a little familiar to undergraduate physics as their equivalents appear when light is circularly polarized into left or right handed light.
To turn an electron into a pure right handed or left handed field, one accelerates the electron in the direction of its spin (or the opposite direction) to the speed of light. This is of course impossible except in the limit. But the underlying notion is clear, the Standard Model is built from components that naturally travel at light speed.
So your intuition will understand the situation better if you think of matter as being the weird and bizarrely behaved stuff. Light acts perfectly normally, just what you'd expect of waves in a universe where the wave speed is $c$.
If you look at the problem this way, you will see that the transformation of the pictures from one clump to the other involves a great deal of complexity. You have to arrange for matter to do some pretty crazy things in order to get all those "stationary" clocks.
On the other hand, if you think of the problem as one where fields that travel at a natural speed of $c$ interact, then one naturally chooses a single reference frame and carries out all calculations in that frame of reference. Of course, since we are made out of fields that move at speed $c$, we cannot detect that reference frame. Since we cannot detect it, it is not "preferred" by our laws of physics. We could just as easily have chosen the "wrong" rest frame, our calculation would give the same result either way because waves can measure absolute wave speeds or absolute wave frequencies or absolute wave lengths; they can only measure differences and the results are the laws of Lorentz transform.
As an example, suppose you are a small localized wave. You cannot know your absolute speed against the background (say water for instance, though this is not a good example). When another wave comes by, all you can do is say what their wavelength is compared to your own wavelength. But your own wavelength gets changed when you are in fast moving water. That's because the time that a wave saves going downstream (with the current) is less than the time it loses going upstream.
Suppose the speed of your waves is $c$, and you need to travel some distance $l$. The time required to go there and back will be $t = l/c+ l/c = 2l/c$. Now suppose that there is a current at speed $v$. This will speed you up one way and slow you down the other. So the time required becomes: $$t = l/(c+v) + l/(c-v) = \frac{l(c-v)}{(c+v)(c-v)} + \frac{l(c+v)}{(c+v)(c-v)}$$ $$= \frac{2cl}{c^2-v^2} = \frac{2l}{c}\frac{1}{1-v^2/c^2}\;\;\textrm{so}$$ $$t = \frac{2l}{c}\gamma^2$$ where $\gamma = (1-v^2/c^2)^{-1/2}$ is the usual relativistic gamma factor. Looked at relativistically, you can rearrange the calculation to be $(t/\gamma) = 2(l\gamma)/c$. To get the old result $t=2l/c$, you have to apply two changes. You apply length contraction so that $l\to l/\gamma$ and time dilation so that $t\to t\gamma$. Thus the Lorentz transformation is natural for beings made of waves, living in a universe of waves, who can use only waves to measure.
So much for why it is that Lorentz transformations are natural. Now as to why causality can have different rest frames. If you accept that everything must be made out of waves, then the wave speed itself gives a limit to causality. Nothing can happen before a wave gets there. So you are free to choose a reference frame so long as your reference frame is slower than light it cannot a violation of causality. (Because casusality uses waves.)
-
Carl, I managed to lose my first comment somehow, oops. Your answer is interesting. I have to admit I have no idea how you brought QFT chirality into it though! So, I'll read through it a lot more carefully later; I'm curious as to your argument. Thanks again for a nicely detailed answer. – Terry Bollinger Apr 11 '12 at 3:08
In my answer I will use the known properties of the light. The light propagates isotropically wrt the medium irrespective of the speeds of the source and of the receiver and has a constant value if measured by a non-accelerating observer. The source is Einstein (I.1.2 §2 of 1905 paper) :
Every ray of light moves in the "stationary" co-ordinate system with the definite velocity V, the velocity being independent of the condition, whether this ray of light is emitted by a body at rest or in motion
The other source is the online book of Hans de Vries where relativity is very well explained and where we can see (chapter 4, I think) that there exists a real length Lorentz contraction and not only an apparent one. The other source is the pos-Einstein paper Cosmological Principle and Relativity - Part I (arxiv) (is poison..;)
...generalizing Relativity Principle to position... .., and analysed the space-time structure. Special Relativity space-time is obtained, with no formal conflict with Einstein analysis, but fully solving apparent paradoxes and conceptual difficulties, including the simultaneity concept and the long discussed Sagnac effect. ...
I dont see any problem in causality and I've done a nice image (my 1st in Inkscape) to show how I see the problem. If you think that this is a paradox to the relativistic guys then you should also see PSE - Twin paradox - observers counter orbiting pose a serious problem to the relativistic minds (only those that can only think with an equation in front of the eyes).
-
Helder, interesting, I'll look more closely. The "shrink" issue is extremely interesting. That is precisely the one that John Bell caused a minor furor over with his accelerating spacehips-linked-by-string thought problem. About half the SR experts he talked were absolutely positive the string would not break, and the other half were absolutely positive it would break! Neither answer violates the Lorentz transforms, but you need to be very, very careful about setup. (Bell's answer? The string will break. Bell was a very deep thinker on many such questions, not just on the Bell Inequality.) – Terry Bollinger Apr 11 '12 at 2:29
Helder, an important clarification: If each 10m ball of thin gas is seen as a ball by its pacing spaceship, then Lorentz contraction must apply equally to both the ship and the gas ball. Otherwise you get an immediate logical paradox in which there are two versions of length: The ship version, and the ball version. So both the ship and the gas ball must be contracted, or neither. What you described is the case of "isolated Lorentz, but no overall Lorentz." That one can indeed be constructed, but only by separately accelerating each molecule in the gas, as in the Bell ship paradox. – Terry Bollinger Apr 12 '12 at 0:16
@Terry You said that they are spheres, and I assumed that they are geometrical spheres when seen at rest. When seen in motion, inspite of been side by side, ship and cloud, this one is seen as coming from behind, and elongated. As they are formed by a thin gas (1nm sep) I presumed that they are free ensemble of particles without electromagnetic interaction. In the case of the spaceship the molecules are electromagnetically bonded and so all the spaceship will be contracted. As Carl said, and I subscribe, matter is 'wavy', is light, and we think that they will suffer a 'contraction'. – Helder Velez Apr 12 '12 at 2:31
Helder, thanks. I think we are in synch? But just to make sure I'm expressing my intent clearly as possible, my thought process was this: Assume the two ships are already in motion. Shortly before closest approach, each ship creates its sphere of diffuse gases. Each sphere is at rest relative to its spaceship, and each is 10m in diameter relative to that spaceship. My intent is to max out the symmetry so that each ship perspective is as close to an exact (180 rotated) replica of the other as possible. BTW, after taxes (sigh) I'll put out some further discussion and figures on this question. – Terry Bollinger Apr 13 '12 at 19:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944485604763031, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/28421?sort=votes
|
## The Jacobi Identity for the Poisson Bracket
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
It is well known that if $M, \Omega$ is a symplectic manifold then the Poisson bracket gives $C^\infty(M)$ the structure of a Lie algebra. The only way I have seen this proven is via a calculation in canonical coordinates, which I found rather unsatisfying. So I decided to try to prove it just by playing around with differential forms. I got quite far, but something isn't working out and I am hoping someone can help. Forgive me in advance for all the symbols.
Here is the setup. Given $f \in C^\infty(M)$, let $X_f$ denote the unique vector field which satisfies $\Omega(X_f, Y) = df(Y) = Y(f)$ for every vector field $Y$. We define the Poisson bracket of two functions $f$ and $g$ to be the smooth function $\{f, g \} = \Omega(X_f, X_g)$. I can show that the Poisson bracket is alternating and bilinear, but the Jacobi identity is giving me trouble. Here is what I have.
To start, let's try to get a handle on $\{ \{f, g \}, h\}$. Applying the definition, this is given by $d(\Omega(X_f, X_g))X_h$. So let's try to find an expression for $d(\Omega(X,Y))Z$ for arbitrary vector fields $X, Y, Z$.
Write $\Omega(X,Y) = i(Y)i(X)\Omega$ where $i(V)$ is the interior product by the vector field $V$. Applying Cartan's formula twice and using the fact that $\Omega$ is closed, we obtain the formula
$$d(\Omega(X,Y)) = (L_Y i(X) - i(Y) L_X) \Omega$$
where $L_V$ is the Lie derivative with respect to the vector field $V$. Using the identity $L_V i(W) - i(W) L_V = i([V,W])$, we get:
$$(L_Y i(X) - i(Y) L_X) = L_Y i(X) - L_X i(Y) + i([X,Y])$$
Now we plug in the vector field $Z$. We get $(L_Y i(X) \Omega)(Z) = Y(\Omega(X,Z)) - \Omega(X,[Y,Z])$ by the definition of the Lie derivative, and clearly $(i([X,Y])\Omega)(Z) = \Omega([X,Y],Z)$. Putting it all together:
$$d(\Omega(X,Y))Z = Y(\Omega(X,Z)) - X(\Omega(Y,Z)) + \Omega(Y, [X,Z]) - \Omega(X, [Y,Z]) + \Omega([X,Y], Z)$$
This simplifies dramatically in the case $X = X_f, Y = X_g, Z = X_h$. The difference of the first two terms simplifies to $[X_f, X_g](h)$, and we get:
$$\{\{f, g\}, h\} = [ X_f, X_g ](h) + [ X_f, X_h ](g) - [ X_g, X_h ](f) - [ X_f, X_g ](h) = [ X_f, X_h ](g) - [ X_g, X_h ](f)$$
However, this final expression does not satisfy the Jacobi identity. It looks at first glance as though I just made a sign error somewhere; if the minus sign in the last expression were a plus sign, then the Jacobi identity would follow immediately. I have checked all of my signs as thoroughly as I can, and additionally I included all of my steps to demonstrate that if a different sign is inserted at any point in the argument then one obtains an equation in which the left hand side is alternating in two of its variables but the right hand side is not. Can anybody help?
-
## 5 Answers
The Jacobi identity for the Poisson bracket does indeed follow from the fact that $d\Omega =0$.
I claim that (twice) the Jacobi identity for functions $f,g,h$ is precisely $$d\Omega(X_f,X_g,X_h) = 0.$$
To see this, simply expand $d\Omega$.
You will find six terms of two kinds:
• three terms of the form $$X_f \Omega(X_g,X_h) = X_f \lbrace g,h \rbrace$$
• and three terms of the form $$\Omega([X_f,X_g],X_h).$$
To deal with the first kind of terms, notice that from the definition of $X_f$, for any function $g$, $$X_f g = \lbrace g, f \rbrace.$$ This means that $$X_f \Omega(X_g,X_h) = \lbrace \lbrace g,h \rbrace, f \rbrace.$$
To deal with the second kind of terms, notice that $$\iota_{[X_f,X_g]}\Omega = [L_{X_f},\iota_{X_g}]\Omega,$$ but since $d\Omega=0$, $$L_{X_f}\Omega = d \iota_{X_f}\Omega = 0,$$ and hence $$\iota_{[X_f,X_g]}\Omega = d \iota_{X_f}\iota_{X_g}\Omega = d\lbrace g,f\rbrace,$$ whence $$\Omega([X_f,X_g],X_h) = d\lbrace g,f\rbrace (X_h) = \lbrace\lbrace g,f\rbrace, h \rbrace.$$
Adding it all up you get twice the Jacobi identity.
-
I had to take care of some other stuff before I could come back to this and work through the details, but everything seemed to check out. Thanks! I wonder where the mistake lay in my original approach, which used more or less the same tools. – Paul Siegel Jun 21 2010 at 20:47
There is no mistake in your calculations, as far as I can see. The last equation you write down is certainly correct. It simply requires extra work to derive the Jacobi identity from that point. I am afraid that the only way I can see to do this is basically to undo some of your identities and get back to something akin to what I wrote in my answer. – José Figueroa-O'Farrill Jun 21 2010 at 23:00
2
An alternative after only proving $\iota_{[X_f,X_g]}\Omega=d\lbrace g,f\rbrace$ is to observe that the derivative of $\lbrace\lbrace f,g\rbrace h\rbrace+\lbrace\lbrace h,f\rbrace g\rbrace+\lbrace\lbrace g,h\rbrace f\rbrace$ is zero by the ordinary Jacobi identity. So it's locally constant. Since it's linear and locally defined, it must be zero (pick a point and kill it somewhere else with a bump function). – Zack Apr 24 2011 at 7:23
@Josè Figueroa-O'Farrill: When, for an arbitrary almost-symplectic manifold, we again construct the bracket, is correct that $d\omega(X_f,X_g,X_h)$ is equal to the Jacobiator $J(f,g,h)$? or I am making same mistake? – Giuseppe Apr 24 2011 at 16:57
@Giuseppe: you are correct. Even if $\omega$ is not closed, if you define the hamiltonian vector field $X_f$ for a smooth function $f$ by $\omega(X_f,Y) = Yf$ for all vector fields $Y$, then again you will find that $d\omega(X_f,X_g,X_f)$ is (perhaps up to a sign) the "Jacobiator" of $f,g,h$. – José Figueroa-O'Farrill Apr 26 2011 at 3:53
show 1 more comment
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I wanted to add this as a comment to Jose's answer but it seems that I cannot do that as a new user.
For any bivector field $\sigma$, you can define a bracket on smooth functions by $\{f,g\} = \sigma(df, dg)$. This bracket is skew and will automatically satisfy the Liebniz rule. It will satisfy the Jacobi identity precisely when $[\sigma, \sigma] = 0$, where $[\cdot,\cdot]$ is the Schouten bracket. This point of view is important, for example, in defining Poisson cohomology.
Now suppose you take $\sigma = \omega^{-1}$, where $\omega$ is a nondegenerate 2-form. Jose's calcuation shows that $[\sigma, \sigma] = 0$ iff $d\omega = 0$.
-
I find your answer to be the right complement to Jose's answer. Thanks. – Giuseppe Apr 23 2011 at 21:37
Here's the way it's done in John Lee's Smooth Manifolds book:
$${\small \iota_{X_{\lbrace f,g\rbrace}}\omega = d\lbrace f,g \rbrace=d(X_gf) = d(\mathcal{L}_{X_g}f) =\mathcal{L}_{X_g}df=\mathcal{L}_{X_g}(\iota_{X_f}\omega)=\mathcal{\iota}_{[X_g,X_f]}\omega + \mathcal{L}_{X_g}\omega=\mathcal{\iota}_{[X_g,X_f]}\omega}$$ which by nondegeneracy of $\omega$ implies the desired result.
-
Oh woops. Didn't get to the punchline. I showed $$X_{\lbrace f,g\rbrace} = [X_g,X_f].$$ From here, do what Zack said. – Dmitri Gekhtman Oct 18 at 2:39
Perhaps one of the following references is helpful for you: a) page 12 of http://sundoc.bibliothek.uni-halle.de/habil-online/04/04A736/t3.pdf (unfortunately in German language), b) http://arxiv.org/PS_cache/physics/pdf/0210/0210074v1.pdf.
-
Let $(M,\omega)$ be a quasi-symplectic manifold.
Being $\omega$ a non-degenerate $2$-form on $M$, for any $f\in C^{\infty}(M)$ there exists a unique $X_f\in\mathcal{X}(M)$ such that $df=i(X_f)\omega$. The map $f\in C^{\infty}(M)\to X_f\in\mathcal{X}(M)$ is obviously $\mathbb{R}$-linear.
We introduce the pseudo-Poisson bracket over $C^{\infty}(M)$ by defining $\{f,g\}=X_f(g)\equiv \omega(X_g,X_f)$, for any $f$ and $g$ smooth function on $M$.
Immediately by definition $\{\cdot,\cdot\}:C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)$ is an antisymmetric $\mathbb{R}$-bilinear map.
We introduce also a sort of Jacobiator $J:C^{\infty}(M)\times C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)$ by defining $J(f,g,h)=\{f,\{g,h\}\}+\{g,\{h,f\}\}+\{h,\{f,g\}\}$. This is a trilinear antisymmetric map which measure how much the Jacobi identity for the bracket is not satisfied.
I outline two different approaches.
1)$d\omega=0$ implies the Jacobi identity for $\{\cdot,\cdot\}$.
The Jacobi identity for the pseudo-Poisson bracket can be easily rewritten as (*)$X_{\{f,g\}}=[X_f,X_g]$, for any $f$ and $g$ smooth functions on $M$. So $C^{\infty}(M),\{\cdot,\cdot\})$ is a Lie algebra if and only if the map $(C^{\infty}(M),\{\cdot,\cdot\})\ni f\to X_f\in(\mathcal{X}(M),[\cdot,\cdot])$ is a homomorphism of $\mathbb{R}$-algebras.
Being $d\omega=0$, by the H.Cartan's formula we get that a smooth vector field $X$ on $(M,\omega)$ is symplectic, i.e. $\mathcal{L}(X)(\omega)=0$, if and only if it is locally hamiltonian, i.e. $d.i(X)\omega=0$.
Now the condition (*) is a consequence of the much more strong statement:
Theorem.If $Y$ and $Z$ are symplectic vector fields on $(M,\omega)$, i.e. $\mathcal{L}(Y)(\omega)=\mathcal{L}(Z)(\omega)=0$, then $[Y,Z]=-X_{\omega(Y,Z)}$, i.e. $[Y,Z]$ is a hamiltonian vector field with $-\omega(Y,Z)$ as Hamilton function.
Proof. $i([Y,Z])\omega=\mathcal{L}(Y).i(Z)\omega-i(Z).\mathcal{L}(Y)\omega=d.i(Y).i(Z)\omega+i(Y).d.i(Z)\omega=d(\omega(Z,Y))$.
(Having used the hypothesis, the H.Cartan's formula, and the formula $[\mathcal{L}(Y),i(Z)]=i([Y,Z])$).
2)$(d\omega)(X_f,X_g,X_h)=J(f,g,h)$, for any $f$,$g$,and $h$ smooth functions on $M$.
By Palais' expression of exterior derivative through Lie derivatives, we can express $(d\omega)(X_f,X_g,X_h)$ as the sum of two terms obtained respectively by $\mathcal{L}(X_f)(\omega(X_g,X_h)$ and by $\omega(X_f,[X_g,X_h])$ summing over the ciclic permutations of $(f,g,h)$.
Now $\mathcal{L}(X_f)(\omega(X_g,X_h)=-\{f,\{g,h\}\}$ and $\omega(X_f,[X_g,X_h])=\{g,\{h,f\}\}+\{h,\{f,g\}\}$, and so we get the thesis.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 103, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.925418496131897, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?t=472075
|
Physics Forums
## Maximum angle of inclined plane before falling off the plane
I've been thinking about this for a model I'm devising. Assuming that you have an object of mass m, height h, coefficient of friction u, how large can you make the angle between the ground and the inclined plane. Otherwise, at what angle does the torque from the center of mass of the object overcome the force that is keeping the object on the inclined plane?
PhysOrg.com physics news on PhysOrg.com >> Promising doped zirconia>> New X-ray method shows how frog embryos could help thwart disease>> Bringing life into focus
Recognitions: Gold Member Welcome to PF! If the incline is completely flat and the coefficient of friction is for static friction, then you can find the largest angle for which the mass wont slide by noting that the maximum force along the slide is usually modeled simply as the normal force times the coefficient, and you can equate that with the force from gravity down the incline, that is, $$\mu F_n = \mu mg \cos(\alpha) = F_g = mg \sin(\alpha) \ \Rightarrow \ \tan(\alpha) = \mu$$ where $\alpha$ is the angle and $\mu$ the coefficient of static friction.
Blog Entries: 27 Recognitions: Gold Member Homework Help Science Advisor Hi Sinnaro! Welcome to PF! I'm not sure whether you're talking about sliding (which as Filip Larsen says depends on whether the tangent exceeds the coefficient of static friction), or toppling. If it's toppling, then all that matters is whether a vertical line through the centre of mass goes outside the base.
## Maximum angle of inclined plane before falling off the plane
To clarify: it is assumed that the mass is sliding down the plane. I'm looking for the angle (as the angle approaches 90 degrees) for which the mass will no longer be in contact with the plane (falls off).
Picture of what I'm talking about. Assume that the mass looks similar to my drawing (tall vertical height with wide base):
http://i.imgur.com/kIwSt.png
Thread Tools
Similar Threads for: Maximum angle of inclined plane before falling off the plane
Thread Forum Replies
Introductory Physics Homework 1
Introductory Physics Homework 1
Introductory Physics Homework 3
Introductory Physics Homework 3
Introductory Physics Homework 3
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9050267934799194, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/172841/integral-of-cdf-equals-expected-value
|
# Integral of CDF equals expected value
The question as below...
Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show,
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
in the case that, $X$ has a
a) discrete distribution b) continuous distribution
I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, then $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. Although how useful integrating that is, I really have no idea.
Thanks for the help!
-
1
In the two cases, it's a rewritting of the sum. Start from the RHS, that you can express in the first case as an integral of a sum and in the second as a double integral, then switch them. This is allowed because all the quantities are non-negative. – Davide Giraudo Jul 19 '12 at 13:42
2
This question was asked here previously. Check and you will find a more detailed answer. Either here or on CV. – Michael Chernick Jul 19 '12 at 14:21
2
– Dilip Sarwate Jul 19 '12 at 15:38
## 2 Answers
For every nonnegative random variable $X$, whether discrete or continuous or a mix of these, $$X=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,$$ hence $$\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.$$
-
Copied from Cross Validated / stats.stackexchange:
where $S(t)$ is the survival function equal to $1- F(t)$. The two areas are clearly identical.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198598265647888, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/38800?sort=votes
|
## From an integral equation to a differential equation
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hello,
I am wondering whether it is possible to convert the following integral equation to a partial differential equation.
where $J_0(t,x)$ is some given nonnegative function and $\nu>0$ is a constant. It is clear $t\ge 0$.
The aim is to solve this equation. To convert it to PDE is just one possible way to solve it, since latter we can use the hopefully the fundamental solutions.
My current solution is
But I am not sure whether it is right or not.
Thanks for any comments or hints!
-
Anand: You can directly type $\LaTeX$ when you're asking or answering a question on MathOverflow, with the usual dollar signs (e.g., $x + 3 = y$). – Tom LaGatta Sep 15 2010 at 15:48
@Tom, I tried once with a big formula. But it failed. So I edit latex and convert to jpg file. I will try next time. Thank you. :-) – Anand Sep 15 2010 at 16:22
## 1 Answer
Of course. When $\nu=1$, if you apply the operator $\partial_t-\partial^2_{xx}$ to the last integral you obtain precisely $f(t,x)$ so the equation is $$f_t - f_{xx} = (\partial_t-\partial^2_{xx}) J_0^2 + f.$$
EDIT: you seem to know already the answer, so I stop here :) You edited your question when I was writing my answer...
By the way, if you want to solve the PDE just set $f(t,x) = e^{t} g(t,x)$ and the equation in $g$ is a homogeneous heat equation. This sounds like some textbook exercise, I musr say
-
Thanks Peiro, I met a contradiction in my research. I am now debugging. I took my solution above for granted before. Now I am suspicious for that. That's why I am asking the question. :-) – Anand Sep 15 2010 at 11:38
As for solving this problem, we can also use Fourier transform. :-) I want to make sure the integral equation and PDE are equivalent, with the above initial conditions. Thanks Prof. D'Ancona for your answer. :-) – Anand Sep 15 2010 at 12:31
Dear Prof. D'Ancona, what kind of initial condition should we pose on this problem? I think it might be $f(0,x)=J^2_0(0,x)$. However, in some cases, it is reasonable to ask when $J_0(0,x)=\delta_0(x)$. Then in this case, what does it mean for $\delta^2_0$? Thank you for your help! – Anand Sep 16 2010 at 13:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9194245934486389, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2010/08/16/associated-metric-spaces-and-absolutely-continuous-measures-i/?like=1&source=post_flair&_wpnonce=55e0e6033e
|
# The Unapologetic Mathematician
## Associated Metric Spaces and Absolutely Continuous Measures I
If $\mathfrak{S}$ is the metric space associated to a measure space $(X,\mathcal{S},\mu)$, and if $\nu$ is a finite signed measure that is absolutely continuous with respect to $\mu$. Then $\nu$ defines a continuous function on $\mathfrak{S}$.
Indeed, if $E\in\mathcal{S}$ is any set with $\mu(E)<\infty$, then $E$ represents a point of $\mathfrak{S}$, and $\nu(E)$ defines the value of our function at this point. If $F\subseteq\mathcal{S}$ is another set representing the same point, then $\mu(E\Delta F)=0$. By absolute continuity, $\nu(E\Delta F)=0$ as well, and so $\nu(F)=\nu(E)$. Thus our function doesn’t depend on the representative we use.
As for continuity at a point $E$, given an $\epsilon>0$, we want to find a $\delta>0$ so that if $\mu(E\Delta F)<\delta$ then $\lvert\nu(E)-\nu(F)\rvert<\epsilon$. We calculate
$\displaystyle\begin{aligned}\lvert\nu(E)-\nu(F)\rvert&=\lvert\nu(E\setminus F)-\nu(F\setminus E)\rvert\\&\leq\lvert\nu(E\setminus F)\rvert+\lvert\nu(F\setminus E)\rvert\\&\leq\lvert\nu(E\Delta F)\rvert+\lvert\nu(E\Delta F)\rvert\\&=2\lvert\nu(E\Delta F)\rvert\\&\leq2\lvert\nu\rvert(E\Delta F)\end{aligned}$
Since $\nu$ is finite, we know that for every $\epsilon>0$ there is a $\delta>0$ so that if $\lvert\mu\rvert(E\Delta F)=\mu(E\Delta F)<\delta$ then $\lvert\nu\rvert(E\Delta F)<\epsilon$. Using this $\delta$, our assertion of continuity follows.
Now, if $\{\nu_n\}$ is a sequence of finite signed measures on $X$ that are all absolutely continuous with respect to $\mu$, and if the limit $\lim_n\nu_n(E)$ exists and is finite for each $E\in\mathcal{S}$, then the sequence is uniformly absolutely continuous with respect to $\mu$. That is,
For any $\epsilon>0$ we can define the set
$\displaystyle\mathfrak{E}_k=\bigcap\limits_{m=k}^\infty\bigcap\limits_{n=k}^\infty\left\{E\in\mathfrak{S}\bigg\vert\lvert\nu_n(E)-\nu_m(E)\rvert\leq\frac{\epsilon}{3}\right\}$
Since each $\nu_n$ is continuous as a function on $\mathfrak{S}$, each of these $\mathfrak{E}_k$ is a closed set. Since the sequence $\{\nu_n(E)\}$ always converges to a finite limit, it must be Cauchy for each $E$, and so the union of all the $\mathfrak{E}_k$ is all of $\mathfrak{S}$. Thus the countable union of these closed subsets has an interior point. But since $\mathfrak{S}$ is a complete metric space, it is a Baire space as well. And thus one of the $\mathfrak{E}_k$ must have an interior point as well.
Thus there is some $k_0$, some radius $r_0$, and some set $E_0$ so that the ball $\{E\in\mathfrak{S}\vert\rho(E,E_0)<r_0\}$ is contained in $\mathfrak{E}_{k_0}$. Let $\delta$ be a positive number with $\delta<r_0$, and so that $\lvert\nu_n(E)\rvert<\frac{\epsilon}{3}$ whenever $\mu(E)<\delta$ and $1\leq n\leq k_0$ This $\delta$ will suffice (by definition) for all $n$ up to $k_0$. We will show that it works for higher $n$ as well. Note that if $\mu(E)<\delta$, then
$\displaystyle\begin{aligned}\rho(E_0\setminus E,E_0)=\mu\left((E_0\setminus E)\Delta E_0\right)&=\mu(E_0\cap E)\leq\mu(E)<\delta<r_0\\\rho(E_0\cup E,E_0)=\mu\left((E_0\cup E)\Delta E_0\right)&=\mu(E\setminus E_0)\leq\mu(E)<\delta<r_0\end{aligned}$
so $E_0\setminus E$ and $E_0\cup E$ are both inside $\mathfrak{E}_{k_0}$. And so we calculate
$\displaystyle\lvert\nu_n(E)\rvert\leq\lvert\nu_{k_0}(E)\rvert+\lvert\nu_n(E_0\cup E)-\nu_{k_0}(E_0\cup E)\rvert+\lvert\nu_n(E_0\setminus E)-\nu_{k_0}(E_0\setminus E)\rvert$
The first term is less than $\frac{\epsilon}{3}$ by the definition of $\delta$. The second and third terms are less than or equal to $\frac{\epsilon}{3}$ because $E_0\cup E$ and $E_0\setminus E$ are in $\mathfrak{E}_{k_0}$. Since the same $\delta$ works for all $n$, the absolute continuity is uniform.
### Like this:
Posted by John Armstrong | Analysis, Measure Theory
## 1 Comment »
1. [...] Metric Spaces and Absolutely Continuous Measures II Yesterday, we saw that an absolutely continuous finite signed measure on a measure space defines a [...]
Pingback by | August 17, 2010 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 72, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467788934707642, "perplexity_flag": "head"}
|
http://nrich.maths.org/7282/clue
|
nrich enriching mathematicsSkip over navigation
### Iff
Prove that if n is a triangular number then 8n+1 is a square number. Prove, conversely, that if 8n+1 is a square number then n is a triangular number.
### Weekly Problem 10 - 2007
The square of a number is 12 more than the number itself. The cube of the number is 9 times the number. What is the number?
### Weekly Problem 32 - 2007
One of these numbers is the largest of nine consecutive positive integers whose sum is a perfect square. Which one is it?
# Generating Triples
##### Stage: 4 Challenge Level:
Here is an extract from Charlie's table:
Where is the $5^2$?
Where is the $12^2$?
Where is the $13^2$?
Where might we find a similar set of related square numbers?
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9062780737876892, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/51341/current-induced-when-dropping-a-magnet-through-a-coil
|
# Current induced when dropping a magnet through a coil
When graphing the induced current in a coil while a magnet is dropped through it why is the total area equal to 0? The area represents the charge in the coil but why must the resultant flow of charge be 0?
-
## 1 Answer
The EMF induced in the coil is given by:
$$\varepsilon=-N\frac{d\Phi_B}{dt}$$
where $d\Phi_B$ is the magnetic flux through the coil and $N$ is the number of windings. The current through the coil is given by Ohm's law:
$$I=\frac{\varepsilon}{R}=-\frac{N}{R}\frac{d\Phi_B}{dt}$$
where $R$ is the resistance of the coil. The total charge $C$ having gone through a conductor over a period of time is the time integral of the current:
$$C=\int^{t_2}_{t_1}Idt=-\frac{N}{R}\int^{t_2}_{t_1}\frac{d\Phi_B}{dt}dt=-\frac{N}{R}\left(\Phi_B(t_2)-\Phi_B(t_1)\right)$$
Assuming $\Phi_B$ has roughly the same value when the magnet has gone through the coil, $t_2$, as when it was dropped, $t_1$, this will be zero.
-
Are those two times, $t_1$ and $t_2$ referring to when the magnet is far above the coil, and far below the coil respectively? – kηives Jan 15 at 23:54
@kηives The formulas are valid for arbitrary times $t_1$ and $t_2$, but I guess they need to be when the magnet is far above and far below for the question to make sense. – jkej Jan 16 at 0:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9535626173019409, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/97523/segments-of-voronoi-diagrams-on-smooth-manifolds-are-they-geodesics
|
## Segments of Voronoi Diagrams on smooth manifolds. Are they geodesics?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $S$ be a patch of a smooth 2-manifold in $\mathbb{R}^3$, and pick two distinct points $a,\ b \in S$. Let $c$ be the set of points on $S$ equidistant to $a$ and $b$, where distance is defined by shortest paths on the surface. Call $c$ a bisector.
A bisector may be non-manifold at points (a torus with a long, thin rod attached has pairs of points with such bisectors). A bisector is, in a sense, a generalized kind of medial axis or Voronoi Diagram.
To illustrate, given a few points on $S$ we may construct a so called geodesic Voronoi Diagram. It's easy to see that it is made of segments of bisectors:
The resulting figure looks analogous to a classical Voronoi Diagram. How far does the analogy extend?
Q. Suppose we have a connected 1-manifold segment of the bisector of $a$ and $b$. Is it a segment of a geodesic on $S$?.
Credit:
The Illustration is from Franz Wolter's web article "Cut Locus and Medial Axis in the Euclidean Space and on Surfaces"
I also shamelessly copied from Joseph O'Rourke's related MO post.
-
## 5 Answers
It is a theorem of JK Beem (1975) Pseudo-Riemannian Manifolds with Totally Geodesic Bisectors, that bisectors are geodesics if and only if the manifold has constant curvature.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
The simplest example would seem to be an egg of revolution, not an ellipsoid, with one point the North Pole at the pointy end, while the other is the pole at the broad end. The bisector is a circle of revolution, a parallel, but not an equator..
Note that here, if we take a plane containing the axis of revolution and intersect with the figure, the resulting meridian is a geodesic, while also being a bisector for any pair of points symmetric across the plane.
A less symmetric example starts with the xy plane in $\mathbf R^3,$ then introducing a radially symmetric hill with support within, say, the standard unit disk. The bisector of the points $(8,0)$ and $(10,0)$ is still the geodesic $x=9.$ However, the bisector of the points $(-2,0)$ and $(2,\frac{1}{2})$ is a little peculiar near the origin.
-
More of a comment than an answer. Chapter 5 of William Goldman's book `Complex Hyperbolic geometry' has a great detail of information on the bisectors in complex hyperbolic space (so real dimension 2n, n>1) and has many pictorial representations of them. They are beautiful objects. Goldman asserts that there are no real codimension 1 totally geodesic submanifolds in complex hyperbolic space so the bisectors cannot be totally geodesic. The bisectors of complex hyperbolic space are minimal surfaces, all congruent to each other.
-
@Richard: I took the liberty of adding a figure from Goldman's 1998 preview of the book, which I found online (I don't have the book itself). – Joseph O'Rourke May 21 2012 at 20:14
See also the earlier MO question "Delaunay triangulations and convex hulls," where I posted this image:
Of course here, as per the theorem Igor quotes, the bisectors are geodesics.
-
Some theoretic reaults on bisecotr and Voronoi diagram on 2-manifold triangular surface are presented in:
Yong-Jin Liu, Zhan-Qing Chen, Kai Tang. Construction of Iso-contours, Bisectors and Voronoi Diagrams on Triangulated Surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 33, No. 8, pp. 1502-1517, 2011.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9126962423324585, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/62454/list
|
## Return to Question
3 added 10 characters in body
I've been trying lately to understand Fontaine's rings of periods, $B_{\mathrm{dR}}$, $B_{\mathrm{cris}}$, etc. However, I have a really hard time understanding and appreciating how to think about and use these. These rings seems so incredibly complicated and unintuitive it boggles my mindand ; I can never seem to be able to remember their construction.
So, how do I think about, learn and use these obscure objects? (Do I really need to know all the details in their construction to appreciate and use them?)
In addition, I know the context and Fontaine's original motivation for considering these rings, but have they found any unexpected uses outside their intented intended domain?
2 edited tags
1
# Fontaine's rings of periods
I've been trying lately to understand Fontaine's rings of periods, $B_{\mathrm{dR}}$, $B_{\mathrm{cris}}$, etc. However, I have a really hard time understanding and appreciating how to think and use these. These rings seems so incredibly complicated and unintuitive it boggles my mind and I can never seem to remember their construction.
So, how do I think about, learn and use these obscure objects? (Do I really need to know all the details in their construction to appreciate and use them?)
In addition, I know the context and Fontaine's original motivation for considering these rings, but have they found any unexpected uses outside their intented domain?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9729682207107544, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/2619/whats-the-exact-connection-between-bosonic-fock-space-and-the-quantum-harmonic/2623
|
# What's the exact connection between bosonic Fock space and the quantum harmonic oscillator?
Let's suppose I have a Hilbert space $K = L^2(X)$ equipped with a Hamiltonian $H$ such that the Schrödinger equation with respect to $H$ on $K$ describes some boson I'm interested in, and I want to create and annihilate a bunch of these bosons. So I construct the bosonic Fock space
$$S(K) = \bigoplus_{i \ge 0} S^i(K)$$
where $S^i$ denotes the $i^{th}$ symmetric power. (Is this "second quantization"?) Feel free to assume that $H$ has discrete spectrum.
What is the new Hamiltonian on $S(K)$ (assuming that the bosons don't interact)? How do observables on $K$ translate to $S(K)$?
I'm not entirely sure this is a meaningful question to ask, so feel free to tell me that it's not and that I have to postulate some mechanism by which creation and/or annihilation actually happens. In that case, I would love to be enlightened about how to do this.
Now, various sources (Wikipedia, the Feynman lectures) inform me that $S(K)$ is somehow closely related to the Hilbert space of states of a quantum harmonic oscillator. That is, the creation and annihilation operators one defines in that context are somehow the same as the creation and annihilation operators one can define on $S(K)$, and maybe the Hamiltonians even look the same somehow.
Why is this? What's going on here?
Assume that I know a teensy bit of ordinary quantum mechanics but no quantum field theory.
-
2
Hello Qiaochu, welcome to physics.SE! Nice question and I hope we can expect many more :-) – Marek Jan 8 '11 at 3:31
What is $S^i(K)$ @Qiaochu ? – user346 Jan 8 '11 at 5:10
1
@space_cadet: the i^{th} symmetric power, i.e. the Hilbert space of states of i identical bosons. – Qiaochu Yuan Jan 8 '11 at 13:14
Ah ok. In the physics literature $H$, almost always, denotes the Hamiltonian and $S$ the action. – user346 Jan 8 '11 at 13:25
2
$H$ on $Sym^2(K)$ is really $H\otimes 1 + 1 \otimes H$ and likewise for $a$ and $a^\dagger$. So for example the energy is the sum of the (uncoupled) energies. You might have expected $H\otimes H$, for example, but $H$ generates an infinitesimal translation in time. Exponentiating gives the expected result on the propogator $U = exp(tH)$ as $U\otimes U.$ – Eric Zaslow Jan 8 '11 at 20:20
show 3 more comments
## 3 Answers
Let's discuss the harmonic oscillator first. It is actually a very special system (one and only of its kind in whole QM), itself being already second quantized in a sense (this point will be elucidated later).
First, a general talk about HO (skip this paragraph if you already know them inside-out). It's possible to express its Hamiltonian as $H = \hbar \omega(N + 1/2)$ where $N = a^{\dagger} a$ and $a$ is a linear combination of momentum and position operator). By using the commutation relations $[a, a^{\dagger}] = 1$ one obtains basis $\{ \left| n \right >$ | $n \in {\mathbb N} \}$ with $N \left | n \right > = n$. So we obtain a convenient interpretation that this basis is in fact the number of particles in the system, each carrying energy $\hbar \omega$ and that the vacuum $\left | 0 \right >$ has energy $\hbar \omega \over 2$.
Now, the above construction was actually the same as yours for $X = \{0\}$. Fock's construction (also known as second quantization) can be understood as introducing particles, $S^i$ corresponding to $i$ particles (so HO is a second quantization of a particle with one degree of freedom). In any case, we obtain position-dependent operators $a(x), a^{\dagger}(x), N(x)$ and $H(x)$ which are for every $x \in X$ isomorphic to HO operators discussed previously and also obtain base $\left | n(x) \right >$ (though I am actually not sure this is base in the strict sense of the word; these affairs are not discussed much in field theory by physicists). The total hamiltonian $H$ will then be an integral $H = \int H(x) dx$. The generic state in this system looks like a bunch of particles scattered all over and this is in fact particle description of a free bosonic field.
-
1
I realize I left your original Hamiltonian $H$ out of the discussion. I'll add that to the answer later. For now note that $x$ is in no way special in the above, we could have used other "basis" of $K$ like momentum and in particular energy basis of the $H$. In that case the relevant states for $S(K)$ become $\left | n_0 n_1 \cdots \right>$ with $n_i$ telling us how many particles are in the state with energy $E_i$. – Marek Jan 8 '11 at 4:07
@Marek: thanks! I would definitely appreciate some pointers about exactly what to do with the original Hamiltonian. Some follow-up questions: are the creation and annihilation operators observables? Is number going to turn out to be a conserved quantity in the general case? – Qiaochu Yuan Jan 8 '11 at 14:05
@Marek: and one more question. Given an observable A on K, what's the corresponding observable on S(K)? I can think of a few different possibilities and I'm not sure which one physicists actually use. – Qiaochu Yuan Jan 8 '11 at 14:27
1
@Qiaochu: true, but I thought you were asking how to promote observables from $K$ to $S(K)$. $N(\lambda)$ are completely new operators than need the structure of $S(K)$ to be defined. As for interactions: well, that is a topic for a one-semester course in quantum field theory so I recommend you ask this as a separate question. But in short: in general any $H_I$ is possible. But physical ones need to conserve energy, momentum and in fact complete Poincaré symmetry. So one uses representations of Poincaré group to restrict possible choices of $H_I$. – Marek Jan 8 '11 at 15:24
1
– Marek Jan 8 '11 at 15:30
show 3 more comments
Reference: Fetter and Walecka, Quantum Theory of Many Particle Systems, Ch. 1
The Hamiltonian for a SHO is:
$$H = \sum_{i = 0}^{\infty}\hbar \omega ( a_i^{+} a_i + \frac{1}{2} )$$
where $\{a^+_i, a_i\}$ are the creation and annihilation operators for the $i^\textrm{th}$ eigenstate (momentum mode). The Fock space $\mathbf{F}$ consists of states of the form:
$$\vert n_{a_0},n_{a_1}, ...,n_{a_N} \rangle$$
which are obtained by repeatedly acting on the vacuum $\vert 0 \rangle$ by the ladder operators:
$$\Psi = \vert n_{i_0},n_{i_1}, ...,n_{i_N} \rangle = (a_0^+)^{i_0} (a_1^+)^{i_1} \ldots (a_N^+)^{i_N} \vert 0 \rangle$$
The interpretation of $\Psi$ is as the state which contains $i_k$ quanta of the $k^\textrm{th}$ eigenstate created by application of $(a^+_k)^{i_k}$ on the vacuum.
The above state is not normalized until multiplied by factor of the form $\prod_{k=0}^N \frac{1}{\sqrt{k+1}}$. If your excitations are bosonic you are done, because the commutator of the ladder operators $[a^+_i,a_j] = \delta_{ij}$ vanishes for $i\ne j$. However if the statistics of your particles are non-bosonic (fermionic or anyonic) then the order, in which you act on the vacuum with the ladder operators, matters.
Of course, to construct a Fock space $\mathbf{F}$ you do not need to specify a Hamiltonian. Only the ladder operators with their commutation/anti-commutation relations are needed. In usual flat-space problems the ladder operators correspond to our usual fourier modes $a^+_k \Rightarrow \exp ^{i k x}$. For curved spacetimes this can procedure can be generalized by defining our ladder operators to correspond to suitable positive (negative) frequency solutions of a laplacian on that space. For details, see Wald, QFT in Curved Spacetimes. Now, given any Hamiltonian of the form:
$$H = \sum_{k=1}^{N} T(x_k) + \frac{1}{2} \sum_{k \ne l = 1}^N V(x_k,x_l)$$
with a kinetic term $T$ for a particle at $x_k$ and a pairwise potential term $V(x_k,x_l)$, one can write down the quantum Hamiltonian in terms of matrix elements of these operators:
$$H = \sum_{ij} a^+_i \langle i \vert T \vert j \rangle a_i + \frac{1}{2}a^+_i a^+_j \langle ij \vert V \vert kl \rangle a_l a_k$$
where $|i\rangle$ is the state with a single excited quantum corresponding the action of $a^+_i$ on the vacuum. (For details, steps, see Fetter & Walecka, Ch. 1).
I hope this helps resolves some of your doubts. Being as you are from math, there are bound to be semantic differences between my language and yours so if you have any questions at all please don't hesitate to ask.
-
Can you explain the notation in that last formula? What are the b_i? – Qiaochu Yuan Jan 8 '11 at 18:00
@qiaochu that was a typo. Its fixed now. – user346 Jan 8 '11 at 20:29
1
As recently as 10 years ago Welecka was still teaching as William & Mary. It's worth taking his course. Any course. Or even going to see a talk. Really. – dmckee♦ Jan 8 '11 at 23:00
Suppose, as you do, that $K$ is the space of states of a single boson. Then the space of states of a combined system of two bosons is not $K\otimes K$ as it would be if the two bosons were distinguishable, it is the symmetric subspace which you are denoting as $S^2$. Your sum over all $i$, which you denote $S$, is then a HIlbert space (state space) of a new system whose states contain the states of one-boson system, a two-boson system, a three-boson system, etc. except not an infinite number of bosons. (that is not included in the space $S$). And your space $S$ includes superpositions, for example if $v_1$ is an element of $S$ (a state of one boson) and if $v_3 \in S^3$ (a state of a three boson system) then $0.707 v_1 - -.707 v_3$ is a state which has a fifty per cent. probability of being one boson, if the number of particles is measured, and a fifty per cent. probability of being found to be three bosons. That is the physical meaning of Fock space. It is the state space on which the operators of a quantum field act.
As already remarked by Eric Zaslow, if $H$ is the Hamiltonian of the h.o. $K$, then by definition, $H\otimes I + I \otimes H$ is the Hamiltonian on $S^2$, etc. on each $S^i$. Then one sums them all up to get a Hamiltonian on the direct sum $S$.
Unless this Hamiltonian is perturbed, the number of particles is constant, obviously, since it preserves each subspace $S^i$ of $S$. So there will be no creation or annihilation of pairs of particles. If this field comes into interaction with an extraneous particle, the Hamiltonian will be perturbed of course.
It is connected with second quantisation as follows: if you have a classical h.o. and quantise it, you get $K$. If you now second quantise $K$, you get $S$ which can be regarded as a quantum field. Sir James Jeans showed, before the quantum revolution, that the classical electromagnetic field could be obtained from the classical mechanics h.o. as a limit of more and more classical h.o.'s not interacting with each other, and this procedure of second quantisation is a quantum analogue. It is not the same procedure as if you start with a classical field and then quantise it. But it is remarkable that you can get the same anser either way, as JEans noticed in the classical case. That is, you started with a quantum one-particle system and passed to Fock space and got the quantum field theory corresponding to that system. But we could have started with a classical field and quantised it, and gotten the quantum field that way.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 97, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381653666496277, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/52041/proof-for-p-gamma-pmu
|
# Proof for $p=\gamma_Pmu$
As I'm reading about Relativistic Momentum, my book states the following:
$$p=m \frac{\Delta x}{\Delta t}=m\frac{\Delta x}{\sqrt{(1-u^2/c^2)}\Delta t}=\frac{mu}{\sqrt{1-u^2/c^2}}=\gamma_Pmu$$
"Whether this is the 'correct' expression for p depends on whether the total momentum P is conserved when the velocities of a system of particles are transformed with the Lorentz velocity transformation equations. The proof is rather long and tedious..."
I'm interested in seeing the proof that they are describing as "long and tedious."
-
1
excellent question IMO :-) – David Zaslavsky♦ Jan 24 at 6:59
To be clear: Let $\Lambda$ be a Lorentz transformation and $P=\sum_i p_i$ the total momentum. Is the question whether $\Lambda(P) = \sum_i \Lambda(p_i)$? You would need to prove that Lorentz transformations act linearly on the momentum. This is trivial in the four-vector formalism where you extend the momentum 3-vector $p_i$ by adding a time component - the energy: $p_{i\mu} = (E_i, p_i)$. If you just focus on the spatial components it becomes hard because of the $\gamma$ factors and the energy gets involved. The easiest case to check would be when the momenta and the boost are all co-linear. – Michael Brown Jan 24 at 7:42
Your first $m$ looks like "relativistic" mass, your second $m$ rest mass, and therefore should be labelled differently. – Larry Harson Jan 24 at 19:15
Check my anwser. It might be the long way you want. – 71GA Jan 26 at 13:44
## 3 Answers
First you need to make sure that newton's momentum isn't conserved here so you test this in a case of two balls travelling towards eachother. What you will notice is that momentum is conserved in coordinate system $xy$ and is not conserved in coordinate system $x'y'$.
Here is how you prove that momentum is conserved in $xy$ ($p_1$ is momentum before collision and $p_2$ is momentum after collision):
$$\scriptsize \begin{split} p_1 &= \left[m_A v_{A1x} + m_{B} v_{B1x}\, \bigl| \, 0\right] = \left[m v- m v\, \bigl| \, 0\right] = m\left[v-v \, \bigl| \, 0 \right]=m\left[0 \, \bigl| \, 0 \right] \\ p_2 &= \left[m_A v_{A2y} + m_{B} v_{B2y} \, \bigl| \, 0 \right] = \left[m v - mv \, \bigl| \, 0 \right] = m\left[v-v \, \bigl| \, 0 \right]=m\left[0 \, \bigl| \, 0 \right] \\ \\p_1 &= p_2 \end{split}$$
I used quotes like $[~~|~~]$ just to sepparate $x$ and $y$ components of momentum like this $p_1=[p_{1x}|p_{1y}]$.
Now you proove that momentum in coordinate system $x'y'$ isn't conserved. Here you must pay a great deal on coordinate system travelling from left to right at all times - even after the collision (see the picture). This means that after collision both balls will allso have $x'$ components. Here is the proof of inequality:
Before collision: $$\scriptsize \begin{split} p_1' &= \left[ m_A v_{A1x}' + m_B v_{B1x}'\, \biggl| \, 0 \right] = \left[ m_A 0 + m_B \left( \frac{v_{B1x} - u}{1-v_{B1x}\frac{u}{c^2}} \right)\, \biggl| \, 0 \right]= m \left[\left( \frac{-v - v}{1+ v \frac{v}{c^2}} \right) \, \biggl| \, 0 \right] =~~~\\ &= m\left[ - 2v \left( \frac{1}{1+ \frac{v^2}{c^2}}\right) \, \biggl| \, 0 \right] \end{split}$$
After collision: $$\scriptsize \begin{split} p_2' &= \left[-2mv \, \biggl| \,m_A v_{A2y}' + m_B v_{B2y}'\right]=\\ &=\left[ -2mv \, \biggl| \, m_A \left( \frac{v_{A2y}}{\gamma \left(1 - v_{A2x} \frac{u}{c^2}\right)} \right) + m_B \left( \frac{v_{B2y}}{\gamma \left(1 - v_{B2x} \frac{u}{c^2}\right)} \right) \right]=\\ &= \left[ -2mv \, \biggl| \, m \left( \frac{v}{\gamma \left(1 - 0 \frac{v}{c^2}\right)} \right) - m \left( \frac{v}{\gamma \left(1 - 0 \frac{v}{c^2}\right)} \right)\right]= m\left[ -2v \, \biggl| \, \left( \frac{v}{\gamma} \right) - \left( \frac{v}{\gamma} \right)\right]=\\ &= m \left[ -2v \, \biggl| \, 0 \right] \end{split}$$
You will notice that equations differ for a factor of $1/(1+ v^2/c^2)$ and it means that classical momentum is not appropriate for relativity as it isnt conserved in all coordinate systems.
Now you write down an inequality just found out, do some algebra and you predict that you can change sign $\neq$ with $=$ if you multiply both sides of inequality equation by some function... i used notation $\gamma(v)$.
$$\scriptsize \begin{split} m\left[ -2v \left( \frac{1}{1+\frac{v^2}{c^2}} \right) \, \biggl| \, 0 \right] &\neq m \left[-2v \, \biggl| \, 0 \right]\\ -2 mv \left( \frac{1}{1+\frac{v^2}{c^2}} \right) &\neq -2 mv\\ \frac{-2 mv}{1+\frac{v^2}{c^2}} &\neq -2 mv\\ \gamma(v) \, \frac{-2 mv}{1+\frac{v^2}{c^2}} &= -2 mv \, \gamma(v)\\ \end{split}$$
We found out that the function $\gamma(v)$ is actually like this (i don't know exactly how scientists calculated it or derived it, but they found it):
$$\scriptsize \gamma(v) = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$
CAUTION: $v$ is not a relative speed of coordinate systems mooving relative to eachother (which i denote usualy by $u$) but it is a full speed of an object in some coordinate system.
Now you only need a proof to proove that this function is the one. And here is the finale. Because in $\gamma(v)$ there is a full speed you first calculate full speed $v$ for an object before and after collision in coordinate system $x'y'$ and then use those to proove that if you multiply inequality equation above with $\gamma(v)$ you get equality. I choose only one object (that is ball with mass $m_A$) and check its momentum before and after the collision:
Full speed of $m_A$ in $x'y'$ before collision:
$$\scriptsize v = -2v \left( \frac{1}{1+\frac{v^2}{c^2}} \right)= \frac{-2v}{1+\frac{v^2}{c^2}}\\$$
Full speed of $m_A$ in $x'y'$ after collision:
$$\scriptsize \begin{split} v&=\sqrt{(-v)^2 + \left(\frac{v}{\gamma}\right)^2}=v \sqrt{1+ \frac{1}{\gamma^2}} =\\ &= v \sqrt{ 1 + \left(1 - \frac{u^2}{c^2}\right)} = v \sqrt{2 - \frac{v^2}{c^2}} \end{split}$$
And the proof that momentum of ball with mass $m_A$ is conserved now:
$$\begin{split} \gamma \! \left(\frac{-2v}{1+\frac{v^2}{c^2}}\right) \, \frac{-2mv}{1 + \frac{v^2}{c^2}} &= -2mv \, \gamma \! \left(v \sqrt{2-\frac{v^2}{c^2}}\right)\\ \frac{1}{\sqrt{1- \frac{\left\{-2v/\left(1+v^2\!/\!c^2\right)\right\}^2}{c^2}}} \, \frac{-2mv}{1 + \frac{v^2}{c^2}} &= -2mv \, \frac{1}{\sqrt{1-\frac{\left\{v\sqrt{2-v^2\!/\!c^2}\right\}^2}{c^2}}}\\ \frac{1}{\sqrt{1- \frac{\left\{-2v/\left(1+v^2\!/\!c^2\right)\right\}^2}{c^2}}} \, \frac{1}{1 + \frac{v^2}{c^2}} &= \frac{1}{\sqrt{1-\frac{\left\{v\sqrt{2-v^2\!/\!c^2}\right\}^2}{c^2}}}\\ \frac{1}{1- \frac{\left\{-2v/\left(1+v^2\!/\!c^2\right)\right\}^2}{c^2}} \, \frac{1}{\left(1 + \frac{v^2}{c^2}\right)^2} &= \frac{1}{1-\frac{\left\{v\sqrt{2-v^2\!/\!c^2}\right\}^2}{c^2}}\\ \frac{1}{1- \frac{4v^2}{c^2 \left(1+v^2\!/\!c^2\right)^2}} \, \frac{1}{\left(1 + \frac{v^2}{c^2}\right)^2} &= \frac{1}{1-\frac{v^2\left(2-v^2\!/\!c^2\right)}{c^2}}\\ \frac{1}{\frac{c^2 \left(1+v^2\!/\!c^2\right)^2-4v^2}{c^2 \left(1+v^2\!/\!c^2\right)^2}} \, \frac{1}{\left(1 + \frac{v^2}{c^2}\right)^2} &= \dots\\ \frac{{\bigl(1+\frac{v^2}{c^2}\bigl)}^2}{\frac{c^2 \left(1+v^2\!/\!c^2\right)^2-4v^2}{c^2}} \, \frac{1}{\left(1 + \frac{v^2}{c^2}\right)^2} &= \dots\\ \frac{1}{\frac{c^2 \left(1+v^2\!/\!c^2\right)^2-4v^2}{c^2}} &= \dots\\ \frac{1}{\frac{c^2 \left(1+2v^2\!/\!c^2+v^4\!/\!c^4\right)-4v^2}{c^2}} &= \dots\\ \frac{1}{\frac{c^2+2v^2+v^4\!/\!c^2-4v^2}{c^2}} &= \dots\\ \frac{1}{\frac{c^2-2v^2+v^4\!/\!c^2}{c^2}} &= \dots\\ \frac{1}{\frac{c^2}{c^2} - 2\frac{v^2}{c^2} + \frac{v^4}{c^4}} &= \dots\\ \frac{1}{1 - \frac{v^2}{c^2} \left( 2 - \frac{v^2}{c^2} \right)} &= \dots\\ \frac{1}{1 - \frac{v^2 \left( 2 - v^2\!/\!c^2 \right)}{c^2}} &= \frac{1}{1-\frac{v^2\left(2-v^2\!/\!c^2\right)}{c^2}}\\ \end{split}$$
Wohoho now we have equality all of a sudden. Well thats what we wanted right? This is the proof that equation $p=\gamma(v)mv$ is the one to replace $p=mv$.
This is really step by step using easy algebra. I used ˝$\dots$˝ where i didn't change anything compared to a line above.
CAUTION: If you read this you own me a beer hehe. This and more of stuff like this will be in my new book. It is $1/4$ done at the moment.
-
The answer to this question depends on what you mean by "derive." If you were to define relativistic momentum by the expression $\gamma m v$, then you could show, in real, physical experiments, that the relativistic momentum of an isolated system of particles is conserved. Moreover, one can show mathematically that this conservation law is Lorentz-covariant in the sense that it holds in all inertial frames, and that it reduces to the Newtonian expression at low speeds.
The covariance property is what the quote is referring to as far as I can tell, and I think this is what Michael Brown is referring to in his comment as well. The idea is that in writing an expression for the relativistic momentum, we want one that leads to conservation of this quantity in all inertial frames, and one that reduces to the Newtonian expression $mv$ in the limit of small velocities. If we were to find an expression for relativistic momentum that weren't to satisfy these criteria, then we would be inclined to call it "incorrect."
Having said this, here is how you would proceed with the proof of Lorentz-covariance of relativistic momentum in the case of a single space dimension and two particles colliding (the generalization to higher dimensions and more particles is more tedious but doesn't really add to understanding).
First, one shows that relativistic momentum transforms as follows under a change of inertial frame from frame $S$ to frame $S'$:
$p' = \gamma(p - v E/c^2)$
where $p$ and $p'$ are the momenta in the respective frames, $v$ is the relative velocity of the frames, and $E$ is the energy ($\gamma m c^2$). Then one notes that if we have conservation of energy and momentum in frame $S$
$p_{1i} + p_{2i} = p_{1f} + p_{2f}, \qquad E_{1i} + E_{2i} = E_{1f} + E_{2f}$
then in $S'$ one has
$p'_{1f} + p'_{2f} = \gamma(p_{1f} + p_{2f} - v(E_{1f}+E_{2f})/c^2) =\gamma(p_{1i} + p_{2i} - v(E_{1i}+E_{2i})/c^2) = p'_{1i} + p'_{2i}$
so that momentum is conserved in $S'$, and similarly for energy.
Please let me know if there are typos. Hope this helps!
Cheers!
-
$$p=m \frac{\Delta x}{\Delta t}=m_o\frac{\Delta x}{\sqrt{(1-u^2/c^2)}\Delta t}=\frac{m_ou}{\sqrt{1-u^2/c^2}}=\gamma_Pm_ou$$
m is relativistic mass m$_o$ is rest mass m= m$_o\gamma$ where the $\gamma$ is lorentz factor
-
2
This doesn't answer the question. Saying that relativistic momentum can be written as $mv$ where one has defined $m = m_0 \gamma$ is the same as simply saying that relativistic momentum equals $\gamma m_0 v$. Also, the concept of relativistic mass is antiquated. The mass of the object IS it's rest mass; there's no need to introduce a notion of relativistic mass in relativity. – joshphysics Jan 24 at 16:51
@joshphysics "no need to introduce a notion of relativistic mass in relativity." then can you tell me what is difference between m and m$\gamma$ introduce a notion of relativistic mass in relativity is necessary. – king Jan 24 at 19:07
I recommend avoiding relativistic mass as long as possible. – 71GA Jan 24 at 19:48
1
@king In the modern treatment, $m$ denotes what you're calling the rest mass of the particle, and $\gamma m = \frac{1}{\sqrt{1-v^2/c^2}}m$. When I say that there's no need to introduce the notion of relativistic mass, I'm not saying that you're somehow forbidden from calling the quantity $\gamma m$ the "relativistic mass" of a particle, but this often leads people to conceptual confusion (in my experience), and one can precisely state every result in relativity, and perform any calculation he/she would like, without introducing such a term. So why not insist on terminological simplicity? – joshphysics Jan 24 at 22:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412528872489929, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/74388?sort=votes
|
## Extensions to the Golden-Thompson inequality?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $A$ and $B$ be two Hermitian matrices. The famous Golden-Thompson inequality states that
$$\text{tr}(e^{A+B}) \le \text{tr}(e^Ae^B)$$
However, for determinants we have equality
$$\det(e^{A+B}) =\det(e^Ae^B)$$
I was wondering if similar results can be shown, if instead of trace and determinant, we use any of the other fundamental scalar functions of a matrix (e.g., trace is $\phi_1(X) :=\sum_i \lambda_i(X)$; $\phi_2(X)=\sum_{i \neq j} \lambda_i(X)\lambda_j(X)$, determinant is $\phi_n$)
PS: Please feel free to add more tags, if you deem it to be necessary.
-
## 1 Answer
This is theorem IX.3.5 in "Matrix analysis" by R. Bhatia (Graduate Texts in Mathematics, 169). See also corollary IX.3.6 and theorem IX.3.7. The Golden-Thompson inequality holds when $Tr$ is replaced with a function $f$ which satisfies $f(XY)=f(YX)$ and $|f(X^{2m})|\le f(|XX^{\ast}|^m)$ for all $m\geq 1$. Such functions can be the elementary symmetric functions in the eigenvalues as in your question, the product of the $k$ largest eigenvalues (in absolute value) etc.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8629586100578308, "perplexity_flag": "head"}
|
http://mathhelpforum.com/number-theory/196660-finding-terms-geometric-progression.html
|
# Thread:
1. ## finding the terms of a geometric progression
Hi,if we are given two equidistant terms of a geometric progression and also the sum of its terms , then how can we find the actual terms of the G.P? e.g if we are given the 3rd and last terms of the G.P as 12 and 48 and the sum of terms is 393 then the number of terms is 7 and they are: 3,6,12,24,48,96,192 .
I am aware of the following relations:
1. The sum of terms of G.P = a(r^n-1)/(r-1)
2. Product of equidistant terms of a G.P = product of extremes.
But am not sure how to apply them here to get the result.
Thanks.
2. ## Re: finding the terms of a geometric progression
Originally Posted by pranay
if we are given the 3rd and last terms of the G.P as 12 and 48 and the sum of terms is 393 then the number of terms is 7 and they are: 3,6,12,24,48,96,192 .
How are 12 and 48 the "3rd and last" terms of the sequence? Isn't 192 the last term?
I am not sure what is given in your problem. I assume you are given that 12 are 48 are terms equidistant to the extremes and that the sum of terms is 393. Are you given the sequence length and the fact that 12 is the third term?
3. ## Re: finding the terms of a geometric progression
Can you make us a clear question here. Those terms add up to 381.
4. ## Re: Finding the terms of a geometric progression
Originally Posted by pranay
Hi,if we are given two equidistant terms of a geometric progression and also the sum of its terms , then how can we find the actual terms of the G.P? e.g if we are given the 3rd and last terms of the G.P as 12 and 48 and the sum of terms is 393 then the number of terms is 7 and they are: 3,6,12,24,48,96,192 .
I think you are talking about the middle term of a GP with an odd number of terms. (The middle term is equidistant from the first and last terms (with respect to position in the progression) but we don't call it the "equidistant term"; that's bad English. ) By the way, if the middle term of a progression is the 3rd term, then the progression has 5 terms, not 7. Your GP is simply $3,6,12,24,48$ (and the sum of terms is $93$ (not 393)).
Originally Posted by pranay
2. Product of equidistant terms of a G.P = product of extremes.
Better way to put it: Square of middle term of a GP = product of first and last terms.
Right, now that we've cleared things up, we can answer your question. The 3rd and 5th terms of a GP are $12$ and $48$ respectively, and the sum of the 5 terms is $93.$ Find the GP.
Solution:
Let the first term be $a.$ Using the fact that the square of the middle term is equal to the product of first and last terms, $12^2=144=48a$ $\implies$ $a=3.$ To find the common ratio, let it be $r.$ Equating the 3rd term from formula, $3r^2=12$ $\implies$ $r=\pm2.$ But if $r=-2$ then, using the sum-of-terms formula, the sum of the GP would be $33$ and not $93.$ Hence $r=2$ and the GP is $3,6,12,24,48.$
5. ## Re: finding the terms of a geometric progression
Originally Posted by emakarov
How are 12 and 48 the "3rd and last" terms of the sequence? Isn't 192 the last term?
I am not sure what is given in your problem. I assume you are given that 12 are 48 are terms equidistant to the extremes and that the sum of terms is 393. Are you given the sequence length and the fact that 12 is the third term?
i am extremely sorry ..given are the 3rd and 3rd last terms of the g.p(equidistant from extremes) .Here 12 and 48 . and the sum of terms is 381.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381418228149414, "perplexity_flag": "head"}
|
http://nrich.maths.org/2282/note
|
### Days and Dates
Investigate how you can work out what day of the week your birthday will be on next year, and the year after...
### Plum Tree
Label this plum tree graph to make it totally magic!
### Magic W
Find all the ways of placing the numbers 1 to 9 on a W shape, with 3 numbers on each leg, so that each set of 3 numbers has the same total.
# More Number Pyramids
### Why do this problem?
This problem offers students the opportunity to notice patterns, make conjectures, explain what they notice and prove their conjectures. Generalisation provokes the need to use algebraic techniques such as collecting like terms and representing number sequences algebraically.
### Possible approach
This problem follows on nicely from Number Pyramids
What follows could be done in a classroom with students working on paper, or in a computer room so that students can make use of the interactivity and spreadsheet.
Start by showing the interactivity:
"I'm going to type in a number (2), and I'd like you to watch what happens. Can you work out what is going on? Do you notice anything interesting?"
Allow students a short time to discuss in pairs what they saw.
"In a moment, I'm going to type in the number 7. Can you predict what will happen?"
Give pairs a little time to discuss and decide, then show what happens.
"In a while, I'm going to ask you to share anything interesting you have noticed, and any questions that have arisen. You might want to try some more examples to test out your ideas or to give you more data before looking for patterns. Or you might like to think about different ways of representing what's going on in the pyramid."
After students have had plenty of time to explore, bring the class together and share noticings and conjectures. If no-one has considered using algebra, this would be a good time to suggest representing the bottom left corner with $n$ for example, and working out the other entries in terms of $n$.
Once the class have an algebraic expression for the top number, this can be used in two ways:
• Can they explain why it's impossible for some numbers to appear at the top (when an integer is entered at the bottom)?
• Given a top number, can they use their expression to find what number should be entered at the bottom to generate it?
"In these number pyramids, the bottom layer is always a set of consecutive numbers, but there's no reason why the bottom layer couldn't be any other number sequence - starting at 13 and going up in 4s for example. Is there a quick way to work out what the top number will be? Explore some different sequences and use algebra to help you predict and explain what happens."
If students are in a computer room, this spreadsheet can be used to explore different number sequences.
### Key questions
Can you work out what is going on in this pyramid of numbers?
What do you notice about the numbers on each row of the pyramid?
How do we know that $8x+12$ is always a multiple of $4$ but never a multiple of $8$?
(for integer values of $x$)
### Possible extension
Given the top number and either the starting number or the difference between the numbers on the bottom layer, can students work out the missing piece of information?
### Possible support
Students could work on Number Pyramids first in order to gain some familiarity with the structure underlying the problem.
The group could be split so that some investigate sequences that go up in 2s, some 3s, some 4s and so on. Then the class could come together to share what they have found out before generalising to any sequence.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237105250358582, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/62974-help-area-region-print.html
|
Help with area of region
Printable View
• December 2nd 2008, 06:18 PM
khuezy
Help with area of region
Find the area of the region bounded by the x-axis and the cardiod r=1+cos(theta) from 0 to pi.
Can anyone give me some tips on how to approach this problem?
Thanks in advance.
• December 3rd 2008, 12:55 AM
shawsend
That's just the area of a region in polar coordinates right?
$A=1/2\int_{\theta_1}^{\theta_2} \left[f(\theta)\right]^2 d\theta$
Tips you say? The best I can think is to find this section in your calculus book, work through the one or two examples they'll have, then work through yours. :)
All times are GMT -8. The time now is 04:12 AM.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321728944778442, "perplexity_flag": "middle"}
|
http://sumidiot.wordpress.com/tag/hardy-and-wright/
|
# ∑idiot's Blog
The math fork of sumidiot.blogspot.com
## Posts Tagged ‘hardy and wright’
### Hardy and Wright, Chapters 12 and 13
June 27, 2009
Guess I kept getting distracted from posting about our meeting last week about Chapter 12. So here it is (what I remember of it – that was a long time ago), with notes from today’s meeting as well.
Honestly, we didn’t talk too much about the content of chapter 12, directly. The three of us were fairly comfortable with most of it from our algebra classes a few years ago. But Chris asked “How many of these quadratic extensions are Euclidean domains?” He had looked into this before the meeting, and found that it is an active topic of research, and that some results are known. I pointed out that we’d get to see some of them in Chapter 14.
Eric mentioned that the norm $N(a+b\sqrt{m})=a^2-mb^2$ could be attained in a more general fashion. Namely, these sets of “numbers” are elements of degree 2 extensions of $\mathbb{Q}$. That means they are a 2 dimensional vector space over $\mathbb{Q}$. And each element $a+b\sqrt{m}$ acts on that vector space as a linear transformation. By writing down a matrix for that transformation and taking it’s determinant, you recover the norm. He thought these ideas had further generalizations, but I don’t remember how much he told us about.
I do remember him telling us that this result that $1-\rho$ is a prime ($\rho$ being the primitive third root of unity, prime in what HW denote $k(\rho)$) was also somewhat generalizable. Somehow, 1-(thing) is often a prime, or so he said. I tried thinking about $1-\rho$ in the complex plane, but had no idea what the geometry would tell me about it being prime, or vice-versa. Eric and I talked about it and the relation to the hexagonal lattice, but didn’t get too far.
So that was what our talk inspired by Chapter 12 covered (at least, that’s what I remember of it). Today, Chris and I met and talked briefly about Chapter 13. Again, neither of us had too much to say specifically about the reading. I mentioned that I had just used the results about all the Pythagorean triples to solve a Project Euler problem.
From the chapter notes, I saw that the result about $x^3+y^3+z^3=0$ having no integer solutions was given as an exercise in something by Landau. I asked Chris if he knew how to phrase any of it in terms of ideals, since he’s an algebraist, but he didn’t. I also asked if some of the manipulations in 13.6, specifically the dividing by $Z$ at some point, was inspired by some sort of projective space to affine space conversion. Dividing by capitals $Z$s apparently makes me think that. Chris allowed how it might be the case, and that algebraic geometry probably could come in here. But neither of us really had much specific to say.
Finally, I relayed the historical anecdote about the integers that are the sum of two cubes in two different ways. The story about Hardy visiting Ramanujan in the hospital, saying that his cab number (1729) wasn’t particularly interesting, and Ramanujan pointing out that it was the smallest integer expressible as the sum of two (positive) cubes in two distinct ways. I didn’t relay how I remembered noticing that 1729 was one of the house numbers in the movie Untraceable. I don’t know if they did that on purpose or not, but it’s there.
Tags:hardy and wright, number theory
### Hardy and Wright, Chapter 11 (part 2)
June 16, 2009
Today we finished off Chapter 11. We worked through some of the proofs, and discussing the meaning of some of theorems, and a good time was had by all.
In 11.10 it is mentioned that there are some notable constant multiples besides $\sqrt{5}$ and $2\sqrt{2}$ in the bounding inequalities on approximating irrationals by rationals. However, the text doesn’t mention what they are, which I thought was unfortunate. I also wondered what sort of numbers are the troublesome examples for the constant $2\sqrt{2}$. That is, the troublesome number for $\sqrt{5}$ is the golden ratio (or anybody whose continued fraction ends in a string of 1s), so what numbers do it for $2\sqrt{2}$. I think we decided that probably it was not a single number, but more like… any number whose continued fraction expansion is just lots of 1s and 2s. The more ones, the worse the number, in some sense. But as long as there are infinitely many twos, maybe you start running into, or getting close to, this $2\sqrt{2}$ bound.
We talked a little bit of our way through the proof that almost all numbers have arbitrarily large “quotients” (the $a_n$ in the continued fraction). I tried to dig up some memories from my reading of Khinchin’s book about how to picture some of the intervals and things in the proof. I have this pictures in my head of rectangles over the interval $[1/(n+1),1/n]$ of height the length of the interval (I guess that makes them squares, huh?). So the biggest rectangle is the one between 1/2 and 1, and they get smaller as you move left. Then each rectangle is split up again, this time with the rectangles getting smaller as you move to the right (within one of the first-stage rectangles). The first set of rectangles correspond, somehow, to the first term of continued fractions, and the second (smaller) rectangles correspond to the second term. Probably I should dig out that book and try to figure out what this picture actually says, but for now… that’s the picture I have in my head.
We were all a little bit slow in understanding some of the later proofs about things like the discussion in 11.11: “Further theorems concerning approximations”. But we also didn’t seem interested enough to really dive in to it.
In the section on simultaneous approximations, Eric mentioned that similar things are done in other contexts (like, perhaps, $p$-adics). When you have valuations, you prove a weak (single) and strong (simultaneous) theorem about approximations. While we were talking about it, I wondered if there was some analogy to the distinction between continuous (at each point in an interval) and uniformly continuous (on that interval). It seems like there maybe should be.
Finally, we spent a while digging through the proof that $e$ is transcendental. Mostly because I was stubbornly refusing to believe I wasn’t being lied to throughout the proof. Setting $h^r=r!$ and then “plugging $h$ into” polynomials really made me uncomfortable. As we went, I joked about things not having any actual meaning. Eventually Chris and Eric pointed out that they do, actually, have meaning. This “plugging $h$ in” thing is actually giving you an integer (if your polynomial has integer coefficients). That calmed me down a bit. I still feel like I don’t understand the proof at all, and certainly couldn’t explain even an outline of it. Eric said similar things, but asked if we should have expected that somehow. Eric also mentioned that these sorts of formal manipulations with things that look wrong can sometimes be ok, and that it was something related to umbral calculus. He showed us an identity (Vandermonde’s) associated with binomial coefficients that does similar sorts of symbolic trickery. Which apparently I should now go read some more about.
I had printed out a paper about the continued fraction expansion of $e$ (which maybe was pointed out to me in this comment), which talked about Pade approximations. Some of the things looked somewhat similar to what was going on in the proof that $e$ is transcendental (which the paper said was where they came from), but I couldn’t explain the paper well during out meeting (since I don’t understand it well enough), and we ran out of time.
Tags:approximation, continued fraction, hardy and wright, number theory, transcendental, umbral calculus
### Hardy and Wright, Chapter 11 (part 1)
June 6, 2009
Now that we’ve got continued fractions under our belt, from chapter 10, we can go on and start looking at “Approximation of Irrationals by Rationals”, chapter 11. One of the (many) cool things about continued fractions, is that they provide “best” rational approximations. We decided, yet again, to split the chapter into two weeks.
In our meeting today, discussing the content of 11.1-11.9, we spent most of our time trying to sort out some typos and see how a few of the inequalities came about. In particular, a typo on page 211, in the theorem that at least one in three consecutive convergents is particularly close to a starting irrational, took us quite a while to sort out.
Eric brought up a comment from the chapter notes that is quite fascinating. The first several sections talk about “the order of an approximation”. Given an irrational $\xi$, is there a constant $K$ (depending on $\xi$) so that there are infinitely many approximations with $|p/q-\xi|<K/q^n$? This would be an order $n$ approximation. In theorem 191, they show that an algebraic number of degree $n$ (solution to polynomial of that degree) is not approximable to any order greater than $n$ (which seems to be a slightly weaker (by 1) statement than Lioville’s Approximation Theorem). The note Eric pointed out was about Roth’s theorem which states that, in fact, no algebraic number can be approximated to order greater than 2. According to the Mathworld page, this earned Roth a Fields medal.
This reminded me about some things I had seen about the irrationality measure of a number. Roth’s theorem, reworded, says something like: every algebraic number has irrationality 1 (in which case it is rational) or 2. So if a number has irrationality measure larger than 2, you know it is transcendental. Apparently, finding the irrationality measure of a particular value is quite a trick. According to the Mathworld page, $e$ has irrationality measure 2, so you can’t use that to decide about it being transcendental.
The whole thing is interesting, as pointed out in H&W, because you think of algebraic numbers as sort of nice (it doesn’t get much nicer than polynomials), but, in terms of rational approximations, they are the worst.
Tags:continued fraction, hardy and wright, irrationality measure, liouville, number theory, roth, transcendental
Posted in Play | 15 Comments »
### Hardy and Wright, Chapter 10
May 29, 2009
We decided to split the reading of chapter 10 into two weeks (chapter 9 here, in case you missed it). It’s a longish chapter, and I really like continued fractions (though I’m not particularly sure why, they’re just fun) and some of the other readers thought it might be worth it to spend more time reading it carefully.
Our first meeting covered the first few sections, which only involved basic definitions, and the theorem that every real number has an essentially unique continued fraction expansion, and the expansion is finite if and only if the number is rational. Eric stated that he was unimpressed so far, and didn’t see what I was so fascinated by. None of us seemed to have any questions about the reading, so I gave a glimpse of things to come (relation of periodic continued fractions to quadratics, and rational approximations). I also mentioned that there are some interesting tie-ins to the “modular group” ($SL_2(\mathbf{Z})$), Farey sequences, and Ford circles (which have come up before). Eric has been reading about hypergeometric series, and said there are some interesting formulas there related to continued fractions. He also asked if there was some relation to surreal numbers, because continued fractions approximate numbers from the left and right, alternatingly.
We picked up, the second week, in section 10.10 “A lemma”, defining an equivalence relation on reals. The relation works out to be that two numbers are equivalent if the tail of their continued fractions are the same. Chris corrected a misinterpretation Eric brought up, about canonical representatives of equivalence classes. I had wondered if the equivalence meant that, in terms of periodic continued fractions representing “quadratic” numbers, two numbers $(a_1+\sqrt{b})/d_1$ and $(a_2+\sqrt{b})/d_2$ would always be equivalent. In fact, I thought I had decided they were. But an example in the book shows that this is not the case ($\sqrt{5}=[2,\dot{4}]$ while $(\sqrt{5}+1)/2=[\dot{1}]$, dots representing the repeating part). Eric pointed out that two points were related if there are in the orbit of the modular group acting on $\mathbb{R}$ as a subset of $\mathbb{C}$, acting as linear fractional transformations.
We spent a little while talking about periodic continued fractions, how the two directions of the proof that they are equivalent to “quadratics” go. I think the proof that any quadratic has a periodic continued fraction is fascinating. It gives no indication how long the period will be, or when it will start.
Next I mentioned that there’s a convenient algorithm for finding the continued fraction for a “quadratic surd”, and that I intend to post some python code here implementing it (and other fun functions for playing with continued fractions). While it’s essentially the normal algorithm, taking floors and then reciprocals, there’s some convenience in having quadratics around, because you can “rationalize the numerator” and sorts of things. Not mentioned in the text, but stated at both Wikipedia and Mathworld (links below), is that Lagrange showed that the continued fraction for $\sqrt{D}$ has a period smaller than $2D$, that the period begins after a single non-repeating term, and that the last term in the period is twice $a_0$ (the first term of the continued fraction). All of these things are true of the examples given in the text. And, while finding links… holy crap! the repeating part, besides the last numeral, is palindromic! Is there no end to the fascination!?
I’ll go ahead and just direct you to the Wikipedia page on (periodic) continued fractions, and similarly the Mathworld page (periodic) continued fractions. All (and undoubtedly many others) make for fascinating reading.
Our next main focus was on approximation by convergents. Chris pointed out how remarkable the final theorem is, that any time a fraction is sufficiently close to a number (in terms of it’s denominator), it is automatically a convergent. I mentioned one thing I read about in Rademacher’s “Higher Mathematics from an Elementary Point of View” (which I love), which was that the existence of infinitely many $p/q$ such that $|p/q-x|<\frac{1}{2q^2}$ (corollary of theorem 183) can be interpreted as saying that a vertical line at $x$ passes through infinitely many Ford circles.
I then tried to explain the difference between Theorems 181 and Theorems 182, and point out that there are two reasonable definitions of “closest rational approximation”. I had read about these in Khinchin’s “Continued Fractions” (which I also love). I bumbled it a bit, but believe I was saying true things throughout. Basically, the story goes… convergents are best rational approximations in the stronger sense (thm 182), and mediants of successive convergents are best rational approximations in the weaker sense (thm 181). In fact, choose an irrational (for convenience) $x$, and let $\square$ denote the operation “mediant”. For any $n$, define $m_{n,1}=(p_n/q_n)\square (p_{n+1}/q_{n+1})$, and then iteratively $m_{n,k}=m_{n,k-1}\square (p_{n+1}/q_{n+1})$. The last of these mediants that is on the same side of $x$ as $p_n/q_n$ will be $p_{n+2}/q_{n+2}$. Continued fractions rock.
It’s really best to think about these lemmas with an actual continued fraction example. I, personally, used $61/45=[1;2,1,4,3]$, and looked at the mediants $m_{1,k}$, between the first and second convergent.
We finished with me trying to explain something that I thought was quite surprising. Let $f_k(x)=n_k(x)/k$ denote the closest rational to $x$, with denominator $k$ (let’s not require the fraction in reduced terms). I was quite surprised, honestly (and convinced Eric he should be too), that for a chosen $x$, the sequence of such rationals will not be successively better approximations to $x$. Having had the chance to go through an example with Eric, and then a few hours to mull it over, I’ve since realized this it not particularly surprising at all. Suppose $x$ lies in $[1/4,1/3]$. Half of these $x$ will be better approximated by 1/3 than 1/4.
So, anyway, I guess that’s all I have to say about continued fractions by now. Perhaps Eric will show us sometime about fun relationships between hypergeometric series and continued fractions. If you haven’t already stopped reading this post to go find all sorts of other interesting things to read about continued fractions, either online or in Rademacher’s or Khinchin’s books, you can now.
Tags:continued fraction, farey sequence, ford circle, hardy and wright, lagrange, modular group, number theory, rational approximation
Posted in Uncategorized | 1 Comment »
### Hardy and Wright, Chapter 9
May 14, 2009
Continuing on, we talked about “The Representation of Numbers by Decimals” this week. I thought the first few sections were fun, in the precision and care used to prove things like “Rational numbers have repeating decimals, and vice versa”.
Chris and I both, apparently, took a minute to digest the example that 29310478561 is divisible by 7, at the end of section 9.5. I feel like in my first year of undergrad I learned about this, or perhaps a similar, test for division by 7. I thought that instead of taking the sum of digits (like you would for mod 3 or mod 9), or alternating sum (for mod 11), you could take some sort of weighted sum of the digits, where the weights depended on $7^n\pmod{10}$ for various $n$. Looking around online while writing this post, it seems I was (mis-)remembering the method of Pascal, listed on this page at Mathworld. This Mathworld page mentions several other tests, which look interesting. My search turned up another as well, over at God Plays Dice.
Eric mentioned that he’s read a lot about the game of Nim (section 9.8) because Conway writes about it in On Numbers and Games, and Eric likes surreal numbers.
I like the section on “Integers with missing digits” (section 9.9), because I like to show my calculus students that the sum of the reciprocals of such numbers is a convergent series (even though, writing out the first several terms, it looks like you haven’t thrown out many terms from the harmonic series). This is known as the Kempner Series, which I first learned about in the book Gamma, by Havil.
We concluded with a little discussion on normal numbers, the last few sections of the chapter. It seems we all ran out of steam while reading this chapter, so we didn’t get through the proofs in these sections. But the ideas are interesting, and the results are fun. According to Wikipedia, there is a conjecture that all irrational algebraic numbers are normal, even though there is no known proof that any particular irrational algebraic number is normal. I remain a little confused about the definition of normal, actually. “We say that $x$ is normal in the scale of $r$ if all of the numbers $x,rx,r^2x,\ldots$ are simply normal in all of the scales $r,r^2,r^3,\ldots$“. I think the idea with multiplying by these powers and changing the scale is that you are looking at longer and longer digit sequences, instead of just single digits (which would be simply normal). I was a little unclear about why you need to multiply $x$ by all of those powers of $r$, but I guess if you don’t then you won’t ever get any digits (in the scale $r^n$) greater than $r^{n-1}$, perhaps?
Update 20090515: I completely forgot to mention another thing we talked about, and I had promised the group I’d link to. Section 9.9, on “Integers with missing digits” begins with the line “There is a familiar paradox” and a footnote that reads “Relevant in controversies about telephone directories”. I wasn’t exactly sure what this controvery was, but we decided probably it was the fact that the probability of picking a random number out of the telephone book, and having it not contain a 9 (say) is fairly small. At first, I had thought maybe the controversy the footnote was hinting at might be related to Benford’s Law, which I also remembered was just recently in the news (slashdot).
Tags:hardy and wright, kempner, normal, number theory, series, surreal
### Hardy and Wright, Chapter 8 (Second Part)
May 8, 2009
Picking up where we left off last week, we finished chapter 8 today. Most of the time was spent trying to trace through the proofs of the various statements, so I won’t go into too much detail here about that. Many of the proofs had the same flavor, cleverly grouping terms in a polynomial, or setting corresponding coefficients equal in two different representations of a polynomial.
When I had first read the chapter, I didn’t pay too close attention to some of the later sections, for example the section on Leudesdorf’s Theorem (generalizing Wolstenholme’s), and the “Further consequences of Bauer’s Theorem“. However, during our meeting we worked through most of Leudesdorf’s theorem, and we were able to gain some appreciation for the various cases (specifically, why they arise).
One of the theorems in the sections we kinda glossed over was the following (Theorem 131 in the book): If $p$ is prime, and $2v<p-3$, then the numerator of $S_{2v+1}=1+\frac{1}{2^{2v+1}}+\cdots+\frac{1}{(p-1)^{2v+1}}$ is divisible by $p^2$. I noted that this $S_{2v+1}$ is a partial sum for $\zeta(2v+1)=\sum_{n=1}^{\infty} n^{-(2v+1)}$ (Wikipedia, Mathworld). Eric wondered if perhaps they were thinking about this sum as a generalization of this $\zeta$ function to some finite field, but the modulus of $p^2$ didn’t fit that entirely. Eric also reminded us that closed forms for $\zeta(2v)$ can be found, while closed forms for $\zeta(2v+1)$ are not known.
Tags:bauer, hardy and wright, leudesdorf, number theory, zeta function
Posted in Uncategorized | 1 Comment »
### Hardy and Wright, Chapter 8 (First Part)
May 1, 2009
This week we met a day early, owing to some scheduling conflicts. If you missed last week’s notes on chapter 7, they’re here. This week we spent an hour talking about things in chapter 8, but did not finish, so we’ll return to the last few sections next week. Chapter 8 is on “Congruences to composite moduli,” and is a bit dense, for us anyway.
Chris continues to claim the book is imprecise, while Eric and I are of the mind that it is simply written in an older style, and could be more verbose. Chris is an algebraist, and Eric and I are topologists, which may have something to do with our discrepancy. Chris’ objection this week concerned statements in the text at the end of section 8.4. In this and the previous section, the congruences under consideration are $f(x)\equiv 0\pmod{p^a}$ for some $a>1$, and polynomial $f$. The idea is that you can get at solutions by considering the equivalence mod $p^{a-1}$. Then depending on the function, and what roots you find mod $p^{a-1}$, you can say things about roots mod $p^a$. There is a condition on the derivative of $f$ that I thought was maybe related to field extensions (I vaguely remember things like this coming up in Algebra courses years ago), but we decided this was not the case.
Anyway, Chris was asking about the claim that $x^2-c\equiv 0\pmod{2^a}$ has either 2 roots or 0 when $a=2$, and 4 or 0 when $a>2$ ($c$ odd). He had asked me about this earlier in the week, and I have written out the squares mod 4,8,16, and 32, and verified the statement there. Chris was concerned about the relation to the statement of Theorem 123. Looking at that theorem, each root of $x^2-c\equiv 0\pmod{2^{a-1}}$ should either give 2 or 0 roots, mod $2^a$, and so it seemed like there should be more and more roots for higher powers. We tried to sort this out for a while, and worked through the proof of Theorem 123 as we went, and eventually decided maybe it was ok. Then, while writing this up I realized probably we hadn’t actually decided how the argument went for these cases. Perhaps I’ll have more for you next week.
Part of the proof of theorem 123 was to re-write $f(\xi+sp^{a-1})$ as $f(\xi)+sp^{a-1}f'(\xi)+\frac{1}{2}s^2p^{2a-2}f''(\xi)+\cdots$, a technique that was used a few times in this chapter. When I was reading this I decided it looked like this was a Taylor series, and during our meeting we agreed that this was the case. We also talked about the statement that $\frac{f^{(k)}(x)}{k!}$ is always an integer (it helps that $f$ is polynomial). There’s no way I would have thought to use Taylor series, if I had been trying to prove these statements on my own (which I would be, I suppose, if I were more serious).
I then asked if people had worked through the example at the end of section 8.1, about professors’ lecturing schedule. I could follow the equivalences, but I was a little confused about how the word problem was converted to the equivalences. I can see that starting on Monday, and lecturing every 2 days corresponds to the days $1+2k_1$ (if we let Monday be the first day), and similarly for the other lecturing schedules. But then I didn’t understand the condition $x=7k_7$ corresponding to no lectures on Sunday. Chris suggested that maybe the interpretation of the problem was that instructors would have “lectures” that spanned a few days (like the first professor above, starting on Monday, would have “2-day-lectures”). I think we have since sorted out that this is not correct. The instructor will lecture every $n$-th day (like $n=2$ in the example line above), and not lecture any of the other days. The setup of the problem is asking “what is the first Sunday that each lecturer would be lecturing?” Thus, we need equivalences $x=a+n_ak_a$ corresponding to the $a$-th lecturer lecturing every $n_a$ days (and beginning, conveniently, on the $a$-th day) to pick out days when everybody would be lecturing, and the equivalence $x=7k_7$ to pick out Sundays.
Eric asked if the solutions to $x^{p-1}-1\equiv 0\pmod{p^a}$ could be easily determined from the roots of $x^{p-1}-1\equiv 0\pmod{p^{a-1}}$. Of course, the point of these few sections was that they are roots $\xi$ of the second equation, with some multiple of $p^{a-1}$ added on. So the roots of the first equation have the form $\xi+sp^{a-1}$ for $0\leq \xi<p^{a-1},0\leq s<p$. Eric’s question is if those values of $s$ are easy to find. We got a little help from python, working out the case $p=5,a=2$, but burned out before getting to the next prime. Perhaps we’ll come back to it next week.
Finally, Chris asked about any relation between the polynomials $f_m(x)$ of section 8.5 and cyclotomic polynomials. I only vaguely remember what those are, but he reminded us that the $n$-th cyclotomic polynomial was one whose roots were exactly the $n$-th roots of unity. Interpreting this in the finite ring setting (that we are in), he seems to be correct when $m$ is a prime. However, we’re not sure that carries over when $m$ is not prime (which is, perhaps, part of the point of this section). In a related question, he asked what one could say about $x^5-1\equiv 0\pmod{6}$.
We didn’t come up with too much useful during out meeting, but Eric and I played around with some numbers afterwards. The results of the first few sections of chapter 8 indicate that you should be able to obtain solutions to $x^5-1\equiv 0\pmod{6}$ by considering the equivalence mod 2, and mod 3, and combining the results. Doing so, we note that $x^5\equiv x\pmod{2}$ and also mod 3, so the only roots are $x=1\pmod{2}$ and $x=1\pmod{3}$. This should give the single root $x=1\pmod{6}$. We asked python (well, sage, technically, via Eric’s sagenb.org setup), and found the interesting result that $x^5\equiv x\pmod 6$, which we were both surprised by. We played around with a few other moduli, and continued to be intrigued by what we found. Perhaps we’ll have more to say next week, when we continue our discussion of chapter 8. For now, all we can say is that we’re not too good at thinking in modular arithmetic.
Tags:hardy and wright, number theory
Posted in Play | 1 Comment »
### Hardy and Wright, Chapter 7
April 25, 2009
Another week, another chapter (last week’s chapter). This week was “General Properties of Congruences”, a reasonably fun chapter. On a side note, I’d like to say how pleased I am that our little group is still going. And note that having a scheduled blog post every week has been a fun habit. It also makes the tags for these post dominate my tag-cloud. I guess that means I should try to spend more time writing about other things as well.
Chris mentioned, before our meeting, how he thought some of the statements were imprecise. I disagreed, but we both thought that some of the wording could be improved. I think that’s to be expected from a book first written 70 years ago. Perhaps I should try to dig up some specific examples.
Chris and Eric and I all agreed that the beginning of the chapter, the first few sections on results about polynomials mod $p$, brought us back to our first year, graduate-level algebra class.
We discussed how the numbers $A_l$, defined to be “the sum of the products of $l$ different members of the set $1,2,\ldots,p-1$” was just the coefficient of $x^l$ in the polynomial $(x-1)(x-2)\cdots (x-(p-1))$. Chris pointed out that really it was the absolute value of that coefficient. If I remember correctly, this was some of the wording Chris was unhappy with.
Most of the rest of our time was spend tracing through the discussion in the text from sections 7.7 and 7.8 on “The residue of $\{\frac{1}{2}(p-1)\}!$” and “A theorem of Wolstenholme”. We all seemed to think that these were fun sections.
As none of the three of us spend much time actually working with integers, we had a bit to discuss with the “associate” of a number mod $p$, or mod $p^2$. We all realized that the associate was just the multiplicative inverse, but had to stop for a second to think about the difference between the inverse mod $p$ and mod $p^2$. We realized, before too long, that if the inverse of $i$, mod $p$, is $n$, then the inverse of $i$, mod $p^2$, has the form $n+p\cdot k$.
We thought about the relationship between integers mod $p$ and mod $p^2$. Thinking mod $p^2$, we write down a $p$ by $p$ array, the first row being $0,1,2,\ldots,p-1$, the next row being $p,p+1,\ldots,2p-1$, and on until the final row, $p(p-1),\ldots p^2-1$. Thinking about multiplicative inverses again, if the inverse of $i$ is $n$ mod $p$ (as in the previous paragraph), then the inverse of $i$ mod $p^2$ appears in the same column as $n$ in this array we have constructed. We were trying to interpret Wolstenholme’s theorem (the sum $1+\frac{1}{2}+\cdots+\frac{1}{p-1}$ is equivalent to 0 mod $p^2$, where $1/i$ is the multiplicative inverse of $i$ mod $p^2$) as summing across rows in this array. That’s not quite right, because the inverses don’t all lie in a row, they are scattered around. If I’m thinking about things correctly, though, these inverses, $1,\frac{1}{2},\ldots,\frac{1}{p-1}$ (mod $p^2$) should occur one in each column of the array we have made (I guess, besides the first column, which is the column of multiplies of $p$).
We didn’t make it to the theorem of von Staudt, concerning Bernoulli numbers taken mod 1. Perhaps we’ll return to it sometime later.
Tags:hardy and wright, number theory
Posted in Play | 2 Comments »
### Hardy and Wright, Chapter 6
April 20, 2009
Last week’s exploration into “Congruences and Residues” primed us (no pun intended) for Chapter 6 of Hardy and Wright, “Fermat’s Theorem and its Consequences”. This was, for me, the most challenging chapter so far. Luckily, we had our largest attendance to date (four) to get through it.
Chris started out by telling us that he thinks about as many of these mod-$p$ congruence results as possible as results about the cyclic group of units in the integers mod $p$. He’s an algebraist, so it’s reasonable.
Eric told us about a relation between some of these mod-$p$ results concerning binomial coefficients and topology. He mentioned something about the sphere spectrum, it’s localization at $p$, and the Eilenberg-Maclane spectrum $H\mathbb{Z}$. Perhaps he’d help us out by providing those remarks in the comments, as I’m not particularly familiar with them.
I asked if anybody knew if there was some way I was allowed to think about $a^{\frac{1}{2}(p-1)}$ (whose equivalence class mod $p$ was a hot topic in this chapter) as $(\sqrt{a})^{p-1}$. When $a$ is a quadratic residue, this just about makes sense (of course, it would have two roots, but both, when raised to the $p-1$ power, give 1 as a result). Alternatively, when $a$ is not a quadratic residue, $a^{\frac{1}{2}(p-1)}\equiv -1\pmod{p}$. So when $a$ is not a quadratic residue, I’m still tempted to think of $\sqrt{a}$ in some relation to the imaginary number $i$. None of us in the meeting seemed to know if there was some reasonable connection here, so perhaps a reader can fill us in?
While I’m talking about quadratic residues, I should point out that the Mathworld page on quadratic residues has some fun pictures.
I then mentioned that I had found the Shanks-Tonelli algorithm for finding the “square root” of a quadratic residue (that is, given a quadratic residue $a$, find an $x$ so that $x^2\equiv a\pmod{p}$). While I’m dropping names, I also noted that the result that the Fermat number $F_k$ is prime iff $F_k|(3^{\frac{1}{2}(F_k-1)}+1)$, at the end of section 6.14, sometimes goes by the name of Pepin’s Test.
Next, I mentioned that I had spent some time thinking about Theorems 86 and 87 for composite moduli. The question is: For which $n$ is there an $0<m<n$ and $0<x,0\leq y$ so that $1+x^2+y^2=mn$? It’s easy to show that this is never satisfied if $n$ is 4, or a multiple thereof. Eric noted that this was easy to see by thinking about the squares mod 4 (only 0 and 1 are squares mod 4). I found this to be a fun little problem to use as inspiration for learning a little more python, and generated a big table of how to write multiples of $n$ as one more than a square, or a sum of squares, for varying $n$. Turns out, there are many ways, in general, of doing so.
I asked if anybody had any ideas what it was about 1093 that made it one of the two known primes so that $2^{p-1}-1\equiv 0\pmod{p^2}$, in relation to Theorem 91. That is, was there something I should have gleaned from the proof? Nobody was particularly sure, but Eric thought maybe there was some relation to regular primes (or irregular). This is a topic that’ll apparently come up later in the book, but Eric told us that it was related to some statements about Dedekind domains and the class group. When we looked up the regular and irregular primes on the Online Encyclopedia of Integer Sequences, though, Eric guessed that maybe it was something else, as there are quite a few of both type. For your convenience, the regular primes are sequence A007703, and the irregulars are A000928.
I guess this last question of mine wasn’t particularly bright. If more were known about why 1093 worked, I feel like either people would have found more, or there would be proofs that there weren’t others. Perhaps I could look into it some more. Either way, it was only the beginning of my poor questions. I also asked why it was we were studying quadratic residues, instead of, say, triadic (?) or higher. My guess, which Eric seconded, was that it was simply the next easiest thing to consider after linear residues, which are somewhat straightfoward. We both guess that things are harder to say about higher powers. I suddenly wonder about higher powers of 2 though… quartic, octic… I’m just making up words.
My remaining not particularly bright question was why there was a whole section about the “quadratic character of 2.” Why 2, in particular? We decided that it was just because it was next after 1, and the answer was relatively easy to obtain. Andy pointed out that, since the quadratic character of -3 was done in the book, as was -1, the quadratic character of 3 was known, using the result about products of quadratic residues and non-residues.
In the 6th edition of the book (the one I’m using, to which all references refer) there is a typo in theorem 97, which we verified because Andy has an older version of the text, without the typo (so… they re-typed the text? I almost think that’s a job I could handle happily. I’ve done it before [pdf]). In the theorem, they make a claim about when 7 is a quadratic residue, and then return to prove the theorem after Gauss’ law of reciprocity. However, when they return, they actually prove a theorem about 5, not 7. I tried my hand at 7, but it seemed hard to say anything about it (at least, consider primes of various classes mod 10).
Eric asked about the efficiency of the primality tests that occupy the second to last section of this chapter (Theorems 101 and 102). None of us seemed to know anything, but my guess was that finding the element $h$, that seems to help out both of these tests, was not easy. But I really have no idea.
Finally, we concluded with my account of Carmichael’s paper “Fermat Numbers” in the American Mathematical Monthly, V. 26 No. 4 from 1919, pp 137-146 (available (in a sense) at JSTOR here). At least, my account of the part about Euler/Lucas’ result on divisors of Fermat numbers, which I mentioned in our discussion of chapter 2. For completeness or so, allow me to present the proof here.
Theorem: If the prime $p$ divides the Fermat number $F_n=2^{2^n}+1$, then there is a $k$ so that $p=k2^{n+2}+1$.
Proof: Since $2^{2^n}\equiv -1\pmod{p}$ by assumption, we see easily that $(2^{2^n})^2=2^{2^{n+1}}\equiv 1\pmod{p}$, and so by Fermat’s theorem, $2^{n+1}|p-1$, which is to say $p=k'\cdot 2^{n+1}+1$ (this is Euler’s result). It remains to be seen that $k'$ is even. If $n\geq 2$ then $p=k'\cdot 2^{n+1}+1\equiv 1\pmod{8}$, meaning $p$ is of the form $8t+1$. Since we studied the quadratic character of 2, we know that 2 is a quadratic residue of a prime of this form, and so $2^{\frac{1}{2}(p-1)}\equiv 1\pmod{p}$. Thus, $\frac{1}{2}(p-1)$ is divisible by $2^{n+1}$, and so $p-1$ is divisible by $2^{n+2}$, which was our goal. Of course, one should check the claim for the numbers $F_0$ and $F_1$.
Since the product of two numbers of the form $k2^{n+2}+1$ has the same form, the theorem can actually be stated about any divisor of Fermat numbers, not just prime divisors.
Tags:dedekind, euler, fermat, gauss, hardy and wright, number theory
Posted in Play | 3 Comments »
### Hardy and Wright, Chapter 5
April 11, 2009
As per schedule, we talked about chapter 5 from Hardy and Wright’s Number Theory book this week (chapter 4 last week). “We” was the smallest it’s been, and so “talked about” was the most brief of our talks to date (besides, perhaps, the organizational meeting).
Chapter 5, “Congruences and Residues”, was, in my mind, a somewhat odd chapter. The first handful of definitions and results, about congruence mod m, I feel like I’ve (mostly) seen several times, so I take it as pretty basic (which isn’t necessarily to say easy to read). I also like thinking about greatest common divisors, which were introduced in this chapter, in the context of category theory, but still am not sure where to go with it. I had hoped, fleetingly, that some lemma in this section might be stated in terms of some lemma about limits or so, but didn’t notice one. Anyway, after this introduction, some crazy sums are mentioned (Gauss’, Ramanujan’s, and Kloosterman’s), but I don’t see the motivation, and they aren’t used. Looking at the index, it looks like only Ramanujan’s will show up later in the text. After this section on sums, the chapter ends with a proof that the 17-gon is constructable. All in all, it seems like a hodge-podge chapter, both in topic and difficulty.
Chris and I agree that using $\equiv$ for “is congruent to” and “is logically equivalent to” is pretty obnoxious. Especially when used all in the same line. For example:
$t+ym'\equiv t+zm'\pmod{p}\equiv m|m'(y-z)\equiv d|(y-z)$
Chris brought up the result that the congruence $kx\equiv l\pmod{m}$ is soluble iff $d=(k,m)$ divides $l$, in which case there are $d$ solutions (this is Theorem 57 in the book). He brought it up because he wasn’t entirely aware of the generalization past $d=1$. I’m not sure I was either, but it is reasonable.
We did agree that using the notation $\overline{x}$ for the solution $x'$ to the equation $xx'\equiv 1\pmod{m}$ (when it exists) was a nice choice of notation. Embedding the integers mod $m$ in the unit circle in the complex plane via $k\mapsto e^{2\pi ik/m}$, the multiplicative inverse of $x$ is the complex conjugate, so the notation lines up.
We finished by talking a little bit about geometric constructions. I liked the long argument about the 17-gon being constructable, by showing that the cosine of an interior angle was constructable, by showing that it was a solution to a system of quadratics (or so). At the beginning of this process, 3 is chosen as a primitive root of 17, and used to define a permutation of $\{0,\ldots,15\}$ by $m\mapsto 3^m\pmod{17}$. I’m not exactly sure why this was done, or why 3 in particular, but it’s fun to trace through the argument from then on. Chris liked the geometric construction, and is tempted to try actually performing it. We reminded ourselves how to bisect angles and draw perpendiculars with straight-edge and compass.
Tags:geometry, hardy and wright, number theory
Posted in Play | 1 Comment »
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 235, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9690670371055603, "perplexity_flag": "head"}
|
http://alanrendall.wordpress.com/2008/12/30/mathematical-models-for-tuberculosis/
|
# Hydrobates
A mathematician thinks aloud
## Mathematical models for tuberculosis
In previous posts I made some remarks on mathematical models for diseases and/or the immune system. I also had a post about tuberculosis. Now I came across the web page of Denise Kirschner where there are a lot of links to her publications on modelling TB, the immune system, HIV and related topics. You can also see a related video of a talk of hers (from June 19, 2007) on the web site http://videocast.nih.gov.
In a paper of Wigginton and Kirschner (Journal of Immunology 66, 1951) the authors introduce a mathematical model to describe the interactions of the immune system and the TB bacterium within the lung. This is a system of twelve ODE. The unknowns are two populations of bacteria (inside or outside macrophages), three populations of macrophages, three populations of T cells (Th0, Th1 and Th2) and the concentrations of four cytokines (interferon $\gamma$, IL-4, IL-10 and IL-12.) A lot of detail has been included concerning models for the interaction between the different players and in extracting values from the literature for the many parameters which occur. The goal is to understand the different outcomes of disease: acute infection, latent infection and reactivation.
The ODE are solved on the computer. As far as I could see there has been no general mathematical analysis of the properties of solutions of this system done. It may just be too big and complicated but I would be interested to see if something could be done in that direction. The numerical results apparently show convergence to a stationary state and convergence to a limit cycle in different situations. This model has been further extended by Kirschner and collaborators in other papers. In one paper a model with two compartments is introduced (lung and lymph node) where dendritic cells are also included. Another paper includes CD8+ T cells and TNF$\alpha$. What I like about this work is that it seems to be making real contact between mathematical modelling and the details of immunology, going beyond the simplest model systems.
### Like this:
This entry was posted on December 30, 2008 at 9:58 am and is filed under diseases, dynamical systems, immunology, mathematical biology. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
### 2 Responses to “Mathematical models for tuberculosis”
1. lvs Says:
December 30, 2008 at 6:56 pm | Reply
wow I never thought one could use it for TB! Did these guys make very strict assumptions or is it a fairly realistic model for TB.
2. hydrobates Says:
December 31, 2008 at 8:19 am | Reply
My impression is that this is a fairly realistic model although of course, as in any model of such a complex phenomenon, many things have to be left out.
The fact that there was an invited lecture about this modelling effort at the National Institutes of Health is a sign that medical researchers take it seriously.
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519664645195007, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2009/05/04/orthogonal-complements/?like=1&source=post_flair&_wpnonce=c8d80d4293
|
# The Unapologetic Mathematician
## Orthogonal Complements
An important fact about the category of vector spaces is that all exact sequences split. That is, if we have a short exact sequence
$\displaystyle\mathbf{0}\rightarrow U\rightarrow V\rightarrow W\rightarrow\mathbf{0}$
we can find a linear map from $W$ to $V$ which lets us view it as a subspace of $V$, and we can write $V\cong U\oplus W$. When we have an inner product around and $V$ is finite-dimensional, we can do this canonically.
What we’ll do is define the orthogonal complement of $U\subseteq V$ to be the vector space
$\displaystyle U^\perp=\left\{v\in V\vert\forall u\in U,\langle u,v\rangle=0\right\}$
That is, $U^\perp$ consists of all vectors in $V$ perpendicular to every vector in $U$.
First, we should check that this is indeed a subspace. If we have vectors $v,w\in U^\perp$, scalars $a,b$, and a vector $u\in U$, then we can check
$\displaystyle\langle u,av+bw\rangle=a\langle u,v\rangle+b\langle u,w\rangle=0$
and thus the linear combination $av+bw$ is also in $U^\perp$.
Now to see that $U\oplus U^\perp\cong V$, take an orthonormal basis $\left\{e_i\right\}_{i=1}^n$ for $U\subseteq V$. Then we can expand it to an orthonormal basis $\left\{e_i\right\}_{i=1}^d$ of $V$. But now I say that $\left\{e_i\right\}_{i=n+1}^d$ is a basis for $U^\perp$. Clearly they’re linearly independent, so we just have to verify that their span is exactly $U^\perp$.
First, we can check that $e_k\in U^\perp$ for any $k$ between $n+1$ and $d$, and so their span is contained in $U^\perp$. Indeed, if $u=u^ie_i$ is a vector in $U$, then we can calculate the inner product
$\displaystyle\langle u^ie_i,e_k\rangle=\bar{u^i}\langle e_i,e_k\rangle=\bar{u^i}\delta_{ik}=0$
since $i\leq n$ and $k\geq n+1$. Of course, we omit the conjugation when working over $\mathbb{R}$.
Now, let’s say we have a vector $v\in U^\perp\subseteq V$. We can write it in terms of the full basis $\left\{e_k\right\}_{k=1}^d$ as $v^ke_k$. Then we can calculate its inner product with each of the basis vectors of $U$ as
$\displaystyle\langle e_i,v^ke_k\rangle=v^k\langle e_i,e_k\rangle=v^k\delta_{ik}=v^i$
Since this must be zero, we find that the coefficient $v^i$ of $e_i$ must be zero for all $i$ between ${1}$ and $n$. That is, $U^\perp$ is contained within the span of $\left\{e_i\right\}_{i=n+1}^d$
So between a basis for $U$ and a basis for $U^\perp$ we have a basis for $V$ with no overlap, we can write any vector $v\in V$ uniquely as the sum of one vector from $U$ and one from $U^\perp$, and so we have a direct sum decomposition as desired.
### Like this:
Posted by John Armstrong | Algebra, Linear Algebra
## 12 Comments »
1. The fact that every exact sequence splits is that every module is projective. Isn’t this the same as saying the ring in question (here a field) is semisimple?
Comment by Zygmund | May 5, 2009 | Reply
2. That sounds right, but I’m not really digging into ring theory like that.
Comment by | May 5, 2009 | Reply
3. Yeah, I was trying to remember something I read a while back from Cartan and Eilenberg. Anyway, so the property then is fairly unique since semisimple algebras are basically products of matrix algebras (over division rings though).
Comment by Zygmund | May 5, 2009 | Reply
4. [...] sum must be orthogonal. Incidentally, this shows that the direct sum between a subspace and its orthogonal complement is also a direct sum of inner product [...]
Pingback by | May 6, 2009 | Reply
5. [...] particular, since the top subspace is itself, and the bottom subspace is we can see that the orthogonal complement satisfies these properties. The intersection is empty, since the inner product is [...]
Pingback by | May 7, 2009 | Reply
6. [...] Complementation is a Galois Connection We now know how to take orthogonal complements of subspaces in an inner product space. It turns out that this process (and itself again) forms an [...]
Pingback by | May 19, 2009 | Reply
7. [...] (and, in particular, eigenspaces) of self-adjoint transformations. Specifically, the fact that the orthogonal complement of an invariant subspace is also [...]
Pingback by | August 11, 2009 | Reply
8. [...] orthonormal basis of all of . Just set to be the span of all the new basis vectors, which is the orthogonal complement of the image of , and let be the inclusion of into . We can then combine to get a unitary [...]
Pingback by | August 17, 2009 | Reply
9. [...] vector in the subspace defined by the new one. That is, we want the new parallelepiped to span the orthogonal complement to the subspace we start [...]
Pingback by | November 9, 2009 | Reply
10. [...] kernel of consists of all vectors orthogonal to the gradient vector , and the line it spans is the orthogonal complement to the kernel. Similarly, the kernel of consists of all vectors orthogonal to each of the gradient [...]
Pingback by | November 25, 2009 | Reply
11. [...] nonzero vector spans a line , and the orthogonal complement — all the vectors perpendicular to — forms an -dimensional subspace , which we can use [...]
Pingback by | January 18, 2010 | Reply
12. [...] we just consider them as vector spaces, we already know this: the orthogonal complement is exactly the subspace we need, for . I say that if is a -invariant subspace of , then is as [...]
Pingback by | September 27, 2010 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 54, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355888962745667, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/271172/bayes-theorem-in-stephen-baxters-manifold-time
|
# Bayes' Theorem in Stephen Baxter's Manifold: Time
I am currently reading the sci-fi novel Manifold: Time by Stephen Baxter, which contains the following problem.
You are given a box which has either 10 marbles or 1000 marbles. By pressing a lever on the box, one marble is randomly taken out and given to you. You know that there is exactly one red marble.
After pressing the lever three times, you obtain a red marble. The book claims that this information implies, using Bayes' theorem, that the probability that there are 10 marbles in the box is 2/3.
Can anyone explain how this computation actually works out, or at least how one is supposed to set up Bayes' equation to get this answer? Thanks.
-
## 2 Answers
Let $N$ be the unknown number of marbles in the box.
The question is ambiguous on whether (i) you press the lever three times and obtain the red marble on one of the three tries, or (ii) you press the lever three times and obtain the red marble only on the third try.
Assuming the latter case, the probability that you get the red marble on the third try is $$P(k=3|N=n)=\frac{n-1}n\cdot\frac{n-2}{n-1}\cdot\frac1{n-2}=\frac1n.$$ So $P(k=3|N=10) = 1/10$ and $P(k=3|N=1000) = 1/1000$. By Bayes' theorem, $$\begin{align} P(N=10|k=3)&=\frac{P(k=3|N=10)P(N=10)}{\sum_n P(k=3|N=n)P(N=n)}\\ &= \frac{\frac1{10}P(N=10)}{\frac1{10}P(N=10) + \frac1{1000}P(N=1000)}. \end{align}$$
This only turns out to be $\frac23$ if your prior on the box having ten marbles is $P(N=10)=\frac1{51}$.
-
3
Also, you are taking a view on what he means that "after pressing the level three times..." Basically, your reading is no different than if he said, "after pressing the lever once." I suspect he means that you get a red marble in one of the first three lever presses. You still don't get the 2/3 probability, though – Thomas Andrews Jan 5 at 22:35
@Thomas: Yes, I should have been more explicit. I thought about it for a bit, then decided that if (in real life) someone pressed the lever once and got a red marble, they would not press it two more times unless they were acting out a probability theory exercise. – Rahul Narain Jan 5 at 22:39
@Thomas: Would you say the same thing if the question read "After pressing the lever eleven times, you obtain a red marble"? – Rahul Narain Jan 5 at 22:49
Probably. As I said, the language is ambiguous, so your reading is not wrong, it's just that $3$ is a red herring in that reading. Hardly impossible, however, that this is what he meant. – Thomas Andrews Jan 5 at 23:01
Also, if the problem said $11$, then $11$ would not be a red herring, which would make it more likely he meant after the eleventh press. – Thomas Andrews Jan 5 at 23:07
show 1 more comment
Something doesn't sound complete about this problem the way it is posited. No prior is known. The probability of N=10 if the author says the probability is $\frac{2}{3}$ works out only if the prior is $\frac{1}{51}$ as mentioned above.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528782367706299, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/134000/intersection-of-two-functions/134008
|
Intersection of two functions
I know this seems completely amateur, but for whatever reason I cannot solve this.
I need to find the intersection values for $y= 10-0.00001x^2$ and $y=5+0.005x$
Any help would be much appreciated, thanks
-
1
Functions do not intersect. What you seem to need is to find the coordinates of the points of intersection of the graphs of those two functions. – Mariano Suárez-Alvarez♦ Apr 19 '12 at 18:14
4 Answers
The points of intersection must satisfy both equations. Therefore if (x_i,y_i) is a solution, then it must be true that y_i = 10-0.0001(x_i)^2, and also that y_i = 5 + 0.005x_i.
So if (x_i,y_i) is a solution, then we must have y_i = 10-0.0001(x_1)^2 = 5 + 0.005x_i. (*)
Solve the quadratic labelled (*), and you get two possible pairs (x_1,y_1), (x_2,y_2). Now you should check that these solutions really work by plugging both of them back into the original equation (which you were originally trying to solve).
After a quick check, we see that they are both solutions to the original equation.
Are there any other solutions? No, because if (x_3,y_3) is another solution, then (x_3,y_3) does not satisfy the quadratic equation values of x solutions which satisfy equation (*) (due to the fact that a quadratic equation has at most 2 solutions), and so a 3rd solution does not exist.
From now on don't have to do this every time, and instead of writing out (x_i,y_i) every time, we can just solve the quadratic because now we know that doing so gives all the solutions. But you can use a similar method in your head when asked to find all the solutions to a pair of equations.
-
The intersection is precisely the points where $$10 - 0.00001x^2= 5+0.005x.$$ This corresponds to a quadratic equation on $x$, $$0.00001x^2 + 0.005x - 5 = 0$$ or equivalently, $$x^2 + 500x - 500000 = 0.$$ which can be solved by the usual methods. Once you know the (up to) two values of $x$, they give you the corresponding values of $y$.
-
We have $y=10-0.00001x^2$ and $y=5+0.005x$. This is true if and only if $10-0.00001x^2=5+0.005x$, which simplifies to $$0.00001x^2+0.005x-5=0.\tag{$\ast$}$$ It might be nice to multiply through by $10^5$. We get $$x^2+500x-500000=0.\tag{$\ast\ast$}$$ No big improvement!
Now use the ordinary Quadratic Formula. If $a \ne 0$, then the roots of $ax^2+bx+c=0$ are $\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}$.
Now you can compute exact expressions for the roots. Or if you just want them to high accuracy, the calculator handles things quite nicely. We can use the Quadratic Formula on $(\ast\ast)$, but also directly on $(\ast)$.
Remark: By changing the numbers somewhat, we can produce interesting calculational problems. For instance, if you want to find the roots of $x^2-10^8 x+1$ to say $4$ significant figures, naive use of the Quadratic Formula and a simple scientific calculator will lead us to conclude that the "small" root is $0$, which is not correct to even one significant figure. The reason there is a problem is roundoff error in the calculator. But that issue does not come up with the numbers in your quadratic.
-
Set the two expressions equal to each other: $$10-.00001x^2=y=5+0.005x$$ $$.00001x^2+.005x-10+5=0$$ $$.00001x^2+.005x-5=0$$ $$x^2+500x-500000=0$$ Now apply the quadratic formula:$$x=\frac{-500\pm\sqrt{250000+2000000}}{2}$$ $$x=\frac{-500\pm1500}{2}$$ $$x=-250\pm750$$
To find the corresponding $y$ values, plug in $500$ and $-1000$ for $x$ in the original equations.
-
thank you very much, I see how to do it now :) except its 5+0.005x, nonetheless I see! – Sam Creamer Apr 19 '12 at 17:11
Yeah, typo there, fortunately I didn't carry it over to the next line. Modulo any more careless sign or order-of-magnitude errors, it should be correct as now shown (you should repeat the calculation on your own to make sure). – Brett Frankel Apr 19 '12 at 17:14
@RossMillikan Now corrected, and the final answers check out in the original equation. – Brett Frankel Apr 19 '12 at 21:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923558235168457, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/106114/lie-subgroups-of-so2son
|
Lie Subgroups of SO(2)×So(n)
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hello, I need to know (connected closed) Lie subgroups of SO(2)×So(n), indeed these are compact Lie subgroups of SO(2,n) which I am looking for. But I don't know what we can say about Lie subgroups of product Lie groups. I'll be thankful if someone help me in this subject. Thanks in advanced.
-
Have you look in Hammermesh's book? Another useful reference could be Robert Gilmore's book. Cheers – Dox Sep 1 at 16:39
2 Answers
It is not clear to me what kind of answer is expected. Generally speaking, subgroups of Lie groups can be classified by Lie correspondence combined with combinatorial analysis resulting from structure theory of semisimple Lie groups. Below I address two particular cases.
If $K$ is a compact connected semisimple subgroup of $SO(2)\times SO(n)$ then its projection onto the first factor is trivial and the question is reduced to the $SO(n)$ case. (Closed) Lie subgroups of $SO(n)$ are precisely (compact) Lie groups with a faithful $n$-dimensional real orthogonal representation, so there are quite a few of them (the maximal connected ones were classified long time ago by Dynkin). If you need a complete description for small values of $n$, the Atlas of Lie groups is very handy.
In the other extreme case where $K=SO(2)$ you are, in effect, asking about the maps
$$f: SO(2)\to SO(2)\times SO(n).$$
They can be classified by passing to the Lie algebras. More precisely, the differential of $f$ is a linear map $so(2)\to so(2)\oplus so(n).$ Identifying $so(2)$ with $\mathbb{R}$ and $so(n)$ with the skew-symmetric matrices, it may be viewed as a pair $(d,A),$ where $d$ is an integer and $A$ is a skew-symmetric matrix whose eigenvalues are integral multiples of $i.$ Explicitly,
$$f:R(\varphi)\mapsto (R(d\varphi), \exp(\varphi A)),$$
where
$$R(\varphi)=\begin{bmatrix} \phantom{-}\cos(\varphi) & \sin(\varphi) \cr -\sin(\varphi) & \cos(\varphi) \end{bmatrix}$$
is the counterclockwise rotation by $\varphi$ and $\exp$ is the matrix exponential function.
The maps $f$ and $f'$ associated with non-zero pairs $(d,A)$ and $(d',A')$ have the same image if and only if the pairs are proportional. The case $d=0$ corresponds to an $SO(2)$ subgroup of the second factor $SO(n).$ In the case $d=1$, the subgroup $f(K)$ is the graph of a map $SO(2)\to SO(n).$
Note that for the original question about subgroups of $SO(2,n)$ one must impose further equivalences in the case $n=2$, because different subgroups of $SO(2)\times SO(2)$ can be conjugate in $SO(2,2)$.
-
I wasn't able to make bmatrix to work correctly. Feel free to fix it if you can. – Victor Protsak Sep 2 at 0:45
I fixed your rotation matrix. You need to triple up the backslashes (use \\\ instead of \\ for reasons of how the website and the LaTeX interact here) or else use \cr instead of \\. – Theo Buehler Sep 2 at 2:33
thanks for your useful comments. I need to know the compact subgroups up to conjugacy, but if it is not possible to do, we may add some nice hypothesis like as semisimle , etc. It is also helpful to know any special classes of such subgroups. – nerd-math Sep 2 at 14:27
I don't understand, why in the first case (second paragraph of the answer) when $K$ is semisimple it's projection on the first factor is trivial? – nerd-math Sep 20 at 9:41
The first factor is abelian and semisimple groups do not have non-trivial characters. – Victor Protsak Sep 20 at 19:49
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Let me answer this question for the Lie algebras, which already goes part way to answering the original question. The question is then what are the Lie subalgebras of $\mathfrak{so}(2)\oplus\mathfrak{so}(n)$. The general question of which are the Lie subalgebras of the direct sum of Lie algebras is solved by (the Lie algebraic version of) the Goursat Lemma.
It does not hurt to work in more generality. So Let $\mathfrak{g}_L$ and $\mathfrak{g}_R$ be two real Lie algebras and let $\mathfrak{g} = \mathfrak{g}_L \oplus \mathfrak{g}_R$ be their product. Elements of $\mathfrak{g}$ are pairs $(X_L,X_R)$ with $X_L \in \mathfrak{g}_L$ and $X_R \in \mathfrak{g}_R$. The Lie bracket in $\mathfrak{g}$ of two such elements $(X_L,X_R)$ and $(Y_L,Y_R)$ is given by the pair $([X_L,Y_L], [X_R,Y_R])$.
We are interested in Lie subalgebras $\mathfrak{h}$ of $\mathfrak{g}$.
Let $\pi_L : \mathfrak{g} \to \mathfrak{g}_L$ and $\pi_R : \mathfrak{g} \to \mathfrak{g}_R$ denote the projections onto each factor: they are Lie algebra homomorphisms. Let $\mathfrak{h}_L$ and $\mathfrak{h}_R$ denote, respectively, the image of the subalgebra $\mathfrak{h}$ under $\pi_L$ and $\pi_R$. They are Lie subalgebras of $\mathfrak{g}_L$ and $\mathfrak{g}_R$, respectively. Let us define $\mathfrak{h}^0_L := \pi_L(\ker \pi_R \cap \mathfrak{h})$ and $\mathfrak{h}^0_R := \pi_R(\ker \pi_L \cap \mathfrak{h})$. One checks that they are ideals of $\mathfrak{h}_L$ and $\mathfrak{h}_R$, respectively. This means that on $\mathfrak{h}_L/\mathfrak{h}^0_L$ and $\mathfrak{h}_R/\mathfrak{h}^0_R$ we can define Lie algebra structures. Goursat's Lemma says that these two Lie algebras are isomorphic.
Goursat's Lemma suggests a systematic approach to the determination of the Lie subalgebras of $\mathfrak{g}_L \oplus \mathfrak{g}_R$, which is particularly feasible when $\mathfrak{g}_L$ and $\mathfrak{g}_R$ have low dimension.
Namely, we look for Lie subalgebras $\mathfrak{h}_L \subset \mathfrak{g}_L$ and $\mathfrak{h}_R \subset \mathfrak{g}_R$ which have quotients isomorphic to $\mathfrak{q}$, say. Let $f_L:\mathfrak{h}_L \to \mathfrak{q}$ and $f_R:\mathfrak{h}_R \to \mathfrak{q}$ be the corresponding surjections. Let $\varphi$ denote an automorphism of $\mathfrak{q}$. Then we may define a Lie subalgebra $\mathfrak{h}$ of $\mathfrak{h}_L \oplus \mathfrak{h}_R$ by
$$\mathfrak{h} := \lbrace(X_L,X_R) \in \mathfrak{h}_L \oplus \mathfrak{h}_R | f_L(X_L) = \varphi(f_R(X_R))\rbrace$$ Of course, we need only consider automorphisms $\varphi$ which are not induced by automorphisms of $\mathfrak{h}_L$ or $\mathfrak{h}_R$. We record here the following useful dimension formula: $$\dim \mathfrak{h} = \dim \mathfrak{h}_L + \dim \mathfrak{h}_R - \dim \mathfrak{q}.$$
A commonly occurring special case is when one of $\mathfrak{h}_L \to \mathfrak{q}$ or $\mathfrak{h}_R \to \mathfrak{q}$ is an isomorphism. For definiteness let us assume that it is $\mathfrak{h}_R \to \mathfrak{q}$ which is an isomorphism. Then we get a Lie algebra homomorphism $\mathfrak{h}_L \to \mathfrak{h}_R$ obtained by composing $\mathfrak{h}_L \to \mathfrak{q}$ with the inverse of $\mathfrak{h}_R \to \mathfrak{q}$. In fact, we get a family of such homomorphisms labelled by the automorphisms of $\mathfrak{q}$ or, equivalently, of $\mathfrak{h}_R$. The fibred product which Goursat's Lemma describes is now the graph in $\mathfrak{h}_L \oplus \mathfrak{h}_R$ of such a homomorphism $\mathfrak{h}_L \to \mathfrak{h}_R$. The resulting Lie algebra is abstractly isomorphic to $\mathfrak{h}_L$.
In your case, since $\mathfrak{h}_R$, say, is one-dimensional, $\mathfrak{q}$ is either trivial or else isomorphic to $\mathfrak{h}_R$. In the trivial case, you have direct products of subalgebras, and in the latter case, the previous paragraph applies.
The Lie subalgebras of $\mathfrak{so}(n)$ are tabulated at least for small $n$ in several places. If you are a physicist, then perhaps Slansky's Physics Report Group theory for unified model building might be the most readable.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325850605964661, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/228696/alice-and-bob-game/228716
|
# Alice and Bob Game
Alice and Bob just invented a new game. The rule of the new game is quite simple. At the beginning of the game, they write down N random positive integers, then they take turns (Alice first) to either:
1. Decrease a number by one.
2. Erase any two numbers and write down their sum.
Whenever a number is decreased to 0, it will be erased automatically. The game ends when all numbers are finally erased, and the one who cannot play in his(her) turn loses the game.
Here's the problem: Who will win the game if both use the best strategy?
-
– Douglas S. Stones Nov 4 '12 at 6:01
1
Moreover, your question here is identical to this; your question here is identical to this. None of which give any attribution or display any attempt to answer the question. – Douglas S. Stones Nov 4 '12 at 6:16
@DouglasS.Stones These are archived questions. They are not part of any live tournaments. Hence there is no harm in asking any of these for an optimal solution. – Adwait Kumar Nov 4 '12 at 6:29
2
So, why not just say where it's from when you ask the question? (And, ideally, where you're stuck in solving the problem.) It helps whoever might answer the question. – Douglas S. Stones Nov 4 '12 at 7:55
2
The concept of "random positive integer" is not defined: there is no uniform probability distribution on the positive integers. – Carl Mummert Nov 4 '12 at 12:47
## 2 Answers
The complete solution to this game is harder than it looks, due to complications when there are several numbers $1$ present; I claim the following is a complete list of the "Bob" games, those that can be won by the second player to move. To justify, I will indicate for each case a strategy for Bob, countering any move by Alice by another move leading to a simpler "Bob" game.
I will write game position as partitions, weakly decreasing sequences of nonnegative integers (order clearly does not matter for the game). Entries present a number of times are indicated by exponents in parentheses, so $(3,1^{(4)})$ designates $(3,1,1,1,1)$. Moves are of type "decrease" (type 1 in the question) or "merge" (type 2); a decrease from $1$ to $0$ will be called a removal.
Bob-games are:
• $(0)$ and $(2)$
• $(a_1,\ldots,a_n,1^{(2k)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, $(a_1,\ldots,a_n)\neq(2)$, and $a_1+\cdots+a_n+n-1$ is even. Strategy: counter a removal of one of the numbers $1$ by another such removal; a merge of a $1$ and an $a_i$ by another merge of a $1$ into $a_i$; a merge of two entries $1$ by a merge of the resulting $2$ into one of the $a_i$; a decrease of an $a_i$ from $2$ to $1$ by a merge of the resulting $1$ into another $a_j$; any other decrease of an $a_i$ or a merge of an $a_i$ and $a_j$ by the merge of two entries $1$ if possible ($k\geq1$) or else merge an $a_i$ and $a_j$ if possible ($n\geq2$), or else decrease the unique remaining number making it even.
• (to be continued...)
Note that the minimal possibilities for $(a_1,\ldots,a_n)$ here are $(4)$, $(3,2)$, and $(2,2,2)$. Anything that can be moved into a Bob-game is an Alice-game; this applies to any $(a_1,\ldots,a_n,1^{(2k+1)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, $(a_1,\ldots,a_n)\neq(2)$ (either remove or merge a $1$ so as to leave $a_1+\cdots+a_n+n-1$ even), and to any $(a_1,\ldots,a_n,1^{(2k)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, and $a_1+\cdots+a_n+n-1$ odd (either merge two of the $a_i$ or two entries $1$, or if there was just an odd singleton, decrease it). All cases $(3,1^{(l)})$ and $(2,2,1^{(l)})$ are covered by this, in a manner depending on the parity of $l$. It remains to classify the configurations $(2,1^{(l)})$ and $(1^{(l)})$. Moving outside this remaining collection always gives some Alice-game $(3,1^{(l)})$ or $(2,2,1^{(l)})$, which are losing moves that can be ignored. Then we complete our list of Bob-games with:
• $(1^{(3k)})$ and $(2,1^{(3k)})$ with $k>0$. Bob wins game $(1,1,1)$ by moving to $(2)$ in all cases. Similarly he wins other games $(1^{(3k)})$ by moving to $(2,1^{(3k-3)})$ in all cases. Finally Bob wins $(2,1^{(3k)})$ by moving to $(1^{(3k)})$ (unless Alice merges, but this also loses as we already saw).
-
Here is an almost complete solution.
If there is only one integer, then clearly it depends on the parity of the number to determine who wins. If the number is odd then Alice will win. If the number is even then Bob will win.
If there are two integers, then it depends on the parity of the sum of the two numbers $\Sigma$. There is one combine and $\Sigma$ reductions so the game will be won by Alice if $\Sigma + 1$ is odd. This will be true as long as there is not an opportunity to remove one of the integers (by reducing it to zero) before there is a chance to combine. It is sufficient to require that none of the integers are $1$.
Let us now prove this strategy by induction.
Suppose the game is played with $N$ positive integers greater than $1$. Call the sum of the $N$ integers $\Sigma$. Then Alice has a winning strategy if and only if $\Sigma + N - 1$ is odd.
Proof: This is clearly true for $N = 1$. Suppose that the statement holds for $N$ and consider $N+1$. If $\Sigma + N$ is odd, then Alice can choose to combine two integers on her first turn. This reduces to the case of $N$ numbers with $\Sigma + N - 1$ even. Since Bob is now starting, from the induction hypothesis, he loses.
Conversely suppose that $\Sigma + N$ is even. If Alice chooses to reduce a number by $1$, then Bob chooses to combine two numbers. Then we have $N$ numbers with $\Sigma + N - 1$ being even. It is now Alice's turn and she loses. Alternatively if Alice chooses to combine two numbers, then by the induction hypothesis $\Sigma + N$ is odd with Bob starting. Therefore Bob wins.
This leaves the problem of when the $N$ numbers contain $1$s.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 90, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387447237968445, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2007/11/23/uniform-spaces/?like=1&source=post_flair&_wpnonce=8149a33672
|
# The Unapologetic Mathematician
## Uniform Spaces
Now let’s add a little more structure to our topological spaces. We can use a topology on a set to talk about which points are “close” to a subset. Now we want to make a finer comparison by being able to say “the point $a$ is closer to the subset $A$ than $y$ is to $B$.” We’ll do this with a technique similar to neighborhoods. But there we just defined a collection of neighborhoods for each point. Here we will define the neighborhoods of all of our points “uniformly” over the whole space.
To this end, we will equip our set $X$ with a family $\Upsilon$ of subsets of $X\times X$ called the “uniform structure” on our space, and the elements $E\in\Upsilon$ will be “entourages”. We will write $E[x]$ for the set of $y$ so that $(x,y)\in E$, and we want these sets to form a neighborhood filter for $x$ as $E$ varies over $\Upsilon$. Here we go:
• Every entourage $E$ contains the diagonal $\{(x,x)|x\in X\}$.
• If $E$ is an entourage and $E\subseteq F\subseteq X\times X$, then $F$ is an entourage.
• If $E$ and $F$ are entourages, then $E\cap F$ is an entourage.
• If $E$ is an entourage then there is another entourage $F$ so that $(x,y)\in F$ and $(y,z)\in F$ imply $(x,z)\in E$.
• If $E$ is an entourage then its reflection $\bar{E}=\{(y,x)|(x,y)\in E\}$ is also an entourage.
The first of these axioms says that $x\in E[x]$, as we’d hope for a neighborhood. The next two ensure that the collection of all the $E[x]$ forms a neighborhood filter for $x$, but it does so “uniformly” for all the $x\in X$ at once. This means that we can compare neighborhoods of two different points because each of them comes from an entourage, and we can compare the entourages. The fourth axiom is like the one I omitted from my discussion of neighborhoods; every collection of entourages gives rise to a topology, but topologies can only give back uniform structures satisfying this requirement. Finally, the last axiom gives the very reasonable condition that if $y\in E[x]$, then $x\in \bar{E}[y]$. That is, if one point is in a neighborhood of another, then the other point should be in a neighborhood of the first. Sometimes this requirement is omitted to get a “quasi-uniform space”.
Now that we can compare closeness at different points, we can significantly enrich our concept of nets. Before now we talked about a net $x_\alpha$ converging to a point $x$ in the sense that the points $x_\alpha$ eventually got close to $x$. But now we can talk about whether the points of the net are getting closer to each other. That is, for every entourage $E$ there is a $\gamma\in D$ so that for all $\alpha\geq\gamma$ and $\beta\geq\gamma$ the pair $(x_\alpha,x_\beta)$ is in $E$. In this case we say that the net is “Cauchy”.
Now, if the full generality of nets still unnerves you, you can restrict to sequences. Then the condition is that there is some number $N$ so that for any two numbers $m$ and $n$ bigger than $N$ we have $x_m\in E[x_n]$. This gives us the notion of a Cauchy sequence, which some of you may already have heard of.
We can also enrich our notion of continuity. Before we said that a function $f:X\rightarrow Y$ from a topological space defined by a neighborhood system $(X,\mathcal{N}_X)$ to another one $(Y,\mathcal{N}_Y)$ is continuous at a point $x\in X$ if for each neighborhood $V\in\mathcal{N}_Y(f(x))$ contained the image $f(U)$ of some neighborhood $U\in\mathcal{N}_X(x)$, and we said that $f$ was continuous if it was continuous at every point of $X$.
Now our uniform structures allow us to talk about neighborhoods of all points of a space together, so we can adapt our definition to work uniformly. We say that a function $f:X\rightarrow Y$ from a uniform space $(X,\Upsilon_X)$ to another one $(Y,\Upsilon_Y)$ is uniformly continuous if for each entourage $F\in\Upsilon_Y$ there is some entourage $E\in\Upsilon_X$ that gets sent into $F$. More precisely, for every pair $(x_1,x_2)\in E$ the pair $(f(x_1),f(x_2))$ is in $F$.
In particular, any neighborhood of a point $f(x)\in Y$ is of the form $F[f(x)]$ for some entourage $F\in\Upsilon_Y$. Then uniform continuity gives us an entourage $E\in\Upsilon_X$, and thus a neighborhood $E[x]$ which is sent into $F[f(x)]$. Thus uniform continuity implies continuity, but not necessarily the other way around. It is possible that a function is continuous, but that the only ways of picking neighborhoods to satisfy the definition do not come from entourages.
These two extended definitions play well with each other too. Let’s consider a uniformly continuous function $f:X\rightarrow Y$ and a Cauchy net $x_\alpha$ in $X$. Then I assert that the image $f(x_\alpha)$ of this net is again Cauchy. Indeed, for every entourage $F\in\Upsilon_Y$ we want a $\gamma$ so that $\alpha\geq\gamma$ and $\beta\geq\gamma$ imply that the pair $(f(x_\alpha),f(x_\beta))$ is in $F$. But uniform continuity gives us an entourage $E\in\Upsilon_X$ that gets sent into $F$, and the Cauchy property of our net gives us a $\gamma$ so that $(x_\alpha,x_\beta)\in E$ for all $\alpha$ and $\beta$ above $\gamma$. Then $(f(x_\alpha),f(x_\beta))\in F$ and we’re done.
It wouldn’t surprise me if one could turn this around like we did for neighborhoods. Given a map $f:X\rightarrow Y$ which is not uniformly continuous use the uniform structure $\Upsilon_X$ as a directed set and construct a net on it which is Cauchy in $X$, but whose image is not Cauchy in $Y$. Then one could define uniform continuity as preservation of Cauchy nets and derive the other definition from it. However I’ve been looking at this conjecture for about an hour now and don’t quite see how to prove it. So for now I’ll just leave it, but if anyone else knows the right construction offhand I’d be glad to hear it.
### Like this:
Posted by John Armstrong | Point-Set Topology, Topology
## 10 Comments »
1. Motivating examples of uniform spaces being given by metric spaces and by topological groups. (In particular, compare the fourth axiom on entourages with the triangle inequality.) Probably you’ll be talking about that?
Uniform continuity is something stronger than preservation of Cauchy nets; consider the fact that the squaring map R –> R takes Cauchy nets to Cauchy nets but is not uniformly continuous. I think somehow nets in the usual sense are too “local” (e.g., they converge to a single point in Hausdorff spaces), or uniform continuity too global, for uniform continuity to be easily captured by nets.
Unless: you change the concept of net so that there is a notion of convergence to the diagonal of X; e.g.: instead of nets D –> X which converge to a point in X, consider “uniform nets” f: D –> Rel(X, X) valued in the set of binary relations on X which converge to the diagonal of X, or even more general relations. Here’s an offhand attempt at definition: say that the uniform net f converges to a binary relation R if for every entourage E, there exists x in D such that x <= y implies that the binary relation f(y) is contained in E o R (the composite of the relations E and R). This definition may have to be tweaked a bit to make everything come out just right.
By the way: spurred in part by your posts, I’m thinking a bit about another approach to topology similar to nets, but based on ultrafilter convergence. (In particular, I wanted to understand better this relational beta-module business, which turns out to be a very attractive piece of lax algebra [in the categorical sense].) Two papers have caught my eye: one by Walter Tholen, which gives a uniform treatment of ordered sets, metric spaces, and general topological spaces (and it seems to me uniform spaces can also be fit within his framework). Another is by Claudio Pisani which characterizes exponentiable topological spaces in lax algebraic terms, and which usefully gives complete proofs of things including Barr’s relational beta-module characterization of topological spaces.
Comment by Todd Trimble | November 23, 2007 | Reply
2. That’s a great counterexample, Todd. Thanks. And of course I’ll be talking about topological groups, and particularly about ordered groups, which will finally open the road to my first official mention of the real numbers. Have you noticed I’ve gotten all this way without talking about them yet?
Comment by | November 24, 2007 | Reply
3. [...] Uniform Spaces Okay, in a uniform space we have these things called “Cauchy nets”, which are ones where the points of the net [...]
Pingback by | November 29, 2007 | Reply
4. [...] correspond to our usual notion of magnitudes like distances, let’s refine our concept of a uniform space to take into account this idea of [...]
Pingback by | December 10, 2007 | Reply
5. [...] many other ones which give rise to different distance functions, but the same topology and the same uniform structure. And often it’s the topology that we’ll be most interested [...]
Pingback by | August 19, 2008 | Reply
6. [...] an inverse , and it only makes sense that these be homeomorphisms. And to capture this, we put a uniform structure on our space. That is, we specify what the neighborhoods are of , and just translate them around to [...]
Pingback by | May 12, 2010 | Reply
7. [...] we say that the sequence is Cauchy a.e. if there exists a set of measure zero so that is a Cauchy sequence of real numbers for all . That is, given and there is some natural number depending on and so [...]
Pingback by | May 14, 2010 | Reply
8. [...] then , , and itself are all uniformly continuous. [...]
Pingback by | August 6, 2010 | Reply
9. You left out an axiom: that there exists an entourage at all (or equivalently, in light of the other axioms, that X × X is an entourage).
Comment by | June 1, 2011 | Reply
10. That’s a good point, though does the existence of an “empty” uniform structure lead to any huge problems?
Comment by | June 1, 2011 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
• ## Feedback
Got something to say? Anonymous questions, comments, and suggestions at Formspring.me!
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 96, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445697665214539, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/124072/permutation-by-interchange?answertab=votes
|
# Permutation by interchange.
I made a conjecture today
Start from $1, \ldots, n$, by interchanging the position of $i$ and $j$ where $i < j$ in each step, we are able to get any permutation of $\{1, \ldots, n\}$.
Do you think my conjecture is correct?
OK, it turns out this is very easy. What if we add the following extra constraint to the conjecture?
In each step we can only switch the position of $i$ and $j$ who are adjacent in current permutation.
-
3
Yes, your conjecture is correct. It is equivalent to the statement that the symmetric group $S_n$ is generated by transpositions. Or, to the simple observation that if you want to, say, reorder $n$-books in your shelf, you can do this by exchanging two books at a time (exchange the book currently in position $1$ with the book you want in position $1$; then the book currently in position $2$ with the one you want in position $2$; etc until you are left with just two books, either in order (you are done) or in the wrong place (exchange them). – Arturo Magidin Mar 24 '12 at 22:32
Yes, and you should formulate it "from the end": Start from any permutation of $\left\lbrace 1,2,...,n\right\rbrace$. By interchanging the position of $i$ and $j$ where $i<j$ in each step, we are able to get the identity permutation. (Take your favorite sorting algorithm to prove this.) – darij grinberg Mar 24 '12 at 22:33
@ArturoMagidin: not exactly equivalent, because he allows only the transpositions that increase the number of inversions. – darij grinberg Mar 24 '12 at 22:33
3
«One can order a shelf of books using two hands, without even a place to temporarily put the books.» – Mariano Suárez-Alvarez♦ Mar 24 '12 at 23:03
1
The 2nd conjecture is that bubble sort works. And it does! – Louis Mar 25 '12 at 0:03
show 5 more comments
## 1 Answer
The easiest way to interpret this problem is as a sorting algorithm.
Take $n$ people and stand them in a line. Tell each person "if the person on your right is taller than you, then swap places".
We eventually will get the tallest person on the left. Then, by induction (ignoring the tallest person), we will get the next-tallest person on his right, and so on, down to the shortest person on the right.
Each swap made takes a taller person on the right and a shorter person on the left. This seems to be where you were heading with the $i<j$ constraint, although, as it stands, it's an empty constraint: e.g. we can interchange the position of $4$ and $2$, since when $i=2$ and $j=4$, we have $i<j$.
Reversing this process shows that we can reach any permutation via transpositions.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256086349487305, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/12461/argument-about-fallacy-of-diffm-being-a-gauge-group-for-general-relativity
|
argument about fallacy of diff(M) being a gauge group for general relativity
I want to outline a solid argument (or bulletpoints) to show how weak is the idea of diff(M) being the gauge group of general relativity.
basically i have these points that in my view are very solid but i want to understand if there are misconceptions on my part that i'm simply not getting and if its so, i ask for help to make the case more solid, or understanding why it doesn't apply (to gravity):
• gauge groups are not the same as a symmetry group (thanks to Raymond Streater for making that point completely clear)
• gauge invariance in electrodynamics is an observation that physical observables are unchanged after a gauge transformation without changing coordinate frame ( we are ask to believe that in gravity someone did the same? that is, someone made the observation that physical observables are unchanged after a diffeomorphism-gauge-transformation, only to later argue that because of this, that there are no physical observables to begin with, that doesn't make a lot of sense, to not say that its just plain stupid circular argument )
• classic electrodynamics is also invariant (as in symmetry invariant, not as gauge-invariant) under Diff(M). The invariance is of course broken when the theory is quantized and $\hbar$ makes an appearance, because it assumes a preferred scale for certain energies. The key point here is: classical gravity is not special regarding having diff(M) as a symmetry group
• from bulletpoints 2 and 3, if i cannot infer that Diff(M) is a gauge-invariance of electrodynamics, the same should apply to gravity
For this question, i would say that a valid answer would either disprove any of the arguments as fallacies themselves (hence showing a solid argument why gravity is special and diff(M) is without a doubt its gauge group), or improve the argument for making it bullet-proof (sorry for the pun)
-
Possibly related: physics.stackexchange.com/q/4359/2451 and physics.stackexchange.com/q/5692/2451 – Qmechanic♦ Jul 19 '11 at 15:32
yes, at that time i didn't understand the point (of gauge invariance) very well, and the point i didn't understand at that time is that the statement (that A physical observable should be invariant under any gauge transformation) followed from the definition of gauge symmetry; but this question is actually about something i thought i understood (that Diff(M) is a gauge symmetry) but i definitely didn't (and now i perceive as wrong). In short, that previous question manifested my confusion on the subject, but i believe i now have a better idea from where that confusion stems – lurscher Jul 19 '11 at 15:44
1
If you could define invariance in such a way, any differential equation would become invariant, because it is known, how to rewrite differential equations for different coordinate systems. So Maxwell equations should not be considered as diff(M) invariant only because you may write that in curved coordinates. – Alex 'qubeat' Jul 19 '11 at 16:31
1
@Alex, exactly - so you are now feeling my confusion as well - so why expect gravity observables have to be invariant under diff(M)? – lurscher Jul 19 '11 at 16:34
1 Answer
I will mostly talk about the classical physics as this is complicated enough already (I might mention something about quantum stuff at the end). So, let us first get all the relevant terms straight, so that we avoid any further confusion. In particular, we need to be precise about what we mean by invariance because already two different notions have been thrown into one bag.
• A symmetry group of a physical system is a group of transformations that leave the system invariant. E.g. the electric field of a point charge is invariant under rotations w.r.t. to that point. In other words, we want the group to act trivially. But that means that this immediatelly rules out any equations that carries tensorial indices (i.e. transforms in a non-trivial representation of the rotation group). For these equations, if you perform a rotation, the equation will change. Of course, it will change in an easily describable manner and a different observer will agree. But the difference is crucial. E.g. in classsical quantum mechanics we require the equations to be scalar always (which is reflected in the fact that Hamiltonian transforms trivially under the symmetry group).
• Carrying a group action is a broader term that includes tensorial equations we have left out in the previous bullet point. We only require that equations or states are being acted upon by a group. Note that the group action need not have any relation to the symmetry. E.g. take the point charge and translate it. This will certainly produce a different system (at least if there is some background so that we can actually distinguish points).
• A gauge group of a system is a set of transformations that leave the states invariant. What this means is that the actual states of the system are equivalence classes of orbits of the gauge group. Explicitly, consider the equation ${{\rm d} \over {\rm d} x} f(x) = 0$. This has a solution $g(x) \equiv C$ for any $C$. But if we posit that the gauge group of the equation consists of the transformations $f(x) \mapsto f(x) + a$ then we identify all the constant solutions and are left with a single equivalence class of them -- this will be the physical state. This is what gauge groups do in general: they allow us to treat equivalence classes in terms of their constituents. Obviously, gauge groups are completely redundant. The reason people work with gauge groups in the first place is that the description of the system may simplify after introduction of these additional parameters that "see" into the equivalence class. Of course historical process went backwards: since gauge-theoretical formulation is simpler, this is whan people discovered first and they only noticed the presence of the gauge groups afterwards.
Now, having said this, let's look at the electromagnetism (first in the flat space). What symmetries are the Maxwell equations invariant under? One would like to say under Lorentz group but this actually not the case. Let's look at this more closely. As alluded to previously the equation $$\partial_{\mu} F^{\mu \nu} = J^{\nu}$$ can't really be invariant since it carries a vector index. It transforms in the four-vector representation of the Lorentz group, yes -- but it is certainly not invariant. Contrast this with the Minkowski space-time itself which is left invariant by the Lorentz group.
We also have ${\rm d} F = 0$ and therefore (in a contractible space-time) also $F = {\rm d} A$ which is obviously invariant w.r.t. to $A \mapsto A + {\rm d} \chi$. In terms of the above discussion the equivalence class $A + {\rm d} \Omega^0({\mathbb R}^{1,3})$ is the physical state and the gauge transformation lets us distinguish between its constituents
Let's move to a curved spacetime now. Then we have $$\nabla_{\mu} F^{\mu \nu} = J^{\nu}$$ Again, this is not invariant under ${\rm Diff}(M)$. But it transforms under an action of ${\rm Diff}(M)$. The only thing in sight that is invariant under ${\rm Diff}(M)$ is $M$ itself (by definition).
In the very same way, GR is not invariant under ${\rm Diff}(M)$ but only transforms under a certain action of it (different than EM though, since GR equations carry two indices). Also, ${\rm Diff}(M)$ can't possibly be a gauge group of any of these systems since it would imply that almost all possible field configurations get cramped to a single equivalence class possibly indexed by some topological invariant which can't be changed by a diffeomorphism. In other words, theory with ${\rm Diff}(M)$ as a gauge group would need to be purely topological with no local degrees of freedom.
-
i understand that tensor covariance is not exactly the same as invariance, but rather well-geometrically-defined-non-invariance (in this case, of tensors with 1 and 2 indices). I'm not sure i understand the idea that "Hamiltonian is a (geometrical) scalar always"; i definitely can take eigenstates of an atom and apply a Lorentz boost, measure them, and i will see a $\gamma$ factor shifting all the eigenvalues, isn't that how a 4-vector should behave? - I agree with the cramping of the fields and no local degrees of freedom, but isn't that a hint that this approach is wrong? – lurscher Jul 19 '11 at 19:47
@lurscher: ah, good point. I was only talking about classical QM; I will make that explicit. As for the last sentence, what do you mean by wrong? In topological theories of gravity (such as GR in three dimensions) this is precisely what happens. But if you are referring to quantum gravity, I am not sure. I have no idea what kind of observables people use in different kind of theories. It might be perfectly possible that the only reasonable observables are of the topological type. – Marek Jul 19 '11 at 20:04
1
wrong in the following sense: all quantum theories i know of produce classical observables from aggregation or reduction of quantum observables (averages, projectors, traces, etc.) but you cannot take a number of topological invariants and even in principle have them say anything about particular configurations of curvature, black holes, or Newtonian limits. So it would effectively partition physics in observables that are not measurable and measurables that are not observables - a sort of blind alley – lurscher Jul 19 '11 at 20:29
1
@lurscher: that is actually not the case if the observable is understood in the sense of a Hermitian operator. Consider QFT: there is no notion of position operator there since there is no way to consistently talk about position of a particle. Yet this information somehow magically appears when one restricts to low energies (implying both non-relativistic limit and restriction to some $n$-particle portion of the Fock space). Tell me why you think something similar can't happen in quantum gravity. – Marek Jul 19 '11 at 20:30
@lurscher, okay, your edited comment makes more sense. Now, tell me why you think that topological information on quantum scale can't determine macroscopic geometry? Here's one (arguable very naive) way of how to do this: suppose there were holes in the space-time roughly Planck length appart. There would be such a huge number of them that surely one could encode properties of the macroscopic system in a purely topological way. Once again, it's not obvious to me that something like this can't be done. – Marek Jul 19 '11 at 20:43
show 6 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436594843864441, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/43555/optimization-of-relative-entropy
|
## Optimization of relative entropy
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Wondering if my following question is an application of information theory:
Lets say we have a factory and ship boxes of stuff outside. If a competitor stands outside my factory, observes the stream of boxes coming out of a factory and note their sizes, he can get valuable information about my manufacturing process. So in order to hoodwink him, I manually expand the boxes to sizes different than the original ones. In order to make sure that I am doing a good job at masking the original size distribution from the resultant distribution, I must make sure that I maximize relative entropy between output distribution q and input distribution p, given that I cannot just create a uniform output distribution. Is this premise correct? Do you find any flaw in the question?
The constraint, of course, is that we must use minimum amount of box material. In other words, sum of box sizes for a unit period of time must not exceed a threshold B.
i.e. $\frac{\sum_{j=1}^{i}{BS_i}}{T} \le B$
Other constraints are:
1] Both p and q must be probability distributions i.e. sum up to 1
2] $BS_i \le MS$, where MS is the maximum size of a box.
--
If we take $BS_i$ as the size of the $i^{th}$ box, then we have an optimization function applying Lagrange multiplier $\lambda$:
$\Lambda(BS_i,\lambda,p,q) = D(q||p) + \lambda (\frac{\sum_{j=1}^{i}{BS_i}}{T} - B)$ $= \sum_{S=1}^{n}{q(S) \log \frac{q(S)}{p(S)}} + \lambda (\frac{\sum_{j=1}^{i}{BS_i}}{T} - B)$
$= \sum_{S=1}^{n}{q(S) \log q(S)} -\sum_{S=1}^{n}{q(S) \log p(S)} + \lambda (\frac{\sum_{j=1}^{i}{BS_i}}{T} - B)$
To proceed: Lets say we at the $i^{th}$ iteration. We know the actual box sizes that we already sent out for 1...i-1 iterations, and the actual size of the present box. i.e. $BS_0..BS_i$ are known, and hence p(BS) can be calculated and hence is a known value. We know the output box sizes till the $i-1^{th}$ iteration.
So, we have,
$\Lambda(BS_i,\lambda,q) = \sum_{S=1}^{n}{q(S) \log q(S)} -\sum_{S=1}^{n}{k*q(S) } + \lambda (\frac{\sum_{j=1}^{i}{BS_i}}{T} - B)$
How do we now solve the optimization function, in order to get the output box size for this \$i^{th} iteration?
We have, $\frac{\partial\Lambda}{\partial BS_i} = 0$, $\frac{\partial\Lambda}{\partial q(S)} = 0$, and $\frac{\partial\Lambda}{\partial \lambda} = 0$
The pertial differentiation wrt $\lambda$ results in a trivial solution. Since q(S) is dependent on $BS_i$, we will get a factor of $\frac{\partial q(S)}{\partial BS_i}$ in the other two equations.
How do I proceed from here? What I am thinking wrong?
-
## 2 Answers
Let us imagine that your factory manufactures two products, one of which is small, and the other is large. These products are shipped out in boxes. Suppose that your boxes come in two sizes, small and large. Suppose further that you can ship a small product in a large box, but that you cannot ship a large product in a small box.
Instead of products / boxes of various sizes, a more information-theoretic way of looking at things would be to think of the factory as a binary source, and to view the box-enlargement process as a binary channel. Let $X$ and $Y$ be discrete random variables with alphabets $\mathcal{X}$ and $\mathcal{Y}$, respectively, where $\mathcal{X} = \mathcal{Y} = \{0,1\}$. If the output of the production line is a small product, then $X = 0$, otherwise $X = 1$. If a small box is shipped out, then $Y = 0$, otherwise $Y = 1$. Hence, the random variable $X$ gives us the size of the product, while the random variable $Y$ gives us the size of the box. We can view $X$ and $Y$ as the input and output of a binary channel, respectively.
To deceive your competitors, every time a small product is ready to be shipped you flip a coin and, depending on the outcome, you choose to ship the small product in a large box or not. If you do so, then $X = 0$ and $Y = 1$. The "channel" has introduced an error. The channel is defined by the transition probabilities
$\{ \mathbb{P}[Y = 0 \mid X = 0], \mathbb{P}[Y = 1 \mid X = 0], \mathbb{P}[Y = 0 \mid X = 1], \mathbb{P}[Y = 1 \mid X = 1] \}$.
A competitor observes the sizes of the boxes being shipped out and tries to infer what the actual sizes of the products inside the boxes are. In other words, your competitor would like to infer what the probability mass function (p.m.f.) of $X$ is, knowing only the p.m.f. of $Y$. To keep your competitor maximally confused, you would like to maximize the conditional entropy $H (X \mid Y)$, which is the uncertainty about $X$ given $Y$. Recall that the mutual information is
$I (X;Y) = H(X) - H(X \mid Y)$
and it gives us the reduction in the uncertainty of $X$ due to knowledge of $Y$. We would like to minimize the mutual information, which is equivalent to maximizing the conditional entropy $H(X \mid Y)$, as $H(X)$ is fixed (depends on the p.m.f of $X$, which is assumed to be fixed).
The mutual information can be written as $I(X;Y) = D( p(x,y) \| p(x) p(y) )$, which is the Kullback-Leibler distance between the joint p.m.f. and the product of the marginal p.m.f.'s. Check [1] for details. Therefore, you have a relative entropy minimization problem.
Usually, we are given the channel, and we choose the p.m.f. of $X$ that maximizes the mutual information $I(X;Y)$. In this problem, we are given the p.m.f. of $X$, and we choose the channel that minimizes the mutual information. It's a sort of "dual" of finding the capacity of a given channel.
References:
[1] Thomas M. Cover and Joy A. Thomas, Elements of Information Theory, John Wiley & Sons 2006.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Two observations:
1. Are you sure that you have to maximize and not minimize? Relative entropy is a convex function, and usually maximizing convex functions is not a very nice thing --- the solution will be on the boundary of your feasible set.
2. In your formulation above, you have missed out the constraint that $q$ should be a distribution. Without that constraint, the expression that you have written will reach its supremum by letting $q(S) \to \infty$ for any index $S$.
-
Yes I guess we can minimize the mutual information or maximize relative entropy, so that the output distribution is as different as possible from the input distribution. But with mutual information, joint probability and marginal probabilities come in, and it becomes difficult. I agree with the implicit constraint that q has to be a distribution i.e. sum up to 1. How do I proceed from here then? – skypemesm Oct 25 2010 at 20:44
Note that if p(S)=0 for any index S, then taking q(S) > 0 maximizes your relative entropy. So some more care is needed in formulating your problem. Also, to take care of constraint, use the idea of "Lagrange multipliers". From your formulation above, it seems that you have written some sort of Lagrangian, and then merely taken derivatives. The way it is written above, letting BS_i increase without bound also maximizes $\Lambda$. Can you edit your problem to formally state your optimization task? – S. Sra Oct 25 2010 at 21:05
Yes. I agree with you. There has to be an upper bound on $BS_i$. However, relative entropy is only defined if P and Q both sum to 1 and if Q(i) > 0 for any i such that P(i) > 0. If the quantity 0log0 appears in the formula, it is interpreted as zero [source:wikipedia]. And $\lambda$ is the lagrange multiplier here. Thanks for replying. – skypemesm Oct 26 2010 at 22:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248687624931335, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Ultraproduct
|
# Ultraproduct
The ultraproduct is a mathematical construction that appears mainly in abstract algebra and in model theory, a branch of mathematical logic. An ultraproduct is a quotient of the direct product of a family of structures. All factors need to have the same signature. The ultrapower is the special case of this construction in which all factors are equal.
For example, ultrapowers can be used to construct new fields from given ones. The hyperreal numbers, an ultrapower of the real numbers, are a special case of this.
Some striking applications of ultraproducts include very elegant proofs of the compactness theorem and the completeness theorem, Keisler's ultrapower theorem, which gives an algebraic characterization of the semantic notion of elementary equivalence, and the Robinson-Zakon presentation of the use of superstructures and their monomorphisms to construct nonstandard models of analysis, leading to the growth of the area of non-standard analysis, which was pioneered (as an application of the compactness theorem) by Abraham Robinson.
## Definition
The general method for getting ultraproducts uses an index set I, a structure Mi for each element i of I (all of the same signature), and an ultrafilter U on I. The usual choice is for I to be infinite and U to contain all cofinite subsets of I. Otherwise the ultrafilter is principal, and the ultraproduct is isomorphic to one of the factors.
Algebraic operations on the Cartesian product
$\prod_{i \in I} M_i$
are defined in the usual way (for example, for a binary function +, (a + b) i = ai + bi ), and an equivalence relation is defined by a ~ b if and only if
$\left\{ i \in I: a_i = b_i \right\}\in U,$
and the ultraproduct is the quotient set with respect to ~. The ultraproduct is therefore sometimes denoted by
$\prod_{i\in I}M_i / U .$
One may define a finitely additive measure m on the index set I by saying m(A) = 1 if A ∈ U and = 0 otherwise. Then two members of the Cartesian product are equivalent precisely if they are equal almost everywhere on the index set. The ultraproduct is the set of equivalence classes thus generated.
Other relations can be extended the same way:
$R([a^1],\dots,[a^n]) \iff \left\{ i \in I: R^{M_i}(a^1_i,\dots,a^n_i) \right\}\in U,$
where [a] denotes the equivalence class of a with respect to ~.
In particular, if every Mi is an ordered field, then so is the ultraproduct.
An ultrapower is an ultraproduct for which all the factors Mi are equal:
$M^\kappa/U=\prod_{\alpha<\kappa}M/U.\,$
More generally, the construction above can be carried out whenever U is a filter on I; the resulting model $\prod_{i\in I}M_i / U$ is then called a reduced product.
## Examples
The hyperreal numbers are the ultraproduct of one copy of the real numbers for every natural number, with regard to an ultrafilter over the natural numbers containing all cofinite sets. Their order is the extension of the order of the real numbers. For example, the sequence ω given by ωi = i defines an equivalence class representing a hyperreal number that is greater than any real number.
Analogously, one can define nonstandard integers, nonstandard complex numbers, etc., by taking the ultraproduct of copies of the corresponding structures.
As an example of the carrying over of relations into the ultraproduct, consider the sequence ψ defined by ψi = 2i. Because ψi > ωi = i for all i, it follows that the equivalence class of ψi = 2i is greater than the equivalence class of ωi = i, so that it can be interpreted as an infinite number which is greater than the one originally constructed. However, let χi = i for i not equal to 7, but χ7 = 8. The set of indices on which ω and χ agree is a member of any ultrafilter (because ω and χ agree almost everywhere), so ω and χ belong to the same equivalence class.
In the theory of large cardinals, a standard construction is to take the ultraproduct of the whole set-theoretic universe with respect to some carefully chosen ultrafilter U. Properties of this ultrafilter U have a strong influence on (higher order) properties of the ultraproduct; for example, if U is σ-complete, then the ultraproduct will again be well-founded. (See measurable cardinal for the prototypical example.)
## Łoś's theorem
Łoś's theorem, also called the fundamental theorem of ultraproducts, is due to Jerzy Łoś (the surname is pronounced , approximately "wash"). It states that any first-order formula is true in the ultraproduct if and only if the set of indices i such that the formula is true in Mi is a member of U. More precisely:
Let σ be a signature, $U$ an ultrafilter over a set $I$, and for each $i \in I$ let $M_{i}$ be a σ-structure. Let $M$ be the ultraproduct of the $M_{i}$ with respect to $U$, that is, $M = \prod_{ i\in I }M_i/U.$ Then, for each $a^{1}, \ldots, a^{n} \in \prod M_{i}$, where $a^{k} = (a^{k}_{i})_{i \in I}$, and for every σ-formula $\phi$,
$M \models \phi[[a^1], \ldots, [a^n]] \iff \{ i \in I : M_{i} \models \phi[a^1_{i}, \ldots, a^n_{i} ] \} \in U.$
The theorem is proved by induction on the complexity of the formula $\phi$. The fact that $U$ is an ultrafilter (and not just a filter) is used in the negation clause, and the axiom of choice is needed at the existential quantifier step. As an application, one obtains the transfer theorem for hyperreal fields.
### Examples
Let R be a unary relation in the structure M, and form the ultrapower of M. Then the set $S=\{x \in M|R x\}$ has an analog *S in the ultrapower, and first-order formulas involving S are also valid for *S. For example, let M be the reals, and let Rx hold if x is a rational number. Then in M we can say that for any pair of rationals x and y, there exists another number z such that z is not rational, and x < z < y. Since this can be translated into a first-order logical formula in the relevant formal language, Łoś's theorem implies that *S has the same property. That is, we can define a notion of the hyperrational numbers, which are a subset of the hyperreals, and they have the same first-order properties as the rationals.
Consider, however, the Archimedean property of the reals, which states that there is no real number x such that x > 1, x > 1 +1 , x > 1 + 1 + 1, ... for every inequality in the infinite list. Łoś's theorem does not apply to the Archimedean property, because the Archimedean property cannot be stated in first-order logic. In fact, the Archimedean property is false for the hyperreals, as shown by the construction of the hyperreal number ω above.
## Ultralimit
For the ultraproduct of a sequence of metric spaces, see Ultralimit.
In model theory and set theory, an ultralimit or limiting ultrapower is a direct limit of a sequence of ultrapowers.
Beginning with a structure, A0, and an ultrafilter, D0, form an ultrapower, A1. Then repeat the process to form A2, and so forth. For each n there is a canonical diagonal embedding $A_n\to A_{n+1}$. At limit stages, such as Aω, form the direct limit of earlier stages. One may continue into the transfinite.
## References
• Bell, John Lane; Slomson, Alan B. (2006) [1969]. Models and Ultraproducts: An Introduction (reprint of 1974 edition ed.). Dover Publications. ISBN 0-486-44979-3.
•
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8934268355369568, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/119732/what-is-the-characteristic-property-of-surjective-submersions/119758
|
## What is the characteristic property of surjective submersions?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
In Lee's 'Introduction to smooth manifolds' he states that given smooth manifolds X,Y and a surjective submersion f:X→Y, then f is a smoothly final map, that is for any further smooth manifold Z, and any map g:Y→Z, we have g smooth iff g∘f is smooth.
He then says that problem 4.7 shows why this property is 'characteristic'. I can't see why the reverse implication should hold.
Unfortunately, google-books doesn't show that page, nor do I have access to a mathematical library, can some-one enlighten me as to what he means?
One of the answers to this question states a characteristic property, but it doesn't appear on the face of it what Lee has in mind.
-
2
Why are people trying to close this? – David Carchedi Jan 24 at 19:11
Since SS (surjective submersion) is a strictly stronger property compared with SF (smoothly final), it remains the curiosity of giving a characterization of both. Note that a SS map $f:X\to Y$ produces by restriction, on any open set $U\subset X$, an SS (thus SF) $f_{|U}:U\to f(U)$ onto an open subset of $Y$. Therefore it is not only SF, but also "locally SF" in the above sense. I'm not sure if this stronger property is enough to characterize SS maps. On the other hand, it would be interesting a characterization of SF, e.g. in the category of $C^1$ manifolds and maps. – Pietro Majer Jan 25 at 11:09
## 3 Answers
Here's what I had in mind:
Theorem: Suppose $M$ and $N$ are smooth manifolds and $\pi:M\to N$ is a surjective smooth submersion. Then the given topology and smooth structure on $N$ are the only ones that satisfy the characteristic property.
(That's what Problem 4-7 asks you to prove.)
-
I wasn't expecting the author of the text to turn up! Thanks, what you had in mind wasn't what I was expecting, but I see now I should have done, its exactly analogous to final maps in Top. – Mozibur Ullah Jan 25 at 1:49
@Mozibur Ullah, you're welcome! – Jack Lee Jan 25 at 5:23
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
The reverse implication, as it is, is not true, for quite an obvious reason (though I think a local version of it should be true).
Start by any smoothly final map $f_0:X_0\rightarrow Y$ (e.g. any surjective submersion), and a smooth map $f_1:X_1 \rightarrow Y$ which is not a submersion. Then, the disjoint union $f:=f_0\sqcup f_1: X_0\sqcup X_1 \rightarrow Y$ is not a submersion, nevertheless it is still smoothly final. ( Indeed, for any smooth manifold $Z$ and any map $g:Y\rightarrow Z$, if $g\circ (f_0\sqcup f_1)=(g\circ f_0)\sqcup (g\circ f_1)$ is smooth, so is $g\circ f_0$, hence $g$ because $f_0$ is smoothly final.
It is true that a smoothly final map $f:X\rightarrow Y$ is necessarily surjective (note e.g. that the above construction $f_0\sqcup f_1$ was surjective). In fact, for any $y\in Y$ there exists a map $g:Y\rightarrow\mathbb{R}$ differentiable in $Y\setminus\{y\}$ and not in $y$ (e.g., a map supported in the domain of a local chart at $y$, that in a local chart is $\|\cdot\|$ near $0$). Then, clearly, if $f:X\rightarrow Y$ is not surjective, say because there is $y\in Y\setminus f(X)$, then $g\circ f$ is smooth though $g$ is not, so $f$ is not smoothly final.
-
More generally: for smooth maps $h:U\to X$ and $f:X\to Y$, if $fh$ is smoothly final, then $f$ is smoothly final. – Pietro Majer Jan 24 at 23:18
great, going by Lees answer I see that my question wasn't quite right. But I am interested in how I phrased it. Do you think it can actually hold locally? I've accepted Lees answer as it only seems fair since I picked up the question from his book. But your answer is equally worthwhile. It doesn't seem quite correct that one should choose. – Mozibur Ullah Jan 25 at 1:54
and your additional comment is useful too. – Mozibur Ullah Jan 25 at 1:58
By the implicit function theorem, the submersion property of $f$ tells you that any point $x\in X$ has a neighborhood of the product form $U\times V'$ such that $f$ is constant along each copy of $U$ and such that $f$ induces a diffeomorphism of $V'$ onto $V=f(U\times V')$, which is a neighborhood of $y=f(x)$. Knowing that $g\circ f$ is smooth at $x$ lets you precompose it with the inverse of the above diffeomorphism to get $g$ restricted to $V$. It is then easy to conclude that $g$ is smooth at $y$. Since $f$ is surjective, the same argument can be repeated for every $y\in Y$.
-
The OP asked for the other implication: If f is smoothly final, then it is a surjective submersion. – Dylan Wilson Jan 24 at 16:51
Hmm, got mixed up with which implication was in question. – Igor Khavkine Jan 24 at 17:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9541253447532654, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/135759/euler-angles-quaternions-and-hyperspace
|
# Euler angles, quaternions and hyperspace
In three-dimensional space it's possible to define rotations using the Euler angles $(\Psi,\Theta,\Phi)$ or quaternions $(i,j,k)$ If we have a hyperspace with more than three coordinates, is it still possible to use quaternions or a generalization of them, to describe rotations? Thanks in advance.
-
1
If your vector space is real the group of special orthogonal matrices describe rotations in that space. If it's complex, special unitaries do. Quaternions have a representation as $SU(2)$, so you may think of $SU(N)$ as a generalization. Given that your vector space is provided with the 2-norm, $||v||_2$. – draks ... Apr 23 '12 at 12:47
## 1 Answer
In 4D, it's possible to describe any rotation by a pair of unit quaternions $p,q$, with which we can rotate points considered as quaternions as follows $x \mapsto pxq$. It's easy to see that this is either a 4D rotation or a reflection, since a quaternion $x = a + ib + jc + dk$ has norm $a^2 + b^2 + c^2 + d^2$, and this norm is multiplicative, and so multiplying by two quaternions of length one will preserve lengths, and thus must either gives a rotation or a rotoreflection (combination of a reflection and a rotation). But since there always exists a continuous path from such a rotation to the identity rotation, we know it must be a rotation.
To see that any 4D rotation can be expressed in this way, we note that any rotation can be described as the product of an even number of reflections. Now expressing reflections in terms of quaternions
reflection in the hyperplane with normal $n$ $$x - 2(x \cdot n)n = x - 2(\frac{1}{2}(x\overline{n} + n\overline{x}))n = x - x\overline{n}n - n\overline{x}n = -n\overline{x}n$$ so the product of an even number of reflections is of the form $x \mapsto pxq$.
With geometric algebra, it's possible to construct something similar for dimensions higher than 3 or 4 (using the same reflection trick), but this algebra has zero divisors, so it isn't as nice as the quaternions, and also, the rotation elements are not isomorphic to the the vector elements you are applying your action on.
If you don't know geometric algebra, then you might want to read up on it, but then you should be able to derive it using the fact that in geometric algebra, the dot product of two vectors can be expressed as $$x \cdot y = \frac{1}{2}(xy + yx)$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.929320752620697, "perplexity_flag": "head"}
|
http://npcontemplation.blogspot.com/2012_02_01_archive.html
|
# NP Contemplation
Neural Networks, Machine Learning and Artificial Intelligence.
## Thursday, February 16, 2012
### A machine that can dream
Note: This takes 20-30s to load, supported browsers are Chrome, Safari 4.0+,
Firefox 4+, Opera 10.0+. It requires heavy computations.
This is not a video. This is a live Boltzmann machine. On top, the flickering lights are the firing neurons of the machine, and at the bottom you see what it is thinking. This particular machine has been shown thousands of faces, and now it imagines faces when it dreams.
The Boltzmann machine has the remarkable ability of dreaming. It can imagine things that it's never seen before. They were first introduced by Geoff Hinton and Terry Sejnowski as a model of the brain in 1983. However, it was a recent breakthrough in 2006 that finally showed their true potential for real world problems. Nowadays, they are becoming a key component in some state of the art systems for speech recognition and computer vision.
How do they work? Here's a quick explanation focusing on the Restricted Boltzmann Machine (RBM) for simplicity. It is defined by its so-called "energy" function
$$E({\bf v}, {\bf h}) = - \sum\limits_{i,j} v_i h_j w_{ij}$$
This function measures the energy between a sensory input vector $${\bf v}$$ and the state of each neuron $${\bf h}$$. The parameters $$w_{ij}$$ weight correlations in the data. This is used to define the probability
$$p({\bf v}, {\bf h}) = \frac{e^{-E({\bf v}, {\bf h})}}{\sum\limits_{{\bf v}',{\bf h}'} e^{-E({\bf v}', {\bf h'})}}$$
where the denominator is the summation of the energy of all possible configurations of inputs and brain states.
Learning consists of adjusting $$w_{ij}$$ to maximize the probability the RBM assigns to what you show it. This will make the neurons detect patterns in the sensory input. Dreaming consists of traveling in probable sensory inputs and brain states using Markov Chain Monte Carlo (MCMC).
If you want to know more about Boltzmann machines and Deep Learning, you should checkout this excellent talk by Geoff Hinton, or you can read this introductory paper by Yoshua Bengio and Yann LeCun.
You can also find here a pythonic implementation of the binary Restricted Boltzmann Machine (RBM) that I wrote.
Posted by Yann N. Dauphin
Subscribe to: Posts (Atom)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9001321196556091, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/18938?sort=newest
|
## Triangulations coming from a poset. Or: What conditions are necessary and sufficient for a finite simplicial complex to be the order complex of a poset?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Every partially ordered set gives a triangulation of (the geometric realisation of) its order complex. (The n-simplices of the order complex are the chains $x_0\leq x_1\leq\cdots\leq x_n$.) However, there are triangulations of topological spaces that do not arise this way.
Is there a name for triangulations having this special property of "coming from a poset?"
EDIT: Apparently, the following formulation of my question is cleaner: what conditions are necessary and sufficient for a finite simplicial complex to be the order complex of a poset?
-
3
Doesn't every triangulation come from a poset? Let $V$ be the set of vertices of the triangulation, $S=$ set of subsets of $V$, a poset via inclusion, and $T\subset S$ the sub-poset corresponding to the triangulation. What am I missing? – Paul Mar 21 2010 at 19:24
If on the other hand you're interested in when a simplicial complex is a triangulation of a manifold, that has a very nice (and algorithmically impossible to implement in general) solution. – Ryan Budney Mar 21 2010 at 22:26
4
@Paul: The triangulation coming from the poset you construct is the baryzentric subdivision of the triangulation we started with. – Rasmus Mar 21 2010 at 23:15
@Ryan Budney: I have changed manifold to topological space, sorry. – Rasmus Mar 21 2010 at 23:22
## 3 Answers
Here are necessary and sufficient conditions for an abstract, finite simplicial complex $\mathcal{S}$ to be the order complex of some partially ordered set.
(i) $\mathcal{S}$ has no missing faces of cardinality $\geq 3$; and
(ii) The graph given by the edges (=$1$-dimensional simplices) of $\mathcal{S}$ is a comparability.
[Definitions. (a) A missing face of $\mathcal{S}$ is a subset $M$ of its vertices (=$0$-dimensional simplices) such that $M \not \in \mathcal{S}$, but all proper subsets $P\subseteq M$ satisfy $P\in \mathcal{S}$. (b) A graph (=undirected graph with no loops nor multiple edges) is a comparability if its edges can be transitively oriented, meaning that whenever edges `$\{p, r_1\}, \{r_1, r_2\},\ldots, \{r_{u−1}, r_u\}, \{r_u, q\}$` are oriented as $(p, r_1), (r_1, r_2),\ldots, (r_{u−1}, r_u), (r_u, q)$, then there exists an edge `$\{p, q\}$` oriented as $(p, q)$.]
This characterisation appears with a sketch of proof $-$ which is not hard, anyway $-$ in
M. M. Bayer, Barycentric subdivisions. Pacific J. Math. 135 (1988), no. 1, pp. 1-16.
As Bayer points out, the result was first observed in
R. Stanley, Balanced Cohen-Macaulay complexes, Trans. Amer. Math. Soc, 249 (1979), pp. 139-157.
@Rasmus and @Gwyn: The characterisation might perhaps disappoint you if you were expecting something more topological. However, it's easy to prove that no topological characterisation of order complexes is possible, and therefore a combinatorial condition such as the one on comparabilities must be used. For this, first check that the barycentric subdivision of any simplicial complex indeed is an order complex. Next observe that barycentric subdivision of a simplicial complex does not change the homeomorphism type of the underlying polyhedron of the complex. Finally, conclude that for any topological space $T$ that is homeomorphic to a compact polyhedron, there is an order complex whose underlying polyhedron is homeomorphic to $T$.
I hope this helps.
-
2
"No missing faces" is the same as flag complex. – Victor Protsak Aug 24 2010 at 0:19
@Vincenzo Marra: Wow, that's nice. Thank you for this information. – Rasmus Aug 24 2010 at 15:25
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
To rephrase your question, what conditions are necessary and sufficient for a simplicial complex to be an order complex?
There are also a few easy necessary conditions. For one, any simplicial complex is the Stanley-Reisner complex of a square-free monomial ideal (label each vertex with a variable, and the minimal non-faces in the simplicial complex are exactly the monomial generators of the ideal.) For all order complexes, their Stanley-Reisner ideal is an edge ideal (i.e. a square free monomial ideal generated in degree 2, called an "edge ideal" because it can be thought of as corresponding to a graph G with an edge for each generator.) This is immediate, because a minimal "non-face" in the order complex is a pair of incomparable elements, so all generators must be of degree 2. This does quickly cut down on the types of simplicial complexes to consider.
Unfortunately, having a 2-generated SR-ideal is also not sufficient. There are numerous subgraphs of a graph which will prevent the Stanley-Reisner complex of its edge ideal from being an order complex. For example, if the graph has an induced cycle of length longer than 7, the complex can't arise as an order complex.
I was working a few months ago on trying to classify the structures in graphs which would prohibit their edge ideals from having SR-complexes which were order complexes, but found the other forbidden structures weren't very easy to characterize. I'd love to see some more answers to this question as well!
-
I don't know a name for this concept, but I know names for two related concepts.
A simplicial complex $\Delta$ is called flag or clique if, whenever $v_1$, $v_2$, ..., $v_r$ is a collection of vertices such that $(v_i, v_j)$ is an edge of $\Delta$ for all $1 \leq i < j \leq r$, then $(v_1, v_2, \ldots, v_r)$ is a face of $\Delta$.
A simplicial complex $\Delta$ is called balanced if $\Delta$ is pure of dimension $d$ and it is possible to color the vertices of $\Delta$ with $d+1$ colors so that no face contains two vertices of the same color.
If $\Delta$ is the order complex of a poset then it is flag; if $\Delta$ is the order complex of a graded poset then it is balanced.
-
I don't think this works. Consider the "bowtie": two triangles $(a,b,c)$ and $(c,d,e)$ glued along a single vertex. This can be three colored; say $a$ and $d$ have color $0$; $c$ has color $1$; and $b$ and $e$ have color $2$. If we order the colors in the obvious numerical way, your proposed poset is not transitive. Admittedly, if we order $1 < 0 < 2$, then your construction works, but I think there is probably an example where no ordering works. – David Speyer Mar 24 2010 at 20:26
I see. A example where no coloring works should be given by the same idea. Just glue a triangle to each vertex of (a,b,c). – HenrikRüping Mar 24 2010 at 21:35
1
My previous example was a reply to a claimed proof (now deleted) that flag + balanced implies order complex. – David Speyer Mar 24 2010 at 21:47
This is maybe too tautological a characterisation: The complex is flag and there is a total order on the vertices such that if $x<y<z$ and there is one edge between $x$ and $y$ and one between $y$ and $z$, then there is an edge between $x$ and $z$. (In one direction it uses the fact that a partial order can be extended to a total one.) – Torsten Ekedahl Apr 2 2010 at 9:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9302279949188232, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/83981?sort=newest
|
## Connectedness of space of ergodic measures
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $X = \Sigma_p^+ = \{1,\dots,p\}^\mathbb{N}$ and let $f=\sigma\colon X\to X$ be the shift map. Let $\mathcal{M}$ be the space of Borel $f$-invariant probability measures on $X$ endowed with the weak* topology.
Now $\mathcal{M}$ is a Choquet simplex, and hence connected. The geometry of its extreme points is a little more subtle. These extreme points are precisely the ergodic measures. Let $\mathcal{M}^e$ denote the collection of ergodic measures in $\mathcal{M}$. Note that $\mathcal{M}^e$ has some nice properties; for instance, there is a natural embedding from the space of Hölder continuous functions into $\mathcal{M}^e$ that takes $\phi$ to its unique equilibrium state $\mu_\phi$. The image of the embedding is the collection of Gibbs measures (for Hölder potentials).
Of course, there are many ergodic measures that do not arise as equilibrium states of Hölder continuous functions, and so I wonder which nice properties of the collection of Gibbs measures extend to $\mathcal{M}^e$. In particular: Is $\mathcal{M}^e$ connected? Path connected? I expect that it is, and that moreover this should happen whenever $X$ is a compact metric space and $f\colon X\to X$ is a continuous map satisfying the specification property, but I don't know a reference and don't yet see how to approach a proof.
-
I'm not familiar with all relevant terms (particularly the specification property), but one example in which you probably aren't interested is as follows: take a finite disjoint union of compact metric spaces, each equipped with a continuous, uniquely ergodic self-map. Then the space of ergodic measures for the induced map on the union is a finite, discrete space. – Mark Schwarzmann Dec 21 2011 at 1:21
@Mark: Specification is a uniform version of topological transitivity, which in particular rules out taking disjoint unions. – Vaughn Climenhaga Dec 21 2011 at 1:37
3
Almost every simplex is the Poulsen simplex. What about this one? – Gerald Edgar Dec 21 2011 at 3:05
1
@Gerald: $\mathcal{M}^e$ is a dense subset of $\mathcal{M}$ whenever $X$ has specification, and since as I understand it the Poulsen simplex is characterised up to affine homeomorphism (among compact metrisable simplices) by the condition that extreme points are dense, I believe we are in fact dealing with the Poulsen simplex here. In particular, the 1978 paper of Lindenstrauss, Olsen, and Sternfeld shows that if extreme points are dense then they are arc-connected, which gives an even more complete answer. Thanks for the suggestion! (I hadn't heard of the Poulsen simplex before.) – Vaughn Climenhaga Dec 22 2011 at 5:59
## 2 Answers
Hi Vaughn,
It is an old result of Karl Sigmund that the space of ergodic measures of a subshift of finite type is path connected in weak* topology. The proof is very neat and takes only a page or so. Here is the paper:
Sigmund, Karl "On the connectedness of ergodic systems." Manuscripta Math. 22 (1977), no. 1, 27–32.
I don't know about generalizations. Sigmund's proof does not generalize directly.
-
Thanks! I knew about Sigmund's 1970 paper where he shows that the set of ergodic measures is residual, but I didn't know about the 1977 paper. – Vaughn Climenhaga Dec 21 2011 at 2:05
1
Upon a little further reflection, I think it does generalise to the case with specification. Periodic measures are dense (as per his 1970 paper), and the periodic measure on $O(x)$ is the unique equilibrium state corresponding to the upper semi-continuous potential function that is $0$ on $O(x)$ and $-\infty$ everywhere else. To connect two periodic measures with an arc of ergodic measures, it suffices to connect the corresponding potential functions with an arc of Holder continuous potentials and consider the unique equilibrium states for these measures. – Vaughn Climenhaga Dec 21 2011 at 3:31
So the endpoints of your arc are not Holder continuous. Are you sure that you have continuity of the map $\varphi\mapsto\mu_\varphi$ at the endpoints? That would be a different proof. – Andrey Gogolev Dec 21 2011 at 18:13
Yes, endpoints are not Holder (although the equilibrium state is still unique). Continuity of $\phi\mapsto\mu_\phi$ doesn't follow from general principles, but you should get it pretty easily from the Gibbs property for $\mu_\phi$ when $\phi$ is Holder (observing that the weight on any Bowen ball that doesn't intersect the periodic orbit goes to $0$). – Vaughn Climenhaga Dec 21 2011 at 18:53
(Continuity at the endpoints, I mean -- where you were talking about. Continuity on the rest of the path is a general property of equilibrium states for Holder potentials.) – Vaughn Climenhaga Dec 21 2011 at 18:54
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I'll flesh out the consequences of Gerald's comment in a (CW-ed) answer. Lindenstrauss, Olsen, and Sternfeld showed in 1978 that if $S_1$ and $S_2$ are compact metrisable simplices such that the extremal points of $S_i$ are dense in $S_i$ for $i=1,2$, then there is an affine homeomorphism from $S_1$ to $S_2$; the unique (up to affine homeomorphism) compact metrisable simplex with the property that its extremal points are dense is called the Poulsen simplex.
In that same paper, it was shown that the Poulsen simplex has the property that its set of extremal points is arc-connected. Since $\mathcal{M}$ is a compact metrisable simplex whenever $X$ is a compact metric space and $f\colon X\to X$ is continuous, and the extremal points of $\mathcal{M}$ are precisely the ergodic measures $\mathcal{M}^e$, it follows that $\mathcal{M}^e$ is arc-connected whenever it is dense in $\mathcal{M}^e$. In particular, the strong specification property introduced by Bowen implies that periodic orbit measures are dense in $\mathcal{M}^e$ (Sigmund 1974), and since such measures are ergodic, this implies that $\mathcal{M}$ is the Poulsen simplex, and hence $\mathcal{M}^e$ is arc-connected, whenever $(X,f)$ has strong specification.
So that's not quite as constructive a proof as the approach following (Sigmund 1977) as suggested in Andrey's answer and the comment following, but it's certainly simpler to write down based on existing results.
-
1
In case a more in-depth explanation is worthwhile to anyone, I wrote some more details at vaughnclimenhaga.wordpress.com/2011/12/21/… – Vaughn Climenhaga May 12 2012 at 16:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9348776936531067, "perplexity_flag": "head"}
|
http://medlibrary.org/medwiki/Plus-minus_sign
|
# Plus-minus sign
Welcome to MedLibrary.org. For best results, we recommend beginning with the navigation links at the top of the page, which can guide you through our collection of over 14,000 medication labels and package inserts. For additional information on other topics which are not covered by our database of medications, just enter your topic in the search box below:
±
The plus-minus sign (±) is a mathematical symbol commonly used either
• to indicate the precision of an approximation, or
• to indicate a value that can be of either sign.
The sign is normally pronounced "plus or minus". In experimental sciences, the sign commonly indicates the confidence interval or error in a measurement, often the standard deviation or standard error. The sign may also represent an inclusive range of values that a reading might have. In mathematics, it may indicate two possible values: one positive, and one negative. It is commonly used in indicating a range of values, such as in mathematical statements.
## Usage
### Precision indication
The use of ± for an approximation is most commonly encountered in presenting the numerical value of a quantity together with its tolerance or its statistical margin of error. For example, "5.7±0.2" denotes a quantity that is specified or estimated to be within 0.2 units of 5.7; it may be anywhere in the range from 5.7 − 0.2 (i.e., 5.5) to 5.7 + 0.2 (5.9). In scientific usage it sometimes refers to a probability of being within the stated interval, usually corresponding to either 1 or 2 standard deviations (a probability of 68.3% or 95.4% in a Normal distribution).
A percentage may also be used to indicate the error margin. For example, 230 ± 10% V refers to a voltage within 10% of either side of 230 V (207 V to 253 V). Separate values for the upper and lower bounds may also be used. For example, to indicate that a value is most likely 5.7 but may be as high as 5.9 or as low as 5.6, one could write 5.7+0.2
−0.1
.
### Shorthand
In mathematical equations, the use of ± may be found as shorthand, to present two equations in one formula: + or −, represented with ±.
For example, given the equation x2 = 1, one may give the solution as x = ±1, such that both x = +1 and x = −1 are valid solutions.
More generally we have the quadratic formula:
If ax2 + bx + c = 0[1] then
$\displaystyle x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}.$
Written out in full, this states that there are two solutions to the equation:
$\text{either } x = \frac{-b + \sqrt {b^2-4ac}}{2a} \text{ or } x = \frac{-b - \sqrt {b^2-4ac}}{2a}.$
Another example is found in the trigonometric identity
$\sin(A \pm B) = \sin(A) \cos(B) \pm \cos(A) \sin(B).\,$
This stands for two identities: one with "+" on both sides of the equation, and one with "−" on both sides.
A somewhat different use is found in this presentation of the formula for the Taylor series of the sine function:
$\sin\left( x \right) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \pm \frac{1}{(2n+1)!} x^{2n+1} + \cdots.$
This mild abuse of notation is intended to indicate that the signs of the terms alternate, where (starting the count at 0) the terms with an even index n are added while those with an odd index are subtracted. A more rigorous presentation would use the expression (−1)n, which gives +1 when n is even and −1 when n is odd.
### Chess notation
The symbols ± and ∓ are used in chess notation to denote an advantage for white and black respectively.
## Minus-plus sign
There is another symbol, the minus-plus sign (∓). It is generally used in conjunction with the "±" sign, in such expressions as "x ± y ∓ z", which can be interpreted as meaning "x + y − z" or/and "x − y + z", but not "x + y + z" nor "x − y − z". The upper "−" in "∓" is considered to be associated to the "+" of "±" (and similarly for the two lower symbols) even though there is no visual indication of the dependency. (However, the "±" sign is generally preferred over the "∓" sign, so if they both appear in an equation it is safe to assume that they are linked. On the other hand, if there are two instances of the "±" sign in an expression, it is impossible to tell from notation alone whether the intended interpretation is as two or four distinct expressions.) The original expression can be rewritten as "x ± (y − z)" to avoid confusion, but cases such as the trigonometric identity
$\cos(A \pm B) = \cos(A) \cos(B) \mp \sin(A) \sin(B)$
are most neatly written using the "∓" sign. The trigonometric equation above thus represents the two equations:
$\cos(A + B) = \cos(A)\cos(B) - \sin(A) \sin(B)\,$
$\cos(A - B) = \cos(A)\cos(B) + \sin(A) \sin(B)\,$
but never
$\cos(A + B) = \cos(A)\cos(B) + \sin(A) \sin(B)\,$
$\cos(A - B) = \cos(A)\cos(B) - \sin(A) \sin(B)\,$
because the signs are exclusively alternating.
## Encodings
• In ISO 8859-1, -7, -8, -9, -13, -15, and -16, the plus-minus symbol is given by the code 0xB1hex Since the first 256 code points of Unicode are identical to the contents of ISO-8859-1 this symbol is also at Unicode code point U+00B1.
• The symbol also has a HTML entity representation of `±`.
• On Windows systems, it may be entered by means of Alt codes, by holding the ALT key while typing the numbers 0177 on the numeric keypad.
• On Unix-like systems, it can be entered by typing the sequence compose + -.
• On Macintosh systems, it may be entered by pressing option shift = (on the non-numeric keypad).
• The rarer minus-plus sign (∓) is not generally found in legacy encodings and does not have a named HTML entity but is available in Unicode with codepoint U+2213 and so can be used in HTML using `∓` or `∓`.
• In TeX 'plus-or-minus' and 'minus-or-plus' symbols are denoted `\pm` and `\mp`, respectively.
• These characters are also seen written as a (rather untidy) underlined or overlined + symbol. ( + or + ).
## Similar characters
The plus-minus sign resembles the Chinese character 士, whereas the minus-plus sign resembles 干.
## See also
• Plus and minus signs
• Table of mathematical symbols
• ≈ (approximately equals to)
• Engineering tolerance
## References and notes
1. with b² - 4ac > 0
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Plus-minus sign", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Plus-minus_sign
• ## Finding More
You are currently browsing the the MedLibrary.org general encyclopedia supplement. To return to our medication library, please select from the menu above or use our search box at the top of the page. In addition to our search facility, alphabetical listings and a date list can help you find every medication in our library.
• ## Questions or Comments?
If you have a question or comment about material specifically within the site’s encyclopedia supplement, we encourage you to read and follow the original source URL given near the bottom of each article. You may also get in touch directly with the original material provider.
• ## About
This site is provided for educational and informational purposes only and is not intended as a substitute for the advice of a medical doctor, nurse, nurse practitioner or other qualified health professional.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8925609588623047, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Triangular_number
|
# Triangular number
The first six triangular numbers
A triangular number or triangle number counts the objects that can form an equilateral triangle, as in the diagram on the right. The nth triangle number is the number of dots composing a triangle with n dots on a side, and is equal to the sum of the n natural numbers from 1 to n. The sequence of triangular numbers (sequence in OEIS), starting at the 0th triangular number, is:
0, 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120 ....
The triangle numbers are given by the following explicit formulas:
$T_n= \sum_{k=1}^n k = 1+2+3+ \dotsb +n = \frac{n(n+1)}{2} = {n+1 \choose 2}$
where $\textstyle {n+1 \choose 2}$ is a binomial coefficient. It represents the number of distinct pairs that can be selected from n + 1 objects, and it is read aloud as "n plus one choose two".
The triangular number Tn solves the handshake problem of counting the number of handshakes if each person in a room with n + 1 people shakes hands once with each person. In other words, the solution to the handshake problem of n people is Tn-1.[1]
Triangle numbers are the additive analog of the factorials, which are the products of integers from 1 to n.
The number of line segments between closest pairs of dots in the triangle can be represented in terms of the number of dots or with a recurrence relation:
$L_n = 3 T_{n-1}= 3{n \choose 2};~~~L_n = L_{n-1} + 3(n-1), ~L_1 = 0.$
In the limit, the ratio between the two numbers, dots and line segments is
$\lim_{n\to\infty} \frac{T_n}{L_n} = \frac{1}{3}$
## Relations to other figurate numbers
Triangular numbers have a wide variety of relations to other figurate numbers.
Most simply, the sum of two consecutive triangular numbers is a square number, with the sum being the square of the difference between the two (and thus the difference of the two being the square root of the sum). Algebraically,
$T_n + T_{n-1} = \left (\frac{n^2}{2} + \frac{n}{2}\right) + \left(\frac{\left(n-1\right)^2}{2} + \frac{n-1}{2} \right ) = \left (\frac{n^2}{2} + \frac{n}{2}\right) + \left(\frac{n^2}{2} - \frac{n}{2} \right ) = n^2 = (T_n - T_{n-1})^2.$
Alternatively, the same fact can be demonstrated graphically:
6 + 10 = 16 10 + 15 = 25
There are infinitely many triangular numbers that are also square numbers; e.g., 1, 36. Some of them can be generated by a simple recursive formula:
$S_{n+1} = 4S_n \left( 8S_n + 1\right)$ with $S_1 = 1.$
All square triangular numbers are found from the recursion
$S_n = 34S_{n-1} - S_{n-2} + 2$ with $S_0 = 0$ and $S_1 = 1.$
Also, the square of the nth triangular number is the same as the sum of the cubes of the integers 1 to n.
The sum of the all triangular numbers up to the nth triangular number is the nth tetrahedral number,
$\frac {n(n+1)(n+2)} {6}.$
More generally, the difference between the nth m-gonal number and the nth (m + 1)-gonal number is the (n - 1)th triangular number. For example, the sixth heptagonal number (81) minus the sixth hexagonal number (66) equals the fifth triangular number, 15. Every other triangular number is a hexagonal number. Knowing the triangular numbers, one can reckon any centered polygonal number: the nth centered k-gonal number is obtained by the formula
$Ck_n = kT_{n-1}+1\$
where T is a triangular number.
The positive difference of two triangular numbers is a trapezoidal number.
## Other properties
Triangular numbers correspond to the first-order case of Faulhaber's formula.
Every even perfect number is triangular, given by the formula
$M_p 2^{p-1} = M_p (M_p + 1)/2 = T_{M_p}$
where Mp is a Mersenne prime. No odd perfect numbers are known, hence all known perfect numbers are triangular.
For example, the 3rd triangular number is 3x2 = 6; the 7th is 7x4 = 28; the 31st is 31x16 = 496; and the 127th is 127x64 = 8128.
In base 10, the digital root of a nonzero triangular number is always 1, 3, 6, or 9. Hence every triangular number is either divisible by three or has a remainder of 1 when divided by nine:
0 = 3×0,
1 = 9×0+1,
3 = 3×1,
6 = 3×2,
10 = 9×1+1,
15 = 3×5,
21 = 3×7,
28 = 9×3+1,
36 = 9×4,
45 = 9×5,
55 = 9×6+1,
...
The digital root pattern, repeating every nine terms, is "1 3 6 1 6 3 1 9 9".
The inverse of the statement above is, however, not always true. For example, the digital root of 12, which is not a triangular number, is 3 and divisible by three.
If x is a triangular number, then ax+b is also a triangular number, given the following conditions are satisfied:
a=an odd square, b=(a-1)/8
Note that b will always be a triangular number, because 8Tn+1=(2n+1)2, which yields all the odd squares are revealed by multiplying a triangular number by 8 and adding 1, and the process for b given a is an odd square is the inverse of this operation.
The first several pairs of this form (not counting 1x+0) are: 9x+1, 25x+3, 49x+6, 81x+10, 121x+15, 169x+21,.... . Given x is equal to Tn, these formulas yield T3n+1, T5n+2, T7n+3, T9n+4, and so on.
The sum of the reciprocals of all the nonzero triangular numbers is:
$\!\ \sum_{n=1}^{\infty}{1 \over {{n^2 + n} \over 2}} = 2\sum_{n=1}^{\infty}{1 \over {n^2 + n}} = 2 .$
This can be shown by using the basic sum of a telescoping series:
$\!\ \sum_{n=1}^{\infty}{1 \over {n(n+1)}} = 1 .$
Two other interesting formulas regarding triangular numbers are:
$T_{a+b} = T_a + T_b + ab\$
and
$T_{ab} = T_aT_b + T_{a-1}T_{b-1},\$
both of which can easily be established either by looking at dot patterns (see above) or with some simple algebra.
In 1796, German mathematician and scientist Carl Friedrich Gauss discovered that every positive integer is representable as a sum of at most three triangular numbers, writing in his diary his famous words, "EΥΡHKA! num = Δ + Δ + Δ" Note that this theorem does not imply that the triangular numbers are different (as in the case of 20=10+10), nor that a solution with three nonzero triangular numbers must exist. This is a special case of Fermat's Polygonal Number Theorem.
The largest triangular number of the form 2k-1 is 4095, see Ramanujan–Nagell equation.
Wacław Franciszek Sierpiński posed the question as to the existence of four distinct triangular numbers in geometric progression. It was conjectured by Polish mathematician Kazimierz Szymiczek to be impossible. This conjecture was proven by Fang and Chen in 2007.[2][3]
## Applications
A fully connected network of n computing devices requires the presence of Tn-1 cables or other connections; this is equivalent to the handshake problem mentioned above.
In a tournament format that uses a round-robin group stage, the number of matches that need to be played between n teams is equal to the triangular number Tn-1. For example, a group stage with 4 teams requires 6 matches, and a group stage with 8 teams requires 28 matches. This is also equivalent to the handshake and fully connected network problems.
One way of calculating the depreciation of an asset is the sum-of-years' digits method, which involves finding Tn, where n is the length in years of the asset's useful life. Each year, the item loses (b - s)*(n-y)⁄Tn , where b is the item's beginning value (in units of currency), s is its final salvage value, n is the total number of years the item is usable, and y the current year in the depreciation schedule. Under this method, n item with a usable life of 4 years would lose 4/10 of its "losable" value in the first year, 3/10 in the second, 2/10 in the third, and 1/10 in the fourth, accumulating a total depreciation of 10/10 the losable value.
## Triangular roots and tests for triangular numbers
By analogy with the square root of x, one can define the (positive) triangular root of x as the number n such that Tn = x:[4]
$n = \frac{\sqrt{8x+1}-1}{2}.$
An integer x is triangular if and only if 8x + 1 is a square. Equivalently, if the positive triangular root n of x is an integer, then x is the nth triangular number.[4]
## See also
• 1 + 2 + 3 + 4 + …
• Metcalfe's law, that the complexity of communication between a group of people grows with the number of pairs of people, a triangular number.
• Miraculous Draught of Fish, an episode from the Gospels involving the triangular number 153; the triangular form of this number was thought by Saint Augustine to be important in interpreting this passage.[5]
• Polygonal number
• Pentagonal number
• Hexagonal number
## Notes
1. ^ a b Euler, Leonhard; Lagrange, Joseph Louis (1810), 1 (2nd ed.), J. Johnson and Co., pp. 332–335
2. Owen, O. T. (1988), "One Hundred and Fifty Three Fishes", Expository Times 100: 52–54, doi:10.1177/001452468810000204 .
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8999257683753967, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/262158/kernel-of-the-product-a-otimes-a-rightarrow-a
|
# Kernel of the product $A\otimes A\rightarrow A$
Let $R$ be a commutative ring with $1$ and let $A$ be a commutative $R$-algebra. We view $A\otimes A$ also as $R$-algebra, the multiplication on generatrs being given by $(a\otimes b)(a'\otimes b')= aa'\otimes bb'$.
Denote by $m: A\otimes A\rightarrow A$ the product homomorphism, i.e. the map that maps a generator $a\otimes b$ to $ab$.
My question is: why is the kernel of $m$ generated by the elements $a\otimes 1 - 1\otimes a$ where $a$ runs through $A$?
The answer seems to be so easy that I could not find it anywhere in books. Alas, it is not clear to me :(
-
Is $R$ implicitly assumed to be an integral domain? Otherwise, if $ab=0$, how would you write $a\otimes b$ in terms of these generators? – espen180 Dec 19 '12 at 16:45
@espen180: You mean if $A$ has no zero divisors? No, neither $R$ nor $A$ are implicitly assumed to have no zero divisors. Note that the kernel of $m$ is an ideal. For $ab=0=ba$, we could write $a\otimes b = (a\otimes 1-1\otimes a)(1\otimes b - b\otimes 1) - (b\otimes 1-1\otimes b)(1\otimes a)$, so it lies in the ideal generated by the $a\otimes 1-1\otimes a$, hence in the kernel. – Sh4pe Dec 19 '12 at 17:05
@espen180: even shorter: $a\otimes b=(a\otimes 1-1\otimes a)(1\otimes b)$... – Sh4pe Dec 19 '12 at 17:12
1
Of course, my mistake. But doesn't this prove the theorem, then? Since it is obvious that the above generated set is included in the kernel, and you just showed the reverse inclusion? – espen180 Dec 19 '12 at 17:30
1
– Mariano Suárez-Alvarez♦ Dec 19 '12 at 17:35
show 1 more comment
## 2 Answers
Hint: Show that $a\otimes b - ab\otimes 1$ is in $I$, the ideal generated by elements of the form $x\otimes 1 - 1\otimes x$. This can easily be done by noting that $a\otimes b = (a\otimes 1)(1\otimes b)$, then substituting $1\otimes b = b\otimes 1 - (b\otimes 1 - 1\otimes b)$.
From here, show that every element of $A\otimes A/I$ can be written as $x\otimes 1 + I$.
Since $I$ is a sub-ideal of the kernel of your morphism, your morphism factors:
$$A\otimes A \to A\otimes A/I \to A$$
If $I$ is not the kernel, then $A\otimes A/I \to A$ is not $1-1$. But every element of $A\otimes A/I$ can be written as $x\otimes 1 + I$ and it must be sent to $x$. So this is necessarily $1-1$.
-
Easy: for $a\otimes b$ in the kernel, write $a\otimes b = (a\otimes 1-1\otimes a)(1\otimes b)$ and be done
=)
-
2
There are elements in the kernel which are not elementary tensors, though. To get an example, pick pretty much any integral domain $A$, like $k[X]$: in this last example you can easily check that there is no elementary tensor in the kernel of the multiplication map! – Mariano Suárez-Alvarez♦ Dec 19 '12 at 17:36
The elements of $A\otimes A$ are not only of the form $a\otimes b$ – Thomas Andrews Dec 19 '12 at 17:44
(That is pretty much my first sentence!) – Mariano Suárez-Alvarez♦ Dec 19 '12 at 17:45
1
Yeah, didn't update the page. – Thomas Andrews Dec 19 '12 at 17:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295541048049927, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/124766/chomsky-hierarchy-doesnt-make-sense-to-me
|
Chomsky Hierarchy doesn't make sense to me.
Wiki Article: Chomsky Hierarchy
In this containment hierarchy regular languages are a subset of context-sensitive languages. But let $\sum$ be the alphabet we're using. Then $\sum^*$ is a regular language that contains all languages on $\sum$. So it seems to me that we should think of regular languages as the more broad languages and context-sensitive languages as more specific.
How do I resolve this?
Thanks.
-
2
The containment hierarchy is about containment of one class of languages within another class, but your $\Sigma^*$ observation is about containment of one language within another language. – MJD Mar 26 '12 at 18:52
$\Sigma^*$ is also a context-free language. In fact every regular language is context-free. So the context-free languages are a more general class than the regular languages. (Brian's answer gives more detail.) – Tara B Mar 26 '12 at 22:13
1 Answer
Let an alphabet $\Sigma$ be given, and define $\Sigma^*$ in the usual way. Now let $\mathscr{L}=\wp(\Sigma^*)$, the collection of all subsets of $\Sigma^*$; the members of $\mathscr{L}$ are the languages over the alphabet $\Sigma$. (In other words, the languages over $\Sigma$ are precisely the subsets of $\Sigma^*$.) The Chomsky hierarchy distinguishes certain subfamilies of $\mathscr{L}$:
$$\begin{align*} \mathscr{L}_0&=\{L\in\mathscr{L}:\text{some formal grammar over }\Sigma\text{ generates }L\}\\ \mathscr{L}_1&=\{L\in\mathscr{L}:\text{some context-sensitive grammar over }\Sigma\text{ generates }L\}\\ \mathscr{L}_2&=\{L\in\mathscr{L}:\text{some context-free grammar over }\Sigma\text{ generates }L\}\\ \mathscr{L}_3&=\{L\in\mathscr{L}:\text{some regular grammar over }\Sigma\text{ generates }L\} \end{align*}$$
These are all subfamilies of $\mathscr{L}$: they are collections of languages, and it can be shown that $$\mathscr{L}_3\subsetneq\mathscr{L}_2\subsetneq\mathscr{L}_1\subsetneq\mathscr{L}_0\subsetneq\mathscr{L}$$ (provided that $|\Sigma|\ge 2$).
$\Sigma^*$, on the other hand, is not a collection of languages: it’s a single language that happens to belong to $\mathscr{L}_3$. It’s perfectly true that every $L\in\mathscr{L}$ is a subset of $\Sigma^*$, but the class $\mathscr{L}_3$ of regular languages is not closed under taking subsets: it’s entirely possible for a regular language to have a non-regular language as a subset. In particular, the fact that every language over $\Sigma$ is a subset of the regular language $\Sigma^*$ says nothing at all about the inclusiveness of the class $\mathscr{L}_3$ of regular languages. It simply happens to be the case that the most restrictive class of languages in the hierarchy turns out to contain the most inclusive single language.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9116697311401367, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/43933/force-on-earth-due-to-suns-radiation-pressure/51931
|
# Force on Earth due to Sun's radiation pressure
I have been asked by my Classical Electrodynamics professor to calculate the force that the Sun exerts in the Earth's surface due to its radiation pressure supposing that all radiation is absorbed and a flat Earth, and knowing only that the magnitude of the Poynting vector in the surface is $\left\langle {\bar S} \right\rangle = 13000{\rm{[W/}}{{\rm{m}}^{\rm{2}}}{\rm{]}}$ using:
1. Maxwell's stress tensor.
2. The absorbed momentum.
Using Maxwell's stress tensor I get ${\rm{35.6}} \cdot {\rm{1}}{{\rm{0}}^8}{\rm{[N]}}$, which seems plausible since we consider a flat Earth and no radiation reflection. But I'm lost on how to obtain an answer using the variation of electromagnetic momentum.
I think I should start by writing
$$\vec F = \frac{d}{{dt}}{\vec p_{EM}} = \frac{d}{{dt}}\int\limits_V {{\varepsilon _0}{\mu _0}\left( {\vec E \times \vec H} \right)dV}$$
But, how do I take it from here?
-
## 1 Answer
$\vec S$ is the flux, so you need an area integral of the surface of the earth.
The pressure $P$ you will have is force per area, $F/A$. The pressure is flux $S$ divided by speed of light, since you have a momentum of $hf/c$ in the photons.
Then you should integrate over the pressure (i. e. multiplying with the cross section of the earth) to get the force: $$\vec F = \frac{\pi R^2 \vec S}{c}$$
-
I see... so I just simply integrate S/c in the surface of the Earth? But wouldn't I get $\left\langle F \right\rangle = \frac{{\pi {R^2}}}{c}\left\langle S \right\rangle$? – Miguel Dovale Jan 23 at 21:39
1
Yes, the 4 should not be there (I was thinking surface of a sphere apparently), and I forgot the $c$. So your result is correct. – queueoverflow Jan 25 at 13:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425032734870911, "perplexity_flag": "middle"}
|
http://www.onemathematicalcat.org/Math/Algebra_II_obj/equation_simple_parabola.htm
|
EQUATIONS OF SIMPLE PARABOLAS
• Jump right to the exercises!
• You may want to review the prior web exercise, Parabolas.
If a parabola is placed in a coordinate plane in a simple way,
then a simple equation is obtained, as derived below.
Place a parabola with its vertex at the origin. If we put the focus on the $\,y$-axis, the the directrix will be parallel to the $\,x$-axis. Or, if we put the directrix parallel to the $\,x$-axis, then the focus will be on the $\,y$-axis. In either case, let $\,p \ne 0\,$ denote the $\,y$-value of the focus. Thus, the focus has coordinates $\,(0,p)\,$. Although the sketch at right shows the situation where $\,p\gt 0\,$, the following derivation also holds for $\,p \lt 0\,$.
Notice that:
| | | | | |
|----------|----------------|-------------------------------|----------------|--------------------------------------------|
| $p\gt 0$ | if and only if | the focus is above the vertex | if and only if | the parabola is concave up (holds water) |
| AND | | | | |
| $p\lt 0$ | if and only if | the focus is below the vertex | if and only if | the parabola is concave down (sheds water) |
For example, if the focus is above the vertex, then $\,p\,$ must be greater than zero, and the parabola must be concave up.
As a second example, if the parabola is concave down, then $\,p\,$ must be less than zero, and the focus is below the vertex.
Since the vertex is a point on the parabola, the definition of parabola dictates that it must be the same distance from the focus and the directrix.
Thus, the directrix must cross the $\,y\,$-axis at $\,-p\,$; indeed, every $\,y$-value on the directrix equals $\,-p\,$.
Let $\,(x,y)\,$ denote a typical point on the parabola.
The distance from $\,(x,y)\,$ to the focus $\,(0,p)\,$ is found using the distance formula: $$\tag{1} \sqrt{(x-0)^2 + (y-p)^2 } = \sqrt{x^2 + (y-p)^2}$$
To find the distance from $\,(x,y)\,$ to the directrix, first drop a perpendicular from $\,(x,y)\,$ to the directrix.
This perpendicular intersects the directrix at $\,(x,-p)\,$.
The distance from $\,(x,y)\,$ to the directrix is therefore the distance from $\,(x,y)\,$ to $\,(x,-p)\,$: $$\tag{2} \sqrt{(x-x)^2 + ((y-(-p))^2} = \sqrt{(y+p)^2}$$
From the definition of parabola, distances $\,(1)\,$ and $\,(2)\,$ must be equal: $$\sqrt{x^2 + (y-p)^2} = \sqrt{(y+p)^2}$$
This equation simplifies considerably, as follows:
Squaring both sides: $x^2 + (y-p)^2 = (y+p)^2$
Multiplying out: $x^2 + y^2 - 2py + p^2 = y^2 + 2py + p^2$
Subtracting $\,y^2 + p^2\,$ from both sides: $x^2 - 2py = 2py$
Adding $\,2py\,$ to both sides: $x^2 = 4py$
Dividing by $\,4p\,$ and rearranging: $\displaystyle y = \frac{1}{4p} x^2$
Such a beautiful, simple description for our parabola!
The most critical thing to notice is the coefficient of $\,x^2\,$, since it holds the key to locating the focus of the parabola.
As an example, consider the equation $\,y = 5x^2\,$.
Comparing $\,y = 5x^2\,$ with $\displaystyle \,y = \frac1{4p}x^2\,$, we see that $\displaystyle 5 = \frac{1}{4p}\,$.
Solving for $\,p\,$ gives:
$\begin{alignat}{2} 5\ &= \frac{1}{4p} &\qquad&\text{original equation} \cr 20p\ &= 1 &&\text{multiply both sides by } 4p\cr p\ &= \frac{1}{20} &&\text{divide both sides by } 20 \end{alignat}$
Thus, $\,y = 5x^2\,$ graphs as a parabola with vertex at the origin, and focus $\displaystyle\,(0,\frac{1}{20})\,$.
So easy!
In summary, we have:
EQUATIONS OF SIMPLE PARABOLAS
Every equation of the form $\,y= ax^2\,$ (for $\,a\ne0\,$) is a parabola
with vertex at the origin, directrix parallel to the $\,x\,$-axis, and focus on the $\,y\,$-axis.
If $\,a\gt 0\,$, then the parabola is concave up (holds water).
If $\,a\lt 0\,$, the the parabola is concave down (sheds water).
If $\,p\,$ denotes the $\,y$-value of the focus, then $\displaystyle\,a = \frac{1}{4p}\,$.
Solving for $\,p\,$ gives $\,\displaystyle p = \frac{1}{4a}\,$, and thus the coordinates of the focus are $\displaystyle \,(0,\frac{1}{4a})\,$.
MEMORY DEVICE: ‘one over four pee pairs’
Notice that if $\displaystyle\,a = \frac{1}{4p}\,$, then $\displaystyle\,p = \frac{1}{4a}\,$. Or, if $\displaystyle\,p = \frac{1}{4a}\,$, then $\displaystyle\,a = \frac{1}{4p}\,$. What a beautiful symmetric relationship!
Thus, I fondly refer to $\,a\,$ and $\,p\,$ with the catchy phrase: ‘one over four pee pairs’. (Try to say this quickly ten times in a row!)
If you know either $\,a\,$ or $\,p\,$, then it's easy to find the other—just multiply by $\,4\,$ and then flip (take the reciprocal).
For example, if you know that $\,a = 5\,$, then $\,p\,$ is found as follows:
• $\,4\times 5 = 20\,$ (multiply by $\,4\,$)
• $\displaystyle p=\frac{1}{20}$ (flip)
Or, if you know that $\displaystyle\,p = \frac{1}{20}\,$, then $\,a\,$ is found as follows:
• $\displaystyle\,4\times \frac{1}{20} = \frac{4}{20} = \frac{1}{5}\,$ (multiply by $\,4\,$)
• $\displaystyle a = (\text{the reciprocal of } \frac{1}{5}) = 5\,$ (flip)
SHIFTING THE PARABOLA
Now, here's some very good news.
By using graphical transformations, knowledge of this one simple equation $\,y = ax^2\,$
actually gives full understanding of all parabolas with directrix parallel to the $\,x$-axis!
The results are summarized next:
Shift the parabola (together with its focus and directrix) horizontally by $\,h\,$, and vertically by $\,k\,$.
This yields the following information:
| | | | |
|---------------------|------------------------------------------|-------------------|---------------------------------------------|
| original equation: | $y=ax^2$ | shifted equation: | $y=a(x-h)^2 + k$ |
| original vertex: | $(0,0)$ | new vertex: | $(h,k)$ |
| original focus: | $\displaystyle (0,p) = (0,\frac{1}{4a})$ | new focus: | $\displaystyle (h,p+k)= (h,\frac{1}{4a}+k)$ |
| original directrix: | $y=-p$ | new directrix: | $y=-p+k$ |
EXAMPLE graphing a shifted parabola
In this example, I illustrate the approach that I usually take when asked to give
complete information about the parabola $\,y = a(x-h)^2 + k\,$.
Question:
Completely describe the graph of the equation $\,y = -3(x+5)^2 + 1\,$.
Solution:
• Concave up or down?
Since $\,a=-3\lt 0\,$, the parabola is concave down (sheds water).
Since the focus is always inside a parabola, we know the focus is under the vertex.
• Find the vertex:
The vertex of $\,y = a(x-h)^2 + k\,$ is the point $\,(h,k)\,$:
$$y = a( \overset{\text{What value of } \ x\ }{\overset{\text{ makes this zero?}}{\overbrace{ \underset{\text{Answer: } \ h}{\underbrace{x-h}}}}})^2\ \ \ + \overset{\text{this is the } y\text{-value of the vertex}}{\overbrace{\ \ k\ \ }}$$ For us, what makes $\,x+5\,$ equal to zero? Answer: $\,-5\,$
Thus, $\,-5\,$ is the $\,x$-value of the vertex.
When $\,x = -5\,$, the corresponding $\,y$-value is $\,1\,$.
Thus, the vertex is $\,(-5,1)\,$.
• Find the distance from the vertex to the focus:
Using the ‘one over four pee pair’ memory device,
$\displaystyle\,p = \frac{1}{4a} = \frac{1}{4(-3)} = -\frac{1}{12}\,$.
Thus, the distance from the focus to the vertex is $\displaystyle\,|p| = \left|-\frac1{12}\right| = \frac{1}{12}\,$.
• Find the focus:
We already determined that the focus lies below the vertex.
So, the focus is:
$\displaystyle\,(-5,1-\frac{1}{12}) =(-5,\frac{11}{12})\,$
• Find the equation of the directrix:
Every horizontal line has an equation of the form: $\,y = \text{(some #)}\,$
In our example, the focus lies $\displaystyle\,\frac{1}{12}\,$ below the vertex, so the directrix lies $\displaystyle\,\frac{1}{12}\,$ above the vertex.
Thus, the equation of the directrix is $\displaystyle\,y=1+\frac{1}{12}\,$, that is, $\displaystyle\,y=\frac{13}{12}\,$.
• Plot an additional point:
Plotting an additional point gives a sense of the ‘width’ of the parabola.
For example, when $\,x = -4\,$ we have:
$\,y = -3(-4+5)^2 + 1 = -3(1) + 1 = -2\,$
In this parabola, the vertex is close to the focus, so the parabola is narrow.
$\,y=-3(x+5)^2+1\,$
For fun, zip up to WolframAlpha and type in any of the following:
vertex of y = -3(x+5)^2 + 1
focus of y = -3(x+5)^2 + 1
directrix of y = -3(x+5)^2 + 1
How easy is that?!
Master the ideas from this section
When you're done practicing, move on to:
Quadratic Functions and the Completing the Square Technique
On this exercise, you will not key in your answer.
However, you can check to see if your answer is correct.
Take a look at my monthly earnings from this website. As you can see, I need your help. Thank you!
(MAX is 12; there are 12 different problem types.)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 112, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8753085136413574, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/73419-mathematical-proofs.html
|
# Thread:
1. ## Mathematical proofs
If a mod would like to move this to it's assigned section please do so, I don't know which one it belongs into.
sorry i'm not good with latex
Q1. Prove the following biconditional statement
1. For any a in Z(set of integers) a not logically equivalent to 0(mod 3) if and only if a^2 is logically equivalent with 1(mod 3)
2.Prove that if a is an odd integer, then for any x in Z(set of integers), x^2 - x - a does not equal 0
3. Use mathematical induction to prove that for any natural number n, 3 divides (2^(2n) - 1 ). Specify each step in the process clearly.
Thank you soo much (:
2. Originally Posted by treetheta
If a mod would like to move this to it's assigned section please do so, I don't know which one it belongs into.
sorry i'm not good with latex
Q1. Prove the following biconditional statement
1. For any a in Z(set of integers) a not logically equivalent to 0(mod 3) if and only if a^2 is logically equivalent with 1(mod 3)
do you know how to change modular equations into algebraic ones? if so, i suggest you do that. for instance, $a \equiv 1 \mod{3}$ means $a - 1 = 3k$ for some $k \in \mathbb{Z}$.
after rewriting the statements that way, see if you can figure out what to do.
2.Prove that if a is an odd integer, then for any x in Z(set of integers), x^2 - x - a does not equal 0
consider two cases: (1) x is even, (2) x is odd. show that the statement holds in either case. good ol' fashion algebra should do the trick. recall how to define even and odd integers.
3. Use mathematical induction to prove that for any natural number n, 3 divides (2^(2n) - 1 ). Specify each step in the process clearly.
Thank you soo much (:
do you know the method of mathematical induction? what have you tried?
Let $P(n):$ 3 divides $2^{2n} - 1$ for all $n \in \mathbb{N} = \{ 0,1,2,3, \cdots \}$
Clearly $P(0)$ is true. (check this!)
Assume $P(n)$ is true. Now we need to show that this implies $P(n + 1)$ is true.
Note that since $P(n)$ is true, we have
$2^{2n} - 1 = 3k$ for some $k \in \mathbb{Z}$.
So, $P(n + 1) = 2^{2n + 2} - 1$
$= 4 \cdot 2^{2n} ~{\color{red} + 4 - 4}~ - 1$
$= \cdots$
3. okay i think i got the #2
for the first 1 do i assume 3 cases for 0,1,2
and the third one how did u get
$<br /> <br /> P(n + 1) = 2^{2n + 2} - 1<br />$
or did u multiply everything by 2^2 in which case u get
$<br /> <br /> P(n + 1) = 2^{2n + 2} - 4 = 12p<br /> 2^(2n + 2) - 1 = 3(4n + 1)<br />$
and does 2n + 2 = 2k
o.o i did these steps but i ended up getting 9/10 i don't know where i went wrong..
[/FONT][/COLOR]
4. Originally Posted by treetheta
okay i think i got the #2
ok
for the first 1 do i assume 3 cases for 0,1,2
for (=>) direction you need only 2 cases: 1 (mod 3) and 2 (mod 3)
for (<=) direction, use the contrapositive
and the third one how did u get
$<br /> <br /> P(n + 1) = 2^{2n + 2} - 1<br />$
replace n with (n + 1)
or did u multiply everything by 2^2 in which case u get
$<br /> <br /> P(n + 1) = 2^{2n + 2} - 4 = 12p<br /> 2^(2n + 2) - 1 = 3(4n + 1)<br />$
and does 2n + 2 = 2k
that's not what i did
o.o i did these steps but i ended up getting 9/10 i don't know where i went wrong..
[/font][/color]
the answer here is not a number...
5. replace n with (n + 1)
i see how u got 2k + 2 now
that's not what i did
and sorry the latex came out wrong
$P(n + 1) = 2^{2n + 2} - 4 = 12p$
$2^(2n + 2) - 1 = 3(4n + 1)$
so n = 2k
the answer here is not a number...
and that wasn't the answer that is what i got on a marking scale
6. Originally Posted by treetheta
i see how u got 2k + 2 now
and sorry the latex came out wrong
$P(n + 1) = 2^{2n + 2} - 4$
no, this is not P(n + 1)
$= 12p$
how??
$2^(2n + 2) - 1 = 3(4n + 1)$
so n = 2k
no, and you are not looking for n in the first place. are you sure you know what the method of mathematical induction is?
look up how to prove something via mathematical induction, then re-read my post and make sure you understand what is going on. then just pick up where i left off.
7. okay, I've given up on what I did,
and now I'm working on what you did
Why did you multiply and add and subtract 4 on both sides
can you explain it to me or link me to something that will help me understand it
8. Omg, thanks jhevon i gett it ! : D
9. Originally Posted by treetheta
Omg, thanks jhevon i gett it ! : D
you got it? really? great!
that adding and subtracting 4 (hence adding zero) is a neat trick, right?
10. yea it was hard for me to follow why u did it
but now it all makes sense : D
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273713827133179, "perplexity_flag": "middle"}
|
http://qig.itp.uni-hannover.de/qiproblems/SIC_POVMs_and_Zauner's_Conjecture
|
• SEARCH
• Toolbox
SIC POVMs and Zauner's Conjecture
From OpenQIProblemsWiki
Cite as: http://qig.itp.uni-hannover.de/qiproblems/23
Previous problem: Asymptotic cloning is state estimation?
Next problem: Secret key from all entangled states
Problem
We will give three variants of the problem, each being stronger than its predecessor. The terminology of problems 1 and 2 is taken mainly from [1]. For problem 3 see [2] and [3].
Problem 1: SIC-POVMs
A set of d2 normed vectors { | φi > }i in a Hilbert space of dimension d constitutes a set of equiangular lines if their mutual inner products $\left|<\phi_i|\phi_j>\right|^2$ are independent of the choice of $i \neq j$. It can be shown [1] that
• the associated projection operators sum to a multiple of unity and thus induce a POVM (up to normalization) and that
• these operators are linearly independent and hence any quantum state can be reconstructed from the measurement statistics $p_i := tr\left(|\phi_i> <\phi_i| \rho \right)$ of the POVM.
A POVM that arises in this way is called symmetric informationally complete, or a SIC-POVM for short.
The most general form of the problem is: decide if SIC-POVMs exists in any dimension d.
Problem 2: Covariant SIC-POVMs
For a given basis $\left\{|q>\right\}_{q=0\ldots d-1}$ of the Hilbert space, define the shift operator X and clock operator Z respectively by the relations
X | q > : = | q + 1 >
$Z |q> := e^{i \frac {2 \pi} d q} |q>$,
where arithmetic is modulo d. Further, define the Weyl operators
$w(p,q) = Z(p) X(q) \quad (*)$
for all $p, q \in {\mathbf{Z}}_d$. We will refer to the group generated by (*) as the Heisenberg group. It is also known as the Weyl-Heisenberg group or Generalized Pauli group.
A vector | φ > is called a fiducial vector with respect to the Heisenberg group if the set
$\left\{w(p,q) \, |\phi> <\phi| \, w(p,q)^\ast \right\}_{p,q=0\ldots d-1}\quad (**)$
induces a SIC-POVM. Such a SIC-POVM is said to be group covariant. The definition makes sense for any group of order at least d2. However, we will focus on the Heisenberg group in what follows.
The problem: decide if group covariant SIC-POVMs exist in any dimension d.
Problem 3: Zauner's Conjecture
The normalizer of the Heisenberg group within the unitaries U(d) is called the Clifford group. There exists an element z of the Clifford group which is defined via its action on the Weyl operators as
$z \,w(p,q) z^\ast = w(q-p,-p).$
Zauner's conjecture, as formulated in [3], runs: in any dimension d, a fiducial vector can be found among the eigenvectors of Z.
Background
Besides their mathematical appeal, SIC-POVMs have obvious applications to quantum state tomography. The symmetry condition assures that the possible measurement outcomes are in some sense maximally complementary.
Partial Results
• In the context of quantum information, the problem seems to have been tackled first by Gerhard Zauner in his doctorial thesis [2] in 1999. To our knowledge, the results were neither published nor translated into English, which caused some confusion in the English literature, as to what Zauner had actually conjectured (Refer e.g. to the first vs. the second version of [3] on the arXiv server.). Zauner analyzed the spectrum of z. He listed analytical expressions for fiducial vectors in dimension 2, 3, 4, 5 and numerical expressions for d = 6,7. He noted that for dimension 8 an analytic SIC-POVM is known, which is covariant under the action of the threefold tensor product of the two dimensional Heisenberg group.
• Wide interest in the problem arose with the 2003 paper by Renes et. al. [1]. Building on concepts from frame theory, the authors reduced the task of numerically finding fiducial vectors to a non-convex global optimization problem. Using this method, they presented numerical fiducial vectors for all dimensions up to 45 and counted the number of distinct covariant SIC-POVMs up to dimension 7. The question of whether those vectors were eigenstates of a Clifford operation was left open (but see below). Further, four groups other than the Heisenberg group were numerically found to induce SIC-POVMs in the sense of (**). The authors showed that a SIC-POVM corresponds to a spherical 2-design (A finite set X of unit vectors is a t-design if the average of any t-th order polynomial over X is the same as the average of that polynomial over the entire unit sphere.). The same assertion was proven by Klappenecker and Rötteler in [4] and was apparently known to Zauner (see Remark 3 in [4]).
• In [5] Grassl used a computer algebra system capable of symbolic calculations to prove Zauner's conjecture for d = 6. He remarked that elements of the Clifford group map fiducial vectors onto fiducial vectors. Building on that observation, he could account for all 96 covariant SIC-POVMs that were reported to exist for d = 6 in [1].
• Appleby in [3] gave a detailed description of the Clifford group and extended it by allowing for anti-unitary operators. He verified that the numeric solutions of [1] were compatible with Zauner's conjecture and analyzed their stability groups inside the Clifford group (A similar analysis can be performed using discrete Wigner functions, as will be reported in [6]. Appleby goes on to present analytical expressions for fiducial vectors in dimension 7 and 19 and specifies an infinite sequence of dimensions for which he conjectures that solutions can be found more easily.
• Inspired by a construction that links finite geometries to MUBs, there have been some speculations by Wootters about whether SIC-POVMs can be linked to finite affine planes [7]. The same line of thought was pursued by Bengtsson and Ericsson in [8]. However, the existence of such a construction remains an open problem. The results by Grassl are of some relevance here, as it is known that affine planes of order 6 do not exist.
Literature
1. ↑ 1.0 1.1 1.2 1.3 1.4 J. M. Renes, R. Blume-Kohout, A. J. Scott, and C. M. Caves, Symmetric Informationally Complete Quantum Measurements, J. Math. Phys. 45, 2171 (2004) and http://xxx.lanl.gov/abs/quant-ph/0310075(2003).
2. ↑ 2.0 2.1 G. Zauner, Quantendesigns -- Grundzüge einer nichtkommutativen Designtheorie, Doctorial thesis, University of Vienna, 1999 (available online at http://www.mat.univie.ac.at/~neum/papers/physpapers.html
3. ↑ 3.0 3.1 3.2 3.3 D. M. Appleby, SIC-POVMs and the Extended Clifford Group, http://xxx.lanl.gov/abs/quant-ph/0412001
4. ↑ 4.0 4.1 A. Klappenecker, and M. Rötteler, Mutually Unbiased Bases are Complex Projective 2-Designs, http://xxx.lanl.gov/abs/quant-ph/0502031
5. ↑ M. Grassl, On SIC-POVMs and MUBs in dimension 6, http://xxx.lanl.gov/abs/quant-ph/0406175
6. ↑ D. Gross, Diploma thesis, University of Potsdam, 2005
7. ↑ W. K. Wootters, Quantum measurements and finite geometry, http://xxx.lanl.gov/abs/quant-ph/0406032
8. ↑ I. Bengtsson and \AA. Ericsson, Mutually Unbiased Bases and The Complementarity Polytope, http://xxx.lanl.gov/abs/quant-ph/0410120
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9087578058242798, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/discrete-math/82407-clarification-counting-problem.html
|
Thread:
1. Clarification of counting problem
Number of ways to write 10 as a sum of 4 non negative numbers.
I get why this is 13 choose 3. But then the question also asks what happens when every number has to be atleast one, the answer is 9 choose 3. Do you just subtract the 4 zeros that are not going to be used or what?
Thanks for the help.
2. Originally Posted by smellatron
Number of ways to write 10 as a sum of 4 non negative numbers. I get why this is 13 choose 3. But then the question also asks what happens when every number has to be atleast one, the answer is 9 choose 3. Do you just subtract the 4 zeros that are not going to be used or what?
The first question is $\binom{10+4-1}{10}$, the number of ways putting ten identical objects (in this case 1’s) into four different cells (in this case variables).
$\binom{N+k-1}{N}$ is the number of ways putting N identical objects into k different cells.
But in both of these cases some cells could be empty.
If we want no cell to be empty, we play a mind game.
Go ahead and put a 1 into each variable leaving 6 to distribute into the 4 variables.
This gives $\binom{6+4-1}{6}$.
For the general case:
$\binom{N-1}{N-k}$ is the number of ways putting N identical objects into k different cells with no cell empty.
3. Placing objects into cells
Hello smellatron
Originally Posted by smellatron
Number of ways to write 10 as a sum of 4 non negative numbers.
I get why this is 13 choose 3. But then the question also asks what happens when every number has to be atleast one, the answer is 9 choose 3. Do you just subtract the 4 zeros that are not going to be used or what?
Thanks for the help.
Sorry to be fussy, but I'm not sure about this. I think you and Plato are answering a slightly different question, which is: How many solutions are there to the equation $a_1+a_2+a_3+a_4 = 10$, where the $a_i \in \mathbb{N}_0$ (including zero)? And, secondly, where the $a_i \in \mathbb{N}$ (excluding zero).
If this is what you meant, then that's fine. But the difficulty with the problem as you stated it is, of course, is that there is only one solution that uses, for example, the numbers 10,0,0,0. However, the question as I have stated it above (and which, I believe, you have both answered) has four distinct solutions; (10,0,0,0),(0,10,0,0), (0,0,10,0),(0,0,0,10).
Indeed, in Plato's solution, he mentions 'different cells', whereas in the question as you stated it the 'cells' are, I think, all identical, as well as the objects that are to be placed in them.
As far as your saying that you understand where $\binom{13}{3}$ comes from, I can't say that I think it's at all obvious, and would be interested to hear why you think it is. I think it's probably easier to understand the second case where the numbers have to be non-zero.
My reasoning for this is as follows:
If each number has to be non-zero, imagine 10 1's in a line. There are therefore 9 gaps separating each number 1 from its neighbour. Then we have to choose 3 of these gaps into which to place a barrier, thus dividing the 1's into 4 groups. This can be done in $\binom{9}{3}$ ways.
To answer the first question (where zeros are allowed), the only method that I know is to imagine 14 1's with, therefore, 13 gaps from which 3 must again be chosen. This can be done in $\binom{13}{3}$ ways. In each group thus formed, there is a non-zero number of 1's. We then simply discard one 1 from each of the four groups, leaving 10 1's as required.
Is there an easier way of looking at this that I am overlooking?
Grandad
4. That description does make alot of sense. The 14 dots was how the teacher showed it to us. Thanks again for the help.
5. Originally Posted by Grandad
I think you and Plato are answering a slightly different question, which is: How many solutions are there to the equation $a_1+a_2+a_3+a_4 = 10$, where the $a_i \in \mathbb{N}_0$ (including zero)? And, secondly, where the $a_i \in \mathbb{N}$ (excluding zero).
I believe, you have both answered has four distinct solutions; (10,0,0,0),(0,10,0,0), (0,0,10,0),(0,0,0,10).
Grandad, when I first read this question, the interpretation I made was exactly what you wrote.
However, the numbers in OP made no sense under that meaning.
Your reading involves the partition of an integer. How many ways can 10 be represented in by four or fewer summands (the ‘fewer’ is the zero case): $P(10;4)=23$.
Here are some of those 23: $10=10$, $10=9+1$, $10=1+2+3+4$.
But the number 23 makes no sense of the proposed $\binom{13}{3}$.
Whereas, the other reading is exactly that.
The nonzero case is “How many ways can 10 be represented in by exactly four summands?”: $P(10;4)-P(10;3)=9$.
There reason that I doubt that the question refers to partitions of an integer is due to the difficulty in calculating $P(N;k)$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9656775593757629, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/86539?sort=votes
|
## Closest 3D rotation matrix in the Frobenius norm sense
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Given a 3 by 3 matrix $M$ I would like to find the rotation matrix $R$ minimizing the Frobenius norm:
\begin{equation} \|R-M\|_F \end{equation}
Is there a closed form solution for $R$, or is it possible to express $R$ as the solution to a linear system? I would like to avoid gradient descent if possible.
-
## 1 Answer
Let $M=U\Sigma V$ be the singular value decomposition of $M$, then $R=UV$. If you want $R$ to be a proper rotation (i.e. $\det R=1$) and $UV$ is not, replace the singular vector $\mathbf{u}_3$ associated with the smallest singular value of $M$ with $-\mathbf{u}_3$ in the $U$ matrix. The appropriate reference for this answer is http://www.ma.man.ac.uk/~nareports/narep161.pdf.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.833243191242218, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/101163?sort=votes
|
## probably Lagrange or Legendre, Pell variant
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Evidently Legendre showed that, for positive primes, if $p \equiv 3 \pmod 8$ there is an integral solution to $x^2 - p y^2 = -2.$ Next, if $q \equiv 7 \pmod 8$ there is an integral solution to $x^2 -q y^2 = 2.$
What I would like, and seems to be true, is $x^2 - 2 p y^2 = -2$ for $p \equiv 3 \pmod 8,$ and $x^2 - 2 q y^2 = 2$ for $q \equiv 7 \pmod 8.$ It is probably in Mordell's book, which I do not have here.
Mordell does $x^2 - r y^2 = -1$ for any prime $r \equiv 1 \pmod 4,$ I do remember that. Anyway, I am writing up something and this issue came up.
P.S. Note these are the same as $2x^2 - p y^2 = -1$ if $p \equiv 3 \pmod 8,$ while if $q \equiv 7 \pmod 8$ there is $2x^2 -q y^2 = 1.$
-
Dirichlet generalized Legendre's technique to composite values of m in $x^2 - my^2 = d$; whether he actually treated the cases you are interested in is irrelevant since the method of proof can be transferred easily. I might have given a few references in my papers on descent on Pell conics. – Franz Lemmermeyer Jul 2 at 18:49
@Franz thanks. I looked at some of your homework solution pdfs and did not see this. I will look at the Pell conics items. – Will Jagy Jul 2 at 19:01
## 1 Answer
According to Dickson (History of numbers Vol. 2, Ch. XII, p.376), Göpel (Jour. für Math. 45, 1853, 1-14) proved your conjectures "by use of continued fractions".
Actually Jour. für Math. stands for Crelle's journal, and Göpel's paper (which is his 1835 doctoral dissertation) is available online here.
-
Thanks. I have Dickson's history at home, I did not think to check there. Also, this means Dickson was definitely aware of this when he wrote the book Studies in 1930. I did not find the indefinite ternary quadratic form I wanted in the tables on pages 150-151, then I realized that it would all work out if the binary forms I mentioned behaved as Gopel proved. Here we go, Crelle's is nowadays Journal für die reine und angewandte Mathematik. Very good. – Will Jagy Jul 2 at 21:37
It's in Latin. Wow. – Will Jagy Jul 2 at 21:40
I tried to find a more readable account, but no luck. It also took me a bit of time to realize that "Jour. für Math." meant "Journal für die reine und angewandte Mathematik". I think the title of the journal never changed, "Crelle's journal" has always been folklore. – GH Jul 2 at 21:47
According to en.wikipedia.org/wiki/Adolph_Göpel, Gopel died in 1847, 34 years old, and "after his death some of his works were published in Crelle's Journal." More onformation about Gopel at www-history.mcs.st-andrews.ac.uk/Biographies/…. If you go to Google Scholar and type in gopel quadratic you'll find a few papers that mention his work. I haven't checked to see whether any of them give his, or other, proofs of the results in question. – Gerry Myerson Jul 2 at 23:26
In volume 3, page 19, Dickson gives a bit more detail, but he seems to be reporting on writing $p$ and $2p$ as $x^2 \pm 2 y^2.$ Anyway, continued fractions are fine by me, I am happy using "reduced" indefinite binary quadratic forms, and demonstrating equivalence by finding the cycle of neighboring forms. This is a disguise for continued fractions. So if $p \equiv 3 \pmod 8,$ then $p x^2 - 2 y^2 \equiv x^2 - 2 p y^2,$ and if $q \equiv 7 \pmod 8,$ then $2 x^2 - q y^2 \equiv x^2 - 2 q y^2.$ – Will Jagy Jul 2 at 23:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404782652854919, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/199676/what-are-imaginary-numbers/200015
|
# What are imaginary numbers?
At school I really struggled to understand the concept of imaginary numbers. My teacher told us that an imaginary number is a number which has something to do with the square root of -1. When I tried to calculate the square root of -1 on my calculator, it gave me an error. To this day I do not understand imaginary numbers. It makes no sense to me at all. Is there someone here who totally gets it and can explain it?
Why is the concept even useful?
-
21
I don't get it. – Sachin Kainth Sep 20 '12 at 12:28
7
@SachinKainth: What is a real number? I mean, what do you understand a real number to be and why do you not struggle with that concept? If people see that you understand such numbers for particular reasons, they may be able to give similar reasons for the existence of complex numbers, or at least gauge what would be required to convince you that complex numbers are useful. – Michael Albanese Sep 20 '12 at 12:54
37
$\mathbb{R}eally$ exist? – user1729 Sep 20 '12 at 13:14
7
Real numbers don't "exist" either, they're all just mathematicians' ideas. – akkkk Sep 20 '12 at 13:30
3
@ivan Your comment is misleading. A complex number is a number on the plane. An imaginary number is merely the second coordinate in 2D, the imaginary part of the complex number. – Matt N. Sep 20 '12 at 15:56
show 51 more comments
## 16 Answers
Let's go through some questions in order and see where it takes us. [Or skip to the bit about complex numbers below if you can't be bothered.]
What are natural numbers?
It took quite some evolution, but humans are blessed by their ability to notice that there is a similarity between the situations of having three apples in your hand and having three eggs in your hand. Or, indeed, three twigs or three babies or three spots. Or even three knocks at the door. And we generalise all of these situations by calling it 'three'; same goes for the other natural numbers. This is not the construction we usually take in maths, but it's how we learn what numbers are.
Natural numbers are what allow us to count a finite collection of things. We call this set of numbers $\mathbb{N}$.
What are integers?
Once we've learnt how to measure quantity, it doesn't take us long before we need to measure change, or relative quantity. If I'm holding three apples and you take away two, I now have 'two fewer' apples than I had before; but if you gave me two apples I'd have 'two more'. We want to measure these changes on the same scale (rather than the separate scales of 'more' and 'less'), and we do this by introducing negative natural numbers: the net increase in apples is $-2$.
We get the integers from the naturals by allowing ourselves to take numbers away: $\mathbb{Z}$ is the closure of $\mathbb{N}$ under the operation $-$.
What are rational numbers?
My friend and I are pretty hungry at this point but since you came along and stole two of my apples I only have one left. Out of mutual respect we decide we should each have the same quantity of apple, and so we cut it down the middle. We call the quantity of apple we each get 'a half', or $\frac{1}{2}$. The net change in apple after I give my friend his half is $-\frac{1}{2}$.
We get the rationals from the integers by allowing ourselves to divide integers by positive integers [or, equivalently, by nonzero integers]: $\mathbb{Q}$ is (sort of) the closure of $\mathbb{Z}$ under the operation $\div$.
What are real numbers?
I find some more apples and put them in a pie, which I cook in a circular dish. One of my friends decides to get smart, and asks for a slice of the pie whose curved edge has the same length as its straight edges (i.e. arc length of the circular segment is equal to its radius). I decide to honour his request, and using our newfangled rational numbers I try to work out how many such slices I could cut. But I can't quite get there: it's somewhere between $6$ and $7$; somewhere between $\frac{43}{7}$ and $\frac{44}{7}$; somewhere between $\frac{709}{113}$ and $\frac{710}{113}$; and so on, but no matter how accurate I try and make the fractions, I never quite get there. So I decide to call this number $2\pi$ (or $\tau$?) and move on with my life.
The reals turn the rationals into a continuum, filling the holes which can be approximated to arbitrary degrees of accuracy but never actually reached: $\mathbb{R}$ is the completion of $\mathbb{Q}$.
What are complex numbers? [Finally!]
Our real numbers prove to be quite useful. If I want to make a pie which is twice as big as my last one but still circular then I'll use a dish whose radius is $\sqrt{2}$ times bigger. If I decide this isn't enough and I want to make it thrice as big again then I'll use a dish whose radius is $\sqrt{3}$ times as big as the last. But it turns out that to get this dish I could have made the original one thrice as big and then that one twice as big; the order in which I increase the size of the dish has no effect on what I end up with. And I could have done it in one go, making it six times as big by using a dish whose radius is $\sqrt{6}$ times as big. This leads to my discovery of the fact that multiplication corresponds to scaling $-$ they obey the same rules. (Multiplication by negative numbers responds to scaling and then flipping.)
But I can also spin a pie around. Rotating it by one angle and then another has the same effect as rotating it by the second angle and then the first $-$ the order in which I carry out the rotations has no effect on what I end up with, just like with scaling. Does this mean we can model rotation with some kind of multiplication, where multiplication of these new numbers corresponds to addition of the angles? If I could, then I'd be able to rotate a point on the pie by performing a sequence of multiplications. I notice that if I rotate my pie by $90^{\circ}$ four times then it ends up how it was, so I'll declare this $90^{\circ}$ rotation to be multiplication by '$i$' and see what happens. We've seen that $i^4=1$, and with our funky real numbers we know that $i^4=(i^2)^2$ and so $i^2 = \pm 1$. But $i^2 \ne 1$ since rotating twice doesn't leave the pie how it was $-$ it's facing the wrong way; so in fact $i^2=-1$. This then also obeys the rules for multiplication by negative real numbers.
Upon further experimentation with spinning pies around we discover that defining $i$ in this way leads to numbers (formed by adding and multiplying real numbers with this new '$i$' beast) which, under multiplication, do indeed correspond to combined scalings and rotations in a 'number plane', which contains our previously held 'number line'. What's more, they can be multiplied, divided and rooted as we please. It then has the fun consequence that any polynomial with coefficients of this kind has as many roots as its degree; what fun!
The complex numbers allow us to consider scalings and rotations as two instances of the same thing; and by ensuring that negative reals have square roots, we get something where every (non-constant) polynomial equation can be solved: $\mathbb{C}$ is the algebraic closure of $\mathbb{R}$.
[Final edit ever: It occurs to me that I never mentioned anything to do with anything 'imaginary', since I presumed that Sachin really wanted to know about the complex numbers as a whole. But for the sake of completeness: the imaginary numbers are precisely the real multiples of $i$ $-$ you scale the pie and rotate it by $90^{\circ}$ in either direction. They are the rotations/scalings which, when performed twice, leave the pie facing backwards; that is, they are the numbers which square to give negative real numbers.]
What next?
I've been asked in the comments to mention quaternions and octonions. These go (even further) beyond what the question is asking, so I won't dwell on them, but the idea is: my friends and I are actually aliens from a multi-dimensional world and simply aren't satisfied with a measly $2$-dimensional number system. By extending the principles from our so-called complex numbers we get systems which include copies of $\mathbb{C}$ and act in many ways like numbers, but now (unless we restrict ourselves to one of the copies of $\mathbb{C}$) the order in which we carry out our weird multi-dimensional symmetries does matter. But, with them, we can do lots of science.
I have also completely omitted any mention of ordinal numbers, because they fork off in a different direction straight after the naturals. We get some very exciting stuff out of these, but we don't find $\mathbb{C}$ because it doesn't have any natural order relation on it.
Historical note
The above succession of stages is not a historical account of how numbers of different types are discovered. I don't claim to know an awful lot about the history of mathematics, but I know enough to know that the concept of a number evolved in different ways in different cultures, likely due to practical implications. In particular, it is very unlikely that complex numbers were devised geometrically as rotations-and-scalings $-$ the needs of the time were algebraic and people were throwing away (perfectly valid) equations because they didn't think $\sqrt{-1}$ could exist. Their geometric properties were discovered soon after.
However, this is roughly the sequence in which these number sets are (usually) constructed in ZF set theory and we have a nice sequence of inclusions $$1 \hookrightarrow \mathbb{N} \hookrightarrow \mathbb{Z} \hookrightarrow \mathbb{Q} \hookrightarrow \mathbb{R} \hookrightarrow \mathbb{C}$$
Stuff to read
• The other answers to this question give very insightful ways of getting $\mathbb{C}$ from $\mathbb{R}$ in different ways, and discussing how and why complex numbers are useful $-$ there's only so much use to spinning pies around.
• A Visual, Intuitive Guide to Imaginary Numbers $-$ thanks go to Joe, in the comments, for pointing this out to me.
• Some older questions, e.g. here and here, have some brilliant answers.
I'd be glad to know of more such resources; feel free to post any in the comments.
-
87
I registered to math.stackexchange just to vote for this wonderful answer! – lukas.pukenis Sep 20 '12 at 13:35
37
This is probably the best plain-English, grade-school-level explanation of the various sets of numbers I have ever heard. WAY better than anything my teachers on the subject could come up with. – KeithS Sep 20 '12 at 14:42
12
+1 but you didn't complete the definition that the `imaginary number` line as simply the axis orthogonal to the `real number` line in the `complex number` plane so that every `complex number` can be expressed as the sum of a `real number` and an `imaginary number`. – StarNamer Sep 20 '12 at 14:59
7
– Joe Sep 20 '12 at 15:01
5
Oh, please, please, please, expand your answer to quaternions, please! – Daniel Excinsky Sep 21 '12 at 7:12
show 34 more comments
You ask why imaginary numbers are useful. As with most extensions of number systems, historically such generalizations were invented because they help to simplify certain phenomena in existing number systems. For example, negative numbers and fractions permit one to state in a single general form the quadratic equation and its solution (older solutions bifurcated into many cases, avoiding negative numbers and fractions). One of the primary reasons motivating the invention of complex numbers is that they serve to linearize what would otherwise be nonlinear phenomena - thus greatly simplifying many problems. Here are some examples.
Consider the problem of representing integers as sums of squares $\rm\: n = x^2 + y^2$. Early solutions to this and related problems employed a complicated arithmetic of binary quadratic forms. Such arithmetic was quite intricate and often very nonintuitive, e.g. even the proof of associativity of composition of such forms was a tour de brute force, occupying pages of unmotivated computations in Gauss' Disq. Arith. But this quadratic arithmetic of binary quadratic forms can be linearized. Indeed, by factorization $\rm\: x^2 + y^2 = (x+y{\it i})(x-y{\it i}),$ which allows us to view sums of squares as norms of Gaussian integers $\rm\:x+y{\it i},\ \ x,y\in \Bbb Z.\:$ But just like the rational integers $\Bbb Z,$ these "imaginary" integers have a Euclidean algorithm, so enjoy unique factorization into primes. By considering all the possible factorizations of $\rm\:n\:$ in the Gaussian integers we obtain all the possible representations of $\rm\:n\:$ as a sum of squares. In a similar way, "rational, real" arithmetic of integral quadratic forms becomes much simpler by passing to the "irrational" and/or "imaginary" arithmetic of quadratic number fields. This line of research led to the discovery of ideals and modules, fundamental linear structures at the heart of modern number theory and algebra.
Thus, by factorizing completely over $\Bbb C$, we have reduced the complicated nonlinear arithmetic of binary quadratic forms to the simpler, linear arithmetic of Gaussian integers, i.e. to the more familiar arithmetical structure of a unique factorization domain (in fact a Euclidean domain). Analogous linearization serves to simplify many problems. For example, when integrating or summing rational functions (quotients of polynomials), by factoring denominators over $\Bbb C$ (vs. $\Bbb R)$ and taking partial fraction decompositions, the denominators are at worst powers of linear (vs. quadratic) polynomials - which greatly simplifies matters. More generally, when solving constant coefficient differential or difference equations (recurrences), by factoring their characteristic (operator) polynomials over $\Bbb C,$ we reduce to solutions of linear (vs. quadratic) differential or difference equations. In the same way, there are many real problems (over $\Bbb R)$ whose simplest solutions are obtained by an imaginary detours (over $\Bbb C).$ Perhaps readers will mention more such problems in the comments.
-
2
That's up to now by far the best answer about the usefulness. +1 – celtschk Sep 20 '12 at 15:56
Beautifully stated. I wasn't aware of the Gaussian integers, very cool – acjohnson55 Sep 20 '12 at 23:26
I went to school for electrical engineering (7 years total) and we used imaginary numbers all over the place.
Even with all that schooling, this is probably the clearest explanation of imaginary numbers I've seen:
http://betterexplained.com/articles/a-visual-intuitive-guide-to-imaginary-numbers/
HTH.
-
I especially loved the historical note showing how offended people in the mid-1700's were by negative numbers... it clarifies the fact that "imaginary" and "negative" are just labels for parts of the plane. – Jerry Andrews Sep 20 '12 at 21:22
I had seen this article before, and I must agree -- IT IS EXCELLENT. If you're grappling with the concept of imaginary numbers, you MUST check it out! – Charlie Flowers Sep 21 '12 at 18:15
That article is great! This should've been the accepted answer! – Meysam Sep 24 '12 at 11:13
This link was my first thought - completely worth the time to read – Deebster Sep 24 '12 at 11:55
Well, as you know there's no real number whose square is negative. But now imagine numbers which are. Let's call them imaginary. Now what properties would such numbers have? Well, there would be for example a number whose square is $-1$. Let's call that number the imaginary unit and give it the name $\mathrm i$. Now if we multiply this number with some real number, that is, use $r\mathrm i$, we get a number whose square is $(\mathrm ir)^2 = \mathrm i^2r^2 = -r^2$. Since all positive numbers can be written as $r^2$, we get that all negative numbers can be written as $(\mathrm ir)^2$. Thus the products $\mathrm ir$ are our imaginary numbers. We also see that $(-\mathrm i)^2 = (-1)^2\mathrm i^2 = -1$, so there are actually two numbers whose square is $-1$ (which makes sense because, after all, there are also two numbers whose square is $1$, namely $1$ and $-1$).
OK, but what happens if we add a real number and one of out imaginary numbers. Well, now things get complex. We get general complex numbers.
OK, but how do we know that we've not just made some nonsense, similar to the nonsense that we get when we invent a number $o$ so that $0o=1$? Well to see that, we recognize that all complex numbers are of the form $x+\mathrm iy$ with real numbers $x$ and $y$, and thus the pair $(x,y)$ completely specifies a complex number. Therefore now we re-derive the complex numbers as pairs of real numbers, but now using proper mathematical instruments so we know for sure that whatever we do is well defined. Since doing that we arrive at the very same structure which we just had derived in a quite informal way, we know that the complex numbers are a sound mathematical structure.
OK, now that we have invented the imaginary and complex numbers, are they useful for something? Well, indeed they are. For example, several mathematical statements are much easier in complex numbers than in real numbers. For example, with complex numbers, every polynomial can be written in the form $a(x-x_1)(x-x_2)\cdots(x-x_n)$. With real numbers, this is impossible for polynomials having for example factors of the form $(x^2+1)$. Moreover, we have the very useful relation $\mathrm e^{\mathrm i\phi} = \cos\phi + \mathrm i\sin\phi$. So forget about complicated addition theorems for sine and cosine. Just rewrite your formula in complex exponentials and enjoy the simple relation $\mathrm e^{\mathrm i(\alpha+\beta)}=\mathrm e^{\mathrm i\alpha}\mathrm e^{\mathrm i\beta}$.
Finally, if you want to do quantum physics (and almost all modern physics is quantum physics) you'll find that you have to use complex numbers.
-
The term "imaginary" is somewhat disingenuous. It's a real concept, with real (at least theoretical) application, just like all the "real" numbers.
Think back to that algebra class. You were asked to solve a polynomial equation; that is, find all the values of X for which the entire equation evaluates to zero. You learned to do this by polynomial factoring, simplifying the equation into a series of first-power terms, and then it was easy to see that if any one of those terms evaluated to zero, then everything else, no matter its value, was multiplied by zero, producing zero.
You tried this on a few quadratic equations. Sometimes you got one answer (because the equation was $y=ax^2$ and so the only possible answer was zero), sometimes you got two (when the equation boiled down to $y= (x\pm n)(x \pm m)$, and so when $x=-m$ or $x=-n$ the equation was zero), and a couple of times, you got no answers at all (usually, an equation that breaks down to $y=(x+n)(x+m)$ doesn't evaluate to zero at $x=-m$ or $x=-n$).
In your algebra class, you're told this just happens sometimes, and the only way to make sure any factored term $(x\pm k)$ represents a real root is to plug in $-k$ for $x$ and solve. But, this is math. Mathematicians like things to be perfect, and don't like these "rules of thumb", where a method works sometimes but it's really just a "hint" of where to look. So, mathematicians looked for another solution.
This leads us to application of the quadratic formula: for $ax^2 + bx + c = 0$, $x=\dfrac{-b \pm \sqrt{b^2-4ac}}{2a}$. This formula is quite literally the solution of the general form of the equation for x, and can be derived algebraically. We can now plug in the coefficients, and find the values of $x$ where $ax^2 + bx + c=0$. Notice the square root; we're first taught, simply, that if $b^2-4ac$ is ever negative, then the roots you'd get by factoring the equation won't work, and thus the equation has no real roots. $b^2-4ac$ is called the determinant for this reason.
But, the fact that $b^2-4ac$ can be negative remains a thorn in our side; we want to solve this equation. It's sitting right in front of us. If the determinant were positive, we would have solved it already. It's that pesky negative that's the problem.
Well, what if there was something we could do, that conforms to the rules of basic algebra, to get rid of the negative? Well, $-m = m*-1$, so what if we took our term that, for the sake of argument, evaluated to $-36$, and made it $36*-1$? Now, because $\sqrt{mn} = \sqrt{m}\sqrt{n}$, $\sqrt{-36} = \sqrt{36}\sqrt{-1} = 6\sqrt{-1}$. We've simplified the expression by removing what we can't express as a real number from what we can.
Now to clean up that last little bit. $\sqrt{-1}$ is a common term whenever the determinant is negative, so let's abstract it behind a constant, like we do $\pi$ and $e$, to make things a little cleaner. $\sqrt{-1} = i$. Now, we can define some properties of $i$, particularly a curious thing that happens as you raise its power:
$$i^2 = \sqrt{-1}^2 = -1$$ $$i^3 = i^2*i = -i$$ $$i^4 = i^2*i^2 = -1*-1 = 1$$ $$i^5 = i^4*i = i$$
We see that $i^n$ transitions through four values infinitely as its power $n$ increases, and also that this transition crosses into and then out of the real numbers. Seems almost... circadian, rotational. As Clive N's answer so elegantly explains it, that's what imaginary numbers represent; a "rotation" of the graph through another plane, where the graph DOES cross the $x$-axis. Now, it's not actually really a circular rotation onto a new linear z-plane. Complex numbers have a real part, as you'd see by solving the quadratic equation for a polynomial with imaginary roots. We typically visualize these values in their own 2-dimensional plane, the complex plane. A quadratic equation with imaginary roots can thus be thought of as a graph in four dimensions; three real, one imaginary.
Now, we call $i$ and any product of a real number and $i$ "imaginary", because what $i$ represents doesn't have an analog in our "everyday world". You can't hold $i$ objects in your hand. You can't measure anything and get $i$ inches or centimeters or Smoots as your result. You can't plug any number of natural numbers together, stick a decimal point in somewhere and end up with $i$. $i$ simply is.
As far as having use outside "ivory tower" math disciplines, a big one is in economics; many economies of scale can be described as a function of functions of the number of units produced, with a cost term and a revenue term (the difference being profit or loss), each of these in turn defined by a function of the per-unit sale price or cost and the number produced. This all generally simplifies to a quadratic equation, solvable by the quadratic formula. If the roots are imaginary, so are the breakeven points (and your expected profits).
Another good one is in visualizations of complex numbers, and of their interactions when multiplied. The first one I was exposed to is a well-known series set, produced by taking an arbitrary complex number, squaring it ($(a+bi)^2 = (a+bi)(a+bi) = a^2 + 2abi + b^2i^2 = a^2-b^2 + 2abi$), and then adding back its original value. Repeated to infinity with this number, the series either converges to zero or diverges to infinity (with a few starting numbers exhibiting periodicity; they'll jump around infinitely between a finite number of points much like $i$ itself does). The set of all complex numbers for which the series does not diverge is the Mandelbrot set or M-set, and while the area of the graph is finite, its perimeter is infinite, making the graph of this set a fractal (one of the most highly-studied, in fact).
The Mandelbrot set can in turn be defined as the set of all complex numbers $c$ for which the Julia set $J(f)$ of $f(z)=z^2 + c \to z$ is connected. A Julia set exists for every complex polynomial function, but usually the most interesting and useful sets are the ones for values of $c$ that belong to the M-set; Julia fractals are produced much the same way as the M-set (by repeated iteration of the function to determine if a starting $z$ converges or diverges), but $c$ is constant for all points of the set instead of being the original point being tested. You can define Julia sets with all sorts of fractal shapes. These fractals, more accurately the iterative evaluation behind them, are used for pseudorandom number generation, computer graphics (the sets can be plotted in 3-d to create landscapes, or they can be used in shaders to define complex reflective properties of things like insect shells/wings), etc.
-
This question has already been answered quite thoroughly, but I just want to add that generally speaking, besides the whole numbers, none of the numbers we use "exist" in the real world. The only reason we have adopted extensions to the whole numbers to the natural, integer, rational, real, and complex sets in turn is because these extensions make problems solvable when thinking abstractly. At the end of the day, everything relates back to the whole numbers, however.
Most people use all of the sets except for complex numbers in very commonplace, everyday situations, which is why we've come to view everything up to the real numbers as being fairly intuitive, at least at first glance. (When you dig under the surface, everything gets a great deal more subtle, which is why there are people who study primarily numbers, who we call number theorists. But that's a whole other story.)
It's important to note that this progression isn't the only way to extend the whole numbers. There are hundreds of different arithmetics that have been designed, many not even based on the whole numbers. It's just that the usual extension applies to so many situations that come up commonly. (People who study universal algebra study the ways in which different possible math systems are alike and different. But that's a whole other story as well.)
Complex numbers have taken their place as the normal extension to the reals because they are so useful when dealing with polynomials, which happen to arise in a massive number of mathematical situations. They also allow the exponential function and the trigonometric functions to be viewed as special cases of the same thing, through Euler's Formula, which enables all sorts of great algebra tricks. Specifically, these sorts of functions pop up constantly when using either Taylor or Fourier series to simplify the process of working on problems with tricky transcendental functions. Complex numbers make dealing with these representations a breeze (relatively).
There are even further extensions. If instead of worrying about how to take the square root of -1, you worry about what happens past infinity, the real numbers can alternatively be expanded in several jumps to include hyperreals, superreals, and surreals. None of these systems have caught on, though, because we have alternative ways of dealing with the infinite and infinitesimal quantities in calculus that people find more powerful/convenient.
You can also zip on past complex numbers to Quaternions, and octonions on top of that. Vectors generalize all of the above. They aren't often though of as numbers, but are similar in that they generalize the concept of a property of an object having a mathematical value. Matrixes generalize vectors, and tensors generalize matrixes.
As you climb this ladder, you gain more and more mathematical power, but you start to lose properties that we expect of whole numbers. For complex numbers, order (greater than/less than) begins to become ambiguous. We generally don't think of vectors as "numbers" because we want all operations on vectors to work regardless of dimension, and most of the arithmetic operations don't really generalize. With matrixes, the commutative property goes out the window, and things start to get really weird, especially when the matrixes aren't square. And so forth.
All of this to say that numbers are best viewed as machinery. Different number systems are really only used to the extent which they make a given math situation or problem easier to think about. If you're an engineer, complex numbers do this in many, many situations, which justifies their added...complexity. If you're not an engineer, they're definitely worth understanding, but you may not find uses for them on a daily basis.
-
Complex numbers are just a handy way to handle two dimensional points and move them around. The key to it is understanding that i × i = −1 is just a simple by-product of moving these points around.
Real numbers correspond to numbers on a line (one dimension), which is usually how they are represented: a single axis where each number has a position. Operations on these real numbers have been defined to apply the two most basic transformations:
• Translation (addition)
Move a point by a given amount.
• Scaling (multiplication)
Move a point by an amount related to its value, e.g., two times further than it was.
Now, for a number of situations, you need to handle elements that are not on a line, but on a plane—you are now in two dimensions. When working in two dimensions, you need to know where you are horizontally and vertically, which you usually represent with two numbers. For instance, (3, 2) is “3 to the right, 2 up”. Complex numbers are designed to manipulate these elements with dimensions with “simple” mathematics.
We define i as being the vertical unit. 2i is “2 up”, −4i is “4 down”, and 3 + 2i is “3 to the right and 2 up”. We still can use translation and scaling like in the one-dimension case, but we would like to add something: rotation. How do I turn “2 to the right” into “2 up”?
The solution comes with multiplying by i. If 1 is “1 to the right” and 1 × i is “1 up”, then it means that multiplying by i is simply rotating by 90 degrees with point 0 as a center, counter-clockwise. 2 × i = 2i means “2 to the right” multiplied by i gives “2 up”.
And this is where it gets interesting: rotating the point “1 to the right” by 90 degrees gives “1 up”. Rotating it again by 90 degrees gives “1 left”. This means that multiplying 1 twice by i gives −1.
We have 1 × i × i = −1, and since i × i = −1, i is by definition the square root of −1.
-
1
"Complex numbers are just a handy way to handle two dimensional points and move them around." No, it's not that simple. The Schrödinger equation(the fundamental equation of quantum mechanics) uses them. Its use is essential, contrary to the use of them in, e.g. electronic circuits. IMO, complex numbers are indispensable to describe the laws of physics. – Makoto Kato Sep 21 '12 at 5:12
I think of i as just a symbol to represent an operation
``` √-1
```
When we want the square root of -1, just represent the whole statement with a symbol without evaluating it. This avoids the necessity of trying to explain it further, we don't need to map the answer to some real world concept, it's just a saved operation. We also know that the square root has the following property:
``` √x √x = x
```
No matter what the x. i.e.
``` √-1 √-1 = -1
```
Numbers are useful to me when they represent concepts in the real world. I don't map i to anything in the real world but with this ability to represent the operation, I can now manipulate it in algebraic expressions to ultimately get back to non-imaginary numbers that I do find useful. http://en.wikipedia.org/wiki/Euler%27s_formula
-
This argument is a loose argument for the sake of simplicity and because I know little about the subject. However, I think it may be good for non-mathematicians.
The simplistic view is to note that imaginary numbers (or Complex Numbers) are numbers that are defined by humans to describe quantities different from the numbers we use in our day-to-day life (unless you are a scientist). They have certain rules that are somewhat different than those we use to calculate with non-complex numbers. Hence, the subjects of (Complex Variables and Complex Analysis).
In mathematics, this is not strange. There are concepts that may look surprising until you study them carefully. For example, in Binary Numbers $1+1=10$. This result does not make any sense unless you understand and realize that the result is valid in the Binary System, domain or framework.
Personally, I thought about this before I read your question, and found that the problem comprehending such concepts could arise when you think about a concept outside its framework (or domain) and try to rationalize the results using our every day concepts.
For example trying to evaluate the $\sqrt{-1}$ on a regular calculator with no setting for Imaginary Arithmetic (the proper name is probably Complex Arithmetic). The calculator has to be set to the correct mode (or framework) to give a correct result. In fact, the software in your calculator should have given you a decent error message (or better yet the result of $i$ with a warning note).
Again, the same thing will happen if you are using your calculator in Binary mode to add $1+1$, you will not get the familiar $2$.
Many other examples can be driven around the same concept.
I hope this helps.
-
Check it out, I just learned this very recently:
Define the set of all ordered pairs $(x, y)$, call it $\mathbb{C}$, the set of complex numbers. We call $x$ the real part, and y the imaginary part. Now define multiplication like this:
$(x, y) \cdot (a,b) = (xa-yb, xb+ya)$
Now I'm not sure what that's supposed to be but observe:
$(0, 1)^2 = (0,1) \cdot(0,1)= (0-1, 0+0) = (-1, 0)$
Since the second number in the ordered pair is the imaginary part, (0,1) corresponds to $0+1\cdot i =i$. (In fact all complex numbers $(x, y)$ correspond to $x+yi$ ).
So I have just shown you how defining multiplication that way results in $i^2 = -1$.
But that multiplication isn't the multiplication I'm familiar with!, you say. Well guess what:
$a\cdot b = (a, 0) \cdot (b, 0) = (ab-0,0+0) = (ab, 0) = ab$
Yes it is!
So what I get from this is that essentially someone said: "What if there was a number that could be squared to get -1", and there you have it!
In fact, once you define addition like this:
$(a, b) + (x, y) = (a+b, x+y)$
I'm pretty sure you'll find this new system of complex numbers, $\mathbb{C}$, to be compatible with the old set of real numbers, $\mathbb{R}$ .
-
1
$(x,y)⋅(a,b)=(xa−yb,xb+ya)$ is just FOIL, as you're used to, observe: $(x+yi)\cdot (a+bi)$ is equivalent to the above. Just dawned on me XD, I'm a bit slow sometimes... – Christian Burke Sep 20 '12 at 21:53
If that blows your mind, check out polar coordinates. Polar coordinates: x + yi = r ⋅ e^(Θi) (where theta is the angle from 0 in radians, and r is the distance from 0). (ed. fixed formula getting messed up in paste). – fennec Sep 20 '12 at 22:39
One answer for why imaginary (and complex) numbers are useful is that they provide solutions to polynomial equations. (The square root of -1 part comes from trying to solve the equation $x^2 = -1$, which has no real number solutions.) The Fundamental Theorem of Algebra states that any polynomial equation with real (or even complex!) coefficients has solutions in the complex number system.
The theorem doesn't always seem very powerful, because a lot of times we discard all non-real solutions. But, this isn't always the case. Linear (ordinary) differential equations can be solved by first solving an associated polynomial equation. The complex solutions to the polynomial equation end up influencing the solution to the differential equation.
-
BTW, these differential equations I speak of can be used to solve quite a few real-world problems. The type of diff-eq I mentioned arises when studying oscillators. Forced oscillators, for example, can give rise to resonance. – Hugh Denoncourt Sep 21 '12 at 4:58
I'm surprised that, as far as I can see, no one has mentioned Paul Nahin's book "An imaginary tale : the story of √-1", pub: Princeton University Press ISBN 0-691-12798-0. It is a historical account of how √-1 became a necessary mathematical tool, and is written in an easy to read conversational style. I keep re-reading parts of it, like going over old ground again with a friend.
Two reviews give contrasting opinions: the first very favourable http://plus.maths.org/content/imaginary-tale; the second giving a long list of (alleged -- I haven't checked them independently) inaccuracies and omissions: http://www.ams.org/notices/199910/rev-blank.pdf
-
– Martin Sleziak Sep 26 '12 at 14:30
1
Thank you @Martin. I followed the link and am suitably chastened, but I did, and do, enjoy reading this book. I still think it might well be a nice introduction to complex numbers for someone who has not become acquainted with them already. – Harry Weston Sep 26 '12 at 14:50
The book still might be a good read (I'm not saying that the opinion of the reviewer should be taken as infallible). But link to a review might be useful for people reading your answer anyway. – Martin Sleziak Sep 26 '12 at 14:58
I just think of imaginary numbers as a definition. In the "real world" you cannot take the square root of $−1$ (which is what is happening with your calculator). However, we just define some "number", call it $i$, such that $i^2=−1$, add it to our number system and see what happens. So when you study imaginary numbers, you are just "seeing what happens".
One can then write every number as $a+ib$ where $a,b\in\mathbb{R}$ ($a$ and $b$ are real numbers) and $i^2=−1$. In his comment, ivan is taking this pair $(a,b)$ and pointing out that this pair defines a point on a plane (so, like, a piece of paper, as when you draw a graph). This is the way that people often view imaginary numbers - as points on the plane (and the plane is the Complex Plane, or an Argand diagram).
-
2
In the real world, there is no such number as -1, if we're going that route. The complex numbers are as real or as fake as the negative numbers. – acjohnson55 Sep 20 '12 at 23:28
1
Thus the quote marks. – user1729 Sep 21 '12 at 9:05
Imaginary numbers can also be thought of as a simple hack mathematicians use when they want to keep units separate.
Need a result with more than one component? make it a multiple of something that won't resolve. Pretty handy.
-
1
That works if all you're doing is adding or subtracting, but once you start doing more complicated operations than that, the real and imaginary parts interact. I think complex numbers are best thought of as describing situations where you have two quantities that can be thought of as being separate that interact in particular ways at times, like phases. In any honest use of complex numbers, the real and imaginary parts must have the same units. – acjohnson55 Sep 21 '12 at 20:48
This is probably not helpful for someone first learning about imaginary numbers, but my personal motivation for complex numbers is so that every linear transformation over the reals can be decomposed into a direct sum of shift plus scaling operators, ie. the Jordan normal form exists.
If you work with matrices/linear operators over the reals for long enough, this is something that "feels" like it should be true - like some sort of linear algebra version of the pigeonhole principle - but it doesn't quite work over the reals because of rotation matrices. On the other hand, rotation is "like" a scaling because if you apply a rotation twice, it's the same as rotating twice as much once, so one feels this shouldn't really be an obstruction.
In any case, complex numbers are exactly the number system you need to ensure Jordan normal form exists, where rotations are scalings of complex eigenvectors by a complex number.
-
@ I just want to add that generally speaking, besides the whole numbers, none of the numbers we use "exist" in the real world.
@ Sachin Kainth Hmmm. The question you ask is a deep one. The answer is far from easy. The quote above from one of the earlier answers is not right. I am not sure that "whole numbers" do "exist in the real world", let alone that "real numbers" do, or that complex numbers, quaternions or octonions don't.
The relationship between maths and the real world is extremely mysterious. It goes straight into the classic "God is a Mathematician" statement. I do not remotely have time to go into it properly here. Whole books have been written about it. Some better than others.
One viewpoint is simply to ignore questions of "reality" or relationship to the "real world" and say that complex numbers are exceedingly useful. Another approach is to go down the Clifford Algebra route, originally pioneered by William Clifford (1845-79) at my old college (Trinity, Cambridge) and which has recently seen an explosion of interest by theoretical physicists led perhaps by Stephen Gull at the Cavendish. Roger Penrose (Oxford) is also interesting on the subject of complex numbers.
But all that stuff requires some math sophistication to understand. An important prior question is "real numbers" or even fractions. There are many deeply puzzling and paradoxical questions about them. I suspect you have not been exposed to them.
Looking for someone who "totally gets it" is likely to be a vain hope. If you find them, let me know!
-
## protected by Qiaochu YuanSep 22 '12 at 8:21
This question is protected to prevent "thanks!", "me too!", or spam answers by new users. To answer it, you must have earned at least 10 reputation on this site.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 161, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547566175460815, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/98009-integral-sought-c-x-exp-1-2-x-m-s-2-a.html
|
# Thread:
1. ## Integral sought for: C/x * exp(-1/2 [x-m/s]^2)
I am trying to find a closed-form expression for the following integral:
$\int_{a}^{b}\frac{C}{x}exp^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}dx$
with $0<a<b$
Neither Maple nor Mathematica will give a ready answer.
If I put $\mu$ to 0, Maple gives me:
$\int\frac{C}{x}exp^{-\frac{1}{2}(\frac{x}{\sigma})^2}dx=-\frac{1}{2}*C*Ei(1,\frac{1}{2}\frac{x^2}{\sigma^2} )$
With Ei the exponential integral. This is Ok as far as it goes, but I need the integral for $x-\mu$.
If I add $\mu$ to the function, Maple can no longer find the integral.
I tried taking the derivative of
$-\frac{1}{2}*C*Ei(1,\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2})$ for clues, and that gives me $\frac{C}{x-\mu}$ $exp^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}$, and the factor $\frac {C}{x-\mu}$ is not what I'm looking for; I need $\frac {C}{x}$.
Any suggestions? Am I overlooking something obvious?
2. Originally Posted by Golodh
I am trying to find a closed-form expression for the following integral:
$\int_{a}^{b}\frac{C}{x}exp^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}dx$
with $0<a<b$
Neither Maple nor Mathematica will give a ready answer.
If I put $\mu$ to 0, Maple gives me:
$\int\frac{C}{x}exp^{-\frac{1}{2}(\frac{x}{\sigma})^2}dx=-\frac{1}{2}*C*Ei(1,\frac{1}{2}\frac{x^2}{\sigma^2} )$
With Ei the exponential integral. This is Ok as far as it goes, but I need the integral for $x-\mu$.
If I add $\mu$ to the function, Maple can no longer find the integral.
I tried taking the derivative of $-\frac{1}{2}*C*Ei(1,\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2})$ for clues, and that gives me $\frac{C}{x-\mu}$ $exp^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}$, and the factor $\frac {C}{x-\mu}$ is not what I'm looking for; I need $\frac {C}{x}$.
Any suggestions? Am I overlooking something obvious?
Why do you need this integral?
Why not define a special function:
$G(x, \kappa)=\int_1^x \frac{e^{\frac{(\xi-\kappa)^2}{2}}}{\xi} \; d\xi$
CB
3. ## Why not ...
Well, the integral arises as the expectation of 1/x of a statistic with a truncated Normal distribution. If you use an ordinary non-truncated Normal distribution, the distribution of 1/x is the Cauchy distribution, which doesn't have an expectation. But I don't think that helps, and the answer seems to revolve around special functions anyway so I didn't post it in the probability and statistics thread.
If I allow $a<0, b>0$ I have to keep track of the singularity of $1/x$ in 0, which I'm not sure how to do. This work has already been done for the case that $\mu=0$ in the investigation of the Ei function,which isn't something I know how to replicate.
So why not define a special function like e.g. $G(x,\kappa)$ and go with that?
Well, for two reasons.
First of all, the integral seems tantalizingly close to one that's already expressible in terms of a known special function (Ei). If I'm to define a "new" special function, I should have some confidence I'm not overlooking some obvious trick that will reduce it to a known special function, right. I have no idea how to go about that. Any suggestions on this part?
If I can be reasonable certain that my integral isn't expressible in known elementary and special functions I'm happy to define a new special function $G(x,\kappa)$ as you suggest and do a simple investigation of its properties (extremes, inflexion points, limits).
Secondly, I can get numerical values for the integral I'm looking for from Matlab, but that just gives me numbers but not a lot of insight. Besides I can't readily check the numbers if I do, so I'd either have to trust Maple to get it right, or program the function in a dedicated numerical program like Matlab, Scilab or Octave and then use a numerical integration algorithm that I know to be correct to check on Maple's numbers. I will if I have to, but again, I want to be sure that it's not redundant (i.e. that $G(x,\kappa)$ isn't some simple expression of documented quantities).
4. Originally Posted by Golodh
Well, the integral arises as the expectation of 1/x of a statistic with a truncated Normal distribution. If you use an ordinary non-truncated Normal distribution, the distribution of 1/x is the Cauchy distribution, which doesn't have an expectation. But I don't think that helps, and the answer seems to revolve around special functions anyway so I didn't post it in the probability and statistics thread.
The problem about the mean of $1/x$ with the non-truncated normal distribution is due to the singularity at $x=0$ not the tails, so if your integration interval includes $0$, you will have the same problem.
CB
5. ## Truncated ...
The problem about the mean of with the non-truncated normal distribution is due to the singularity at not the tails, so if your integration interval includes , you will have the same problem.
Of course, which is why I'm interested in the truncated Normal (so that I can exclude 0 from the integration domain). In the question of the integral as posed the problem of the singularity for $x=0$ plays no role.
The digression about the singularity and the expectation of was by way of background information, which I left out because I felt it might obscure the real problem,which is how to deal with the integral.
#### Search Tags
View Tag Cloud
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510194063186646, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/16788/blowing-up-1-curves-effective-and-ample-divisors/16789
|
## blowing up, -1 curves, effective and ample divisors
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Lets say we're on a smooth surface, and we blow up at a point. Is there a simple explicit computation that shows to me the fact that the exceptional divisor E has self intersection -1 ? I don't consider the canonical divisor explicit (but am open to it). I do consider power series hacking to be explicit.
I'm quite unnerved by this -1. Is E effective (seems to be, by definition?). Is E ample (seems to not be, by Nakai-Mozeishon type things)? More generally, I used to think of effective and ampleness as both being measures of "positivity"; but perhaps this is wrong - what do effectiveness and ampleness have to do with each other.
What happens locally at a point of -1 intersection? I thought two irreducible curves on a surface should intersect either in 0 points, or in a positive number of points. To find E.E I would have tried to move E to some other divisor, and then I would get E.E = 0 or nonnegative.
Sorry for the multiple questions, but I'm really distressed :(
-
## 5 Answers
Dear Fellow,
You can't move $E$ (!), hence there is no contradiction with it having self-intersection -1. Indeed, if you take a normal vector field along $E$, it will necessarily have degree -1 (i.e. the total number of poles is one more than the total number of zeroes), or (equivalently), the normal bundle to $E$ in the blown-up surface is $\mathcal O(-1)$.
[Added:] Here is a version of the argument given in David Speyer's answer, which is rigorous modulo basic facts about intersection theory:
Choose two smooth very ample curves $C_1$ and $C_2$ passing through the point $P$ being blown-up in different tangent directions. (We can construct these using hyperplane sections in some projective embedding, using Bertini; smoothness is just because I want $P$ to be a simple point on each of them.) If the $C_i$ meet in $n$ points away from $P$, then $C_1\cdot C_2 = n+1$.
Now pull-back the $C_i$ to curves $D_i$ in the blow-up. We have $D_1 \cdot D_2 = n + 1$. Now because $C_i$ passes through $P$, each $D_i$ has the form $D_i = D_i' + E,$ where $D_i'$ is the proper transform of $C_i$, and passes through $E$ in a single point (corresponding to the tangent direction along which $C_i$ passed through $P$). Thus $D_1'\cdot D_2' = n$ (away from $P$, nothing has changed, but at $P$, we have separated the curves $C_1$ and $C_2$ via our blow-up).
Now compute $n+1 = D_1\cdot D_2 = D_1'\cdot D_2' + D_1'\cdot E + E\cdot D_2' + E\cdot E = n + 1 + 1 + E\cdot E$, showing that $E\cdot E = -1$. (As is often done, we compute the intersection of curves that we can't move into a proper intersection by adding enough extra stuff that we can compute the resulting intersection by moving the curves into proper position.)
-
@David Speyer and Emerton: See the last two paragraphs of Andrea Ferretti's version of the computation: it's simpler to move one curve instead of two. Maybe somebody should consolidate all these answers into one community wiki answer? – Bjorn Poonen Mar 2 2010 at 2:50
Dear Bjorn, I saw that, but thought it might be helpful to have a computation that didn't use anything (counting the push-pull formula as something). – Emerton Mar 2 2010 at 3:33
All Andrea is really using is that E.f^* C = 0, which is easy because C can be replaced by a linearly equivalent divisor disjoint from P. Aren't you implicitly using the same kind of argument to know that D_1.D_2 = C_1.C_2 ? – Bjorn Poonen Mar 2 2010 at 5:56
Dear Bjorn, I think you're right. – Emerton Mar 2 2010 at 15:01
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
First, there is no contradiction. If you take another representative of the same linear system, this will need to have some negative coefficients. You can compute intersections counting points only when the intersection is transverse (or at least proper if you count multiplicities) and certainly $E$ is not transverse to itself.
From another point of view, the tangent space to the space deformations of $E$ in $S$ in the point $[E]$ is $T_{[E]} Def = H^0 ( N_{E/S})$, and the latter is zero. Indeed by adjunction $N_{E/S} = \mathcal{O}_S(E) |_E = \mathcal{O}_E(-1)$. So not only your linear system only contains the point $[E]$; there is no way to deform $E$ at all (even in a nonlinear way).
As for the computation, say $S$ is the blowup of $T$ in the point $p$, let $f \colon S \to T$ be the blowup. Take any curve $C$ passing through $p$ with multiplicity $1$; then $f^{*}(C) = \widetilde{C} + E$, where $\widetilde{C}$ is the strict transform.
By the push-pull formula `$E \cdot f^{*}(C) = f_{*}(E) \cdot C = 0$`, hence $E \cdot \widetilde{C} = - E^2$. But $E \cdot \widetilde{C} = 1$ because they intersect transversely in one point and you're done!
-
That $E$ has self-intersection $-1$ is Hartshorne Proposition V.3.1. The relation between effectivity and ampleness is more clear in the case of divisors on a curve where a (non linearly trivial) divisor has degree greater than $0$ if and only if it is ample (Hartshorne IV.3.3). So certainly for curves effectivity implies ampleness. On the other hand, even on curves, there exist ample divisors which aren't effective, for example consider a smooth non-hyperelliptic curve of genus at least $3$ and take the divisor $2p-q$ for $p,q$ points on the curve. This is ample but not effective.
As for how to think about ampleness in general, a divisor is ample if and only if some tensor power of it is very ample (Hartshorne II.7.6), and very ampleness is convenient to think about since it basically says you have an embedding into projective space and that the locally free sheaf of rank $1$ associated to the divisor is the pullback of $\mathcal{O}(1)$ from the projective space. Finally, a curve might have negative self-intersection only with itself, so you can still rely on Bezout's theorem working as your intuition does! Also, as you can see, all the above statements are in Hartshorne!
-
The intuitive argument is the following: Let $D$ be a local chart on your smooth surface, with coordinates $(z,w)$, and with $(0,0)$ the point being blown up. Let $\pi: D' \to D$ be the blow up, with $E$ the curve $\pi^{-1}{\large (}(0,0){\large )}$. Consider the intersection of `$X_t := \pi^{-1} \left( \{ x = t \} \right)$` and `$Y_u := \pi^{-1} {\large (}\{ y = u\} {\large )}$`.
When $t$ and $u$ are not zero, these are smooth curves, meeting transversely at $(t,u)$, so they intersect with multiplicity $1$. When $t$ becomes $0$, $X_0$ splits up into two pieces $X' \cup E$. For $u$ nonzero, $Y_u$ misses $E$ and meets $X'$ transversely, so the intersection is still $1$. Similarly, $Y_0 = E \cup Y'$.
Now, let $t=u=0$. We want to compute `$$\langle X_0, Y_0 \rangle = \langle X'+E,\ Y'+E \rangle = \langle X', Y' \rangle + \langle X', E \rangle + \langle E, Y' \rangle + \langle E, E \rangle.$$`
By continuity, the left hand side should be $1$. $X_0$ and $Y_0$ miss each other, and $E$ meets $X_0$ and $Y_0$ transversely. We get `$$1 = 0 + 1 + 1 + \langle E, E \rangle$$` so `$$ \langle E,E \rangle = -1.$$`
-
Write $\mathbb{CP}^1$ as two copies of $\mathbb{C}$, with coordinates $z_1$ and $z_2$ respectively, glued along the map $z_1 \mapsto z_2=\frac{1}{z_1}$ on $z_1\neq 0$.
Write the line bundle $\mathcal{O}(-1) \rightarrow \mathbb{CP}^1$ as two copies of $\mathbb{C} \times \mathbb{C}$, with coordinates $(z_1, v_1)$ and $(z_2,v_2)$ respectively, glued along the map $(z_1,v_1) \mapsto (z_2, v_2) = (\frac{1}{z_1}, z_1 v_1)$ on $z_1\neq 0$. Denote the zero-section by $Z$.
By definition of a blow-up, there is a holomorphic isomorphism from a small neighborhood of E to a neighborhood of $Z$, and this isomorphism sends $E$ to $Z$.
Therefore, it is enough to compute the self-intersection of $Z$. This is a topological notion, so a natural thing to do is to find a cycle $\gamma$ homologous to $Z$ and intersecting $Z$ transversely. (You can't ask $\gamma$ to be a divisor: $Z$ is the only compact divisor in $\mathcal{O}(-1)$.)
Construct such a $\gamma$ as continuous section of $\mathcal{O}(-1)$: on $\vert z_1 \vert \leq 1$, take $z_1 \mapsto (z_1,v_1=1)$; on $\vert z_2 \vert \leq 1$, take $z_2 \mapsto (z_2,v_2= \overline{z_2})$. On the overlap $\vert z_1 \vert=1= \vert z_2 \vert$, we have $v_2= \overline{z_2} = \frac{1}{z_2} = z_1 = z_1 v_1$, as needed.
A homotopy from $Z$ to $\gamma$ is given by $z_1 \mapsto (z_1,t)$ and $z_2 \mapsto (z_2, t\ \overline{z_2})$, for $t\in [0,1]$. In particular (the image of) $\gamma$ is homologous to $Z$.
The only intersection point of $\gamma$ and $Z$ is at $(z_2,v_2)=(0,0)$. There, the orientation of $Z$ given by its complex structure is represented by the vectors $(1,0)$ and $(i,0)$. Pushing this orientation with the above homotopy gives the orientation for $\gamma$ represented by $(1,1)$ and $(i,-i)$.
The $\mathbb{R}$-basis of $\mathbb{C} \times \mathbb{C}$ given by $(1,0)$, $(i,0)$, $(1,1)$ and $(i,-i)$ has same orientation as $(1,0)$, $(i,0)$, $(0,1)$ and $(0,-i)$, which is negative. Conclusion: $Z.\gamma = -1$, so $Z.Z=-1$, so $E.E=-1$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 145, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480149745941162, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/84594-solution-using-three-variables-print.html
|
solution using three variables
Printable View
• April 19th 2009, 10:47 PM
mathshadow
solution using three variables
what is the answer to n=m+x I need to know what the value of m is.
• April 19th 2009, 11:21 PM
Prove It
Quote:
Originally Posted by mathshadow
what is the answer to n=m+x I need to know what the value of m is.
If $n = m + x$ then $m = n - x$.
• April 20th 2009, 02:23 PM
mathshadow
If I had known how to work a problem I would not have come on this forum to ask how it was done. I do not appreciate being called stupid. The only stupid question is one that is not asked.
• April 20th 2009, 02:26 PM
Prove It
Quote:
Originally Posted by mathshadow
If I had known how to work a problem I would not have come on this forum to ask how it was done. I do not appreciate being called stupid. The only stupid question is one that is not asked.
No-one has called you stupid.
You asked to know what m is. The only way you can find a value of m is if you're given values of n and x.
And once you have the values of n and x, then you work out m as I have described.
Don't be so rude.
All times are GMT -8. The time now is 12:51 AM.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9808056950569153, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/32445/wilcoxon-mann-whitney-critical-values-in-r
|
# Wilcoxon-Mann-Whitney critical values in R
I have noticed that when I try to find the critical values for the Mann-Whitney U using R, the values are always 1+critical value. For example, for $\alpha=.05, n = 10, m = 5$, the (two-tailed) critical value is 8, while for $\alpha=.05, n=12, m=8$, the (two-tailed) critical value is 22 (check the tables), but:
````> qwilcox(.05/2,10,5)
[1] 9
> qwilcox(.05/2,12,8)
[1] 23
````
Of course I'm not considering something, but... anyone could explain me why?
-
## 2 Answers
I think that the answer here might be that you're comparing apples and oranges.
Let $F(x)$ denote the cdf of the Mann-Whitney $U$ statistic. `qwilcox` is the quantile function $Q(\alpha)$ of $U$. By definition, it is therefore $$Q(\alpha)=\inf \{x\in \mathbb{N}: F(x)\geq \alpha\},\qquad \alpha\in(0,1).$$
Because $U$ is discrete, there is usually no $x$ such that $F(x)=\alpha$, so typically $F(Q(\alpha))>\alpha$.
Now, consider the critical value $C(\alpha)$ for the test. In this case, you want $F(C(\alpha))\leq \alpha$, since you otherwise will have a test with a type I error rate that is larger than the nominal one. This is usually considered to be undesirable; conservative tests tend to be prefered. Hence, $$C(\alpha)=\sup \{x\in \mathbb{N}: F(x)\leq \alpha\},\qquad \alpha\in(0,1).$$ Unless there is an $x$ such that $F(x)=\alpha$, we therefore have $C(\alpha)=Q(\alpha)-1$.
The reason for the discrepancy is that `qwilcox` has been designed to compute quantiles and not critical values!
-
(+1) Good, simple, concise description. :) – cardinal Jul 18 '12 at 12:31
Remember that the rank sum test statistic is discrete and so you need to use a critical value such that the tail probability is $\geq$ to the specified $\alpha$. For some sample sizes equal to alpha cannot be achieved and that is my guess as to why you need the +1.
-
4
So why is +1 needed in R and not in the usual tables? – MånsT Jul 17 '12 at 11:14
1
@this.is.not.a.nick: perhaps more importantly, $0.0236723<0.025$ whereas $0.02868937>0.025$, which means that in the former case the actual significance level will be $<0.05$ and that in the latter it will be $>0.05$. Usually people tend to prefer to err on the right side, i.e. to have a lower significance level than the nominal one (meaning that the values from the tables are preferable). – MånsT Jul 17 '12 at 11:33
1
Right to both Procrastinator and MansT. Actually the definiton of significance level requires that the tail probabilities do not sum to anything higher than alpha. I talk about this in my paper with Christine Liu on the sawtoothed behavior of the power function for exact binomial tests via the Clopper-Pearson method (see American Statistician (2002)). – Michael Chernick Jul 17 '12 at 11:56
2
– MånsT Jul 17 '12 at 12:20
2
@Michael: Agreed. In some sense, `qwilcox` does what it's supposed to do, but not what you'd expect it do to. – MånsT Jul 17 '12 at 12:42
show 6 more comments
default
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085681438446045, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/33440/find-whether-n-closed-curves-intersect
|
# Find whether n closed curves intersect
I have a number of closed curves (contours) which I want split into groups of mutually intersecting curves. The contours are made of straight lines and bezier curves. How could I do that? Thanks!
-
How much is "a number"? If there aren't that many, you could probably brute-force check by taking two contours at a time... – J. M. Apr 17 '11 at 9:12
Yep, there are a little number of contours. I would do exactly that. However, I really wonder how would I check for intersections? And I should also clarify, because I didn't say it right the first time, that I'm interested in checking not the curves, but the closed regions they encircle. Could you suggest something? Sorry if I'm asking for an obvious thing, I'm slow with math. :-) – Albus Dumbledore Apr 17 '11 at 9:38
Could you tell us why you need this? Knowing the background is helpful for choosing the level of the answer. – Sam Nead Apr 17 '11 at 16:47
## 1 Answer
Hopefully the package that allows you to draw Bezier curves also allows you to a) draw arbitrary lines and b) compute the number of intersections between a line and a Bezier curve. With those to things given: Suppose that $\alpha$ and $\beta$ are contours. From your comment I assume that $\alpha$ and $\beta$ do not self-intersect, nor do they intersect each other. Let $B$ be the region (a disk) bounded by $\beta$. So there are two cases to ponder. Either $\alpha$ is contained in $B$ or it isn't. So:
Suppose that $p$ is a point in the plane. Let $L_p$ be the vertical ray based at $p$. Now suppose that $q$ is a point on $\alpha$. Using the primitives of your program or otherwise, compute $N_q = |L_q \cap \beta|$, the number of intersections between $L_q$ and $\beta$. In general, $\alpha$ is contained in $B$ if and only if $N_q$ is odd.
There is a subtle point here: the ray $L_q$ needs to meet $\beta$ transversely for this to work. That is, $L_q$ must not be tangent to any point of $\beta$. One hack to get around this: compute the parity of $N_q$ for several random points $q \in \alpha$ and accept the "generic" answer.
By the way, Google finds a nice introduction to Bezier curves on-line - http://processingjs.nihongoresources.com/bezierinfo/
-
Just perfect! Thanks for the bother! – Albus Dumbledore Apr 17 '11 at 17:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9491332769393921, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/105263?sort=oldest
|
## Are there F_un Lie algebras ?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Background See WP-article on F_1 = F_{un} = Field with one element (and also this MO question). Paraphrasing someone: we do not know what is it, but it is not a field :). For this question it is enough to keep in mind J. Tits idea (1957) that Weyl groups should be thought as semisimple groups over F_1. E.g. symmetric group S_n = GL_n(F_1).
Q1 What might be Lie algebras over F_1 ? In particular for GL_n ? What numerology should correspond to gl_n(F_1) ? I.e. are there some numbers related to gl_n(F_q) which have a limit when q->1 (may be renormalized like with GL_n(F_q)) ?
Comments on further questions are also welcome:
Q2 To what extent representation theory of S_n can be thought as limit q->1 of representation theory of GL_n(F_q) ? (There is some paper "Translating the Irreducible Representations of S_n into GL_n(F_q)", but I do not quite understand it).
Q3 What might be "orbit method" to construct representations of S_n = GL_n(F_1) ?
Q4 What might be Langlands correspondence over F_1 ? Should it be related to bijection between irreps of S_n and its conj. classes (keep in mind that GL^L=GL).
-
Lie algebras in characteristic p are already bad enough, and capture so little of the group situation. E.g. a function on the line that's invariant under translation (a group action) must be constant, but a function with derivative zero (the Lie algebra action) is merely required to be a polynomial in x^p. If p=1, this looks like no condition at all...? – Allen Knutson Aug 27 at 13:36
@Allen thank you, it is very valuable comment... may be one should ask about huger algebra including d_x^p/p! ("divided differences" its name if I remember correctly)... – Alexander Chervov Aug 27 at 14:58
## 1 Answer
I will only attempt to answer the first question.
$n$-dimensional vector space over $\mathbf{F}_1$ is the same as a pointed set with $n+1$ elements. It is natural to call $GL_n(\mathbf{F}_1)$ the group of automorphisms of $\mathbf{F}_1^n$ and $\mathfrak{gl}_n(\mathbf{F}_1)$ the monoid of endomorphisms. There are at least two notions of morphisms:
1. Plain morphisms of pointed sets. The monoid of endomorphisms has cardinality $(n+1)^n$.
2. Maps of pointed sets which are injective if you throw away the basepoints (see http://arxiv.org/abs/1006.0912). It is not too hard to see that the cardinality of $End(\mathbf{F}_1)$ is `$$\sum_{k=0}^n\left(\begin{array}{c}n\\ k\end{array}\right)\frac{n!}{(n-k)!}=\sum_{k=0}^n\frac{(n!)^2}{k!(n-k)!^2}.$$` Here $k$ is the number of elements that don't go to the basepoint.
Note, that in both cases the group of automorphisms is the same ($S_n$).
-
1
This is nicer to think of as the monoids of partial transformations and of partial permutations of an n-element set by dropping the base point. They are called the full partial transformation monoid and the symmetric inverse monoid (or rook monoid) respectively. – Benjamin Steinberg Aug 22 at 22:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221342206001282, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/163814-divisors.html
|
# Thread:
1. ## Divisors
Prove, that a composite number 2^p - 1 (where p is a prime) has at least 2 different prime factors.
How I tried to solve this problem:
I tried to write 2^p -1 as x^n (x, n are integers, n>1, just to prove that it can't be this way). I know that the only prime factors of 2^p - 1 can be numbers like 2kp+1 and I also tried this way. What is more, I tried with two cases- n odd and n even, separately. But I didn't manage to prove anything. Please, help. Thank you in advance.
2. x^n-1=(x-1)*(x^{n-1}+x^{n-2}+...+x+1)
EDIT: NOT HELPING TOO MUCH.
2^31 - 1 is prime.
3. Yes, but in my question if you have an equation x^n-1=(x-1)*(x^{n-1}+x^{n-2}+...+x+1) then x=2 and we have to prove that (x^{n-1}+x^{n-2}+...+x+1) has at least 2 prime factors. How?
4. Originally Posted by conrad
Prove, that a composite number 2^p - 1 (where p is a prime) has at least 2 different prime factors.
How I tried to solve this problem:
I tried to write 2^p -1 as x^n (x, n are integers, n>1, just to prove that it can't be this way). I know that the only prime factors of 2^p - 1 can be numbers like 2kp+1 and I also tried this way. What is more, I tried with two cases- n odd and n even, separately. But I didn't manage to prove anything. Please, help. Thank you in advance.
I'll try to show a more general result: Let $n, a$ and $m$ be positive integers. If $2^n-1=a^m$, then $2^n-1=a$. To put it differently, $m$ must equal 1. (execpt for the case $n=1$, where $2^1-1=1^m$)
We suppose that $n\geq2$. It's clear that $a$ is odd.
If $m$ is even, then $a^m$ would be of the form $4k+1$. But $2^n-1$ has the form $4q+3$; therefore $m$ is odd.*
The following identity holds, since $m$ is odd: $a^m+1=(a+1)(a^{m-1}-a^{m-2}+\cdots+a^2-a+1)=2^n$. Note that the term $a^{m-1}-a^{m-2}+\cdots+a^2-a+1$ is odd and thus relatively prime to $2^n$. So $2^n$ must be equal to $a+1$, namely $a=2^n-1$.
Back to the original problem... Assume to the contrary that $2^p-1$ has only one prime factor, that is $2^p-1=q^m$, where $q$ is prime. But the result implies that $2^p-1=q$, contradiction.
* To see why: For any odd $a$ we have $a\equiv\pm1\pmod4$. Then $a^m\equiv1\pmod4$ if $m$ is even. But $2^n-1\equiv-1\pmod4$, so $1\equiv-1\pmod4$, contradiction. Sorry if this part is tedious...
5. Thank you! You helped me so much. I'm very grateful!
6. @melese, maybe you have and idea how to solve this one:
We have:
z = (2^(mn) - 1)/[(2^m - 1)(2^n - 1)], where (2^m - 1) and (2^n - 1) are prime numbers.
Prove that (2^m - 1) and (2^n - 1) are not the only prime factors of z.
I tried to solve it writing z = (2^m - 1)^a * (2^n - 1)^b and proving that it is not correct. But I don't know how. I also noticed that it would be sufficient to prove that 2^(mn) - 1 can't be written as ((2^m - 1)^x * (2^n - 1)^y but I also have no idea how to prove it. I tried with comparing highest exponents, but nothing..
Thank you very much in advance!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9648839235305786, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/6198/fermi-level-in-disordered-amorphous-and-or-organic-semiconductors
|
# Fermi level in disordered amorphous and/or organic semiconductors
So, the Fermi level in crystals is pretty easy to understand. Been using it and talking about it in terms of the highest occupied level forever. However, I'm now reading about disordered systems. A lot of researchers mention the existence of empty states randomly distributed above and below the Fermi level. But isn't the Fermi level necessarily the energy at which all energies below are filled states?
EDIT: So I suppose chemical potential is the more correct term to use (instead of Fermi energy) because not at T = 0. Anyways, consider this quote: "the localized gap states near $E_F$ are spread in energy with singly occupied gap states below $E_F$, doubly occupied states lying above $E_F$, and empty states distributed randomly."
-
2
It's difficult without context to know exactly what you have in mind, but that definition of $E_F$ really only holds at zero temperature -- at any $T>0$ states above the (nominal) Fermi level are thermally occupied. – wsc Mar 2 '11 at 1:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354962706565857, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/13742/list
|
## Return to Question
2 added symbolic dynamics flag
1
# How can generic closed geodesics on surfaces of negative curvature be constructed?
As far as I understand it the closing lemma implies that closed geodesics on surfaces of negative curvature are dense. So: how can they be constructed in general?
A concrete answer that dovetails with the construction of such surfaces with constant negative curvature and genus $g$ from regular hyperbolic $(8g-4)$-gons along lines indicated by Adler and Flatto and gives the endpoints of the geodesics in the Poincaré disk model would be ideal. More useful still would be a way to construct all the closed geodesics that cross the boundaries of translates of the fundamental $(8g-4)$-gon some specified number of times (I am pretty sure this ought to be a finite set, but I couldn't say why off the top of my head).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289889335632324, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2011/10/11/a-hodge-star-example/?like=1&source=post_flair&_wpnonce=5b12459335
|
# The Unapologetic Mathematician
## A Hodge Star Example
I want to start getting into a nice, simple, concrete example of the Hodge star. We need an oriented, Riemannian manifold to work with, and for this example we take $\mathbb{R}^3$, which we cover with the usual coordinate patch with coordinates we call $\{x,y,z\}$.
To get a metric, we declare the coordinate covector basis $\{dx,dy,dz\}$ to be orthonormal, which means that we have the matrix
$\displaystyle g^{ij}=\begin{pmatrix}1&0&0\\{0}&1&0\\{0}&0&1\end{pmatrix}$
and also the inner product matrix
$\displaystyle g_{ij}=\begin{pmatrix}1&0&0\\{0}&1&0\\{0}&0&1\end{pmatrix}$
since we know that $g_{ij}$ and $g^{ij}$ are inverse matrices. And so we get the canonical volume form
$\displaystyle\omega=\sqrt{\det\left(g_{ij}\right)}dx\wedge dy\wedge dz=dx\wedge dy\wedge dz$
We declare our orientation of $\mathbb{R}^3$ to be the one corresponding to this top form.
Okay, so now we can write down the Hodge star in its entirety. And in fact we’ve basically done this way back when we were talking about the Hodge star on a single vector space:
$\displaystyle\begin{aligned}*1&=dx\wedge dy\wedge dz\\ *dx&=dy\wedge dz\\ *dy&=-dx\wedge dz=dz\wedge dx\\ *dz&=dx\wedge dy\\ *(dx\wedge dy)&=dz\\ *(dx\wedge dz)&=-dy\\ *(dy\wedge dz)&=dx\\ *(dx\wedge dy\wedge dz)&=1\end{aligned}$
So, what does this buy us? Something else that we’ve seen before in the context of a single vector space. Let’s say that $v$ and $w$ are two vector fields defined on an open subset $U\subseteq\mathbb{R}^3$. We can write these out in our coordinate basis:
$\displaystyle\begin{aligned}v&=v_x\frac{\partial}{\partial x}+v_y\frac{\partial}{\partial y}+v_z\frac{\partial}{\partial z}\\w&=w_x\frac{\partial}{\partial x}+w_y\frac{\partial}{\partial y}+w_z\frac{\partial}{\partial z}\end{aligned}$
Now, we can use our metric to convert these vectors to covectors — vector fields to $1$-forms. We use the matrix $g_{ij}$ to get
$\displaystyle\begin{aligned}g(v,\underline{\hphantom{X}})&=v_xdx+v_ydy+v_zdz\\g(w,\underline{\hphantom{X}})&=w_xdx+w_ydy+w_zdz\end{aligned}$
Next we can wedge these together
$\displaystyle\begin{aligned}g(v,\underline{\hphantom{X}})\wedge g(w,\underline{\hphantom{X}})=&(v_yw_z-v_zw_y)dy\wedge dz\\&+(v_zw_x-v_xw_z)dz\wedge dx\\&+(v_xw_y-v_yw_x)dx\wedge dy\end{aligned}$
Now we come to the Hodge star!
$\displaystyle\begin{aligned}*(g(v,\underline{\hphantom{X}})\wedge g(w,\underline{\hphantom{X}}))=&(v_yw_z-v_zw_y)dx\\&+(v_zw_x-v_xw_z)dy\\&+(v_xw_y-v_yw_x)dz\end{aligned}$
and now we’re back to a $1$-form, so we can use the metric to flip it back to a vector field:
$\displaystyle\begin{aligned}g\left(*(g(v,\underline{\hphantom{X}})\wedge g(w,\underline{\hphantom{X}})),\underline{\hphantom{X}}\right)=&(v_yw_z-v_zw_y)\frac{\partial}{\partial x}\\&+(v_zw_x-v_xw_z)\frac{\partial}{\partial y}\\&+(v_xw_y-v_yw_x)\frac{\partial}{\partial z}\end{aligned}$
Here, the outermost $g(\underline{\hphantom{X}},\underline{\hphantom{X}})$ is the inner product on $1$-forms, while the inner ones are the inner product on vector fields. This is exactly the cross product of vector fields on $\mathbb{R}^3$.
### Like this:
Posted by John Armstrong | Differential Geometry, Geometry
## 2 Comments »
1. [...] continue our example considering the special case of as an oriented, Riemannian manifold, with the coordinate -forms [...]
Pingback by | October 12, 2011 | Reply
2. [...] can check that this will be automatically zero if we start with an image of the curl operator; our earlier calculations show that is always the identity mapping — at least on with this metric — so if we [...]
Pingback by | October 13, 2011 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9008983969688416, "perplexity_flag": "head"}
|
http://stats.stackexchange.com/questions/1964/kernel-bandwidth-in-kernel-density-estimation/6319
|
Kernel bandwidth in Kernel density estimation
I am doing some Kernel density estimation, with a weighted points set (ie., each sample has a weight which is not necessary one), in N dimensions. Also, these samples are just in a metric space (ie., we can define a distance between them) but nothing else. For example, we cannot determine the mean of the sample points, nor the standard deviation, nor scale one variable compared to another. The Kernel is just affected by this distance, and the weight of each sample: f(x) = 1./(sum_weights) * sum(weight_i/h * Kernel(distance(x,x_i)/h))
In this context, I am trying to find a robust estimation for the kernel bandwidth 'h', possibly spatially varying, and preferably which gives an exact reconstruction on the training dataset x_i. If necessary, we could assume that the function is relatively smooth.
I tried using the distance to the first or second nearest neighbor but it gives quite bad results. I tried with leave-one-out optimization, but I have difficulties finding a good measure to optimize for in this context in N-d, so it finds very bad estimates, especially for the training samples themselves. I cannot use the greedy estimate based on the normal assumption since I cannot compute the standard deviation. I found references using covariance matrices to get anisotropic kernels, but again, it wouldn't hold in this space...
Someone has an idea or a reference ?
Thank you very much in advance!
-
2 Answers
One place to start would be Silverman's nearest-neighbor estimator, but to add in the weights somehow. (I am not sure exactly what your weights are for here.) The nearest neighbor method can evidently be formulated in terms of distances. I believe your first and second nearest neighbor method are versions of the nearest-neighbor method, but without a kernel function, and with a small value of $k$.
-
On Matlab File Exchange, there is a kde function that provides the optimal bandwidth with the assumption that a Gaussian kernel is used: Kernel Density Estimator.
Even if you don't use Matlab, you can parse through this code for it's method of calculating the optimal bandwidth. This is a highly rated function on file exchange and I have used it many times.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361730217933655, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-statistics/187742-minimal-sufficient-statistics-joint-mass-function-print.html
|
# Minimal sufficient statistics for a joint mass function
Printable View
• September 10th 2011, 09:20 PM
tttcomrader
Minimal sufficient statistics for a joint mass function
Two teams play a series of games, stopping as soon as one of the team has three wins. Assume the games are independent and that the chance the first team wins is an unknown parameter $\theta \in (0,1)$. Let X denote the number of games the first team wins, and Y the number of games the other teams wins.
a) Find the joint mass function of X and Y.
b) Find a minimal sufficient statistic.
c) Is it complete?
My solution so far:
a)
I have the joint density being $P_ \theta (X=x,Y=y)= \theta ^x(1- \theta )^y$ where $x,y \in \{ 0,1,2,3 \}$ and $2 < x+y<6$
But then I'm pretty much stuck with finding minimal sufficient statistics here, any help, please? Thank you.
All times are GMT -8. The time now is 03:31 PM.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9088712930679321, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/44971/k3-surface-criteria?answertab=votes
|
# K3 surface criteria
Suppose I have an affine equation $f(x, y) = 0$ which after homogenizing becomes $f(X, Y, Z) = 0$ in $\mathbb{P}^{3}$. Are there ways to check that $f$ represents a K3 surface?
-
Btw, if you're interested in K3 surfaces then there are good introductions available in Beauville's "Complex algebraic surfaces" (and his Astérisque K3 surface seminar notes, but they are hardcore) and in Huybrecht's notes (available for free on his website). Avoid Barth, Peters, Van de Ven on your first pass; it's a reference book and isn't actually meant to be read. – Gunnar Magnusson Jun 12 '11 at 17:37
## 2 Answers
There sure are. Let's denote by $X$ the surface defined by $f$, and let's suppose that $X$ is smooth. For any compact complex surface, being K3 is equivalent to being simply connected and having trivial canonical bundle.
A hypersurface in $\mathbb P^n$ is connected and simply connected by the Lefschetz theorem, so we only need to find a condition ensuring that the canonical bundle is trivial.
This condition is given by the adjunction formula, which says that if the polynomial $f$ is of degree $d$, then
$$K_X = ( K_{\mathbb P^3} \otimes \mathcal O(d) )_{|X} = \mathcal O_X(d-4).$$
This bundle is trivial if and only if $d = 4$, or in other words, if $f$ is a quartic.
A fun exercise involving the adjuction formula is to see that there are very few K3 surfaces given as complete intersections in $\mathbb P^n$. In fact, they only exist in dimension 4, 5 and 6 if I remember correctly.
-
I am convinced that $f(X,Y,Z)$ represents a K3 surface iff it is a quartic (fourth-degree) polynomial, see page 16 of this review of K3 surfaces:
http://arxiv.org/abs/hep-th/9611137
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.952328622341156, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/26694/list
|
2 edited tags
1
# Terminology for relation on sets
Does the following relation between sets have a name or any special properties:
$X\bigcirc Y$ iff $X \cap Y = \emptyset$ or $X\subseteq Y$ or $Y\subseteq X$.
Although this is rather basic, it is part of something much grander.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9297966957092285, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/7894/graviton-emission-from-d-branes/7897
|
# Graviton Emission from D-Branes
I'm working through Polchinski's book on string theory, and I ran into something that I don't think I understand. I'm hoping that someone who knows this stuff can help me out.
Before calculating the Dp-brane tension in Chapter 8, Polchinski says that we could have obtained the same result by calculating the amplitude for graviton emission from the D-brane (instead of calculating closed string exchange between two D-branes). It seems like we would do this by placing a graviton vertex operator on the disk with Dirichlet boundary conditions on (25-p) coordinates and Neumann on the rest. But doesn't the amplitude with only one vertex operator vanish, since there aren't enough ghost insertions to get a nonzero result? I must be misunderstanding something, because it seems like if we only fix the real and imaginary parts of the position of the vertex operator, then we have to divide by the volume of the rest of the CKG of the disk, which is infinity. Any ideas or hints? Thanks.
-
## 1 Answer
After you fix the closed string vertex operator, the remaining group of isometries is one dimensional and its volume is finite. For example if the vertex operator is fixed at the center of the disk, the remaining isometries are rotations. The volume of that group in some units is $2\pi$. Figuring out the precise constants involved is a bit messy though.
There is another subtlety with this calculation - the on-shell graviton is pure gauge and strictly speaking this amplitude is zero. But with appropriate limiting procedure you can extract the tension as the proportionality constant involved in a slightly off-shell amplitude (which is slightly ill-defined). This is another reason why the annulus calculation is cleaner.
-
Good, +1, but could you please add a comment about the counting of the ghost number of this disk diagram? Of course, the answer could be uninteresting by your saying that you may insert any PCO operators anywhere to cancel it but one should still know why, what the rules are. – Luboš Motl Apr 1 '11 at 20:56
@Luboš: No time at the moment, if you want to write a more complete answer, I'll delete mine. – user566 Apr 1 '11 at 21:03
Thanks for the answer, I think I understand why the CKG is finite. But wouldn't there still be a c\tilde{c} needed to fix the graviton vertex operator, which has a vanishing expectation value? @Lubos, is this what you're referring to? – user2888 Apr 1 '11 at 21:11
No, @Moshe, thanks for your kind offer but I would have to study it again because I don't have the full answer on the top of my head now. It was interesting enough for me to read another person's answer carefully, but not interesting enough for me to review all the ghost numbers of various diagrams and what to do with them haha. – Luboš Motl Apr 2 '11 at 12:56
1
Dear user2888, yes, exactly, I am talking about the required minimal number of $c$ and $\tilde c$ operators inserted somewhere on the disk which is needed for the amplitude to be nonzero. Note that the Veneziano amplitude vanishes for less than 3-4 open strings. I don't know what's the orthodox treatment right now but the $c,\tilde c$ simply have to be saturated by hand so that one gets a nonzero result because the right result is nonzero, and the residual conformal symmetry is not a problem because $U(1)$ that preserves the disk and the central point's vertex operator is compact. – Luboš Motl Apr 2 '11 at 13:02
show 2 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475098252296448, "perplexity_flag": "head"}
|
http://m-phi.blogspot.com/2012/03/is-arithmetic-necessary-condition-for.html
|
# M-Phi
A blog dedicated to mathematical philosophy.
## Thursday, 1 March 2012
### Is arithmetic a necessary condition for Gödel incompleteness?
This is really a question for Jeff, but I hope others will be interested as well. Here is the thing: next week NewAPPS will be hosting a symposium on a text by Paul Livingstone which presents a comparative analysis of Gödel’s incompleteness theorems, and Graham Priest and Derrida (yes indeed!) on diagonalization. It is an interesting text, even though some of the more metaphysical claims seem a bit over-the-top to me (as I will argue in my contribution to the symposium). But anyway, so I’ve been thinking about the whole Gödel thing again and how wide-ranging the conclusions drawn by Livingstone really are, and this got me thinking about the conditions that a system must satisfy for the Gödel argument to go through. It is clear that containing arithmetic is a sufficient condition for the argument to go through, as arithmetic allows for the encoding which then gives rise to the Gödel sentence.
But my question now is: is containing arithmetic a necessary condition in a system for the Gödel argument to go through? It seems to me that the answer should be negative. In fact, I recall that many years ago I heard Haim Gaifman saying that any other suitable encoding technique would be enough for the argument to run; arithmetization is just a particularly convenient encoding method. So my first question is whether this is indeed correct, i.e. that containing arithmetic is not a necessary condition for a Gödel incompleteness argument to take off. The second question is whether there are interesting examples of systems that can be proved to be incomplete by a Gödel argument even though they do not ‘contain arithmetic’ (I have the feeling that even the concept of ‘containing arithmetic’ might need to be clarified).
Thoughts, anyone?
#### 25 comments:
1. I confess to not really understanding the question; Gödel's incompleteness theorem is fundamentally about arithmetic. To quote Kleene's statement of the theorem:
"Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory" (Kleene, Mathematical Logic 1967, p. 250).
If you have a theory which is not capable of expressing basic arithmetic, then it doesn't make sense to speak of Gödel's results with respect to it, because the theorem doesn't apply.
1. The point is that you could run an analogous argument (encoding, diagonalization etc) for other theories/formal systems, so as to generalize the Godel's results which are indeed originally intended to be about arithmetic specifically. Something like using the Henkin method to prove the completeness of different logics, but then use encoding and diagonalization to prove the incompleteness of a formal system.
2. As far as I remember Gödel is about coding and decoding using arithmetic to be sure of uniqueness of the code. But it is a rare to code 'all' as if one can code 'all chinese eat rice' unless 'all' is just a formal expression.
3. I believe Sara's comment above is entirely correct. Moreover, let me state the obvious. The coding language, which is arithmetic, is only half of the story; there is of course a coded language, which is also arithmetic. It will contain primitive recursive functions (in particular arithmetic ones) and a provability relation in that language. If you forget about it, the whole diagonalization makes no sense: it is generated precisely by using arithmetic to express formal properties of arithmetic (in particular about provability). This will hold for any formal language that is as expressible as arithmetic, i.e. that contains it. In other words, yes, it is a necessary condition.
4. Not sure if it's relevant to your question, but there's a very interesting paper called "Undecidability Without Arithmetization" (http://philpapers.org/rec/GRZUWA) by Andrzej Grzegorczyk that answers (positively) a similar-sounding question with respect to undecidability, i.e.: "Are there interesting examples of systems that can be proved to be undecidable by a Gödel argument even though they do not ‘contain arithmetic’?". In the paper he proves the undecidability of a finitely axiomatizable first-order theory of concatentation on texts, i.e., strings in a formal language, and, hence the undecidability of first-order logic, in a metatheory that is itself based on a (richer) theory of concatenation that includes no arithmetic and, hence, in a metatheory that makes no use of arithmetization. As decidability in logic is standardly defined via arithmetization and recursive functions, the trick is coming up with an appropriate (and extensionally equivalent) notion of decidability in such a metatheory. He argues (persuasively, it seems to me) that this is a more natural approach to the question of decidability in logic, as formulas in logical languages are (intuitively) texts.
1. Chris, that's excellent! That's exactly the kind of thing I was looking for, so thanks so much for the reference. If anyone knows of other papers along these lines, I'd be very interested.
5. Anonymous
I'm sure you already think about it but your idea seems close to what Priest defends. For him, diagonalization and encoding are the sources of many paradoxes.
Perhaps you could find more technical papers from his bibliography, (see e.g. In Contradiction and Beyond the limits of thoughts).
Mathieu
6. Anonymous
There's a nice, general statement of the Second Incompleteness Theorem in Visser's paper "Can we make the Second Incompleteness Theorem coordinate free?" using the notion of interpretability. It says:
No recursively enumerable theory U interprets Q + Con(U).
Interpretability might thus be seen as an elaboration of the notion of "containment" between theories.
7. It's nice of you to mention me, and agree with you Catarina (and Gaifman too, I suppose). A theory of syntax, with concatenation, such as Grzegorczyk syntax theory TC that Chris mentions is incomplete: on Visser's formulation, TC has some minimal strings (including the null string) and concatenation. So, I think the essence of incompleteness of a theory is a certain kind of syntactical/arithmetic/computational interpretability. It will apply to certain theories of syntax and to certain theories of numbers if they interpret say Q (arithmetic) or TC (syntax).
For the case of incompleteness of a formal system, diagonalization is syntactic. The diagonalization of an expression is the result of concatenating its quotation name with (or substituting its quotation name into some free position of) itself. If $\phi$ is a formula with $x$ free, its diagonalization is $\phi[x/\ulcorner \phi \urcorner]$. E.g., the diagonalization of $x = \underline{0}$ is $\ulcorner x = \underline{0} \urcorner = \underline{0}$. (Well, Tarski introduced a slightly different definition, which is a bit easier to work with: the diagonalization of $\phi$ is $\exists x(x = \ulcorner \phi \urcorner \wedge \phi)$. Amounts to the same thing.)
Quine's version is something like:
"yields falsehood when prefixed by its own quotation" yields falsehood when prefixed by its own quotation.
In Quine's Mathematical Logic (1940), he emphasizes that this all applies to what he calls "protosyntax", but unfortunately, he never really properly specified an axiom system. The last few years has seen some interesting work on this. Google for "Visser, syntax, TC".
But there's a more general notion of diagonalization though, going back to Cantor's original diagonal argument; I think Priest sees this as lying behind the incompleteness theorems and semantic paradoxes.
The thing is that arithmetic, syntax and computation are all heavily intertwined. It's what they have in common which is at the heart of the phenomenon, and this can be clarified by thinking of interpretability (Visser again). And, of course, Gödel himself stated that the "true reason" for incompleteness is in fact the undefinablity of truth, which he discovered prior to discovering the incompleteness results.
Jeff
8. Thanks everyone for the comments, and Jeff in particular. I knew you'd have the answer! :) This is is really cool stuff: I like in particular your "It's what they have in common which is at the heart of the phenomenon", so ultimately my question is a quest for this je ne sais quoi that they have in common.
And obviously I should have thought of Albert Visser (well, that holds of pretty much any technical question of this nature: he's an oracle). I think I'll drop him a note and ask whether he would be interested in joining the discussion.
9. Re Jeff's remark about diagonalization, Gaifman has a nice little paper on his web site showing how Gödel's incompleteness proof "arises naturally from Cantor’s diagonalization method":
http://www.columbia.edu/~hg17/Diagonal-Cantor-Goedel-05.pdf
Also, expanding on the point about interpretability, Q is interpretable in TC. (I have only read this; have not studied the proof.)
1. I took a course on Godel's theorem with Gaifman in 2003, when I was visiting NYC as a grad student. I realize now more and more that my take on Godel's results are still very much shaped by Gaifman's, including the points he makes in this paper. But it was all sort of 'dormant', so it's good to brush it up a bit!
10. There is a paper by John Bell titled "Incompleteness in a General Setting" that may be helpful as well. I happen to have printed it off the other day, but haven't read it yet.
1. Thanks, didn't know about this paper. John Bell happens to have been Graham Priest's supervisor, so there might be some interesting biographical/conceptual connection here too, in the background of Graham's own thinking about incompleteness, diagonalization and paradox.
11. Anonymous
" It is clear that containing arithmetic is a sufficient condition for the argument to go through, as arithmetic allows for the encoding which then gives rise to the Gödel sentence."
Nit picking here... Dan Willard's systems have arithmetic of a sort, and don't fall into the incompleteness. (His "addition" and "multiplication" are not total functions) I personally feel that they do help clarify where the boundry of "Necessary" conditions are.
12. There is also some older work by Ray Smullyan along the lines of Grzegorczyk' paper, showing that a theory of concatenation is essentially incomplete (I am working from memory, so I don't have a precise reference). And of course, any theory of hereditarily finite sets interprets Q and is therefore incomplete. In fact, Visser has shown that even a theory as weak as adjunction can be bootstrapped to something for which incompleteness can be shown (adjunction is the function f(x,y) = x union {y}).
13. Here's another thought. One version of G1 (from Boolos & Jeffrey) is:
(G1) No recursively axiomatizable extension of Q is both consistent and complete.
If this is to apply to a theory $T$, it follows that $T$ have no finite models. For suppose $\mathcal{M} \models T$ is finite. Let $T^{+} = Th(\mathcal{M})$. Then $T^{+}$ is a complete consistent recursively axtiomatizable extension of $T$.
E.g., let $T$ be all validities. Then $T$ has a 1-element model, say $\mathcal{M}$. Then $Th(\mathcal{M})$ is complete, consistent and recursively axiomatizable (it is axiomatized by a single non-logical axiom: $\forall x \forall y(x = y)$).
The theories Q, TC and AST (Visser's adjunctive set theory, which Aldo mentions) have no finite models.
Jeff
14. "it is axiomatized by a single non-logical axiom: $\forall x \forall y(x=y))$."
Oops, too quick. That's not quite right. It's axiomatized by $\forall x \forall y(x=y))$, plus axioms like $\exists x Px$ or $\forall x \neg Px$, or $\exists x Pxx$ or $\forall x \neg Pxx$, etc., for each primitive predicate $P$, saying if the extension of $P$ is empty or not.
15. One thing worth pointing out/discussing is that although one can prove analogues of the incompleteness theorems in theories of concatenation or tree structures or whatever, it isn't obvious that such theories (really: all "sufficiently strong" formal theories) aren't simply (or don't simply contain) "arithmetic in disguise" anyway. I'm pretty sure I'm getting this from my own conversations with Haim, as a matter of fact.
To put it even more sloppily, it seems that arithmetic, formal syntax, and computability are all intimately tied together, ontologically speaking. There seems to be a sense in which theories of all three are "about the same sorts of things".
So long as we're plugging Gaifman papers, I strongly recommend "On Ontology and Realism in Mathematics", which addresses some of these issues. It's posted on his website. I think it's the best paper I've read in the Philosophy of Mathematics in a very long time, but I'm a bit biased.
16. Nate, thanks for your comment, and the suggestion of Gaifman's article. I read it a while back and thought it was interesting.
"it seems that arithmetic, formal syntax, and computability are all intimately tied together, ontologically speaking."
Right - yes, I think that gets to the heart of it.
(Just up above I said,
"The thing is that arithmetic, syntax and computation are all heavily intertwined. It's what they have in common which is at the heart of the phenomenon, and this can be clarified by thinking of interpretability".)
1. Ah, of course. I read your comment fast and ended up saying something very similar. Ummm...I agree.
17. Panu Raatikainen
I'd like to emphasize that the theory does not have to have anything to do with arithmetic; it can deal (its intended intepretation) e.g. with grandmothers and other ancestor, if you like.
All that matters that its language is formalized, and that elementary arithmetic, as an uninterpreted formal theory, can be relatively interpreted in it. Then the theory is incomplete and essentially undecidable.
1. Yes, definitely - interpretability is the key issue - and whatever it is that arithmetic, syntax and computation all have in common.
An example of the kind of thing you have in mind is the Field/Shapiro/Burgess debate in the early 80s concerning the conservativeness of adding mathematics to a physical geometric theory (which interprets arithmetic) N which doesn't prove (the coded formulation of) its own consistency, Con(N). Adding a set-theoretic comprehension principle leads to a proof of this geometric statement.
Jeff
18. Luis Estrada-González
I find Gaifman treatment very similar to William Lawvere's 1969 category theoretic approach ("Diagonal arguments and Cartesian closed categories", http://www.tac.mta.ca/tac/reprints/articles/15/tr15.pdf). The similarities can be made more visible consulting Noson Yanofsky's 2003 paper in the BSL explaining Lawvere's work in non-categorial terms ("A universal approach to self-referential paradoxes", a preprint is available here: http://arxiv.org/pdf/math/0305282v1.pdf).
1. Many thanks for this, Luis. Really interesting.
Subscribe to: Post Comments (Atom)
• 2 hours ago
• 1 day ago
• 1 day ago
• 2 days ago
• 2 days ago
• 5 days ago
• 1 week ago
• 1 week ago
• 1 week ago
• 3 weeks ago
• 4 weeks ago
• 5 weeks ago
• 1 month ago
• 2 months ago
• 3 months ago
• 1 year ago
## Archive
• May (3)
• April (10)
• March (19)
• February (9)
• January (14)
• December (10)
• November (5)
• October (7)
• September (3)
• August (12)
• July (13)
• June (10)
• May (5)
• April (5)
• March (11)
• February (5)
• January (8)
• December (6)
• November (3)
• October (10)
• September (13)
• August (12)
• July (15)
• June (14)
• May (23)
• April (21)
• March (14)
• February (2)
• September (1)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467865824699402, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/8158?sort=oldest
|
a question about Gromov-Witten invariant
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Do the Gromov-Witten invariants count the morphisms from a curve to a variety over $\mathbb{C}$?
-
4
fpqc, don't be a jerk. I'm sure you're aware that many mathematicians and grad students aren't native English speakers, cut them some slack. – Charles Siegel Dec 8 2009 at 5:40
3
@fpqc: there are many non-native speakers of English in the site, myself included. Instead of making fun of people, you could suggest a way of rewriting it in a manner that is more correct. – Alberto García-Raboso Dec 8 2009 at 5:41
You're right. I was trying to point out that the way the question was worded didn't make any sense while trying to also be funny. I'm sorry, HYYY and Alberto. I didn't mean anything by it. – Harry Gindi Dec 8 2009 at 6:15
2 Answers
HYYY, the answer is a qualified "yes." I'm not an expert (read a few papers at the beginning of the year before deciding that enumerative geometry wasn't going to be my area) and I know that the answer is a definitive yes for rational curves in homogeneous spaces. However, more generally than that, the Kontsevich moduli space of stable maps isn't a manifold, but merely an orbifold, and so has rational cohomology but not integral. So you get rational numbers. Worse, if your curves aren't rigid, they might count negatively, so there will be rational and negative Gromov-Witten invariants, in general.
This paper is a great introduction, and works out the first big theorem that GW invariants proved, in a case where they do, in fact, count curves.
-
For more on negative GW invariants, see this question mathoverflow.net/questions/7823/… – Kevin Lin Dec 8 2009 at 12:57
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
To further qualify Charles's yes: that these moduli spaces are orbifolds instead of manifolds does result in rational numbers, but this is quite natural and not much of a problem. The orbifolds here are just resulting because we're counting things that have automorphisms, here, for instance, the map from P^1 to P^1 given by the polynomial z^d has Z_d as its automorphisms: we can multiply a point in P^1 by a dth root of unity and not change where it maps to). Whenever you count things with automorphisms it's quite natural to count each thing weighted by 1/(the size of its automorphism groups), or to rigidify the things we're counting by adding some kind of extra structure so they no longer have automorphisms.
As an example: Cayley's formula that there are n^(n-2) trees on n labeled vertices - the labeling of the vertices guarantees that the objects we're counting do not have automorphisms, and we get an integer - we've rigidified the problem. If we wanted to count the number of trees on n unlabeled vertices, the problem is much more difficult. However, if we weight each such tree by the inverse of its automorphism group, then the problem has a nice answer again: it's simply n^(n-2)/n!. My point is: the rationality is not the ugly part of what's going on.
The ugly part is that these moduli spaces of maps are not even orbifolds: they have much worse singularities, and can have different components of different dimension. From deformation theory, we expect these moduli spaces to have a certain dimension. To get a finite number, we put conditions on the map that cut this dimension down until its zero. Geometrically, you should think of each of these conditions as a cycle on the moduli space, and we want to intersect them. Doing this intersection naively doesn't work when the space is singular, and furthermore the moduli space might be smooth but have a dimension different than what we were expecting. But a lot of hard work shows that these spaces have a "virtual fundamental class" of the dimension that we expect, and using this we can proceed as above to get a number. But in doing this, we've lost the sense in that we're counting something.
But it strikes me that perhaps that's not necessarily what the questioner was after; most typically this is done for smooth, projective varieties of C, but somehow the part that really matters is the symplectic structure: Gromov-Witten invariants can be defined for any symplectic manifold - they will all have almost complex structures J that "play nicely" with the symplectic form omega, and we're "counting" these maps. Or: all this works for orbifolds (which are really smooth objects), but not singular spaces.
The over $\mathbb{C}$ bit is pretty necessary, I think - people have looked a little at doing in positive characteristic, but one big problem is that the orbifold stuff, which I was just telling you isn't really a problem, can be a big problem in positive characteristic if the order of your automorphisms aren't coprime with the characteristic.
-
Paul, my understanding is that it's still conjectural that algebraic GW and symplectic GW give equivalent info, along with pretty much every other curve-counting scheme. If they've been shown to be the same (even, say, for genus 0, where things don't get SO bad) could you provide a reference? – Charles Siegel Dec 8 2009 at 12:50
Charles: See this paper by Siebert arxiv.org/abs/math/9804108 and this paper by Li-Tian arxiv.org/abs/alg-geom/9712035 – Kevin Lin Dec 8 2009 at 12:55
Charles: I think there has been a lot of recent progress in showing equivalence of Gromov-Witten, Donaldson-Thomas, and Pandharipande-Thomas theories, but I don't know much about this stuff and I don't know any references off the top of my head. – Kevin Lin Dec 8 2009 at 13:00
The equivariant GW/DT correspondence has been proven for toric 3-folds in MOOP: front.math.ucdavis.edu/0809.3976. There's been a huge explosion in the study of the DT-type sheaf theories recently - the names here are Joyce and Kontsevich-Soibelman - but I know less about this than I should, and any link I'd give you would be almost random. – Paul Johnson Dec 8 2009 at 13:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537027478218079, "perplexity_flag": "middle"}
|
http://ulissesaraujo.wordpress.com/2011/01/01/a-search-algorithm/
|
Ulisses Costa Blog
A* search algorithm
1 01 2011
Time to talk about efficiently pathfinding and graph traversal algorithms. The first algorithm related with graphs pathfinding I learned was Dijkstra’s algorithm and I remember the feeling for learning how to find the shortest path (minimal cost to be generic) in a graph, it was amazing to me, so that I learned a few more and did a simple academic GPS core system.
Dijkstra’s algorithm is truly beautiful, but unfortunately the complexity it too high to be considered time efficient.
If you want to go from point $A$ to $B$ Dijkstra’s wil search all the surrounding nodes, as you can see in this image:
So I start to find more efficient implementations for the pathfinding problem and I discover A Star.
A Star algorithm
I find this algorithm in wikipedia, and I will paste it here because there are a few things I want to explain.
function A*(start,goal)
// The set of nodes already evaluated.
closedset := the empty set
// The set of tentative nodes to be evaluated.
openset := set containing the initial node
// The map of navigated nodes.
came_from := the empty map
// Distance from start along optimal path.
g_score[start] := 0
h_score[start] := heuristic_estimate_of_distance(start, goal)
// Estimated total distance from start to goal through y.
f_score[start] := h_score[start]
while openset is not empty
x := the node in openset having the lowest f_score[] value
if x = goal
return reconstruct_path(came_from, came_from[goal])
remove x from openset
add x to closedset
foreach y in neighbor_nodes(x)
if y in closedset
continue
tentative_g_score := g_score[x] + dist_between(x,y)
if y not in openset
add y to openset
tentative_is_better := true
elseif tentative_g_score < g_score[y]
tentative_is_better := true
else
tentative_is_better := false
if tentative_is_better = true
came_from[y] := x
g_score[y] := tentative_g_score
h_score[y] := heuristic_estimate_of_distance(y, goal)
f_score[y] := g_score[y] + h_score[y]
return failure
function reconstruct_path(came_from, current_node)
if came_from[current_node] is set
p = reconstruct_path(came_from, came_from[current_node])
return (p + current_node)
else
return current_node
If you are familiarized with Dijkstra’s algorithm you probably noticed a lot of coincidences in the algorithm. You are right!
So, we have some structures to use: the $closedset$ is the set of nodes already evaluated by A Star, $openset$ containing the nodes being evaluated, $g\_score$ being the commulative distance to this node, $h\_score$ being the heuristic result for this node (I will explain this in a minute) and $f\_score$ being the sum of $g$ and $h$.
A good thing about A Star is the nodes it needs to search until find the Best-First-Search path:
Seems good right? All the juice of this algorithms lies on the heuristic function, I like the result of the Manhattan distance, but you can read this blog post and find out more about this subject.
Basically this heuristic is empirical knowledge, this particular Manhattan distance calculates the distance from point $A$ to point $B$ in a grid.
I tried to use it in a Geometric graph and it worked fine too!
Point of view
I’m particularly interested in optimizing this algorithm as much as possible and will be doing that using C++.
So, I spend 30 minutes observing the code and I was already very familiarized with Dijkstra’s. I think as a Software Engineer you have to find good algorithms to your problem, deeply understand them, but your job is not done after that! After find a good solution and understand it to the point you are able to explain it to a non-CS person you have to *observe* the code, talk with it and, believe me, don’t make the literal implementation of it, It will be slow, or at least it probably could be more efficient!
So, let’s forget about the problem this algorithm solves and try to identify inefficient chunks in the algorithm. The first thing that cames to my mind is: we need to have a bunch of structures to keep a lot of node related information, a lot of vectors, sets and so.
So, let’s identify what I mean by that:
function A*(start,goal)
...
while openset is not empty
...
x := the node in openset having the lowest f_score[] value
...
foreach y in neighbor_nodes(x)
...
g_score[y] := tentative_g_score
h_score[y] := heuristic_estimate_of_distance(y, goal)
f_score[y] := g_score[y] + h_score[y]
With this chunk of code I want to highlight that we are iterating throw all neighbors for each $x$ belongs to $openset$, then we get the minimum $f$ from $openset$ and change the $g,h,f$ arrays for $y$.
The first thing hitting me is, I can make $openset$ a MinHeap and I can keep all the information for each $*\_score$ in the node itself, like so I won’t be wasting time in accessing positions in a set and I just make a question to the node object.
So, I start to put all this information in the nodes side and keep only track of the locally created minHeap. This is the result:
LinkedList<Edge*> AStar::findPath(Node* start, Node* goal) {
MinHeap<Node*, Comparator> openSet;
bool tentative_is_better = false;
float h = heuristic(start,goal);
start->setStatus(NODE_OPEN);
start->getOrSetScore().set(NULL, h,0,h);
openSet.insert(start);
while(!openSet.isEmpty()) {
Node x = openSet.getMin();
AStarScore & xScore = x->getOrSetScore();
if(x == goal) return process(ret,x);
openSet.removeMin();
x->setStatus(NODE_CLOSE);
ArrayList<Edge> & neighbors = x->getEdges();
for(int i = 0; i < neighbors.getLength (); i++) {
Node *y = neighbors [i]->getDstNode();
AStarScore & yScore = y->getOrSetScore();
GeoNodeStatus yStatus = y->getStatus();
if(yStatus == NODE_CLOSE) {
continue;
}
float tentative_g_score = xScore.g_score + x->getEdge(y)->getCost();
if(yStatus != NODE_OPEN) {
y->setStatus(_version,GEONODE_OPEN);
tentative_is_better = true;
} else if(tentative_g_score < yScore.g_score) {
tentative_is_better = true;
} else {
tentative_is_better = false;
}
if(tentative_is_better) {
yScore.parent = x;
yScore.g_score = tentative_g_score;
yScore.h_score = heuristic(y,goal);
yScore.f_score = yScore.g_score + yScore.h_score;
openSet.insert(y);
}
}
}
return process(ret,goal);
}
Where $process$ is the function that iterate throw the nodes, starting with $goal$ and go the parent until $start$ node and construct the path in reverse order, from $start$ to $goal$.
One side note, A star is so close to Dijkstra, that if you make you $heuristic$ function return always zero, A star will work just like Dikjstra (the first image).
Basically the ideas I want to transmit here is the A star algorithm, it implementation and (the most important one) the process of how to look to an algorithm. Summarizing I think a Software Engineer could not just implement algorithms and substitute all the words $insert(lista,elem)$ in the algorithm for $list.push\_back(elem)$ in C++ and so. I think we should look for an algorithm as a valuable aid and if we convert it to code we must improve it. Ultimately copying an algorithm to code is a good opportunity to leave our personal touch in the code.
Acknowledgments
Images from here.
Information
• Date : January 1, 2011
• Tags: algorithm, astar, c/c++, code, English, graph, programming
• Categories : Uncategorized
Cancel
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 27, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372345805168152, "perplexity_flag": "middle"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.