http://mathhelpforum.com/advanced-statistics/116955-finding-constant-t-distribution.html
# Thread: finding a constant for t distribution

1. ## finding a constant for t distribution

I have a homework problem asking me to find the constant c that will make the statistic have a t-distribution. The rv X1...X5 are iid with a standard normal distribution. The statistic is: T = c(X1 + X2)/sqrt(X3^2+X4^2+X5^2). I see that the denominator looks like a Chi-Square with 3 df and the numerator could be the summation of X. I know I need X ~ N(0,1) in the numerator and U ~ Chi-square(r df) in the denominator for a t-distribution t(r df). I want to square both sides to start, but that looks more like an F-distribution. Any hints would be appreciated. Thanks, Fred1956

2. Let $A=X_1 + X_2$ and $B=X_3^2+X_4^2+X_5^2$. $A\sim N(0,2)$, hence ${A\over \sqrt{2}}\sim N(0,1)$, and $B\sim \chi^2_3$. THUS ${{A\over\sqrt{2}}\over\sqrt{{B\over3}}}\sim t_3$. You could have squared everything and obtained an $F_{1,3}$. It looks like $c=\sqrt{3/2}$.

3. ## finding a constant for t distribution

I see you get the $A\sim N(0,2)$ by summing the variances of X1 and X2. But how do you get the $A/\sqrt{2}$? I would assume from another rule for variance, but I am not sure which. Thanks, Fred1956

4. $V(aX+b)=a^2V(X)$, but that doesn't give you normality. You should know that, if $X\sim N(\mu,\sigma^2)$, then ${X-\mu\over\sigma}\sim N(0,1)$, which is what I used. I'm sure you covered that if you're examining t distributions.

5. ## finding a constant for t distribution

Thanks. I am trying to understand the math operations that apply. Do you treat the tilde (~) like an equal sign, so that you can perform math operations across it? It looks like you subtracted mu from both sides and then divided by sigma squared, but you ended up with sigma instead of sigma squared under the X - mu. Can you clarify the math operations for me? Fred1956
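As a numerical sanity check of $c=\sqrt{3/2}$ (a minimal sketch; the sample size, seed, and quantile grid are arbitrary choices for illustration, not from the thread), one can simulate the statistic and compare its quantiles with those of $t_3$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal((n, 5))     # X1..X5 iid N(0,1), one row per replication
c = np.sqrt(3 / 2)
T = c * (X[:, 0] + X[:, 1]) / np.sqrt((X[:, 2:] ** 2).sum(axis=1))

qs = [0.10, 0.25, 0.50, 0.75, 0.90]
print(np.quantile(T, qs))           # sample quantiles of the simulated statistic
print(stats.t.ppf(qs, df=3))        # theoretical quantiles of t with 3 df
```

The two printed rows should agree to a couple of decimal places, consistent with $T\sim t_3$.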
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404823184013367, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/133630/divisor-summatory-function-for-squares
# Divisor summatory function for squares

The Divisor summatory function is a function that is a sum over the divisor function. $$D(x)=\sum_{n\le x} d(n) = 2 \sum_{k=1}^u \lfloor\frac{x}{k}\rfloor - u^2, \;\;\text{with}\; u = \lfloor \sqrt{x}\rfloor$$ http://en.wikipedia.org/wiki/Divisor_summatory_function#Dirichlet.27s_divisor_problem

I am looking for a formula or an efficient algorithm (complexity less than $O(x)$) to calculate the sum of the divisors of the squares, $$E(x)=\sum_{n\le x} d(n^2)$$ e.g. $$E(3)=d(1)+d(4)+d(9)=1+3+3=7$$

- What's "met"? Is that supposed to say "with"? Also, note that if you write out a word like that in $\TeX$, it gets interpreted as juxtaposed variable names and therefore italicized. To get proper formatting for text inside $\TeX$, use `\text{...}`. Also note that you can get displayed equations by enclosing them in double dollar signs. Displayed equations look nicer and are easier to read; single dollar signs are intended only for inline equations. – joriki Apr 18 '12 at 23:52
- Your requirement $O(n)$ makes no sense, since $n$ is a dummy summation variable. Do you mean $O(x)$? – joriki Apr 18 '12 at 23:53
- "met" is Dutch for "with"; I edited the text. – wnvl Apr 19 '12 at 0:00
- Two more $\TeX$ hints: You can get "$\TeX$" using `\TeX`, and you can see the $\TeX$ commands for anything you see on this site by selecting "Show Math As ... TeX Commands" in the context menu (right-click on the formula). – joriki Apr 19 '12 at 0:09

## 1 Answer

The number of divisors of a square is the divisor function convolved with the square of the Möbius function (see $g(n)$ here) $$e(n)=d(n^2)=(d\star\mu^2)(n)$$ and since $$\mu^2(n)=\sum_{d^2|n}\mu(d)$$ it follows that $$e(n)=\sum_{a \left| n \right.} d \left( \frac{n}{a} \right) \sum_{b^2 \left| a \right.} \mu \left( b \right)$$ which can be simplified and rewritten as $$e(n)=\sum_{b^2 \left| n \right.} \mu \left( b \right) d_3 \left( \frac{n}{b^2} \right)$$ where $d_3(n)$ is the number of ways that a given number can be written as a product of three integers. This identity can be verified by noting that $e(n)$ is multiplicative and checking at prime powers, which yields $e(p^a)={2a+1}$; compare $d(p^a)={a+1}$. In particular, note that $d_3(p^a)={\binom{a+2}{2}}$ (see $d_k$ here).

Then the summation of the number of divisors of the square numbers can be computed as $$E(x)=\sum_{n\le x} d(n^2) =\sum_{n \leq x} \sum_{b^2 \left| n \right.} \mu \left( b \right) d_3 \left( \frac{n}{b^2} \right)$$ which can be reorganized as $$E(x)=\sum_{b \leq \sqrt{x}} \mu \left( b \right) \sum_{n \leq x / b^2} d_3 \left( n \right)$$ $$E(x)=\sum_{a \leq \sqrt{x}} \mu \left( a \right) D_3 \left( \frac{x}{a^2} \right)$$ where $D_3$ is the summatory function for $d_3$. Since $D_3(x)$ can be computed in $O(x^{2/3})$ time using the three-dimensional analogue of the hyperbola method, $E(x)$ can therefore also be computed in $$O\left(\int_{a=1}^{\sqrt{x}}{\left(\frac{x}{a^2}\right)}^{2/3}da\right)=O(x^{2/3})$$ which is better than $O(x)$, as desired. By taking an $O(x^{1/3})$ algorithm to compute $D(x)$ and using it for $D_3(x)$, this bound can be reduced to $O(x^{5/9})$. You can find such an algorithm and one formula for $D_3(x)$ in my article here, which uses the notation $T(n)$ and $T_3(n)$.
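The identity $E(x)=\sum_{b \leq \sqrt{x}} \mu(b)\, D_3(\lfloor x/b^2\rfloor)$ is easy to verify by machine for small $x$. A minimal sketch (the $D_3$ here is a deliberately naive quadratic-time version used only for checking, not the fast hyperbola-method implementation the answer refers to; the Möbius function is implemented from `factorint` rather than relying on a specific SymPy export):

```python
from math import isqrt
from sympy import factorint, divisor_count

def mobius(n):
    """Möbius function via factorization."""
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def d3(n):
    """Number of ordered triples (a, b, c) with a*b*c = n, i.e. (1 * d)(n)."""
    return sum(divisor_count(n // a) for a in range(1, n + 1) if n % a == 0)

def D3(x):
    """Naive summatory function of d3 -- for checking only."""
    return sum(d3(n) for n in range(1, x + 1))

def E_direct(x):
    return sum(divisor_count(n * n) for n in range(1, x + 1))

def E_identity(x):
    return sum(mobius(b) * D3(x // (b * b)) for b in range(1, isqrt(x) + 1))

for x in (3, 10, 50):
    assert E_direct(x) == E_identity(x)
print(E_direct(3))  # 7, matching the example in the question
```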
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8961687088012695, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/165246/are-the-rings-kt3-t4-t5-and-kt4-t5-t6-gorenstein
# Are the rings $k[[t^3,t^4,t^5]]$ and $k[[t^4,t^5,t^6]]$ Gorenstein?

Here is question 18.8 of Matsumura's Commutative Ring Theory. It asks whether the rings

1. $k[[t^3,t^4,t^5]]$,
2. $k[[t^4,t^5,t^6]]$

are Gorenstein. I got that 1) is not Gorenstein, but 2) is Gorenstein (by computing the socle). I just wanted to check whether I am correct. I don't need the answer necessarily; a yes or a no will suffice. Thanks.

- You should explain what your reasoning was for getting your answers. That way, people won't tell you things you already know, and if you were confused about something, people will know to explain it. – Zev Chonoles♦ Jul 1 '12 at 15:00
- The above rings are 1-dimensional, so I went modulo a system of parameters and computed the socle; if the socle is a 1-dimensional vector space, then the ring is Gorenstein. I don't necessarily need the complete solution; basically I am hoping someone could tell me whether I am correct or not. – messi Jul 1 '12 at 15:08
- You are right: 1. is not Gorenstein, and 2. is Gorenstein. – YACP Jul 1 '12 at 16:29
- Thanks navigetor23, you can write what you wrote above as an answer and I will accept it. This is useful, as Matsumura does not give an answer to this question. – messi Jul 1 '12 at 16:45
- You may also write any other results that enable us to conclude whether $k[[t^{a_1},t^{a_2},t^{a_3}]]$ is Gorenstein or not, where the $a_i$ are non-negative integers. – messi Jul 1 '12 at 17:23

## 1 Answer

Let $k$ be a field and $R$ a graded $k$-algebra. Then $R$ is Gorenstein iff $R_m$ is Gorenstein, where $m$ is the irrelevant maximal ideal of $R$. (This is exercise 3.6.20(c) from Bruns & Herzog.)

If $R$ is a Noetherian local ring, then $R$ is Gorenstein iff its completion $\widehat{R}$ is Gorenstein. (This is Proposition 3.1.19(c) from Bruns & Herzog.)

Let $k$ be a field and $S$ a numerical semigroup. Then $k[S]$ is Gorenstein iff $S$ is symmetric. (This is Theorem 4.4.8 from Bruns & Herzog.)

The examples from Matsumura are completions of affine semigroup rings with respect to their irrelevant maximal ideals. For instance, $k[[t^3,t^4,t^5]]$ is Gorenstein iff $k[t^3,t^4,t^5]$ is Gorenstein iff $S=\langle 3,4,5\rangle$ is symmetric, and this is not the case. On the other hand, in the second example $S=\langle 4,5,6\rangle$ is symmetric.

- What is the conductor of $\langle 3,4,5\rangle$ and of $\langle 4,5,6\rangle$? – messi Jul 1 '12 at 17:49
- For the first the conductor is 3, and for the second it is 8. – YACP Jul 1 '12 at 17:50
- Thanks, I will check and see if I get 3 and 8 respectively; if not, I might ask for further help. Thanks again, this answer is quite helpful for me. – messi Jul 1 '12 at 17:54
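The symmetry criterion in the answer is easy to test by machine. Below is a minimal sketch (the search bound `max(gens) ** 2` is a crude assumption that happens to be large enough for these small generating sets): a numerical semigroup $S$ with Frobenius number $F$ is symmetric iff, for every $0 \le n \le F$, exactly one of $n$ and $F-n$ lies in $S$.

```python
def semigroup(gens, limit):
    """Elements of the numerical semigroup <gens> up to limit."""
    S = {0}
    changed = True
    while changed:
        changed = False
        for s in list(S):
            for g in gens:
                if s + g <= limit and s + g not in S:
                    S.add(s + g)
                    changed = True
    return S

def is_symmetric(gens):
    limit = max(gens) ** 2          # crude bound, enough for these examples
    S = semigroup(gens, limit)
    F = max(n for n in range(limit) if n not in S)  # Frobenius number
    return all((n in S) != (F - n in S) for n in range(F + 1)), F

print(is_symmetric((3, 4, 5)))  # (False, 2) -> not Gorenstein; conductor 3
print(is_symmetric((4, 5, 6)))  # (True, 7)  -> Gorenstein; conductor F+1 = 8
```

The output matches the conductors 3 and 8 stated in the comments.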
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9149571061134338, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/259185/non-constant-entire-function/259188
# Non-constant entire function

Does there exist a non-constant entire function $f$ which is constant for $|z|<1$?

## 1 Answer

No. This follows from the Identity Theorem: $f$ agrees with a constant function on the open disc $|z|<1$, a set which certainly has a limit point, so $f$ equals that constant on all of $\mathbb{C}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8225389122962952, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/22472/nested-sequences-of-balls-in-a-banach-space
# Nested sequences of balls in a Banach space

This seems to be a fairly easy question, but I'm looking for new points of view on it and was wondering if anyone might be able to help. (By the way, this question does come from homework, but I've already solved and handed it in, and I'm posting this out of interest, so no HW tag.)

Let $B_n=B(x_n,r_n)$ be a sequence of nested closed balls in a Banach space $X$. Prove that $\bigcap_1^\infty B_n\neq\emptyset$.

As I said before, it should be rather simple. When the radii decrease to 0, it's just a matter of selecting any sequence of points in $B_n$; it must be Cauchy, and the limit is in the intersection. My question is what to do when the radii do not decrease to 0. I got some tips about multiplying the balls by a sequence of decreasing scalars, or reducing the radii so that they decrease to 0, but found too many pathological cases for both methods. Finally, I used a geometric argument (which I've shown to work in any normed space): if $B(x_1,r_1)\subset B(x_2,r_2)$ then $\| x_1-x_2\|\leq|r_1-r_2|$. This turned out to be some kind of technical catastrophe, but it worked... Still, if anyone knows of a more elegant solution, I'd love to hear about it. Thanks.

- What's so catastrophic about the geometric argument? All you need is that the affine line spanned by $x_1$ and $x_2$ is isometrically isomorphic to the ground field; then it's just a 1d (or 2d, if your space is complex) picture. – Chris Eagle Feb 17 '11 at 9:27
- "[...] be a sequence of nested in a banach space". Seems like you missed something there? Should it be "nested closed balls"? – kahen Feb 17 '11 at 9:50
- Yeah, it's a nested sequence, as the title suggests :) I'm changing it. – kneidell Feb 17 '11 at 9:56
- It's not true in general. That property is called spherical completeness and fails for the p-adic complex numbers, for example. – George Lowther Feb 17 '11 at 10:05
- @George: Perhaps the simplest example of a complete metric space where this fails is the natural numbers with the metric $d(m,n)=1+1/(\min (m,n))$. – Chris Eagle Feb 17 '11 at 11:35

## 1 Answer

I don't know if this is more elegant, but it's about the best I can come up with at the moment, and probably essentially the same as your argument.

Consider first the situation $B_{\leq r}(x) \subset B_{\leq s}(y)$. It is easy to see that $r \leq s$.

Claim. $\|y - x\| \leq s - r$.

Proof. If $x = y$ there is nothing to prove, so let's assume $x \neq y$. The point $z = x - r \frac{y-x}{\|y - x\|}$ belongs to $B_{\leq r}(x)$ and hence also to $B_{\leq s}(y)$. Therefore $\|y - z\| \leq s$. On the other hand, $$y - z = y - x + \frac{r}{\|y - x\|} (y - x) = \underbrace{\left(1 + \frac{r}{\|y - x\|}\right)}_{\lambda} (y - x),$$ so $s \geq \lambda \|y - x\| = \|y - x\| + r$ and hence $\|y - x\| \leq s - r$.

This means that a nested sequence of closed balls $B_{\leq r_{n}}(x_{n})$ has the following properties:

1. The sequence $r_{n}$ is monotonically decreasing, hence converges to some $r$.
2. If $N$ is such that $r_{N} \leq r + \varepsilon$, then the above claim implies that for all $n\geq m \geq N$ we have $r_m - r_n \leq \varepsilon$, so $\|x_{m} - x_{n}\| \leq \varepsilon$ because $B_{\leq r_{n}}(x_{n}) \subset B_{\leq r_{m}}(x_{m})$.

In other words, the centers $x_{n}$ form a Cauchy sequence, and their limit point $x$ must belong to $\bigcap_{n = 1}^{\infty} B_{\leq r_{n}}(x_{n})$.

Added: As Jonas pointed out, the argument can be made even simpler and doesn't need completeness: Suppose $r_{n} \to r \gt 0$. Then there is $N$ such that $r_{N} \leq 2r$. Then for all $n \geq N$ we have $r \leq r_{n} \leq r_{N} \leq 2r$, so $r_{N} - r_{n} \leq r$, and the claim implies that $\|x_{n} - x_{N}\| \leq r \leq r_{n}$, so $x_{N} \in \bigcap_{n = 1}^{\infty} B_{\leq r_{n}} (x_{n})$.

- @kneidell: While writing this I had forgotten about the fact that you've already had this solution. I don't think there is a much easier way. – t.b. Feb 17 '11 at 12:19
- If the limit $r$ of the radii is positive, then $r_N<2r$ for some $N$, and then $x_N$ will be in every subsequent ball (by the claim). – Jonas Meyer Feb 20 '11 at 4:33
- @Jonas: Right. So you needn't even use completeness in that case. Nice! This hadn't occurred to me before, thanks! – t.b. Feb 20 '11 at 4:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470722079277039, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/204786-proving-disjunction-statement.html
# Thread: Proving a disjunction statement

1. ## Proving a disjunction statement

Hi. I have to prove a simple proposition, but I am running into some problems: $(q \Rightarrow r) \vee (r \Rightarrow q)$

Now I can do this using truth tables, but I wanted to do it a little differently. We can express a disjunction as an implication: $\neg (q \Rightarrow r) \Rightarrow ( r \Rightarrow q)$. Since this is an implication, we can assume $\neg (q \Rightarrow r)$, so the goal is to prove $( r \Rightarrow q)$. But this itself is an implication, so we can further assume $r$, and so the goal is $q$. So our givens are $\neg (q \Rightarrow r) \mbox{ and } r$ and the goal is $q$. Now the givens give us $\neg (\neg q \vee r ) \mbox{ and } r$, $\therefore (q \wedge \neg r) \mbox{ and } r$. So we get our goal, $q$. But I am also getting $\neg r$ and $r$. So I am a little confused here...

2. ## Re: Proving a disjunction statement

Originally Posted by issacnewton

I am not sure that this is what you mean. But $\begin{align*} \left( {q \Rightarrow r} \right) &\vee \left( {r \Rightarrow q} \right) \\ \left( {\neg q \vee r} \right) &\vee \left( {\neg r \vee q} \right) \\ \left( {\neg q \vee q} \right) &\vee \left( {\neg r \vee r} \right)\text{ rearranged} \\T &\vee T\\ &T\end{align*}$

3. ## Re: Proving a disjunction statement

Thanks Plato. Yes, that's one way to do it. But I was trying to use the way people prove math theorems which are given as an implication: we assume the antecedent and try to prove the consequent... So there must be some error in my proof...
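For completeness, a tiny brute-force truth-table check of the tautology (a minimal sketch; any language would do):

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

for q, r in product([False, True], repeat=2):
    stmt = implies(q, r) or implies(r, q)
    print(q, r, stmt)
# stmt is True in all four rows, so (q => r) v (r => q) is a tautology.
```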
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9234129786491394, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/3283/numerical-methods-for-discontinuous-r-s-odes
# Numerical methods for discontinuous r.s. ODEs

What are the state-of-the-art methods for the numerical solution of ODEs with a discontinuous right-hand side? I'm mostly interested in piecewise-smooth right-hand-side functions, e.g. sign. I'm trying to solve an equation of the following type: \begin{align*} \dot x &= v\\ \dot v &= \begin{cases} (|F_\text{external}| - |F_\text{friction}|) \mathop{\rm sign} (F_\text{external}) & :|F_\text{external}| > |F_\text{friction}|\\ 0 & : \text{otherwise} \end{cases} \end{align*}

- Hi @AndreyShevlyakov and welcome to Scicomp! Is there a particular class of ODE that you're interested in? – Paul♦ Sep 16 '12 at 0:42
- Hi Paul! Yes, I'm currently trying to implement a kind of stick-slip friction model. – Andrey Shevlyakov Sep 16 '12 at 19:14
- Could you incorporate the equations you want to solve in your question? This will help narrow down the particular methods applicable to your problem. – Paul♦ Sep 16 '12 at 19:46
- I've added an example to the post. – Andrey Shevlyakov Sep 17 '12 at 8:12
- When I worked on ACSL, it included a root-finder, so you could make it search for the time when velocity equaled zero, and then start up fresh from that point with the new rhs. – Mike Dunlavey Sep 18 '12 at 13:43

## 3 Answers

See David Stewart's new (2011) book on this topic, Dynamics with Inequalities: Impacts and Hard Constraints. Coulomb friction problems are mentioned several times in the analysis chapters. Chapter 8 is devoted to numerical methods for non-smooth ODEs and DAEs. It mostly advocates fully implicit Runge-Kutta methods with special treatment of nonsmoothness. Note Section 8.4.4, which points out that if you do not accurately locate the points of non-smoothness, all methods degrade to first-order $\mathcal{O}(h)$ accuracy; therefore implicit Euler methods (with modifications for nonsmoothness) are popular in practice. Furthermore, solutions of problems with infinite-dimensional inequalities are generally not piecewise smooth, so the theory provides only $\mathcal{O}(h^{1/2})$ convergence, though in practice $\mathcal{O}(h)$ is often observed.

- Great, thanks! Do you know if there are implementations available somewhere? – Andrey Shevlyakov Sep 17 '12 at 16:54
- Not that I know of, but implementation of simple schemes shouldn't be too hard if you have a solver for static variational inequalities. – Jed Brown Sep 18 '12 at 4:24

The most significant reference I know of is David Stewart's thesis, which is more than 20 years old: High Accuracy Numerical Methods for Ordinary Differential Equations with Discontinuous Right-hand Side. The abstract references several significant earlier works. A keyword here is differential inclusion.

As Mike Dunlavey already pointed out in a comment, this is often done using so-called zero-crossing functions, i.e. functions $g(t, x(t)) \in \mathbb{R}$ that cross from $>0$ to $<0$ (or vice versa) when the RHS has a discontinuity. For example, if you have a moving mass and a block, then the distance between the mass and the block can be used as a zero-crossing function. Many ODE solvers (e.g. SUNDIALS CVODE) automatically check whether any of the zero-crossing functions changed sign during the last time step. If so, a root-finding method is used to determine the exact location of the root. The solver can then be restarted at that position, either automatically by the solver itself or manually by the calling code.
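To illustrate the zero-crossing/restart strategy described in the last answer (and in Mike Dunlavey's comment), here is a minimal sketch using SciPy's event mechanism. It implements the simplified friction model from the question; the forcing term, friction magnitude, tolerances, and time horizon are all hypothetical choices, not values from the thread:

```python
import numpy as np
from scipy.integrate import solve_ivp

F_fric = 1.5                              # hypothetical friction magnitude
def F_ext(t):                             # hypothetical external forcing
    return 2.0 * np.sin(t)

def rhs(t, y):
    x, v = y
    Fe = F_ext(t)
    if abs(Fe) > F_fric:                  # sliding: net force accelerates the mass
        a = (abs(Fe) - F_fric) * np.sign(Fe)
    else:                                 # stuck: friction balances the forcing
        a = 0.0
    return [v, a]

def switch(t, y):
    # Zero-crossing function: changes sign exactly where the RHS switches branch.
    return abs(F_ext(t)) - F_fric
switch.terminal = True                    # stop integration at each crossing

t0, t_end, y0 = 0.0, 10.0, [0.0, 0.0]
segments = 0
while t0 < t_end:
    sol = solve_ivp(rhs, (t0, t_end), y0, events=switch, rtol=1e-8, atol=1e-10)
    segments += 1
    y0 = sol.y[:, -1]
    t0 = sol.t[-1] + 1e-9                 # nudge past the located root, then restart
    if sol.status == 0:                   # reached t_end with no further crossing
        break
print("integrated through", segments, "smooth segments")
```

Restarting at each located root keeps every integration segment smooth, which is what preserves the integrator's nominal order, as the first answer explains.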
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8933776617050171, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/35898/do-i-have-the-meaning-of-the-property-temperature-correct
Do I have the meaning of the property temperature correct?

Okay, my book starts out talking about the vague definition we have for temperature, and we ended up with the Zeroth Law of Thermodynamics, which states: Two systems are in thermal equilibrium if and only if they have the same temperature.

So does that mean the (brief) meaning of the property temperature is a measure of whether one system is in thermal equilibrium with another?

- Yes, that's the definition. – Alexander Sep 8 '12 at 1:27
- But this yes refers only to the Zeroth Law, not to the question appended to it, for which the answer is no. – Arnold Neumaier Sep 9 '12 at 13:44
- @ArnoldNeumaier, could you elaborate? – jak Sep 9 '12 at 18:33
- I had explained this in my answer. – Arnold Neumaier Sep 9 '12 at 19:59
- Whoops, for some reason I didn't see your name looks different in another answer... – jak Sep 10 '12 at 0:44

3 Answers

The temperature is not a measure of whether a system is in equilibrium. Instead, it is one of the parameters characterizing a system *that is* in equilibrium. It is that particular parameter that tells how much volume the quicksilver in a conventional thermometer occupies at a fixed pressure; thus one can read off the temperature from a scale attached to the quicksilver container. (From a microscopic point of view, it is a measure of the speed of the random movement of molecules in a substance. The faster the motion, the more volume is needed to accommodate the motion.)

The role of the Zeroth Law is somewhat different. If a system is in equilibrium with a thermometer, then the combined system has a well-defined temperature, so you are entitled to take what you read off the thermometer as the measured temperature of the system. On the other hand, if the system is in equilibrium and the thermometer is in equilibrium, but the combined system is not (for instance, when you have just brought the two into contact), they will have different temperatures, and it takes a while until a joint equilibrium is reached. (The wait may be long, depending on how well the system is insulated.) Thus the Zeroth Law explains why (and under which conditions) one can use thermometers to measure temperature. You must bring the thermometer into contact with the system and then wait until equilibrium is established. Only then does the measurement at the thermometer give the temperature of the system.

Edit: If a system is not in equilibrium, there is no unique temperature that can be assigned to the system. Thus the notion of temperature makes sense only in equilibrium. Out of equilibrium you have instead a temperature field. Differences in temperature at neighboring places introduce a thermal force (entropy production) that steers the system to equilibrium. Therefore, a correct version of the statement in your final question is: "Equality of temperature is a necessary condition for equilibrium." It doesn't suffice, though, as the pressures and chemical potentials must also be equal. Equality of temperature, pressure and chemical potentials indeed implies (for a chemical system) that the system is in equilibrium.

See also Chapter 7, "Phenomenological thermodynamics", of my book http://lanl.arxiv.org/abs/0810.1019 (the chapter is readable independently of the remainder of the book, and needs only a little background).

- "Instead, it is one of the parameters characterizing a system *that is* in equilibrium." Isn't that just changing the wording, though, or looking at it from a different point of view?
– jak Sep 10 '12 at 0:44
- It means something quite different. See the edit at the end. – Arnold Neumaier Sep 10 '12 at 9:48
- I totally agree with Arnold's answer, and I would like to add a comment on the size of the system. Thermodynamics is defined for systems where $N$, the number of particles, and $V$, the volume of the system, are such that $N\to\infty$, $V\to\infty$ and $N/V$ is finite. – Shaktyai Sep 10 '12 at 10:32
- To complete Arnold's answer further: temperature is a macroscopic parameter, with no equivalent at the microscopic level, that characterizes macroscopic systems at equilibrium. It can be measured by letting the system reach equilibrium (until no heat is exchanged) with another system called a thermometer. – Shaktyai Sep 10 '12 at 10:40

The answer is yes, as Alexander comments, but one should say that it is a little surprising: it is saying that two systems, no matter their internal dynamics, will exchange heat according to a single real-valued parameter, so that if you adjust this one parameter to be equal, then the systems will not exchange energy when in contact. This can be stated axiomatically: if A doesn't exchange heat with B, and B doesn't exchange heat with C, then A doesn't exchange heat with C; also, A doesn't exchange heat with a copy of A. These axioms define equivalence classes of systems "at the same temperature". Further, if A is hotter than B (so energy flows from A to B on average) and B is hotter than C, then A is hotter than C (and this is a relation on equivalence classes). For any two systems A, B with A hotter than B, there is a C which is hotter than B and less hot than A. This tells you that temperature is linearly ordered. All these axiomatic properties are a priori surprising, since I didn't say anything about the Hamiltonian of the systems involved. They become obvious once you consider that systems maximize their phase-space volume (entropy). Then the temperature is defined statistically as the reciprocal of the rate of change of entropy with energy, so you always increase entropy by flowing heat from hot to cold. This explains why temperature is a linearly ordered, real-valued, equivalence-class type of thing from simple first principles.

- My book said something like: "For systems A and B to be in thermal equilibrium, all the information that is needed is that both A and B are in thermal equilibrium with C. This is not true." I don't understand what that means. It seems to say the opposite of what the Zeroth Law says. – jak Sep 8 '12 at 2:45
- @jak: It is true: if A is at the same temperature as C, and if B is too, then A is at the same temperature as B. This is a little surprising a priori, but not really surprising to human beings, because we come with little temperature sensors on our skin, so we find it intuitive. – Ron Maimon Sep 8 '12 at 3:15
- I am a beginner in thermodynamics, and we use an ancient book called "Heat and Thermodynamics" by Zemansky and Dittman. It's so confusing! Do you know a better book? I grabbed a first-year physics book, but that wasn't sufficient. – jak Sep 8 '12 at 3:23
- @jak: The ancient books are usually no worse than the modern ones. Research people generally stopped writing about thermodynamics in the 20th century, since it was subsumed into statistical mechanics. I think the only books one can recommend are the little volumes by Planck and Fermi, and they're older than your book, I bet. – Ron Maimon Sep 8 '12 at 3:58
- Regarding your last sentence, I don't think it says anything about the linearity of the temperature.
Couldn't any one-to-one transformation of the temperature scale still fulfill the requirement? I always found this point confusing. $T$ reflects a molecular kinetic energy metric, but why not the square root of the kinetic energy? – AlanSE Sep 9 '12 at 15:49

That would be the definition of temperature in the thermodynamic framework. However, as Ron has remarked, it can be understood better in the framework of statistical mechanics, which in some sense is a more fundamental science than thermodynamics.

For a non-isolated system (i.e. a system which is allowed to exchange energy with its surroundings), temperature is a parameter which tells how energies are distributed in the system$^{**}$. More precisely, consider a system which can exist in various states (configurations) $|1\rangle, |2\rangle, |3\rangle,\ldots$ of energies $E_1, E_2, E_3,\ldots$ respectively. Then saying that this system has temperature $T$ means that the probability for this system to be found in the state $|i\rangle$ of energy $E_i$ is proportional to $\exp(-E_i/kT)$.

Now consider two systems $A$ and $B$. Suppose $A$ can exist in states $|A1\rangle, |A2\rangle, |A3\rangle,\ldots$ of energies $E^A_1, E^A_2, E^A_3,\ldots$ respectively, and $B$ can exist in states $|B1\rangle, |B2\rangle, |B3\rangle,\ldots$ of energies $E^B_1, E^B_2, E^B_3,\ldots$ respectively. Now suppose we allow these systems to exchange energy with each other. We say that systems $A$ and $B$ have attained equilibrium when the energy distribution in the combined system $A+B$, as well as in its subsystems $A$ and $B$, is no longer changing with time, i.e. when

1. the combined system $A+B$ is at a definite temperature $T$ (i.e. the energy distribution in $A+B$ is given according to the parameter $T$), and
2. the systems $A$ and $B$ themselves are at some definite temperatures $T_A$ and $T_B$.

With these definitions of temperature and of equilibrium, one can show that at equilibrium we must have $T_A=T_B=T$. (You can try to prove it yourself for the simple case when both systems $A$ and $B$ have finitely many energy states of distinct energy.)

** For an isolated system at energy $E$, temperature is defined in a different way.
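The Boltzmann weights in the last answer are easy to explore numerically. A minimal sketch (the energy levels, the convention $k=1$, and the sample temperatures are arbitrary assumptions for illustration):

```python
import numpy as np

k = 1.0                          # Boltzmann constant in natural units (assumption)
E = np.array([0.0, 1.0, 2.0])    # hypothetical energy levels E_1, E_2, E_3

def boltzmann(E, T):
    """Probability of each state at temperature T: p_i proportional to exp(-E_i/kT)."""
    w = np.exp(-E / (k * T))
    return w / w.sum()           # normalize so the probabilities sum to 1

for T in (0.5, 1.0, 5.0):
    print(T, boltzmann(E, T))
# As T grows, the distribution flattens toward uniform; as T -> 0,
# the system concentrates in its lowest-energy state.
```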
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382253885269165, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/6423/computational-indistinguishability-and-example-of-non-polynomial-algorithm/6424
# Computational indistinguishability and example of non polynomial algorithm

The Wikipedia page on computational indistinguishability says that two ensembles are not distinguishable if "any non-uniform probabilistic polynomial time algorithm A" cannot tell them apart. To help me better understand the definition, I searched for an example of an algorithm that does not fit the above restriction, in particular a non-polynomial-time algorithm, that could differentiate between two computationally indistinguishable ensembles. Rather to my surprise, I found none, so I ask: could anyone provide me with an example of one?

- Brute force on the input of algorithm $A$? But otherwise I suspect the algorithm would be highly dependent on the underlying cryptographic primitive; the simplest example is RSA, where integer factorization is subexponential (but not polynomial), but this isn't a great example, being public-key and all. – Thomas Feb 21 at 19:58
- Another example is this: distinguish between a stream of output from Blowfish-CTR and AES-CTR (or generally two block ciphers with different block sizes in CTR mode). An algorithm can distinguish them without even touching the keys, using the birthday paradox, with complexity $\approx 2^{32}$, which is not polynomial-time but far better than brute force. – Thomas Feb 21 at 20:15

## 2 Answers

I am not a complexity theorist, but I believe this fits the requirements. The best known algorithms for factoring are superpolynomial-time algorithms, so they are not polynomial-time. An example of something such a superpolynomial-time algorithm could distinguish (from uniform randomness) is the output of the Blum-Blum-Shub PRNG.

Here are two examples of attack algorithms that are not polynomial-time:

• Exhaustive search. Consider an algorithm that iteratively enumerates all possible keys and checks whether each seems to be correct. The running time of this algorithm is non-polynomial: for a $b$-bit key, it will take $2^b$ steps of computation, which is larger than any polynomial in $b$.

• Factoring. The running time of the state-of-the-art factoring algorithms is larger than any polynomial in $b$, where $b$ represents the number of bits of the number you want to factor. For instance, the running time of some factoring algorithms is something like $2^{\sqrt{b}}$, which grows faster than any polynomial function of $b$. As far as we know right now, there's no way to factor integers in polynomial time.

The definition you are considering only concerns itself with attack algorithms whose running time is a polynomial, so the above two attacks are considered out of scope (they're not considered a "break" of the cryptosystem).

Edit 2/22: In the context of distinguishing two probability distributions, if you want a natural example, try exhaustive keysearch: to distinguish AES-CTR output from true random, one algorithm might try all possible $2^{128}$ keys to see if any of them match the observed value from the distribution, and output 1 if it matches, else 0. This algorithm does distinguish the two probability distributions, but it takes exponential time. (Strictly speaking, we should make this a $b$-bit key so that we can apply asymptotic running-time analysis, but hopefully you get the idea.)

P.S. If this gets to be a bit much and you need a study break, don't miss Eric Hughes' infamous advice on How to Give a Math Lecture at a Party (lyrics).

- I understand the examples you give, but how would you apply them to a (sample from a) probability distribution?
What would exhaustive search be like in this case? I mean, there is no "key" for which to check for correctness! – wmnorth Feb 22 at 12:46
- @wmnorth, any algorithm that spends, say, exponential time doing some computation before producing an output. (I suspect I must not be understanding your question; why don't you try explaining what your real confusion is?) Have you studied running-time analysis of algorithms? If not, any good algorithms textbook should have something on big-O notation and running-time analysis; I very much recommend you read it. – D.W. Feb 22 at 18:34
- I'll try to clarify my question. I am familiar with run-time analysis of algorithms and big-O notation; my question is not about that. It's about algorithms (or statistical tests, if you will) that are capable of distinguishing two different (but computationally indistinguishable) probability ensembles. From all I've read, being indistinguishable means that there is no non-uniform probabilistic polynomial-time algorithm that could tell one ensemble from the other. However, I could not find an example of an algorithm that could distinguish the two ensembles, and that's what I asked for. – wmnorth Feb 22 at 22:41
- (cont.) You mentioned exhaustive search, which is indeed non-polynomial. But what does it mean to do exhaustive search when trying to distinguish two probability ensembles? To give a concrete example, what would it mean to do an exhaustive search in order to try to distinguish the output of a PRNG from the uniform distribution? Does this clarify what I am trying to ask? – wmnorth Feb 22 at 22:46
- @wmnorth, sorry, I can't make any sense of your question. I think you must have some hidden assumptions, or must be asking a different question than what you really want to know. What does it mean for an algorithm to take exponential time when trying to distinguish two probability ensembles? It just means that the running time is exponential. I really don't understand what the confusion is. (If you are asking for a natural example of an exponential-time algorithm for distinguishing two probability distributions, that's different; if so, edit your question!) – D.W. Feb 22 at 22:56
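Following D.W.'s edit, here is a minimal toy sketch of an exhaustive-search distinguisher. The hash-based PRG, the 16-bit key size, and the 16-byte output length are hypothetical stand-ins chosen so the brute force finishes instantly; the thread's actual example would be AES-CTR with a 128-bit key:

```python
import hashlib
import os
import random

KEY_BITS = 16  # toy key size so the exhaustive search is feasible in a demo

def prg(key: int, nbytes: int = 16) -> bytes:
    """Toy PRG: expand a short key with SHA-256 (a stand-in for AES-CTR)."""
    return hashlib.sha256(key.to_bytes(4, "big")).digest()[:nbytes]

def distinguisher(sample: bytes) -> int:
    """Exhaustive key search: output 1 iff the sample matches some PRG output.

    Runs in time 2^KEY_BITS, i.e. exponential in the key length, so it is
    *not* a polynomial-time distinguisher, yet it separates the two
    distributions almost perfectly (a uniform 128-bit string matches one of
    the 2^16 PRG images only with negligible probability)."""
    return int(any(prg(k) == sample for k in range(2 ** KEY_BITS)))

print(distinguisher(prg(random.randrange(2 ** KEY_BITS))))  # almost surely 1
print(distinguisher(os.urandom(16)))                        # almost surely 0
```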
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345333576202393, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/35197/massless-limit-of-the-klein-gordon-propagator/35204
# Massless limit of the Klein-Gordon propagator

I am working with the propagator associated to the Klein-Gordon equation, as derived in "Quantum Physics: A Functional Integral Point of View" by James Glimm and Arthur Jaffe, or as derived here: http://www.wiese.itp.unibe.ch/lectures/fieldtheory.pdf § 5.4.

It turns out that the propagator can be evaluated, and a closed-form expression for it can be given, namely: $$C \left( m; \mathbf{x} - \mathbf{y} \right) = \left(\frac{1}{2 \pi}\right)^{\frac{d}{2}} \left(\frac{m}{\left| \mathbf{x} - \mathbf{y} \right|}\right)^{\frac{d-2}{2}} K_{\frac{d-2}{2}} \left( m \left| \mathbf{x} - \mathbf{y} \right| \right)$$ where $K$ is the modified Bessel function of the second kind.

I want to take the massless limit in two dimensions; when setting $d=2$ and $m=0$, one of the terms on the r.h.s. of the equation evaluates to $0^0$ while the modified Bessel function goes to infinity. How do I calculate the massless limit for the Klein-Gordon propagator in 2D? Thank you!

## 3 Answers

A nice way to see how the correlation function behaves is described here, where it is shown that the propagator goes as $$C(r)=\frac{1}{2\pi}\log(r)$$ which can also be seen from Qmechanic's hint. Now the interesting thing is not that it diverges at $r=0$ (this happens even in 4D, where $C(r)=1/4 \pi^2 r^2$) but that it also diverges as $r\to \infty$. This is an infrared divergence I had not come across before. The Wikipedia article linked above states that this makes a two-dimensional massless scalar field slightly tricky to define mathematically, and also that you cannot have spontaneous breaking of a continuous symmetry in two dimensions. Very interesting!

Hint: Use e.g. the fact that the modified Bessel function $K_0$ behaves as minus the logarithm for small arguments near zero.

Reference:

1. Abramowitz & Stegun, Handbook of Mathematical Functions, p. 375, eq. (9.6.8). For an online version see e.g. here.

I would suggest you first set $d=2$, giving $$C \left( m; \mathbf{x} - \mathbf{y} \right) = \frac{1}{2 \pi} K_0 \left( m \left| \mathbf{x} - \mathbf{y} \right| \right)$$ and then take the massless limit.

- Maybe I should've mentioned that I've tried this approach... When taking the $m \longrightarrow 0$ limit, the Bessel function goes to $\infty$, so that the propagator is infinite everywhere... – zakk Aug 30 '12 at 10:10
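To spell out the limit (a short sketch combining the two answers above; the expansion is the standard small-argument behavior of $K_0$, cf. the Abramowitz & Stegun reference): $$K_0(z) = -\ln\frac{z}{2} - \gamma + O\!\left(z^2 \ln z\right), \qquad z \to 0^+,$$ so that in $d=2$ $$C(m;r) = \frac{1}{2\pi} K_0(mr) = -\frac{1}{2\pi}\ln r \;-\; \frac{1}{2\pi}\left(\ln\frac{m}{2} + \gamma\right) + O\!\big((mr)^2 \ln(mr)\big).$$ The $r$-dependent part reproduces, up to sign conventions, the logarithmic propagator quoted in the first answer, while the $r$-independent constant diverges as $m \to 0$: the massless limit exists only modulo an additive constant, which is one way to phrase the infrared problem of the two-dimensional massless scalar field.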
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028910398483276, "perplexity_flag": "head"}
http://mathoverflow.net/questions/36050/embeddings-and-triangulations-of-real-analytic-varieties
## Embeddings and triangulations of real analytic varieties

This is a follow-up question to my answer here http://mathoverflow.net/questions/35156/how-do-you-define-the-euler-characteristic-of-a-scheme/36038#36038

A real analytic space is a ringed space locally isomorphic to $(X,O/I)$, where $X$ is the zero locus of some number of real analytic functions $f_1,\ldots, f_k$ on an open set $U$ of $\mathbf{R}^n$, $O$ is the sheaf of germs of real analytic functions on $U$, and $I$ is the ideal sheaf generated by $f_1,\ldots, f_k$ (see e.g. http://eom.springer.de/a/a012430.htm).

I would like to ask if it is true that each real analytic space with a countable base can be embedded as a closed analytic subset of some Euclidean space.

The motivation behind this comes from the triangulation theorem for complex algebraic varieties: the only proof of that that I know of (Hironaka's 1974 notes) is based on triangulating analytic subvarieties of Euclidean spaces. So to apply this one must embed a complex algebraic variety as a real subvariety of a Euclidean space. This is easy for projective varieties and is probably possible in general, but I don't know a reference for the general case. (I'm mainly interested in the complex algebraic case, but I don't see why it should be any easier than embedding arbitrary real analytic spaces; however, if it is easier, I'd be interested to know.)

A related question: is it possible to prove the triangulation theorem (for complex algebraic varieties or in general) without using embeddings in Euclidean spaces?

- I imagine Hironaka's triangulation theorem is a mild tweak of Whitehead's theorem that smooth manifolds admit triangulations. The idea of the proof is to take any embedding of the manifold $M$ into Euclidean space, subdivide a triangulation of Euclidean space sufficiently while keeping $M$ transverse to the strata of the triangulation, then pull the stratification back to $M$, giving a smooth polyhedral decomposition of $M$. Subdivide to a triangulation of $M$. So you really only need an embedding of $M$ in some object that admits a triangulation, and enough flexibility to get transversality. – Ryan Budney Aug 19 2010 at 6:09
- Are you asking if a real analytic space is a real affine algebraic variety? What kind of embedding are you happy with? There's a characterisation of closed subsets of Euclidean space as something like all spaces that have finite Lebesgue covering dimension, are Hausdorff and are 2nd countable. Perhaps I'm forgetting a criterion, but it's something that appears in many point-set topology texts. – Ryan Budney Aug 19 2010 at 6:30
- Ryan -- Hironaka's proof (based on an earlier proof by Lojasiewicz) is by projecting and using induction on the dimension. The idea you mention probably works as well, but some modifications will be necessary: e.g. the fact that a triangulation is transverse to each stratum does not guarantee a polyhedral decomposition: take a simplex in 3-space that contains the vertex of a quadratic cone and intersects each stratum transversally. In general the local structure of real analytic spaces can be pretty messy. – algori Aug 19 2010 at 12:44
- Re what kind of embedding I'm looking for: as a closed analytic subset. Will clarify that in the posting. I'm not sure point-set topology does the trick, since it is about continuous embeddings, and the image can be a complete mess to which the triangulation theorem does not apply.
– algori Aug 19 2010 at 12:50
- @algori: There is a proof of triangulation of real analytic spaces by B. Giesecke, Math. Zeitschrift, vol. 83 (1964), pages 177-213. – Mohan Ramachandran Jun 13 2011 at 16:08

## 2 Answers

If you just want a proper 1-1 real analytic map whose image is a real analytic variety, then the result is Theorem 2, page 593, of a paper of Tognoli and Tomassini in Ann. Scuola Norm. Sup. Pisa (3), vol. 21 (1967), pages 575-598. This means there is no control over the differential of the map. I am assuming that the real analytic space has finite dimension. If the dimension of the Zariski tangent spaces of a connected reduced real analytic space is bounded, then it can be real analytically embedded in Euclidean space; see the paper by Acquistapace, Broglia and Tognoli, Ann. Scuola Norm. Sup. Pisa (4), vol. 6 (1979), pages 415-426.

- Thanks, Mohan. Will take a look at the paper. – algori Aug 22 2010 at 3:34
- The second reference I mention answers question 1 for complex algebraic varieties. Since all these proofs eventually use the embedding theorem for Stein spaces or manifolds, one needs the bound on the dimension of the Zariski tangent spaces. This is true for abstract real algebraic varieties. – Mohan Ramachandran Aug 23 2010 at 15:51
- Remark: A necessary condition to embed a real analytic space as a real analytic subspace of Euclidean space is that the dimensions of the Zariski tangent spaces are bounded. – Mohan Ramachandran Aug 23 2010 at 17:44

I do not know an answer to the embeddability question, but the triangulation can be deduced from the existence of a (Thom-Mather) stratification by [Johnson, "On the triangulation of stratified sets and singular varieties", Trans. Amer. Math. Soc. 275 (1983), no. 1, p. 333-343] or [Goresky, "Triangulation of stratified objects", Proc. Amer. Math. Soc. 72 (1978), no. 1, p. 193-200]; moreover, it is known that analytic subvarieties of Euclidean space are Whitney stratified [Whitney, "Local properties of analytic varieties", Differential and Combinatorial Topology (A Symposium in Honor of Marston Morse), Princeton Univ. Press, Princeton, N.J., 1965, p. 205-244, and "Tangents to an analytic variety", Ann. of Math. (2) 81 (1965), p. 496-549]. Moreover, in [Mather, "Notes on topological stability", 1970, Harvard University, and "Stratifications and mappings", Dynamical Systems (Proc. Sympos., Univ. Bahia, Salvador, 1971), Academic Press, New York, 1973, p. 195-232] you can find the result that Whitney stratified sets are Mather stratified. Finally, Mather stratifications are given by local conditions (there might be an issue gluing the local strata, but I do not think so), so these results should imply that analytic varieties are triangulable. I do not think it is the most efficient way to do so, though.

- Benoit -- thanks. Indeed, it sounds very plausible that one can glue a stratification of an analytic manifold out of stratifications of the neighborhoods. – algori Aug 19 2010 at 13:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8892431259155273, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/101220-fundamental-theorem-calc-integration-print.html
# Fundamental Theorem of Calc. Integration

• September 8th 2009, 04:38 PM, Casas4

Evaluate the definite integral $\int_5^6 \frac{6t^3}{\sqrt{2+3t^4}} \, dt$ using the Fundamental Theorem of Calculus. You will need accuracy to at least 4 decimal places for your numerical answer to be accepted. You can also leave your answer as an algebraic expression involving square roots.

I understand the FTC when you have x in the limits, but since there is no x I'm confused about how to solve this. Thanks!

• September 8th 2009, 04:44 PM, luobo

$I=\sqrt{2+3t^4}+C$

$I=\sqrt {2+3\times 6^4} - \sqrt {2+3\times 5^4}$

• September 8th 2009, 04:48 PM, skeeter

$\int_5^6 \frac{6t^3}{\sqrt{2+3t^4}} \, dt = \left[\sqrt{2+3t^4}\right]_5^6$
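A quick numerical cross-check of the closed form (a minimal sketch; SciPy's `quad` is just one convenient quadrature routine):

```python
from math import sqrt
from scipy.integrate import quad

f = lambda t: 6 * t**3 / sqrt(2 + 3 * t**4)
numeric, err = quad(f, 5, 6)                      # adaptive numerical quadrature
exact = sqrt(2 + 3 * 6**4) - sqrt(2 + 3 * 5**4)   # FTC: antiderivative at 6 minus at 5
print(numeric, exact)                             # both approx. 19.0455
```

Both values agree to the required 4 decimal places, confirming that $\sqrt{2+3t^4}$ is indeed an antiderivative of the integrand.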
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290189743041992, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/181521/metric-of-a-graph
# Metric of a Graph

The following is question 6 from page 99 of Walter Rudin's Principles of Mathematical Analysis. I'm having trouble understanding what the metric of the graph might be (which, as far as I can tell, is not defined in the text or the problem)...

If f is defined on E, the graph of f is the set of points $(x,f(x))$, for $x \in E$. In particular, if E is a set of real numbers, and f is real-valued, the graph of f is a subset of the plane. Suppose E is compact, and prove that f is continuous on E if and only if its graph is compact.

I think I've been able to prove the forward result. Suppose that E is compact. Rudin proves a theorem in the text stating that the image of a compact metric space under a continuous function is also compact. Therefore, we know that $f(E)$ is compact. Now suppose that $\lbrace G_\alpha \rbrace, \alpha \in A$ is an open cover of the graph, where $G_i = A_i \times B_i$ with $A_i \subseteq E$ and $B_i \subseteq f(E)$ open. Then $\lbrace A_\alpha \rbrace$ and $\lbrace B_\alpha \rbrace$ are open covers for $E$ and $f(E)$, respectively. Because these sets are compact, their open covers contain finite subcovers, $\lbrace A^\prime_\beta \rbrace$ and $\lbrace B^\prime_\gamma \rbrace$, respectively. Thus, the set of all combinations of $(A^\prime_\beta, B^\prime_\gamma)$ forms a finite open subcover of the graph, proving that the graph is compact.

Actually, I'm really confused at this point, because it has just occurred to me while typing the above that I cannot assume that each set $G_i$ can be represented as a set $\lbrace (x,y) \mid x \in A_i, y\in B_i \rbrace$ for open sets $A_i \subseteq E$ and $B_i \subseteq f(E)$. So at this point, I'm not sure what to do, since I am unable to figure out what the distance metric might be in the metric space containing the graph. Is there a convention for this sort of problem? Did Rudin want the reader to only consider real-valued functions for f?

- Based on the words "the graph of $f$ is a subset of a plane," I would interpret this as asking about functions $\mathbb{R} \rightarrow \mathbb{R}$. The point in the proof you're struggling with is a general issue in showing that a product of compact spaces is compact. Since you get to be in a metric space, I'd recommend using the convergent-subsequence definition of compactness instead. – Kevin Carlson Aug 12 '12 at 2:23
- @KevinCarlson Thank you Kevin, that makes sense! If you wouldn't mind retyping your comment as an answer, I can mark it correct. – Andrew Aug 12 '12 at 2:37
- Andrew - that's done, glad it helped. – Kevin Carlson Aug 12 '12 at 6:54

## 2 Answers

Based on the words "the graph of f is a subset of a plane," I would interpret this as asking about functions $\mathbb{R}\rightarrow \mathbb{R}.$ The point in the proof you're struggling with is a general issue in showing that a product of compact spaces is compact. Since you get to be in a metric space, I'd recommend using the convergent-subsequence definition of compactness instead. I'll note, however, that you don't need the full strength of this result here, as Brian's answer shows.

Here, $E$ could be any compact metric space. Define the function $f_1: E \to E \times E$ by $f_1(x) = (x, f(x))$. Clearly this function is continuous, so $f_1(E)$, which is the graph, is compact.

Now assume that the graph is compact. We show that the inverse image of every closed subset of $E$ is closed in $E$. Define the maps $q_x, q_y$ by $(x, y) \to x$ and $(x, y) \to y$ respectively.
If $U \subset E$ is closed, then $f^{-1}(U) = q_x(q_y^{-1}(U) \cap f_1(E))$. Both $q_x$ and $q_y$ are continuous, so $q_y^{-1}(U)$ is closed, so $q_y^{-1}(U) \cap f_1(E)$ is a closed subset of the compact set $f_1(E)$ and is therefore compact itself. By continuity of $q_x$, $f^{-1}(U)$ is compact in $E$, and since $E$ is Hausdorff (all metric spaces are), $f^{-1}(U)$ is closed. - You do a lot of really cool stuff in this answer. However, I disagree with the line "Clearly this function is continuous ..." Let $d_{E \times E}:E \times E \to \mathbb{R}$ be the metric $d_{E \times E}(x,y) = 1$ if $x \neq y$ and $d_{E \times E}(x,y) = 0$ otherwise. Continuity of $f_1$ then fails to hold. – Andrew Aug 12 '12 at 12:41 It will always be the case that $f_1$ is continuous. To see this, take $U \subset E \times E$ to be open. Then $U = \bigcup_{i \in A} V_i \times K_i$, where $V_i, K_i$ are open in $E$, so $f_1^{-1}(U) = \bigcup_{i \in A} f_1^{-1}(V_i \times K_i) = \bigcup_{i \in A} V_i \cap f^{-1}(K_i)$, which is open. The metric on $E \times E$ depends on $E$, so if $E \times E$ is a discrete space, then $E$ will be discrete as well. In that case, the continuity of $f_1$ is even more obvious. – Brian Aug 12 '12 at 15:41 What do you mean when you say "The metric of $E \times E$ depends on E"? Can you explain the dependence? Also, why should it be possible to write that "$U = \bigcup_{i \in A} V_i \times K_i$ where $V_{i},K_{i}$ are open in $E$"? Does this follow from the axioms of a metric space? It's possible that there are some added axioms imposed for Cartesian products of generalized metric spaces. But at this point, I'm inclined to believe that the original intent of the question was to assume that $E \subseteq \mathbb{R}$. – Andrew Aug 12 '12 at 17:04 I think the original question from the text was somewhat misworded. – Andrew Aug 12 '12 at 17:11
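Following Kevin's suggestion above, here is a hedged sketch of the convergent-subsequence route for compact $E \subseteq \mathbb{R}$ and $f: E \to \mathbb{R}$ (my own write-up, not part of the original thread):

```latex
\emph{($\Rightarrow$)} Let $(x_n, f(x_n))$ be a sequence in the graph $G_f$.
Compactness of $E$ gives a subsequence $x_{n_k} \to x \in E$, and continuity gives
$f(x_{n_k}) \to f(x)$, so $(x_{n_k}, f(x_{n_k})) \to (x, f(x)) \in G_f$; hence $G_f$
is (sequentially) compact.

\emph{($\Leftarrow$)} Suppose $G_f$ is compact and $x_n \to x$ in $E$. Every
subsequence of $(x_n, f(x_n))$ has, by compactness, a further subsequence converging
to some $(x', y) \in G_f$; necessarily $x' = x$ and $y = f(x') = f(x)$. Thus every
subsequence of $f(x_n)$ has a further subsequence tending to $f(x)$, which forces
$f(x_n) \to f(x)$, i.e., $f$ is continuous at $x$.
```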
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9661908745765686, "perplexity_flag": "head"}
http://mathoverflow.net/questions/78637/continuous-function/78656
## continuous function Suppose the countable subspace $D$ is dense in the separable Tychonoff space $X$ and $f$ is a continuous function from $D$ to the closed unit interval. What are some conditions on $X$ or $D$ that make $f$ continuously extendable over $X$? - 1 Are you aware of Tietze's Extension Theorem? It's related, but with a $T_4$ space $X$ rather than $T_{3.5}$. See: en.wikipedia.org/wiki/Tietze_extension_theorem – David White Oct 20 2011 at 4:12 1 But here $D$ is dense, so Tietze applies only if $D=X$... – Pietro Majer Oct 20 2011 at 8:30 ## 4 Answers A relevant paper is Taĭmanov, A. D. On extension of continuous mappings of topological spaces. (Russian) Mat. Sbornik N.S. 31(73), (1952). 459–463. 56.0X The MR review of this paper is: Let $S$ be a $T_1$-space, $A$ a dense subspace of $S$, and $R$ a compact Hausdorff space. Let $f$ be a continuous mapping of $A$ into $R$. Then $f$ admits a continuous extension over $S$ if and only if for all disjoint closed subsets $A_1,A_2$ of $R$, the relation $(f^{-1}(A_1))^-\cap(f^{-1}(A_2))^-=0$ obtains (closure in $S$). From this result, a theorem of Yu. M. Smirnov [Uspehi Matem. Nauk 6, no. 4(44), 204--206 (1951)] is easily proved, as well as a theorem of Vulih [Mat. Sbornik N.S. 30(72), 167--170 (1952); MR0048790 (14,70c)]. A final corollary is a special case of a theorem widely known and recently published by Katětov [Fund. Math. 38, 85--91 (1951); MR0050264 (14,304a)]. I remembered this because as a graduate student I used it to give a (I think new at the time) proof that every compact Hausdorff space is a continuous image of a totally disconnected compact Hausdorff space (which, in turn, I use these days to reduce proving the Riesz representation theorem for $C(K)$ to the case where the compact space $K$ is totally disconnected). - Oh, thanks! It is very beautiful! – Paul Oct 21 2011 at 0:43 The criterion for "EVERY continuous map from $D$ to $[0, 1]$ has a continuous extension to $X$" is that any two disjoint zerosets in $D$ have disjoint closures in $X$. You can find this in Chapter 6 of Gillman and Jerison's classic "Rings of Continuous Functions". They also consider the "local problem" of continuously extending a single map at length in some of the exercises, e.g. given $f:D\rightarrow Y$ (not necessarily $Y=[0, 1]$) Exercise 6G characterizes the largest subspace of $X$ to which $f$ can be continuously extended in terms of $z$-filters. - Thank you very much. But I can't find the book "Rings of Continuous Functions". Could you tell me where I can find it? – Paul Oct 21 2011 at 0:29 You'll probably have to go through some sort of interlibrary loan, although Amazon lists some used copies available for purchase. – Todd Eisworth Oct 21 2011 at 19:43 The situation is analogous to the particular case of $X$ a metric space, for any Tychonoff space $X$ is uniformisable, and a real-valued function $f$ on a dense subset $D$ of a uniform space $X$ is certainly continuously extendable to $X$ provided it is uniformly continuous. This is also a necessary condition if $X$ is compact, for any continuous function on a compact uniform space is always uniformly continuous.
- Is "Any Tychonoff space X is uniformisable" means that every Tychonoff space is a uniform space? – Paul Oct 20 2011 at 9:34 Pietro provided a link to Wikipedia in his answer. But the answer is essentially yes; the topology on $X$ will be compatible with a construction that makes $X$ into a uniform space. – Christopher A. Wong Oct 20 2011 at 9:47 1 Yes, it admits a uniform structure (not unique in general). So the complete answer is: $f$ is uniformly continuous on $D$ wrto one such uniform structures. – Pietro Majer Oct 20 2011 at 10:31 2 To be specific, $f$ is extendable if and only if it is uniformly continuous wrt (the restriction to $D$ of) the fine uniformity on $X$. – Emil Jeřábek Oct 20 2011 at 13:35 1 With respect to – Richard Rast Oct 21 2011 at 1:38 show 1 more comment At least in the case of a metric space $X$, such a function $f$ extends from $D$ to all of $X$ if and only if $f$ maps Cauchy sequences to Cauchy sequences (note that this is a weaker condition than uniform continuity). As mentioned by Pietro, for your general Tychonoff space $X$, you make it a uniform space, so I think you can generalize my statement above to the following: $f$ extends if and only if $f$ maps Cauchy nets to Cauchy nets. - Also note that a function maps Cauchy nets to Cauchy nets if and only if it is uniformly continuous. – Pietro Majer Oct 20 2011 at 10:32 1 @Pietro: Are you sure? For example, every continuous function from $\mathbb R$ to any uniform space also maps Cauchy nets to Cauchy nets. (Let `$\{x_a\}_{a\in D}$` be a Cauchy net. There is $a_0$ such that $|x_a-x_{a_0}|\le1$ for every $a\ge a_0$, hence all such $x_a$ are confined to the compact interval $I=[x_{a_0}-1,x_{a_0}+1]$. Then $f$ is uniformly continuous on $I$, which implies that `$\{f(x_a)\}_{a\in D}$` is Cauchy.) – Emil Jeřábek Oct 20 2011 at 13:24 OTOH, Christopher’s condition is clearly not necessary, even in the metric case. For instance, if $D=X$, then every continuous $f$ trivially extends, but in general does not map Cauchy sequences to Cauchy sequences (e.g., take $X=\mathbb Q$, $f(x)=0$ for $x<\pi$, $f(x)=1$ for $x>\pi$). – Emil Jeřábek Oct 20 2011 at 13:31 @Emil: oh yes, my distraction! – Pietro Majer Oct 20 2011 at 14:46 1 @ Emil: In the metric case, I suppose I was thinking of the process of completion as uniquely determined by Cauchy sequences. So perhaps what I really want is that $f$ maps Cauchy sequences to Cauchy sequences, wherever these Cauchy sequences converge to a point $p \notin D$, $p \in X$. – Christopher A. Wong Oct 20 2011 at 16:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9161729216575623, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/101322/list
2 added 40 characters in body Is it true that for every reductive algebraic group $G$ over ${\mathbb C}$ with Lie algebra $\mathfrak g$ there is an open neighborhood $U$ of the identity in $G$ and an algebraic function (in the sense of algebraic geometry) $L: U\to \mathfrak g$ which satisfies the following properties of a logarithm: (1) $L$ is $G$-equivariant with respect to the $G$-action on $G$ by conjugation and the Adjoint $G$-action on $\mathfrak g,$ (2) $L(e)=0,$ (3) $dL$ is an isomorphism at $e$, (4) For some maximal torus $T$ in $G$, $L(T\cap U)$ lies in the Lie algebra of $T.$ For $G=GL(n,\mathbb C)$, the embedding $L:GL(n,\mathbb C)\to gl(n,\mathbb C)$ works. For $G=SO(n,\mathbb C)$, the Cayley Transform works: $L(A)= (I-A)(I+A)^{-1}$. The Cayley transform has a version for symplectic matrices as well. Is there a construction which works for all $G$? If not, are there known ad hoc constructions for exceptional groups? 1 # Cayley Transform for all reductive groups a.k.a. an algebraic logarithm Is it true that for every reductive algebraic group $G$ over ${\mathbb C}$ with Lie algebra $\mathfrak g$ there is an open neighborhood $U$ of the identity in $G$ and an algebraic function (in the sense of algebraic geometry) $L: U\to \mathfrak g$ which satisfies the following properties of a logarithm: (1) $L$ is $G$-equivariant with respect to the $G$-action on $G$ by conjugation and the Adjoint $G$-action on $\mathfrak g,$ (2) $L(e)=0,$ (3) For some maximal torus $T$ in $G$, $L(T\cap U)$ lies in the Lie algebra of $T.$ For $G=GL(n,\mathbb C)$, the embedding $L:GL(n,\mathbb C)\to gl(n,\mathbb C)$ works. For $G=SO(n,\mathbb C)$, the Cayley Transform works: $L(A)= (I-A)(I+A)^{-1}$. The Cayley transform has a version for symplectic matrices as well. Is there a construction which works for all $G$? If not, are there known ad hoc constructions for exceptional groups?
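As a quick numerical illustration of the $SO(n)$ case mentioned in the question (restricted to real matrices for simplicity), here is a sanity-check sketch of properties (1)-(3); it assumes NumPy and is, of course, not part of the original question:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
I = np.eye(n)

def cayley(B):
    # L(B) = (I - B)(I + B)^{-1}, the candidate "algebraic logarithm".
    return (I - B) @ np.linalg.inv(I + B)

# A random special orthogonal matrix with I + A invertible, built by
# applying the (involutive) Cayley map to a random skew-symmetric S.
M = rng.standard_normal((n, n))
S = 0.3 * (M - M.T)
A = cayley(S)
assert np.allclose(A.T @ A, I) and np.isclose(np.linalg.det(A), 1.0)

L = cayley(A)
assert np.allclose(L, -L.T)        # L(A) lands in the Lie algebra so(n)
assert np.allclose(L, S)           # Cayley is an involution: L(L(S)) = S
assert np.allclose(cayley(I), 0)   # property (2): L(e) = 0

# Property (1): equivariance, L(g A g^{-1}) = g L(A) g^{-1}.
g = np.linalg.qr(rng.standard_normal((n, n)))[0]   # a random orthogonal g
assert np.allclose(cayley(g @ A @ g.T), g @ L @ g.T)
print("all Cayley checks passed")
```

The equivariance check works for any invertible $g$, since $(I - gAg^{-1})(I + gAg^{-1})^{-1} = g(I-A)(I+A)^{-1}g^{-1}$.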
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9010574221611023, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/homework?page=20&sort=newest&pagesize=50
# Tagged Questions Applies to questions of primarily educational value - not only questions that arise from actual homework assignments, but any question where it is preferable to guide the asker to the answer rather than giving it away outright. 1answer 169 views ### Inhomogeneous Effective Mass in a 2D Lattice Consider a tight-binding square lattice in 2D. This lattice has two different nearest neighbor tunneling rates along the x and y directions; call them $J_{x}$ and $J_{y}$. All longer range tunneling ... 1answer 140 views ### Clarification on a Goldstein formula's steps (classical mechanics) On page 20 of Goldstein's Classical Mechanics (third edition), there are these two steps given between eqs. (1.51) and (1.52): \sum_i m_i \ddot {\bf r}_i \cdot \frac{\partial {\bf r_i}}{ \partial ... 0answers 168 views ### Simple heat transfer question [closed] You add an unknown volume of milk of $5.2 ^\circ C$ to a cup of coffee ($40 mL$ of water, temperature: $80.3 ^\circ C$). After a while of stirring the temperature reaches $73.2 ^\circ C$. The ... 3answers 2k views ### Finding Angular Acceleration of rod given radius and angle A uniform rod is 2.0 m long. The rod is pivoted about a horizontal, frictionless pin through one end. The rod is released from rest at an angle of 30° above the horizontal. What is the angular ... 1answer 154 views ### Finding Max Radius of Propeller (angular velocity) I am looking at a question from University Physics The given answer What's the intuition behind using the below diagram of finding $v_{tip}$? I was looking at the $v_{tan}$ not knowing how ... 1answer 300 views ### How do I find the initial velocity in this problem? An X-ray tube gives electrons constant acceleration over a distance of $20\text{ cm}$. If their final speed is $2.0\times 10^7\text{ m/s}$, what is the electrons' acceleration? I know this ... 2answers 108 views ### Solar Thermal/Solar Photo-voltaic calculations [closed] This is my final high school assignment. I'm asked to prepare a research document on Solar Thermal and Solar Photo-voltaic, to prove that these two are feasible alternative power sources to power the ... 1answer 419 views ### Maximum Kinetic energy of a spring The block in the figure below lies on a horizontal frictionless surface and is attached to the free end of the spring, with a spring constant of 35 N/m. Initially, the spring is at its relaxed ... 0answers 86 views ### Bandgap Spacing in Photonic Crystals I am doing some self-study on photonics and have encountered the following question: We know that amorphous electronic crystals such as amorphous silicon have a bandgap. Can amorphous photonic ... 2answers 258 views ### Linear motion with variable acceleration Consider the following problem: I pull a mass m resting at x = 0 on a frictionless table connected to a spring with some k by an amount A and let it go. What will be its speed at x=0? I know how to ... 1answer 335 views ### Find magnetic scalar potential for superconducting sphere In regions where $J = 0$, the curl of the magnetic field $B$ is necessarily zero (since $\nabla \times B = \mu_0 J$). Therefore $B$ can be written as $B = -\nabla V_m$, where $V_m$ is a scalar ... 1answer 4k views ### Abiotic oil vs the traditional theory of oil deposit formation [closed] I am curious to see what people think of the abiotic theory of oil deposit formation versus the traditional theory. I have long wondered how enough organic material became trapped underground to ...
0answers 489 views ### Finding the period and frequency for simple harmonic motion [closed] A 1 lb weight is suspended from a spring. Let y give the deflection (in inches) of the weight from its static deflection position, where "up" is the positive direction for y. If the static ... 2answers 587 views ### Why might the normal force on a box not be equal to its weight? Very simple homework question which I managed to get wrong: "The weight of a box sitting on the floor points directly down. The normal force of the floor on the box points directly up. Need these ... 1answer 95 views ### Is Force equal to components in different dimensions of Force or distance of those components I'm trying to understand how to solve my homework problem. It's a simple concept of finding force. I wonder, if you have two formulas, is the total Force = force from x + force from y? Or ... 4answers 175 views ### Find total energy and momentum of a moving electron in a rest frame I have an electron moving with speed $u'$ in a frame $S'$ moving with speed $v'$ relative to a rest frame $S$. How do I find the total energy and momentum of the electron in the rest frame $S$? I ... 0answers 80 views ### How to figure out an elastic constant? [closed] I'm doing this study and I have this question which I'm not 100% sure on; it's got me pretty stumped. Anyone think they can help me?! When a bowstring is pulled back in preparation for ... 1answer 173 views ### Can I find a potential function in the usual way if the central field contains $t$ in its magnitude? I'm working on a classical mechanics problem in which the problem states that a particle of mass $m$ moves in a central field of attractive force of magnitude: $$F(r, t) = \frac{k}{r^2}e^{-at}$$ ... 0answers 182 views ### Finding transcendental equation for the energy of a particle in delta potential well near infinite potential barrier [closed] I'm having trouble finding the transcendental equation for a particle in a delta potential settled near an infinite potential wall. The potential is given by V(x) = \begin{cases} \infty & x ... 0answers 84 views ### Wireless signal strength My question is possibly somewhat misplaced, but I'll try to explain as best as I can. Suppose I have a transmitter with a frequency of 2500MHz and a power of 1W. It radiates uniformly in all ... 1answer 404 views ### Calculate acceleration and time given initial speed, final speed, and travelling distance? [closed] A motorcycle is known to accelerate from rest to 190km/h in 402m. Considering the rate of acceleration is constant, how should I go about calculating the acceleration rate and the time it took the ... 2answers 783 views ### Calculating work done on an ideal gas I am trying to calculate the work done on an ideal gas in a piston set up where temperature is kept constant. I am given the volume, pressure and temperature. I know from Boyle's law that volume is ... 3answers 259 views ### Same momentum, different mass The question is: if a bowling ball and a ping pong ball are moving with the same momentum and you exert the same force to stop each one, which will take a longer time? Or the same? Which will have a longer ... 1answer 112 views ### helium balloon tied to a car [closed] A helium balloon is tied to the seat of a car. The doors and windows of the car are closed. If the car now starts moving, in which direction will the balloon move - front or back? 2answers 236 views ### How does one calculate the volume of a nucleus and the volume of an atom (in this case hydrogen)?
The hydrogen atom contains 1 proton and 1 electron. The radius of the proton is approximately 1.0 fm (femtometers), and the radius of the hydrogen atom is approximately 53 pm (picometers). 1answer 320 views ### Calculate the UPS Capacity in amp-hours [closed] I am trying to find out the UPS capacity in amp-hours for my HP UPS system. I've already done some calculations based on the UPS information from the HP Power Manager software. Below are my ... 1answer 259 views ### Internal forces in a truss and its geometry I'm to work out the internal forces in a truss, but I can't get my head around the geometry of the truss itself. I'm starting to think there may have been information on the diagram which I missed. ... 1answer 58 views ### 1D Acoustical Relations beyond nearest neighbor couplings Consider some 1D lattice of atoms with $n$th neighbor coupling of strength $k_{n}$. I'm looking for the dispersion relation for acoustical phonons under these conditions. I start with the Lagrangian, ... 0answers 221 views ### Capacitance of this unusual capacitor This capacitor is composed of two half-spherical shell conductors, both with radius $r$. There is a very small space between the two parts, so that no charge is exchanged between them. ... 1answer 44 views ### Rays in Symmetric Resonator I'm having some trouble figuring out how to get started on this question: If I have a symmetric resonator with two concave mirrors of radii $R$ separated by a certain distance, after how many round ... 2answers 366 views ### Deriving the Poynting Theorem I am trying to derive the Poynting theorem. So far, I've only been able to narrow down which equations I think I'll need to do so. These are the equations: Maxwell's Equations: \nabla\times{\bf E} ... 0answers 74 views ### How do I figure out the normal force on a person on an accelerating platform? [closed] My question is an elaboration on this question: Force on rope with accelerating mass on pulley The elaboration is to determine the force that the platform exerts on the person. Assume the ... 1answer 208 views ### What is the general relativistic calculation of travel time to Proxima Centauri? It has already been asked here how fast a probe would have to travel to reach Alpha Centauri within 60 years. NASA has done some research into a probe that would take 100 years to make the trip. But ... 3answers 294 views ### Jumping on earth versus jumping on the moon Given the following problem: On the moon the acceleration due to gravity is $g_m = 1.62 m/s^2$. On earth, a person of mass $m = 80 kg$ manages to jump $1.4 m$. Find the height this person will ... 1answer 305 views ### How to get the gradient potential in polar coordinates In polar coordinates, $$\nabla U = \frac{\partial U}{\partial r}\hat{\mathbf{r}} + \frac{1}{r}\frac{\partial U}{\partial \theta}\hat{\mathbf{\theta}} .$$ Can anyone show me how to get this result? 1answer 163 views ### Bouncing Ball Pattern If a ball is simply dropped, each time the ball bounces, its height decreases at what appears to be an exponential rate. Let's suppose that the ball is thrown horizontally instead of being simply ... 2answers 164 views ### Proof of $Dq-qD=1$ where $D=\frac{\partial }{\partial q}$ is the differential operator Can anyone provide me the proof of $Dq-qD=1$ where $D=\frac{\partial }{\partial q}$ refers to the differential operator? Or if it's something special to quantum mechanics, why is it? Is this ...
2answers 88 views ### Proof of $T=\sqrt{2y/a}$ for a uniformly accelerating object [closed] Suppose that there is an object that does a y-axis-only free fall to the ground. The initial distance from the ground is defined as $H$. How does one prove that the time the object takes to reach the ground ... 0answers 72 views ### Describing the movement of the object in a particular situation in Lagrangian way Suppose there is an object M (sliding motion) moving with initial speed $v$ from initial location $x_0$. Unless otherwise noted, friction is assumed to be nonexistent. It then meets a circular mold ... 3answers 206 views ### How do I integrate $\frac{1}{\Psi}\frac{\partial \Psi}{\partial x} = Cx$ How do I integrate the following? $$\frac{1}{\Psi}\frac{\partial \Psi}{\partial x} = Cx$$ where $C$ is a constant. I'm supposed to get a Gaussian function out of the above by integrating but don't ... 1answer 151 views ### Questions regarding solving the Brachistochrone problem using the Lagrangian The brachistochrone problem: Suppose that there is a rollercoaster. There is point 1 ($0,0$) and point 2 ($x_2, y_2$). Point 1 is higher than point 2, so the rollercoaster ... 2answers 125 views ### Frequency of a tuning fork in a vacuum Consider this equation of a damped harmonic oscillator: $$\ddot{x}+2\gamma\dot{x}+\omega^2_0 x=0$$ with $\gamma=\frac{b}{2m}$ and $\omega_0=\sqrt{\frac{k}{m}}$ Finally, we know that the ... 1answer 359 views ### Adiabatic expansion [closed] I'll start off by saying this is homework, but I ask because I don't understand how the math should work (I don't just want an answer, I'd like an explanation if possible). I understand if this is ... 0answers 110 views ### Antenna Power and gain calculation [closed] I have a wireless security related question, the second part confused me: Your wireless network usually has a range of 100 feet. However you are having a (confidential) meeting in a 10' x 10' x ... 1answer 1k views ### Distribution of charge on a hollow metal sphere A hollow metal sphere is electrically neutral (no excess charge). A small amount of negative charge is suddenly placed at one point P on this metal sphere. If we check on this excess negative charge a ... 1answer 410 views ### Friction due to air drag at high speeds I am trying to set up this problem, but I am not sure how to go about doing so. (From University Physics, Young & Freedman): You throw a baseball straight up. The drag force is proportional to ... 0answers 36 views ### What is some analogous experiment about black holes using dairy products like eggs and milk? [duplicate] Possible Duplicate: Black hole analog experiment? I will explain my situation a little bit: My teacher assigned an experimental project that must include the dairy products egg and milk ... 1answer 170 views ### A differential equation of Buckling Rod I tried to solve a differential equation, but unfortunately got stuck at some point. The problem is to solve the diff. eq. of a rod rigidly clamped at both ends. And the force compresses the rod at both ... 2answers 492 views ### Barrier in an infinite double well I am stuck on a QM homework problem. The setup is this: (To be clear, the potential in the left and rightmost regions is $0$ while the potential in the center region is $V_0$, and the wavefunction ... 1answer 357 views ### Schrödinger equation with complex potential In 1 dimension, what is the solution of the Schrödinger equation with potential $$V(x) = V_r + i V_i$$ where the potentials are constant?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275545477867126, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/15672/phase-space-of-a-discrete-dynamical-system
# Phase space of a discrete dynamical system Suppose a dynamical system of one variable $x$ with discrete time-steps. I've seen in some papers a type of graph in which $x(n+1)$ is plotted versus $x(n)$. My questions are: 1/ Can this be considered as the phase portrait of the system? 2/ Does this method have a specific name? 3/ Have there been any studies with regard to the topology of this space? - 3 – Slaviks Oct 13 '11 at 19:11 Thanks, exactly what I was thinking about! – AlexPof Oct 14 '11 at 11:45 ## 2 Answers The phase space dynamics of the discrete dynamical system is just what you describe--- x(n+1) as a function of x(n). The phase space itself is the range of values of the x(n), whatever space they might live on, while the dynamics is the function that specifies the evolution in one step in time. The connection with mechanical phase space is provided by a Poincare section. The Poincare section describes a continuous dynamical system by its intersections with a given surface in the full phase space. For a 1d motion, you can consider the half-line x=0,p>0, or in canonical action-angle coordinates $\theta$ fixed, J arbitrary. When you have a separable integrable motion, you take any one of the $\theta$ variables and define a surface by setting it to zero. Then the motion will intersect this surface once every period. In mechanical phase space, the phase-space volume is conserved, but this is not so for maps. The condition of transversal intersection ensures that the return map from the Poincare surface to itself is well defined. ### Topological properties The properties of maps on spaces are as complicated as you like. The question is then which topological properties are you interested in? The simplest topological theorem on maps is the Brouwer fixed point theorem, which can be restated as follows: • Link the points x and f(x) by a path. If you draw a contractible sphere, and you find that as you go around the boundary, this x-f(x) map has a nonzero winding, then there is a fixed point inside this sphere. The winding of a sphere around another sphere is the index of the map--- it is how many times the sphere covers the other sphere in the map. The Brouwer theorem is classical. Another classical theorem of this sort is Sharkovskii's theorem: • There is a linear order on periods of periodic cycles in 1d maps, such that each periodic orbit of length l implies that there is a periodic orbit of length l' whenever l' is greater than l in this order. Some other results are given by symbolic dynamics, the coarse grained position as a function of time. The notion of the entropy of a dynamical system is related to this. These results are not really topological in character, but they are general, and give qualitative insight, so they are similar. Many further results can be found here, http://elib.tu-darmstadt.de/tocs/35981431.pdf - Yes, given a discrete dynamics q(t-1), q(t), q(t+1) ..., one can view the Poincare plot of q(t+1) vs q(t) as a representation of the phase space dynamics of the system. However, considering the variable q as the generalized coordinate of the discrete system, we would get closer to a phase space description if we define the generalized momentum p(t) := q(t) - q(t-1), and plot the 'momenta' p(t) versus the 'positions' q(t). An example would be the reversible dynamics based on q(t+1) - 3 q(t) + q(t-1) = 0.
Using the momenta as defined above, this dynamics can be re-written in phase-space form: q(t+1) = 2 q(t) + p(t), p(t+1) = q(t) + p(t), in which we recognize Arnold's cat map. This is an area-preserving map with chaotic properties. Usually maps like these are studied by applying periodic boundary conditions. In that case the phase space has the topology of a torus. But one should not assign any specific meaning to that (convenient) choice. Not sure what else to say about the topology of the phase space of discrete dynamical systems. In principle, you can define it the way you want it to be. -
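To make the map in this answer concrete, here is a minimal numerical sketch (assuming NumPy; not part of the original answers) that iterates Arnold's cat map on the torus and records the kind of $q(t+1)$-versus-$q(t)$ pairs the question asks about:

```python
import numpy as np

# Arnold's cat map on the unit torus: (q, p) -> (2q + p, q + p) mod 1.
M = np.array([[2, 1],
              [1, 1]])
assert np.isclose(np.linalg.det(M), 1.0)  # constant Jacobian: area-preserving

def orbit(state, steps):
    """Iterate the map, returning the orbit including the initial point."""
    points = [np.asarray(state, dtype=float)]
    for _ in range(steps):
        points.append((M @ points[-1]) % 1.0)
    return np.array(points)

traj = orbit([np.sqrt(2) % 1, np.sqrt(3) % 1], steps=1000)
# The plot discussed in the question: q(t+1) against q(t).
pairs = np.column_stack([traj[:-1, 0], traj[1:, 0]])
print(pairs[:5])
```

For typical (irrational) initial points the pairs scatter over the whole square, consistent with the chaotic mixing mentioned in the answer; rational initial points, by contrast, give periodic orbits.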
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156162738800049, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/82167/how-can-two-horizontal-forces-acting-at-one-point-be-at-right-angles
# How can two horizontal forces acting at one point be at right angles? I was looking at this exam question, and I had trouble figuring out what it means. This question is on mechanics. Two horizontal forces X and Y act at a point O and are at right angles to each other. I can't seem to understand how it is possible that there can be two horizontal forces acting at a point that are at right angles to each other. @dsa If it helps, you could conveniently assume that force $X$ acts along the x-axis and force $Y$ along the y-axis, the vertical axis being the z-axis. – Srivatsan Nov 14 '11 at 22:27
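A minimal worked picture of Srivatsan's comment (my own elaboration of it, with the x- and y-axes horizontal and the z-axis vertical):

```latex
\vec{X} = (X, 0, 0), \qquad \vec{Y} = (0, Y, 0), \qquad \vec{X} \cdot \vec{Y} = 0 .
```

Both forces lie in the horizontal plane $z = 0$, yet they are perpendicular: "horizontal" only fixes the vertical component to zero, leaving a whole plane of possible directions. Their resultant has magnitude $\sqrt{X^2 + Y^2}$.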
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9679080247879028, "perplexity_flag": "head"}
http://deltaepsilons.wordpress.com/2009/10/20/unramified-extensions/
# Delta Epsilons Mathematical research and problem solving ## Unramified extensions October 20, 2009 Posted by Akhil Mathew in algebra, algebraic number theory, number theory. Tags: discrete valuation rings, Nakayama's lemma, unramified extensions trackback As is likely the case with many math bloggers, I've been looking quite a bit at MO and haven't updated on some of the previous series in a while. Back to ANT. Today, we tackle the case ${e=1}$. We work in the local case where all our DVRs are complete, and all our residue fields are perfect (e.g. finite) (EDIT: I don't think this works out in the non-local case). I'll just state these assumptions at the outset. Then, unramified extensions can be described fairly explicitly. So fix DVRs ${R, S}$ with quotient fields ${K,L}$ and residue fields ${\overline{K}, \overline{L}}$. Recall that since ${ef=n}$, unramifiedness is equivalent to ${f=n}$, i.e. $\displaystyle [\overline{L}:\overline{K}] = [L:K].$ Now by the primitive element theorem (recall we assumed perfection of ${\overline{K}}$), we can write ${\overline{L} = \overline{K}(\overline{\alpha})}$ for some ${\overline{\alpha} \in \overline{L}}$. The goal is to lift ${\overline{\alpha}}$ to a generator of ${S}$ over ${R}$. Well, there is a polynomial ${\overline{P}(X) \in \overline{K}[X]}$ with ${\overline{P}(\overline{\alpha}) = 0}$; we can choose ${\overline{P}}$ irreducible and thus of degree ${n}$. Lift ${\overline{P}}$ to ${P(X) \in R[X]}$ and ${\overline{\alpha}}$ to ${a' \in S}$; then of course ${P(a') \neq 0}$ in general, but ${P(a') \equiv 0 \mod \mathfrak{m}'}$ if ${\mathfrak{m}'}$ is the maximal ideal in ${S}$, say lying over ${\mathfrak{m} \subset R}$. So, we use Hensel's lemma to find ${a}$ reducing to ${\overline{\alpha}}$ with ${P(a)=0}$—indeed ${P'(a')}$ is a unit by separability of ${\overline{L}/\overline{K}}$. I claim that ${S = R[a]}$. Indeed, let ${T=R[a]}$; this is an ${R}$-submodule of ${S}$, and $\displaystyle \mathfrak{m} S + T = S$ because of the fact that ${S/\mathfrak{m'}}$ is generated by ${\overline{\alpha}}$ as a field over ${\overline{K}}$. Now Nakayama's lemma implies that ${S=T}$. Proposition 1 Notation as above, if ${L/K}$ is unramified, then we can write ${L=K(\alpha)}$ for some ${\alpha \in S}$ with ${S=R[\alpha]}$; the irreducible monic polynomial ${P}$ satisfied by ${\alpha}$ remains irreducible upon reduction to ${\overline{K}}$. There is a converse as well: Proposition 2 If ${L=K(\alpha)}$ for ${\alpha \in S}$ whose monic irreducible ${P}$ remains irreducible upon reduction to ${\overline{K}}$, then ${L/K}$ is unramified, and ${S=R[\alpha]}$. Consider ${T := R[X]/(P(X))}$. I claim that ${T \simeq S}$. First, I claim that ${T}$ is a DVR. Now ${T}$ is a finitely generated ${R}$-module, so any maximal ideal of ${T}$ must contain ${\mathfrak{m}T}$ by the same Nakayama-type argument. In particular, a maximal ideal of ${T}$ can be obtained as an inverse image of a maximal ideal in $\displaystyle T \otimes_R \overline{K} = \overline{K}[X]/(\overline{P}(X))$ by right-exactness of the tensor product. But this is a field by the assumptions, so ${\mathfrak{m}T}$ is the only maximal ideal of ${T}$. This is principal, so ${T}$ is a DVR and thus must be the integral closure ${S}$, since the field of fractions of ${T}$ is ${L}$. Now ${[L:K] = \deg P(X) = \deg \overline{P}(X) = [\overline{L}:\overline{K}]}$, so unramifiedness follows. Next up: totally ramified extensions, differents, and discriminants.
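To make the propositions concrete, here is a standard worked example, added for illustration (the facts quoted are classical, though the write-up is mine):

```latex
Take $R = \mathbb{Z}_p$, $K = \mathbb{Q}_p$, so $\overline{K} = \mathbb{F}_p$. For the
residue extension $\overline{L} = \mathbb{F}_{p^n}$, the multiplicative group
$\mathbb{F}_{p^n}^{\times}$ is cyclic of order $p^n - 1$, so we may take
$\overline{\alpha}$ to be a primitive $(p^n - 1)$-st root of unity and
$\overline{P}(X)$ its minimal polynomial, which divides $X^{p^n - 1} - 1$. Hensel's
lemma lifts $\overline{\alpha}$ to a $(p^n - 1)$-st root of unity $\zeta \in S$, and
the propositions give
\[
  L = \mathbb{Q}_p(\zeta), \qquad S = \mathbb{Z}_p[\zeta],
\]
the (unique) unramified extension of $\mathbb{Q}_p$ of degree $n$.
```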
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 69, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9136275053024292, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/10142-domain-rule.html
# Thread: 1. ## Domain Rule If I am trying to calculate the domain and there is an absolute value, would I set the part inside the absolute value brackets to: greater than or equal to zero, greater than zero, or not equal to zero? Thanks for the help. 2. Originally Posted by qbkr21 If I am trying to calculate the domain and there is an absolute value, would I set the part inside the absolute value brackets to: greater than or equal to zero, greater than zero, or not equal to zero? Thanks for the help. Perhaps it would help if you posted the function... -Dan 3. I'm just looking for a general rule... denominator: =/ (not equal to zero) square root: >= (greater than or equal to) log: > (greater than) absolute value: ? 4. Originally Posted by qbkr21 I'm just looking for a general rule... denominator: =/ (not equal to zero) square root: >= (greater than or equal to) log: > (greater than) absolute value: ? Then consider the archetype: $y = |x|$ This function is defined on all real numbers, so the domain is $( -\infty, \infty)$. The basic idea is that there is a y value for every real x value. -Dan 5. Thanks
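Putting Dan's rules together, here is one worked example of the kind being asked about (my own example, not from the thread):

```latex
f(x) = \frac{\sqrt{x - 1}}{\log(4 - x)}:
\qquad x - 1 \ge 0 \ \Rightarrow\ x \ge 1, \qquad
4 - x > 0 \ \Rightarrow\ x < 4, \qquad
\log(4 - x) \ne 0 \ \Rightarrow\ x \ne 3.
```

Intersecting the three conditions gives the domain $[1, 3) \cup (3, 4)$; an absolute value, by contrast, imposes no restriction at all.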
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8898282647132874, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/52821/presentation-of-extentions-of-groups
## Presentation of extensions of groups Presentation of a semi-direct product of $N$ by $H$ can be written from presentations of $N$ and $H$. But for other extensions of $N$ by $H$ (cyclic, central, etc.), which are not semi-direct products, can we write a presentation of the extension from presentations of $N$ and $H$? - 1 In general, you need to know two things: what each relator in H is equal to in N, and what generators of H conjugate generators of N to. For example, suppose I have the extension C_2 -> G -> C_2, with presentations <x | x^2> and <y | y^2>. Suppose I know x^y = x and y^2=x, then my presentation for G is <x,y | x^2, y^2=x, x^y=x>. – Steve D Jan 22 2011 at 7:15 1 The question is: how is the extension given to you in the first place? – Alex Bartel Jan 22 2011 at 8:25 2 This is a duplicate of this (very poorly worded, subsequently closed) question: mathoverflow.net/questions/44631/… , which received a very good answer from Derek Holt. – HW Jan 22 2011 at 16:08 As per Holder's program, it is possible to construct all groups if we know the simple groups. "Semi-direct product" is a nice tool to construct many groups from two known groups; but when we try to find all "p-groups" then semi-direct products are not sufficient. We move towards general extensions of groups. There may be many extensions of a group $N$ by $H$ which are isomorphic. But if we have presentations of these extensions, we can determine isomorphism classes easily. Therefore, I would like to know whether a presentation of the extension can be written from presentations of $N$ and $H$. – RDK Jan 23 2011 at 4:37 What do you mean by 'it is possible to construct all groups'? How do you want to describe the groups? As presentations? It is clearly possible to list all presentations. On the other hand, the isomorphism problem for groups is unsolvable, so you can't possibly hope to classify groups up to isomorphism. – HW Jan 23 2011 at 18:50 ## 1 Answer To start with, following Alex's comment above, I think you need to look at some books on cohomology of groups to specify how the extension is going to be given. That means looking at the Schreier theory. Many years ago the Schreier theory of extensions was adapted to give exactly this by Turing. (A little-known paper of his.) Ronnie Brown and I gave a more modern treatment of it in a paper in the Proceedings of the Royal Irish Academy. (On the Schreier theory of non-abelian extensions: generalisations and computations, Proc. Royal Irish Acad., 96A, (1996) 213-227.) That is quite general, but a simple version of the question can be found discussed in many books on combinatorial group theory, such as D. L. Johnson, 1980, Topics in the theory of group presentations, number 42 in London Math. Soc. Lecture Notes, Cambridge University Press. and D. L. Johnson, 1997, Presentations of groups, volume 15 of London Mathematical Society Student Texts, Cambridge University Press, Cambridge. The advantage of these is that they do not require an immense expenditure of time to get to the heart of the problem. (An interesting follow-on is to examine ways in which to start with an extension of groups, plus resolutions of the two ends, and give a resolution of the middle term.) -
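For reference, here is the general shape of the recipe that Steve D's comment above instantiates; this is the standard material treated in Johnson's books cited in the answer, summarized here in my own notation, so treat the details as a sketch:

```latex
Given an extension $1 \to N \to G \to H \to 1$ with $N = \langle X \mid R \rangle$
and $H = \langle Y \mid S \rangle$, choose a lift $\tilde{y} \in G$ of each
generator $y \in Y$. Then
\[
  G = \bigl\langle\, X \cup \tilde{Y} \ \bigm|\ R,\quad
      \tilde{s} = w_s \ (s \in S),\quad
      \tilde{y}^{-1} x \tilde{y} = v_{x,y} \ (x \in X,\ y \in Y) \,\bigr\rangle,
\]
where $\tilde{s}$ denotes the relator $s$ rewritten in the lifts, and $w_s$,
$v_{x,y}$ are words in $X$ recording what each relator of $H$ equals in $N$ and how
the lifts conjugate the generators of $N$.
```

The extension data enters exactly through the choices of $w_s$ and $v_{x,y}$; in Steve D's example, $w_{y^2} = x$ and $v_{x,y} = x$.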
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184467792510986, "perplexity_flag": "middle"}
http://mathdl.maa.org/mathDL/60/?pa=content&sa=viewDocument&nodeId=3404
# Supplement to 'Fermat's Spiral and the Line Between Yin and Yang' by Taras Banakh (Lviv National University), Oleg Verbitsky (Humboldt-Universität zu Berlin) and Yaroslav Vorobets (Texas A&M University) [Accompanies article to appear in American Mathematical Monthly.] [applet written by Tom Leathrum] The graph here shows the unit circle and the polar curve $$\theta=\pi r^2 T$$, a "Fermat spiral," where $$T$$ is the value of the "turn" parameter (which can be set with the first slider in the applet).  The branch of this curve with $$0\leq r\leq 1$$ can be highlighted in red by selecting the "highlight curve" checkbox.  The semitransparent blue region is congruent to the gray region, but reflected across the $$x$$-axis and rotated counterclockwise by $$\pi R$$ radians, where $$R$$ is the value of the "rotate" parameter (which can be set with the second slider in the applet).  The blue region can be removed from the graph by unselecting the "show reflected" checkbox (this also disables the "rotate" slider). Note: In order to view this applet, you must use a browser with the Java plug-in (version 1.6) installed. #### Facts about Applet: • For any value of the "turn" parameter, the blue region is the reflection of the gray region across the line through the origin which forms an angle half the angle of rotation of the blue region. • For any value of the "turn" parameter, the maximal subset of the gray region that is symmetric with respect to reflection through a line is the intersection of the gray region with its reflection through the line, i.e. with the blue region when the angle formed by the line is half the angle of rotation of the blue region. • When the "turn" parameter takes the value 1.0, for any angle of rotation of the blue region, the intersection of the gray and blue regions has area exactly 1/4 of the area of the circle. The first fact is apparent from the graph. The article includes proofs of the other two facts (albeit stated in more general terms), along with other important symmetry properties of the gray and blue regions. The excerpt below provides background for the statement (proven in the article) of an important uniqueness property for the Fermat spiral in the case where the "turn" parameter takes the value 1.0. #### Excerpt from Article: From the mathematical point of view, the yin-yang symbol is a bipartition of the disk $$D$$ by a certain curve $$\beta$$. We aim at identifying this curve and deriving an explicit mathematical expression for it. Such a project should apparently begin with choosing a set of axioms for basic properties of the yin-yang symbol in terms of $$\beta$$: (A1) $$\beta$$ splits $$D$$ into two congruent parts. (A2) $$\beta$$ crosses each concentric circle of $$D$$ twice. (A3) $$\beta$$ crosses each radius of $$D$$ once (besides the center of $$D$$, which must be visited by $$\beta$$ due to (A2)). Denote the symmetry group of the disk $$D$$ by $$Sym(D)$$. As is well known, it consists of reflection and rotation symmetries. Focusing on these intrinsic symmetries of the disk, we will call a set $$X\subseteq D$$ symmetric if $$s(X)=X$$ for some nonidentity $$s\in Sym(D)$$. Suppose now that $$D$$ has unit area. In fact, instead of area we will more often refer to the more general concept of Lebesgue measure.
We call a set $$A\subseteq D$$ perfect if it has measure 1/2 and any symmetric subset of $$A$$ has measure at most 1/4. (A4) $$\beta$$ splits $$D$$ into perfect sets (from now on it is supposed that $$D$$ has unit area). (A5) $$\beta$$ is smooth, i.e., has an infinitely differentiable parameterization $$\beta:[0,1]\to D$$ with nonvanishing derivative. (A6) $$\beta$$ is algebraic in polar coordinates. A classical instance of a curve both smooth and algebraic in polar coordinates is Fermat's spiral. Fermat's spiral is defined by the equation $$a^2r^2=\theta$$. The part of it specified by the restriction $$0\leq\theta\leq\pi$$ (or, equivalently, $$-\sqrt{\pi}/a\leq r\leq\sqrt{\pi}/a$$) is inscribed in the disk of area $$(\pi/a)^2$$. Theorem 1.1 Fermat's spiral $$\pi^2 r^2=\theta$$ is, up to congruence, the unique curve satisfying the axiom system (A1)-(A6). Note that the factor of $$\pi^2$$ in the curve equation in Theorem 1.1 comes from the condition that $$D$$ has area 1. As all Fermat's spirals are homothetic, we can equally well draw the yin-yang symbol using, say, the spiral $$\theta=r^2$$. Varying the range of $$r$$, we obtain modifications as twisted as desired. [Note:  The applet uses a unit circle, which has area $$\pi$$.  At this scale, the Fermat spiral for Theorem 1.1 is given by the equation $$\pi r^2=\theta$$, corresponding to the "turn" parameter taking value 1.0.  Extra "twists" as described above can be drawn using higher values for the "turn" parameter.] #### References: • Banakh, Taras, Oleg Verbitsky, and Yaroslav Vorobets, "Fermat's Spiral and the Line Between Yin and Yang," to appear in American Mathematical Monthly. • MetaPost code for figures in Monthly article (applet was based on this code). • Nelson, Roice, "Searching for a Little More Hidden Symmetry," http://roice3.blogspot.com/2009/02/searching-for-little-more-hidden.html (February 22, 2009). Banakh, Taras, Oleg Verbitsky and Yaroslav Vorobets, "Supplement to 'Fermat's Spiral and the Line Between Yin and Yang'," Loci (February 2010), DOI: 10.4169/loci003404
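For readers without a Java-capable browser, the applet's basic picture is easy to reproduce; a minimal sketch, assuming NumPy and Matplotlib (this code is not part of the original supplement):

```python
import numpy as np
import matplotlib.pyplot as plt

T = 1.0                               # the applet's "turn" parameter
r = np.linspace(0.0, 1.0, 400)
theta = np.pi * r**2 * T              # the polar curve theta = pi r^2 T

ax = plt.subplot(projection="polar")
# The branch 0 <= r <= 1 together with its continuation to negative r,
# which in polar coordinates is the rotated branch (theta + pi, r).
ax.plot(theta, r, color="red")
ax.plot(theta + np.pi, r, color="red")
phi = np.linspace(0.0, 2.0 * np.pi, 400)
ax.plot(phi, np.ones_like(phi), color="gray")   # the unit circle
ax.set_rmax(1.0)
ax.set_rticks([])
plt.show()
```

Raising `T` above 1.0 draws the extra "twists" described in the excerpt.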
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 41, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8806774616241455, "perplexity_flag": "middle"}
http://letsplaymath.net/2009/09/29/do-your-students-understand-division/
[I couldn't find a good picture illustrating "division." Niner came to my rescue and took this photo of her breakfast.] I found an interesting question at Mathematics Education Research Blog. In the spirit of Liping Ma’s Knowing and Teaching Elementary Mathematics, Finnish researchers gave this problem to high school students and pre-service teachers: • We know that: $498 \div 6 = 83$. How could you use this relationship (without using long-division) to discover the answer to: $491\div6=?$ [No calculators allowed!] The Finnish researchers concluded that “division seems not to be fully understood.” No surprise there! Check out the pdf report for detailed analysis. ## My Own Research I wondered what my students would do with the problem. Chickenfoot has been working on geometry and algebra 2, so it took him a few minutes to drag his mind back to arithmetic, but then the question was easy for him. He falls in with the 30% of students who “produced either rigorous and complete solutions or correct solutions with missing elements in justification.” Next up, Princess Kitten, who was working on Backwards Math division this morning. Unfortunately, she found that rather traumatic, so I hesitated to challenge her with another hard problem. We negotiated a trade: one more tough problem today, in exchange for no math requirement at all tomorrow. ## Thinking It Through Kitten has just finished up a unit about long division, so this problem looked easy to her, until I told her that long division was not allowed. “Can I do short division?” Nope, none of that, either. Short division is what we call long division when we do the subtraction steps mentally. Looking at the problem again, she immediately recognized that 491 was smaller than 498, and she knew that subtraction would have something to do with the answer. She wrote down “7 diff,” meaning the difference between 498 and 491. And since the difference was not exactly 6, she told me there would be a remainder. [I didn't want to distract her by asking what would have happened if the difference was 12. I think she would have recognized that any multiple of 6 means no remainder.] Kitten is not comfortable enough with fractions to handle the division completely. Since she’s only beginning 5th grade, I allowed her to answer in the form “___ R __.” At this point, she stumbled. She couldn’t decide what the remainder should be or what number should come before the R. She knew there was a connection between the change in the division problem and subtracting 6, but she kept wanting to subtract from the answer: $83 - 6 = 77$, and later, $83 - 12 = 71$. [A disturbing number of the students in the Finnish study did this, too, but they didn't have the excuse of being in 5th grade!] ## Words are Easier than Numbers Numbers are confusing because they are so abstract. Word pictures are easier to imagine, so Kitten converted the numbers into a word problem: On one side of the street, there are six clubs. On the other side of the street live 498 people, all of whom want to join the clubs. How many people will be in each of the clubs? With this approach, Kitten was able to correctly answer and explain two related problems: if 6 people moved away, $492 \div 6 = 82$, and: if 12 people moved away, $486 \div 6 = 81$. But when 7 people moved away, her clubs came out uneven. One club lost an extra person, and she wasn’t sure what number to use for her answer. I added a new rule: There have to be the same amount of people in each club. What happens when seven people move away? 
To keep the same number of people in each club, some people will have to be kicked out… Aha! Finally, the remainder made sense to her. ## What Does the Quotient Mean? Still, Kitten wanted to subtract from 83, giving the answer "71 R 5." As long as everything came out even, she understood that the quotient (answer to the division problem) gave the number of people in each club. But she had trouble applying this concept when the problem involved a remainder. I'm not sure why that mental glitch caused her to revert to subtracting from 83, except perhaps that it was easier to subtract than to admit she didn't know what to do. I offered her one more problem-solving hint: Smaller numbers are easier to work with. What if we had started with 18 people (3 in each club), and then 7 moved away…? In the end, Kitten managed to come up with the correct answer. And when she finally saw it, she agreed that it made sense. In her own words: "491 is between 486 and 492, so the answer has to be between 81 and 82." ## What About You? And now, my readers, it's your turn: • Can you answer the research question? Try to think of at least two different methods. • How do your middle school or high school students handle the question? • How would you explain it to a 5th-grader? ## Have more fun on Let's Play Math! blog: 9 Comments 1. September 29, 2009 9:00 am Kitten gets to preview all articles about herself. She saw the Have more fun… links at the bottom of this post and commented: "I'll tell you how to teach math to a struggling student. DO NOT GIVE THEM THIS PROBLEM!!!" 2. Mary Smith September 29, 2009 8:22 pm Hi, I enjoyed reading about this as we were talking about this today in the 6th grade math class I observe for my college course. I am studying to be a middle school math teacher. Investigating blogs is one of our assignments this week. I enjoyed reviewing your site. In the 6th grade class they were relating fractions, decimals, and percents. We talked about 1/8 and got its decimal and percent version. From there we looked at 3/8 and tried to decide how we could solve this without division again. The one thing I observed is that some kids really get the relationship between numbers and others don't. They just do what is told to them in an algorithm. They don't examine the relationship of the numbers and how to see them from different angles (ha ha). I think that has to start in the early elementary years. Don't you? Thanks, Mary. 3. September 30, 2009 9:34 am Hi, Mary, and welcome to my blog! Middle school math is my favorite topic, so I hope you find plenty of helpful ideas here. I think you are right, that it's best to start this sort of teaching as early as possible. That is why I emphasize number bonds and mental math in the early grades, and also why we do a lot of oral story problems, even in preschool. Still, I think there are quite a few things you can do with middle school students to build up this kind of thinking. Mental math helps develop number sense and flexibility. The fraction calculation you mention and the division in the article above are both rather advanced topics in mental math. You may want to start your students on easier concepts, like addition and subtraction, and work up to the harder stuff. Here are a couple of links that might help: Mental Mathematics Strategies 5 Tips for Using Mental Math in Your Classroom 4.
Alexseil Parker September 30, 2009 1:19 pm The first thing I saw was that this posting was for middle grades, but I noticed at the end you asked the question about how the high school students would respond to the question. Sad to say, I think the response would be the same no matter what age, but I struggled with it. I am an aspiring high school math teacher. Great job as I scroll through the blog. I will definitely keep up with it! Alexseil 5. September 30, 2009 1:27 pm My “Grades 5+up” category is for any arithmetic-based post for middle school and beyond — even college level, as long as algebra or geometry are not required. I think I’ve tagged this article for both high school and middle school. The original researchers dealt with upper-grade high school students and college freshmen, so that is the target level for the question. Since I have a strong interest in younger students, however, the bulk of my post deals with how my 5th-grader fought her way to some level of understanding. She did not fully master the problem, but I believe we made progress. 6. September 30, 2009 6:23 pm Denise, I really like this problem! Will have to have my kids play with it. Thanks for posting it. 7. Mario Torrence October 19, 2009 10:27 am Hello, I’m investigating blogs as one of our assignments this week in my math ed class. I’ve been in the classroom for a few years and it’s really encouraging to find so many resources to help my students learn. Thanks. In my 6th grade math class we are relating fractions to decimals and percents. In converting fractions to percents I realized that, just like in the research, my kids didn’t grasp the meaning of division. They could process fractions with small numerators and denominators, but the larger the difference between the numerators and the denominators, the harder it was for them to visualize the problem. Like Denise, I observed that some kids really get the relationship between numbers and others don’t. How do you teach kids to recognize whether their answers are reasonable or not? 8. October 20, 2009 1:56 pm For a first approximation, students should ask themselves, “Will this be more or less than 1/2?” For better estimates, it helps to memorize a few “benchmark” conversions: 1/100 = 0.01 = 1% 1/10 = 0.1 = 10% 1/4 = 0.25 = 25% 1/3 = ~0.33 = ~33% 1/2 = 0.5 = 50% 3/4 = 0.75 = 75% etc. Then students can compare the fractions they are working with to these basic ones, to get an estimate of how reasonable their answers are. Of course, the biggest problem may be to convince them to make an estimate at all. Many students just want to put down any sort of answer and don’t care whether it is reasonable or not. Somehow you need to convince them that an incorrect answer now means more pain for them later — because they will get extra homework, perhaps? But if they get these problems correct, they get a break on the homework? My kids always like to think they are getting a break.
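For reference, the arithmetic that the post and its comments circle around can be compressed into one worked line (my restatement, not part of the original article):

$$491 = 498 - 7 = 6 \cdot 83 - 7 = 6 \cdot 81 + 5, \qquad \text{so} \qquad 491 \div 6 = 81 \ \text{R}\ 5.$$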
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9706698656082153, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/25461?sort=newest
## Modular forms with prime Fourier coefficients zero Can you give a non-trivial example of an integer weight cusp form which does not lie in the old subspace and has $a_p=0$ for all primes $p$? If such a form cannot exist, then why? - 1 Naïve heuristics suggest that no such modular form exists, and surely no such eigenform exists, as you very well know. I thought one could use results about eigenforms as an input to prove the general case, but I failed. The (first) difficulty I encountered is that for an arbitrary modular form, it is not obvious how to relate a_p with the Hecke operator T_p. – Olivier May 21 2010 at 9:27 Can you elaborate on your first comment? What heuristics suggest no such modular form exists? – Idoneal May 21 2010 at 10:59 A common heuristic is to regard the coefficients a_n as random with respect to congruence. So a_p already has a very slight chance of being divisible by p, let alone zero. Like I said, this is very naïve and just a suggestion. Outside of eigenforms, all I seem to be able to do is give rough estimates of the proportion of non-zero coefficients. They wouldn't tell you anything about coefficients at primes. – Olivier May 21 2010 at 11:42 That said, if your question has some definite purpose, perhaps it could help if you explained how such a modular form would (or would not) help. Presumably, if you have a construction that produces a certain output specifically at primes, this construction tells you something about a_p, and this is exactly what I lack in my (very amateur) understanding of the problem. – Olivier May 21 2010 at 11:46 Idoneal, please add some context to your question. – S. Carnahan♦ May 21 2010 at 12:49 ## 2 Answers Write $f=\sum c_i f_i$ as a sum over new eigenforms. Your condition is thus equivalent to $\sum c_i \lambda_i(p)=0$ for all $p$. Taking the absolute value squared of this and summing over $p\leq X$ gives $0=\sum_{i,j}c_i \overline{c_j} \sum_{p\leq X} \lambda_i(p)\overline{\lambda_j(p)}$. By the prime number theorem for Rankin-Selberg L-functions, the inner sum over primes is $\sim X (\log{X})^{-1}$ if $i=j$, and is $o(X (\log{X})^{-1})$ otherwise. Taking $X$ very large we obtain $0=cX(\log{X})^{-1}+o(X(\log{X})^{-1})$, a contradiction. - Very nice David! I wasn't thinking in this direction at all. So in fact you have proved the result for $a_p = 0$ for a set of positive density. – Idoneal May 21 2010 at 15:42 I'm not sure if that's true - all I've proven is that $\sum_{p \leq X} |a_p|^2 \sim cX (\log{X})^{-1}$. It's not obvious to me how to deduce that $a_p$ can't vanish a small but positive percent of the time - all this shows, I think, is that $a_p$ can't vanish a hundred percent of the time! :) – David Hansen May 21 2010 at 15:49 Yes, you are right. I wasn't careful. In fact for CM forms a_p=0 half the time. By the way, I wonder if there is a way to prove this using linear algebra and the multiplicity one principle as Olivier suggested. – Idoneal May 21 2010 at 16:00 It is only possible to write f as a sum over Hecke eigenforms, as David does, in a space of congruence modular forms (i.e., forms on a congruence subgroup of SL2(ℤ)). On a noncongruence subgroup, the Hecke operators send all genuinely noncongruence forms to 0. (G.
Berger, Hecke operators on noncongruence subgroups) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466159343719482, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/207315/is-there-a-fundamental-theorem-of-calculus-for-line-integrals-with-respect-to-ar
# Is there a fundamental theorem of calculus for line integrals with respect to arc length? I know that there's a fundamental theorem for line integrals. That is, suppose $C$ is a smooth curve given by $r(t)$, $a \leq t \leq b$ and suppose $\nabla f$ is continuous on $C$. Then $$\int_{C}\nabla f\cdot dr = f(r(b)) - f(r(a)).$$ Is there something similar for integrals of the form $$\int_{C}f(s)\, ds = \int_{a}^{b}f(x(t), y(t))\sqrt{x'(t)^{2} + y'(t)^{2}}\, dt?$$ - Isn't this just the normal fundamental theorem? Ultimately you're going to get a 1D integral out of that. – Robert Mastragostino Oct 4 '12 at 19:21 ## 1 Answer There is an inequality that can easily be proved using Schwarz' inequality: $$\bigl|f(r(b))-f(r(a))\bigr| \leq\int_a^b \bigl|\nabla f(r(t))\bigr|\ \bigl|r'(t)\bigr|\ dt = \int_C |\nabla f|\ ds\ .$$ -
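As a quick numerical sanity check of the inequality in the answer, here is a minimal sketch (my own illustration, not from the original thread; the function $f(x,y) = x + 2y$ and the quarter unit circle are arbitrary choices):

```python
import numpy as np

# f(x, y) = x + 2y along r(t) = (cos t, sin t), t in [0, pi/2]
t = np.linspace(0, np.pi / 2, 10001)
grad_norm = np.sqrt(5.0) * np.ones_like(t)  # |grad f| = |(1, 2)| = sqrt(5), constant here
speed = np.ones_like(t)                     # |r'(t)| = |(-sin t, cos t)| = 1
rhs = np.trapz(grad_norm * speed, t)        # the line integral of |grad f| with respect to arc length
lhs = abs((0 + 2 * 1) - (1 + 2 * 0))        # |f(r(pi/2)) - f(r(0))| = |2 - 1| = 1
print(lhs, rhs)                             # 1 and about 3.512, so the bound holds
```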
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312938451766968, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/differential-equations?page=5&sort=unanswered&pagesize=50
# Tagged Questions Questions on (ordinary) differential equations. For questions specifically concerning partial differential equations, use the (pde) tag. 0answers 87 views ### The ordinary differential equation $\frac{d^2y}{dx^2}-q(x)y = 0$ , $0≤x<∞$ , $y(0)=1$, $y'(0)=1$ multiple choice question Assuming $$\frac{d^2y}{dx^2}-q(x)y = 0,\;\; 0≤x<\infty ,\;\;y(0)=1,\;\;y'(0)=1$$ wherein $q(x)$ is a monotonically increasing continuous function, then which one of the following is true. (a) ... 0answers 75 views ### Solve equation by Fourier Series Given the equation $\Omega = a(x) \, + \langle \omega \mid \nabla_x \log \lambda(x) \rangle,$ where $x \in \mathbb{T}^n, \, a(x) > 0, \, \Omega > 0, \, \omega \in \mathbb{R}^n.$ I have to ... 0answers 49 views ### [High School/College DE] Synchronization of metronomes I'm not sure about the level of discipline, since I'm from a country that does education differently than the US. Basically I'm working on an assignment that requires me to learn something that isn't in our ... 0answers 84 views ### Lipschitz Questions I want to ask one general question, and after that I would like to know if my method is correct (for determining whether a function is Lipschitz with respect to y) Is the following statement true? ... 0answers 55 views ### Behaviour of solutions of a differential system at bifurcation values What are the local and global behavior of solutions of $$\left\{\begin{array}{ll}r'&=r−r^2\\ \theta'&=(\sin\theta /2)^2+a\end{array}\right.$$ at all bifurcation values? 0answers 58 views ### Find the limit of the following integral I need help finding $$\lim_{t\to\infty}\int_0^t \exp((t-s)A)g(s)\,\mathrm{d}s$$ when $$\lim_{t\to\infty} |g(t)|=g_0$$ Here A is an n×n matrix, whose eigenvalues satisfy $$\Re(\alpha_j)<0$$ and ... 0answers 75 views ### Ordinary Differential Equation in three variables I have the following ODE: $\frac{dy}{dz} + \frac{y(x+y)}{(y-x)(2x+2y+z)} = 0$ where z is a function of x and y, i.e. $z(x,y)$. For an example in two variables, I used an integrating factor but I ... 0answers 87 views ### Laplace's transform Let $$x:[0,\infty) \to \mathbb{C}^n$$ such that $$|x(t)|\leq Me^{\alpha t}$$ for some constants $M \geq 0$ and $a\in \mathbb{R}$. Then the Laplace transform \mathbb{L(x)(s)}=\int_0^\infty ... 0answers 111 views ### How simple is it to solve this Differential Equation How to solve this Differential Equation? How simple is it to solve this Differential Equation? Any guidelines? Any hint? How to approach the solution? Has anybody seen things like it before? ... ... 0answers 23 views ### What's the physical meaning of the boundary value problems at resonance? Many papers deal with boundary value problems at resonance. But how should one understand the problems at resonance? Why do they talk about these kinds of problems? What is the physical meaning? 0answers 71 views ### Showing a differential equation has a unique solution in $C[0, 1]$ Show that $$F(f)(t) = t^2 + \frac{t}{3}f(t) + \frac{1}{5}\int_0^t e^uf(u) du$$ is a contraction on $(C[0, 1), d_u)$. Deduce that the differential equation (15 − 5t)\frac{df}{dt} = (5 + 3e^{t})f + ... 0answers 133 views ### Unable to find Lipschitz constant for $y'=(t-1)\sin(y)$ Given the problem: $$y′ = (t − 1)\sin(y),\;\;\;y(1) = 1$$ find an approximation for $y(2)$. Give an upper bound for the global error taking $n = 4$ (i.e., $h = \frac{1}{4}$) The goal is to find an ... 
0answers 68 views ### Partial Differential Equation Eigenvalue of zero question In the event that I'm solving a partial differential equation through separation of variables, if I end up with an eigenvalue of zero, what do I do with the corresponding eigenfunction? That is to ... 0answers 87 views ### Problem on Hill's Equation Show that the equation $$\frac{d^2\space y} { d\space x^2}+ y\sin^2 (100t)=0$$ has only bounded solutions. I was trying to prove $|y(1)(p) + y(2)(p)|< 2$ where $y(1)$ and $y(2)$ are $2$ ... 0answers 49 views ### Applying the Lagrange Euler Formulation I was doing my tutorial on Lagrange-Euler formulation for robotic systems when i came across a slight problem. Referring to the picture in the link, I would like to know if my answer (equation 1) ... 0answers 42 views ### How can we apply differential equations to optics. Differential equations in itself is a very complex topic. I read this article on a website that we can apply differential equations to optics and something like brachistochrone problem. What is this ... 0answers 95 views ### A question about nonlinear ODE and chaos I'm just being curious, but is it true or false, that every 3 dimensional nonlinear ordinary differential equations, after rightful parameterizing, can become chaotic? If not, what kind of 3-D ODE can ... 0answers 30 views ### What is the definition of Cauchy function associated with the differential or difference equations? What is the definition of Cauchy function associated with the differential or difference equations? Where can I find the details? 0answers 68 views ### Poisson rate regression for grouped data: How to derive alpha and beta A study of patients’ survival was classified by sex (female or male) with follow-up of patients until the patient died or the study ended. We have the following information: $y_1$ - Number of deaths ... 0answers 28 views ### variation of a final state due to changes in period (where the period is a parameter) I have a simple ordinary differential equation $\frac{dx}{dt}=f(x,t,p,T)$ $x(0) = x_0$, $x(T) = x_T$ where $p$ and $T$ are constant parameters. How do I compute $\frac{dx_T}{dT}$ ? Thanks! NOTE: I ... 0answers 38 views ### Differential Inequality Conditions to Determine Exponential Growth/Decay I'm kind of new to differential equations and I was looking at differential inequalities. I was wondering if I had a second-order differential inequality of the form $f''(x) + af'(x) + b \leq 0$ where ... 0answers 107 views ### Equilibrium point of a certain ODE system Suppose given a system of ODEs $x' = sx^{r} - x\left(sx^{r}+ty^{r}\right) ;$ $y' = ty^{r} - y\left(sx^{r}+ty^{r}\right) ,$ such that $x+y\equiv 1$ and $s,t,r\in\mathbb{R}_{>0}$. The points ... 0answers 40 views ### Compute Rayleigh quotient for ODE I am trying to find Rayleigh quotient for this equation: $u''(r) + [\frac{1-4n^2}{4r^2} + \lambda - 2n\beta -\beta^2r^2]u(r) = 0$, where $0 \le r \le 1$. Is there any way to compute eigenvalue ... 0answers 41 views ### How to compute the values of this function ? ( Fabius function ) How to compute the values of this function ? ( Fabius function ) It is said not to be analytic but $C^\infty$ everywhere. But I do not even know how to compute its values. Im confused. Here is the ... 
0answers 90 views ### damped harmonic oscillator driven by a stochastic momentum (not force) Could you give references for solutions or solutions to the following problem: Given: damped harmonic oscillator driven by stochastic force of very short duration (= stochastic momentum). Find: ... 0answers 17 views ### Is it possible to further simplify the following equation? Is it possible to write the following equation in an even simpler form? (In other words does this have any specific implications on the form $\vec f(\vec x)$ can take?) {\partial f_j(\vec x)\over ... 0answers 30 views ### Monge-Ampere equation I'm considering the Monge-Ampere equation in $\mathbb{R}^n$: $\mathrm{det}(D_{ij}u)=f$. I know that its linearized coefficient matrix is $\mathrm{cof}(D_{ij}u)$, i.e. the co-factor matrix of the ... 0answers 44 views ### Stochastic differential equations and experimental data If we have a set of experimental data: $$X=\{x_1,x_2,\ldots,x_N\}$$ is it possible to write down an equation of the kind: $$dx(t)=b(x(t))\,dt+\sigma(x(t))\,dB(t)$$ describing the process from which ... 0answers 57 views ### Differential equations with different constants for different sub-domains I remember that when I was studying differential equations, there was an example with solutions of the form $f(x) + C_1$ for $x>0$ and $f(x)+C_2$ for $x<0$ where $C_1$ and $C_2$ may be different ... 0answers 51 views ### Van der Pol method in a quasilinear equation with multiple fixed points within a cycle. My question is about details of application of the van der Pol - Andronov method to analysis of quasilinear ordinary differential equations. Before formulating the question, let me first give ... 0answers 48 views ### Can a rate be proportional to a shape? This question may be a little vague, but it has a point. I woke up this morning with an idea. Let's say I wanted to design a projectile that has a velocity proportional to its 'shape'. When the ... 0answers 55 views ### Functional equation for the given function For instance, there is functional equation for Lambert W function $z=W(z) e^{W(z)}$ And moreover, there is differential one: $z(1+W)\frac{dW}{dz}=W$. At the same time, there is no known functional ... 0answers 115 views ### maple code for exp-func. for solving PDE's & non-linear ODE's? How can I create the Maple code using exponential-function solving the equation below? $u_t = \gamma u_x+6u(u_x)^2+(3u^2-1)u_{xx}-u_{xxxx}$ $u_t =u_{xx}-u^3+u,$ \$\alpha u''(x) = \beta ... 0answers 175 views ### Solving the boundary value problem by means of Galerkin method I have a task which should be solved with Galerkin method: $$y''-0.5x^2y+2y=x^2 \\ y(1.6)+0.7y'(1.6)=2 (1)\\ y(1.9)=0.8 (2)$$ I already solved it with other methods so the correct answer I know, ... 0answers 51 views ### Question about differential equation notation I'm trying to read the paper "Particle flow for nonlinear filters with log-homotopy" by Daum and Huang. ( http://144.206.159.178/ft/CONF/16415230/16415269.pdf ) As ... 0answers 23 views ### EDP with complementary terms I am considering a problem of a two-dimensional ODE involving Karush, Kuhn and Tucker conditions on one of the unknowns. After a few algebraic manipulations, I end up having to solve the following ... 0answers 120 views ### how does solution of simultaneous DE of first order relate to total DE As far as I've understood $${dx \over P} = {dy \over Q} ={dz \over Q} \hspace{2 cm} (1)$$ Gives system of curves, $v= 0$ and $u=0$ be it's two solution. 
The solution of $(1)$ is the intersection of ... 0answers 130 views ### Phase Plane Analysis Classify the fixed point at the origin and sketch an accurate phase portrait for the following system: $$dx/dt = 36x-16y$$ $$dy/dx = -3x+28y$$ Am I correct in thinking that I need to write these two ... 0answers 114 views ### About the Legendre differential equation Consider the Legendre differential equation $$(1-x^2) y'' - 2xy' + n(n+1)y = 0$$ Then its solution is given by $$y = c_1 P_n (x) + \text{an infinite series}$$ In fact \$y = c_1 P_n (x) + c_2 Q_n ... 0answers 69 views ### Collision of eigenvalues of a linear ODE (Krein collisions) I am trying to understand the so called Krein collisions in Hamiltonian mechanics but I shall formulate the question in a rather general way. Suppose we have the following linear ODE: \$ \dot{v}= ... 0answers 117 views ### Intuition for PDE Change of Variables The algebraic manipulations for changing variables in PDE/ODE problems are often very simple once you know the transformation to use (at least at my level it's just applying the chain rule carefully). ... 0answers 118 views ### Sturm-Liouville Eigenvalue Question Consider the regular Sturm-Liouville Problem: $$-\frac{d}{dx} \Bigg( p(x)\frac{dv}{dx} \Bigg)=\lambda \rho (x)v$$ $$\alpha _1v(0)-\beta _1v'(0)=0$$ $$\alpha _2v(L)-\beta _2v'(L)=0$$ with ... 0answers 105 views ### How to derive to inverse z transform of $\sqrt{\frac{1-a^2}{1-\frac{a}{z}}}$ from Laguerre differential equation? How can I derive the inverse z-transform of: $$\sqrt{\frac{1-a^2}{1-\frac{a}{z}}}$$ If Maple is not the way, how to derive manually? With Maple code I encounter some problems ... 0answers 80 views ### What is the correct differential equation for the Laguerre function? I would like to derive the correct Laguerre function from the differential equation but the differential equations seems different from the original one. What is the correct differential equation and ... 0answers 44 views ### homoclinic solutions to an autonomous fourth-order ODE Let $b, l > 0$ and $\mu > 2$. Let $F \in C^2([0,\infty))$ and $f = F'$ with $f(q)/q \to 0$ as $q \to 0^+$, $f(q)/q$ increasing, and $qf(q) \geq \mu F(q) > 0$ for $q > 0$. Consider the ... 0answers 121 views ### solving two systems of equation implicitly I have been trying to solve the following two systems of equations simultanously and I'm very hesitant on how to go about it. Whether I need predictor-corrector methods, if I need to linearize the ... 0answers 77 views ### Literature on Riccati equations (algebraic and differential) Advise me please some book on algebraic and differential Riccati equations: I'm interested in such questions as theorems of existence, uniqueness and extendibility of solutions of differential ... 0answers 54 views ### $n$-th derivative of the prolate spheroidal function For a given real number $c>0$ define functions $\left(\psi_{k,c}(\cdot)\right)_{k\ge0}$, as an eigenfunctions of the Sturm-Liouville operators $L_c$ defined ... 0answers 172 views ### Semi implicit integration - stability issues I am trying to decide whether to use semi-implicit integration vs. explicit integration (particularly Position Verlet over Semi implicit Euler). Although the Verlet approach is widely used and is ... 0answers 46 views ### Implications of given solutions This has been solved! Thanks to everyone who read and thought about it Suppose lines of the form $(x_0,y)$ and $(x,y_0)$ for any given $x_0,y_0\in \mathbb R$ are solutions to the system of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 75, "mathjax_display_tex": 22, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9203046560287476, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/202918-isosceles-trapezoid-proof-print.html
# Isosceles Trapezoid Proof • September 4th 2012, 09:35 AM Senators88 Isosceles Trapezoid Proof An isosceles trapezoid has points A, B, C, and D where AD and BC are parallel. Prove that any isosceles trapezoid can be inscribed in a circle. Do this by finding a unique point O which is equidistant from points A, B, C, and D. Write a proof on how to construct this circle. Note: Sides AB, BC, and CD have length 1 and side AD has length square root of 3. • September 6th 2012, 08:45 AM kalyanram 1 Attachment(s) Re: Isosceles Trapezoid Proof Draw the perpendicular bisectors of the sides $AD$ and $DC$ and mark their point of intersection as $O$. Now $O$ is equidistant from $A, D$ and $C$. With $O$ as the center and radius $OA$ ($=OD=OC$) draw a circle; this circle passes through $A, D$ and $C$. It also passes through $B$: the perpendicular bisector of $AD$ is the axis of symmetry of the isosceles trapezoid, so the reflection in it fixes $O$ and swaps $B$ with $C$, giving $OB = OC$; since $OC = OD = OA$ already, all four distances are equal. Hence the circle passes through all four vertices and quadrilateral ABCD is cyclic. $Q.E.D.$ (Refer to the figure attached). Kalyan.
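To make the construction concrete for the stated side lengths, here is a small numerical sketch (the coordinates are my own choice, placing the axis of symmetry on the y-axis; it is a check, not part of the original thread):

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.sqrt(s3 / 2)                  # height chosen so that AB = BC = CD = 1
A, D = np.array([-s3 / 2, 0.0]), np.array([s3 / 2, 0.0])
B, C = np.array([-0.5, h]), np.array([0.5, h])

# O lies on the axis of symmetry (the y-axis); solving |OA| = |OB| gives its height k
k = (s3 - 1) / (4 * h)
O = np.array([0.0, k])
print([round(float(np.linalg.norm(P - O)), 6) for P in (A, B, C, D)])
# all four distances agree (about 0.888074), confirming O is the circumcenter
```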
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9155779480934143, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/206306/how-do-i-prove-that-a-graph-is-a-topological-manifold/206634
# How do I prove that a graph is a topological manifold I'm having problems with this: Prove that M is a topological manifold. $f:U \to \mathbb R^k$, U $\subset \mathbb R^{n}$ open, f continuous M = $\{\left(x,y\right)\in \mathbb R^{n+k} \mid x \in U, y=f(x) \}$ - What is "aberto"? – Paul Oct 3 '12 at 2:37 2 Portuguese for open. – KReiser Oct 3 '12 at 4:10 @Paul sorry, my fault – user42912 Oct 3 '12 at 14:13 ## 2 Answers Well, $\Psi(x) = (x,f(x))$ provides a patch from $U \subseteq \mathbb{R}^n$ into $\mathbb{R}^{n+k}$. However, given what you say thus far I think I can at most say $M$ is a topological manifold. We need further data about $f$ to say more. - it's true, I want to prove just that M is a topological manifold – user42912 Oct 3 '12 at 4:04 1 I can't understand why it's a topological manifold. For each point in M we have to find a neighborhood and a homeomorphism between this neighborhood and an open subset of $\mathbb R^{n+k}$. – user42912 Oct 3 '12 at 14:33 @user42912 is $\Psi$ continuous as constructed? Is it injective? – James S. Cook Oct 4 '12 at 6:23 yes it's continuous, because its component functions are continuous, and it's injective. Is it a homeomorphism? – user42912 Oct 16 '12 at 10:00 @user42912 precisely. Note the surjectivity is clear as the codomain $U \times f(U)$ is clearly attained. The injectivity of $\Phi$ is clear from the $x$ in $(x,f(x))$. And continuity can also be seen since $\Phi$ is the cartesian product of continuous maps. – James S. Cook Oct 18 '12 at 6:55 To show that $M$ is a topological $r$-manifold you would like to show that every point $m$ in $M$ is contained in an open set that is homeomorphic to an open subset of $\mathbb R^r$. Maybe we should first think about what the dimension $r$ is in this case. Points in $M$ are of the form $(x,f(x)) = (x_1, \dots, x_n, f(x))$. Since $f$ is determined by $x_1, \dots, x_n$ the dimension of $M$ is $n$. Now let $(x,f(x))$ be a point in $M$. We would like to find an open set containing $(x,f(x))$ and a homeomorphism from the set to an open subset of $\mathbb R^n$. The whole space $M = U \times f(U)$ is of course open and contains $(x,f(x))$. If we can find a homeomorphism from $U \times f(U)$ to an open subset of $\mathbb R^n$ then we're done. As pointed out by commenter in the comments, the map $h: U \to U \times f(U), x \mapsto (x,f(x))$ is continuous and bijective and its inverse $h^{-1}: U \times f(U) \to U, (x,f(x)) \mapsto x$, which is the projection, is also continuous hence $h$ is a homeomorphism between $M$ and $U \subset \mathbb R^n$. - 2 Why is $f^{-1}(O) \times O \subset M$? (It can't be true because $M$ is $n$-dimensional and $f^{-1}(O) \times O$ is $n+k$-dimensional). – commenter Oct 3 '12 at 15:40 @commenter Hah, true. Let me think about this and fix it, I think the idea I had is right. – Matt N. Oct 3 '12 at 16:12 2 Use what James said: if $f$ is continuous then $\Psi \colon x \mapsto (x,f(x))$ is a homeo of $U$ onto $M$ because $M \ni (x,y) \mapsto x \in U$ is a continuous inverse of $\Psi$. Thus, $\Psi$ maps open subsets of $U$ to open subsets of $M$. – commenter Oct 3 '12 at 19:48 2 The projection $\pi \colon \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^n$ is certainly continuous. The inverse of $\Psi$ is the restriction of $\pi$ to the graph of $\Psi$. – commenter Oct 10 '12 at 10:52 1 Looks good. I think you can leave this answer up (no need to delete it), it might be useful for those who didn't follow James's hint... – commenter Oct 12 '12 at 15:34
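Condensed into a single display (my summary of the argument the thread converges on, with $\pi$ denoting the coordinate projection):

$$\Psi : U \to M, \quad \Psi(x) = (x, f(x)), \qquad \Psi^{-1} = \pi\big|_{M}, \quad \pi(x,y) = x.$$

Both maps are continuous, so $\Psi$ is a homeomorphism; the single chart $(M, \Psi^{-1})$ identifies $M$ with the open set $U \subseteq \mathbb R^n$, and $M$ is a topological $n$-manifold.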
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.968126118183136, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/50316-sum-infinite-series.html
# Thread: 1. ## sum of infinite series this is for my stats class and I don't have a calc book with me... the sum of 1/(3^x) where x is from 1 to infinity thanks 2. $\frac{a}{1 - r} = \frac{\frac{1}{3}}{1 - \frac{1}{3}}$ 3. what do a and r represent, and why are they both 1/3 in this case? thanks 4. a is the first term of the series, and r is the common ratio (the ratio between consecutive terms). 5. This is a geometric series so you use the formula Plato gave to find the sum 6. ok what if the ratio is 5x/6, is there another formula I can use? this is for solving the sum of: x * (1/6) * (5/6)^(x-1) where x is 1 to infinity 7. $\sum\limits_{k = J}^\infty ax^k = \frac{ax^J}{1 - x},\,\,\left| x \right| < 1$ 8. Originally Posted by Plato $\sum\limits_{k = J}^\infty ax^k = \frac{ax^J}{1 - x},\,\,\left| x \right| < 1$ does a = x*(1/6) then? that won't work because then the answer still has an x in it. 9. Originally Posted by Dubulus ok what if the ratio is 5x/6, is there another formula I can use? this is for solving the sum of: x * (1/6) * (5/6)^(x-1) where x is 1 to infinity To be honest, I cannot say that I really follow what you are trying to do. Here is an example: $\sum\limits_{x = 1}^\infty \left( \frac{5}{6} \right)^x = \frac{\frac{5}{6}}{1 - \frac{5}{6}}$ 10. Originally Posted by Dubulus ok what if the ratio is 5x/6, is there another formula I can use? this is for solving the sum of: x * (1/6) * (5/6)^(x-1) where x is 1 to infinity The correct formula for this particular sum is $\frac{a}{(1-r)^2}$. 11. Originally Posted by icemanfan The correct formula for this particular sum is $\frac{a}{(1-r)^2}$. my problem is that there is an "x" in the sum. this to me means that each new factor in the series isn't changing by a constant ratio.
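For what it's worth, the extra factor of $x$ in the last sum makes it an arithmetico-geometric series rather than a plain geometric one, and icemanfan's formula handles exactly that case: with $a = 1/6$ and $r = 5/6$, $\frac{a}{(1-r)^2} = 6$ (this is the mean of a geometric distribution with success probability $1/6$, which fits the stats-class context). A quick numerical check (my own sketch, not from the thread):

```python
# partial sums of sum_{x>=1} x * (1/6) * (5/6)**(x - 1) versus a / (1 - r)**2
a, r = 1 / 6, 5 / 6
closed_form = a / (1 - r) ** 2                        # = 6 exactly
partial = sum(x * a * r ** (x - 1) for x in range(1, 2001))
print(partial, closed_form)                           # both print as 6.0 (up to rounding)
```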
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9650092124938965, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/97247-rectangular-hyperbola-print.html
# rectangular hyperbola • August 7th 2009, 05:13 AM s_ingram rectangular hyperbola Hi folks, I am struggling with the following problem: The tangent at the point P(ct, c/t) (t>0) on the hyperbola $xy = c^2$ meets the x-axis at A and the y-axis at B. The normal at P to the rectangular hyperbola meets the line y = x at C and the line y = -x at D. The normal at P meets the hyperbola again at Q and the mid point of PQ is M. Prove that as t varies, the point M lies on the curve $c^2(x^2 - y^2)^2 + 4x^3y^3 = 0$ I have calculated the points as follows: $A(2ct,0), B(0, 2c/t), C(\frac{c(t^2 + 1)}{t}, \frac{c(t^2 + 1)}{t}), D(\frac{c(t^2 - 1)}{t}, \frac{-c(t^2 - 1)}{t})$ I have the point Q at $(\frac {-c}{t^3}, -ct^3)$ and the mid point of PQ, point M at $(\frac{c(t^4 - 1)}{2t^3}, \frac{-c(t^4 - 1)}{2t})$ The locus of M is usually found by eliminating t from the coordinates of the point, but I can't seem to get anywhere. I end up with $t^2 = \frac{-y}{x}$ and we have $xy = c^2$ but I can't eliminate t completely. Should I try working back from the given result? I feel there has to be a better way! • August 7th 2009, 05:34 AM malaygoel you have $x=\frac{c(t^4 - 1)}{2t^3}$ $y=\frac{-c(t^4 - 1)}{2t}$ hence, $\frac{y}{x}=-t^2$....(1) $xy=-\frac{c^2(t^4-1)^2}{4t^4}$....(2) (note the minus sign: the product of the two coordinates of M is negative for $t>0$) Substituting the value of $t^2$ from eq(1) into eq(2), $xy=-\frac{c^2\{(\frac{-y}{x})^2-1\}^2}{4(\frac{-y}{x})^2}$
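Carrying the substitution through (my completion of the last step above, using the corrected sign):

$$xy = -\frac{c^2\left(\frac{y^2}{x^2} - 1\right)^2}{4\,\frac{y^2}{x^2}} = -\frac{c^2 (y^2 - x^2)^2}{4 x^2 y^2} \;\Longrightarrow\; 4x^3y^3 = -c^2(x^2 - y^2)^2 \;\Longrightarrow\; c^2(x^2 - y^2)^2 + 4x^3y^3 = 0,$$

which is exactly the required locus.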
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416695237159729, "perplexity_flag": "middle"}
http://www.ams.org/bookstore-getitem/item=MEMO-165-785
The Connective K-Theory of Finite Groups R. R. Bruner, Wayne State University, Detroit, MI, and J. P. C. Greenlees, University of Sheffield, UK Memoirs of the American Mathematical Society 2003; 127 pp; softcover Volume: 165 ISBN-10: 0-8218-3366-9 ISBN-13: 978-0-8218-3366-7 List Price: US$59 Individual Members: US$35.40 Institutional Members: US$47.20 Order Code: MEMO/165/785 See also: Connective Real $$K$$-Theory of Finite Groups - Robert R Bruner and J P C Greenlees This paper is devoted to the connective K homology and cohomology of finite groups $$G$$. We attempt to give a systematic account from several points of view. In Chapter 1, following Quillen [50, 51], we use the methods of algebraic geometry to study the ring $$ku^*(BG)$$ where $$ku$$ denotes connective complex K-theory. We describe the variety in terms of the category of abelian $$p$$-subgroups of $$G$$ for primes $$p$$ dividing the group order. As may be expected, the variety is obtained by splicing that of periodic complex K-theory and that of integral ordinary homology; however, the way these parts fit together is of interest in itself. The main technical obstacle is that the Künneth spectral sequence does not collapse, so we have to show that it collapses up to isomorphism of varieties. In Chapter 2 we give several families of new complete and explicit calculations of the ring $$ku^*(BG)$$. This illustrates the general results of Chapter 1 and their limitations. In Chapter 3 we consider the associated homology $$ku_*(BG)$$. We identify this as a module over $$ku^*(BG)$$ by using the local cohomology spectral sequence. This gives new specific calculations, but also illuminating structural information, including remarkable duality properties. Finally, in Chapter 4 we make a particular study of elementary abelian groups $$V$$. Despite the group-theoretic simplicity of $$V$$, the detailed calculation of $$ku^*(BV)$$ and $$ku_*(BV)$$ exposes a very intricate structure, and gives a striking illustration of our methods. Unlike earlier work, our description is natural for the action of $$GL(V)$$. Readership Graduate students and research mathematicians interested in algebra, algebraic geometry, geometry, and topology. • General properties of the $$ku$$-cohomology of finite groups • Examples of $$ku$$-cohomology of finite groups • The $$ku$$-homology of finite groups • The $$ku$$-homology and $$ku$$-cohomology of elementary abelian groups
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 20, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8515483140945435, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/165405-give-equation-plane-parallel-plane.html
# Thread: 1. ## give an equation for the plane that is parallel to the plane.... give an equation for the plane that is parallel to the plane 5x-4y+z=1 and that passes through the point (2,-1,-2). i used the formula f(a,b) + fx(a,b)(x-a) + fy(a,b)(y-b) + fz(a,b)(z-c) and get (-5x+4y+1) + (-5)(x-2) + (4)(y+1) + (1)(z+2) then plug in the points for the rest of the variables -13 + (-5x+10) + (4y+4) + (z+2) -5x+4y+z+3 i'm pretty sure this is wrong and i'm not sure if i used the right formula for this. can anyone explain to me how to do this? is there a simpler way? thanks! 2. You are making it far too hard. $5x-4y+z=5(2)-4(-1)+1(-2)$. DONE! 3. ## give an equation for the plane that is parallel to the plane.... Originally Posted by break Give an equation for the plane that is parallel to the plane 5x-4y+z=1 and that passes through the point (2,-1,-2). i used the formula f(a,b) + fx(a,b)(x-a) + fy(a,b)(y-b) + fz(a,b)(z-c) and get (-5x+4y+1) + (-5)(x-2) + (4)(y+1) + (1)(z+2) then plug in the points for the rest of the variables -13 + (-5x+10) + (4y+4) + (z+2) -5x+4y+z+3 I'm pretty sure this is wrong and i'm not sure if i used the right formula for this. Can anyone explain to me how to do this? is there a simpler way? thanks! Any plane parallel to the plane, $5x-4y+z=1$ will be of the form: $5x-4y+z=D$, where $D$ is a constant. To find what that constant is, plug in the coordinates of any point in the plane. That's basically what Plato did in one step. 4. wow... thanks for clearing this up!!
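Spelling out Plato's one-liner, the constant on the right-hand side evaluates to

$$5(2) - 4(-1) + 1(-2) = 10 + 4 - 2 = 12,$$

so the required plane is $5x - 4y + z = 12$.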
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8844581246376038, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/13843/list
## Return to Question 2 corrected English in title 1 # How quickly can one determine whether some natural number is a power of another natural number? We have a natural number $n>1$. We want to determine whether there exist natural numbers $a, k>1$ such that $n = a^k$.
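A standard way to answer this in roughly $O(\log n)$ root extractions is to test every candidate exponent $k \le \log_2 n$. A minimal sketch (the function name and the float-rounding guard are my own; for very large $n$ one would replace the floating-point root with an exact integer $k$-th root, e.g. by binary search):

```python
def perfect_power(n):
    """Return (a, k) with a, k > 1 and a**k == n, or None if no such pair exists."""
    # any n = a**k with a >= 2 forces k <= log2(n), and log2(n) <= n.bit_length()
    for k in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / k))
        for cand in (a - 1, a, a + 1):  # guard against floating-point rounding
            if cand > 1 and cand ** k == n:
                return cand, k
    return None

print(perfect_power(729), perfect_power(97))  # (27, 2) None
```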
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8409973382949829, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/47535/f-test-f-4-815-approx-f-4-605-81-60f-4-605-f-4
# F Test: $F_{4, 81}(5\%) \approx F_{4,60}(5\%) - (81 - 60)(F_{4, 60}(5\%) - F_{4,120}(5\%))/60$. Where does this come from? I had to compute the F-test in my ANOVA question and use interpolation if necessary. The first time, I had to work out the values of $F_{4,90}(5\%)$, which I said was approximately $F_{4, 60}(5 \%)$, and then did my test and got the right answer. In the second bit, in my 3-way ANOVA table, when I'm checking to see if two of the factors interact, I need to get the values for $F_{4, 81}(5 \%)$ and so I thought I'd do the same thing, so it's roughly $F_{4, 60}(5 \%)$, and then do it like this. However it turns out you get something like: $$F_{4, 81}(5\%) \approx F_{4,60}(5\%) - (81 - 60)(F_{4, 60}(5\%) - F_{4,120}(5\%))/60$$ Where does this come from and why is it like this? - 3 – whuber♦ Jan 11 at 21:05 @whuber Why is it that I don't do this for $F_{4,80}$ then? – Kaish Jan 11 at 22:35 1 For higher accuracy, you should have. I suspect that the size of the error made (from not interpolating) was small enough that it did not matter. Jaime's answer explains this nicely. – whuber♦ Jan 12 at 17:32 ## 1 Answer This graph shows $F_{4,n}(5\%)$ for $n$ in the range $60$ to $120$, plus the two approximations you have used with your ANOVAs. The maximum error you are going to make by using $F_{4,60}$ for every value in the range is below $3\%$, but if you go for linear interpolation, you can bring that down to $0.6\%$. Why it would be necessary for one part, but not the other, I don't have a clue. -
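To see how good the interpolation is, one can compare it against exact quantiles; the formula is just linear interpolation in the denominator degrees of freedom between the tabulated rows $n_2 = 60$ and $n_2 = 120$, which is also why the part-one shortcut of grabbing the $n_2 = 60$ row unchanged already lands close. A minimal sketch using SciPy (the comparison itself is my own illustration):

```python
from scipy.stats import f

exact = f.ppf(0.95, 4, 81)                      # upper 5% critical value of F(4, 81)
f60, f120 = f.ppf(0.95, 4, 60), f.ppf(0.95, 4, 120)
interp = f60 - (81 - 60) * (f60 - f120) / 60    # the interpolation formula from the question
print(round(exact, 4), round(f60, 4), round(interp, 4))
```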
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9690380096435547, "perplexity_flag": "head"}
http://mathoverflow.net/questions/41033/realizing-higher-level-fock-spaces
## Realizing higher level Fock spaces Let $\mathfrak{g}$ = $\mathfrak{gl}_{\infty}$. To each positive integer $k$ one can associate the level $k$ Fock space $\mathcal{F}_{k}$. For a dominant weight $\lambda$ of level $k$, one can define an action of $\mathfrak{g}$ on $\mathcal{F}_{k}$ so that it has a highest weight vector of weight $\lambda$ which generates the corresponding irreducible $\mathfrak{g}$-module. Let us denote $\mathcal{F}_{k}$ with this action as $\mathcal{F}_{k}(\lambda)$. My question is about realizing these spaces and actions. In the lectures of Kac and Raina "Highest-weight representations of infinite dimensional Lie algebras", there is a construction of $\mathcal{F}_{1}(\lambda)$ (here $\lambda$ is a fundamental weight) as a subspace of the semi-infinite wedge space. In this realization the action of $\mathfrak{g}$ is just the natural action on wedge products. Is there an analogous realization of $\mathcal{F}_{k}(\lambda)$ for higher $k$? What about for $\mathfrak{g}$ = $\hat{\mathfrak{sl}}_{p}$? - ## 3 Answers For $\mathfrak{gl}_{\infty}$ the answer is easy: you realize the Fock space as a direct limit of polynomial representations of finite $\mathfrak{gl}_n$ modules. You can read about the construction here. I worked on the $\hat{sl}_p$ case a few years ago, but got stuck. If you could do it, it would be very nice. - I guess I should say that it is straightforward to extend the $\mathfrak{gl}_\infty$-module I mentioned above to $a_\infty$ in a way analogous to the Kac-Raina level 1 extension. So, you get an action of $\hat{\mathfrak{sl}}_p$ on the Fock space. The problem is that it is not generally irreducible as a $\hat{\mathfrak{sl}}_p$-module. – David Hill Oct 4 2010 at 19:38 Thank you very much for this link. I think this is very close to what I was looking for, although I haven't had a chance to look at it closely yet. A question: from what I understand the level k Fock space has a basis indexed by k multi-partitions. Is this "visible" in your construction? – Oded Yacobi Oct 4 2010 at 20:37 I'll have to think about it more, but my first instinct is that the answer is no. The $k$-multipartition description comes from tensoring together $k$ level 1 representations, each parametrized by partitions. In my paper, I skipped the big space and defined the action in one shot. Then again, maybe the translation is not too bad? I'll think some more. – David Hill Oct 4 2010 at 21:14 Higher level Fock spaces have been studied in the context of the quantum affine algebra $U_q(\widehat{sl}_n)$. There is a "higher level Fock space" representation for this algebra whose underlying space looks like semi-infinite wedge space. I believe the original reference is Jimbo, Miwa, Misra and Okado "Combinatorics of representations of $U_q(\widehat{sl}_n)$ at $q=0$" http://www.ams.org/mathscinet/search/publdoc.html?arg3=&co4=AND&co5=AND&co6=AND&co7=AND&dr=all&pg4=AUCN&pg5=AUCN&pg6=AUCN&pg7=ALLF&pg8=ET&r=1&review_format=html&s4=Jimbo&s5=Miwa&s6=Misra&s7=&s8=All&vfpref=html&yearRangeFirst=&yearRangeSecond=&yrop=eq, although there the wedge space structure is not clear. 
That is explained in Uglov's paper "Canonical bases of higher level $q$-deformed Fock space and Kazhdan-Lusztig polynomials" http://arxiv.org/abs/math/9905196. Higher level Fock space is more complicated than the level 1 case. For instance, many different irreducible representations occur as direct summands of Fock space. In order to get a realization of a single irreducible highest weight representation, you need to pick off the irreducible subrepresentation generated by a certain overall highest weight vector. On the level of representations, this is difficult. However, in the "crystal limit" (i.e. at $q=0$), this can be done quite easily. The basis of the resulting representation is naturally indexed by $\ell$ tuples of partitions (where $\ell$ is the level), satisfying a couple of conditions. This fact has been useful in studying crystal bases of these higher level representations. - Thanks! Does the higher level Fock space carry a rep of $U_{q}(\hat{\mathfrak{sl}}_{n})$ only for generic $q$, or also for $q=1$ or $q$ a root of unity? – Oded Yacobi Oct 5 2010 at 0:07 The papers I mentioned deal with generic $q$. I think one should be able to make sense of the construction at $q=1$ though. This is just because the formulas for the actions of $E_i$ and $F_i$ on the standard basis of Fock space make sense at $q=1$. See Theorem 2.1 in Uglov's paper. These seem to make sense at other roots of unity as well...although certainly the structure of the representation would be much more complicated in those cases. – Peter Tingley Oct 5 2010 at 1:58 Representation theory of direct limit Lie algebras (like $\mathfrak{gl}_\infty$) has been studied extensively by Dimitrov, Penkov, and Styrkas. You can find their papers on the arXiv. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228718876838684, "perplexity_flag": "head"}
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevB.82.241306
Synopsis: Insulating behavior in topological insulators Large bulk resistivity and surface quantum oscillations in the topological insulator $\mathrm{Bi_2Te_2Se}$ Zhi Ren, A. A. Taskin, Satoshi Sasaki, Kouji Segawa, and Yoichi Ando Published December 9, 2010 3D topological insulators represent a unique quantum state of matter that is supposed to show insulating behavior in the bulk and spin-dependent metallic conduction on the surface. In practice, the best-known exemplars of materials that show a topologically protected metallic surface state, such as $\mathrm{Bi_2Se_3}$ and $\mathrm{Bi_2Te_3}$, are also conducting in the bulk due to the presence of vacancies. Significant efforts in trying to find a topological insulator that is truly insulating in the bulk have met with little success. Presenting their results as a Rapid Communication in Physical Review B, Zhi Ren and colleagues from Osaka University, Japan, have synthesized a new topological insulator, $\mathrm{Bi_2Te_2Se}$, that approaches insulating behavior in the bulk with a high resistivity. Ren et al. demonstrate variable-range hopping, the hallmark of an insulator, in high-quality single crystals of $\mathrm{Bi_2Te_2Se}$, as well as Shubnikov-de Haas oscillations coming from the 2D surface metallic state. The surface contribution to the total conductance of the crystal, at $6\%$, is the largest ever achieved in a topological insulator. From a detailed study of the Hall effect, the authors also determine the transport mechanism in the bulk, which reveals an impurity band in the band gap along with hopping conduction of localized electrons. These results pave the way for exploiting the unique surface conduction properties of topological insulators. – Sarma Kancharla
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9122599959373474, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/277045/easiest-way-to-determine-all-disconnected-sets-from-a-graph
# Easiest way to determine all disconnected sets from a graph? Suppose that I have an undirected graph of nodes and edges; I would like to know all sets of nodes that do not connect with any other nodes in the graph. Here is a concrete example to help you picture what I'm asking. In the following graph, all x nodes are connected to their adjacent (diagonal included) x nodes and the same goes for o nodes and b nodes.

x o o
b x o
b b x

I wrote an algorithm that does this by taking a node and using depth first search to find all nodes connected to it. Then I remove those nodes from the graph and repeat with a new node until there are no more nodes left in the graph. I'm starting to think that this isn't the most efficient method and that there has to be a way to do this using an adjacency matrix or something similar. If I were to translate the above graph into an adjacency matrix and name each node (1..9, left to right, top to bottom), it would look like this:

~~ 1 2 3 4 5 6 7 8 9
1 | 0 0 0 0 1 0 0 0 0
2 | 0 0 1 0 0 1 0 0 0
3 | 0 1 0 0 0 1 0 0 0
4 | 0 0 0 0 0 0 1 1 0
5 | 1 0 0 0 0 0 0 0 1
6 | 0 1 1 0 0 0 0 0 0
7 | 0 0 0 1 0 0 0 1 0
8 | 0 0 0 1 0 0 1 0 0
9 | 0 0 0 0 1 0 0 0 0

I put zeros down the diagonal, but I'm not sure if that's the right notation for an adjacency matrix. Also, since it's an undirected graph, I know that the matrix is symmetric about the diagonal. Beyond that, I'm stuck. I just have a feeling that something about this matrix will make it easier to identify the 3 distinct unconnected groups beyond what I've done already. Does anyone have an idea for an algorithm that will help me? Thanks in advance. - 1 Depth first search is $O(|E|)$. How much more efficient were you trying to get? – Michael Biro Jan 13 at 18:59 Very valid question. In my particular case, I'm writing a program, and my current algorithm needs to make a copy of the grid to do its current depth first search because it deletes the node from the grid when it runs. Furthermore, I intend to evaluate the distinct groups further, such as whether a group breaks up should a node be deleted. My thought was that if I already had an adjacency matrix and a quick way to evaluate a graph using it, then I could just persist the matrix rather than making copy after copy. – Kyle Jan 13 at 19:37 Well, you certainly shouldn't be doing that. I'll write out an answer. – Michael Biro Jan 13 at 20:22 – Rahul Narain Jan 13 at 20:56 ## 2 Answers Say you have an adjacency matrix like the one in your question. You can determine connected components by doing a breadth-first (or depth-first) search in the matrix without having to remake copies or delete vertices. You'll start each connected component search with the first vertex that you haven't placed in a component yet. The first one will be vertex $v_1$: Initialize the connected component $C_1 = \{v_1\}$ and then move across $v_1$'s row in the adjacency matrix. We see that $v_1$ is adjacent to $v_5$, so $v_5$ gets added to the component $C_1 = \{v_1,v_5\}$, and we move on to $v_5$'s row. $v_5$ is connected to $v_1$ (seen already) and $v_9$, so add $v_9$ to $C_1$, and move on to $v_9$, which is adjacent to $v_5$ (seen already). Since we've reached the end of this tree, we're done with this component and get $C_1 = \{v_1,v_5,v_9\}$. Now, take the next vertex that we haven't seen yet ($v_2$) and set $C_2 = \{v_2\}$. $v_2$ is adjacent to $v_3$ and $v_6$, so we get $C_2 = \{v_2,v_3,v_6\}$, and the next vertex to check is $v_3$, which is adjacent to $v_2$ and $v_6$, both seen. 
Then move to the next vertex, $v_6$, and note that it's adjacent to $v_2$ and $v_3$ (both seen), so we're done with this component too. On to $C_3$: the same procedure gets us $C_3 = \{v_4,v_7,v_8\}$. All vertices $v_1$ through $v_9$ have been seen at this point, so we're done, and the graph has $3$ components. -

- Thanks. The answer was looking at me in the face. I guess I just needed it spelled out for me. – Kyle Jan 13 at 22:27

[First, let me state that I do not know what algorithms people use to deal with this problem.] The typical adjacency matrix has 0's along the diagonal, representing that there is no self-loop. However, if you put 1's along the diagonal (i.e. add in self-loops for all vertices), then you will still have a real symmetric matrix that is diagonalizable. Recall that the entries of the matrix $A^n$ give you the number of paths of length exactly $n$ from vertex $v_i$ to vertex $v_j$. So, we can take the matrix $A$ and raise it to the power $|V|$, and the connected components of the graph will appear as blocks, while any pair of vertices that is not connected will have a 0. Note that adding the 1's is necessary, to extend any path to obtain a path of length exactly $|V|$. Not so sure: there could be variants around this, like calculating $(I-A)^{-1}$, which could be quicker, but not fail-proof. -
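For completeness, here is a runnable sketch of both answers (Python; the function and variable names are mine, not from the answers): a breadth-first component search that only reads the matrix and never copies or mutates it, followed by the matrix-power check with 1's added on the diagonal.

```python
from collections import deque
import numpy as np

def components(adj):
    """Connected components from an adjacency matrix via breadth-first search.

    The matrix is only read, never copied or modified; visited vertices are
    tracked in a separate list, which answers the original poster's concern."""
    n = len(adj)
    seen = [False] * n
    comps = []
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        comp, queue = [], deque([start])
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in range(n):
                if adj[v][w] and not seen[w]:
                    seen[w] = True
                    queue.append(w)
        comps.append(sorted(comp))
    return comps

# Adjacency matrix from the question (vertex k is row/column k-1 here).
adj = [[0,0,0,0,1,0,0,0,0],
       [0,0,1,0,0,1,0,0,0],
       [0,1,0,0,0,1,0,0,0],
       [0,0,0,0,0,0,1,1,0],
       [1,0,0,0,0,0,0,0,1],
       [0,1,1,0,0,0,0,0,0],
       [0,0,0,1,0,0,0,1,0],
       [0,0,0,1,0,0,1,0,0],
       [0,0,0,0,1,0,0,0,0]]

print(components(adj))  # [[0, 4, 8], [1, 2, 5], [3, 6, 7]] -> {1,5,9}, {2,3,6}, {4,7,8}

# Second answer's idea: with 1's on the diagonal, (A + I)^|V| has a positive
# (i, j) entry exactly when v_i and v_j lie in the same component.
M = np.linalg.matrix_power(np.array(adj) + np.eye(9, dtype=int), 9)
print((M > 0).astype(int))  # same block structure as the components above
```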
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9620102047920227, "perplexity_flag": "head"}
http://tallbloke.wordpress.com/2012/02/09/nikolov-zeller-reply-eschenbach/
# Tallbloke's Talkshop

Cutting edge science you can dice with

## Nikolov & Zeller: Reply to Eschenbach

Posted: February 9, 2012 by tallbloke in Astrophysics, atmosphere, climate, Energy, flames, Incompetence, solar system dynamics

Anthony Watts has kindly offered the Talkshop the exclusive on Ned Nikolov and Karl Zeller's reply to the article by Willis Eschenbach published at WUWT, which we accept, gladly.

## Reply to: 'The Mystery of Equation 8' by Willis Eschenbach

Ned Nikolov, Ph.D. and Karl Zeller, Ph.D.

February 07, 2012

In a recent article entitled 'The Mystery of Equation 8' published at WUWT on January 23, 2012, Mr. Willis Eschenbach claims to have uncovered serious mathematical and conceptual flaws in two principal equations in our paper 'Unified Theory of Climate'. In his 'analysis', Mr. Eschenbach makes several fundamental errors, the nature of which was so elementary that our initial reaction was not to respond. However, after 10 days of observing the online discussion, it became clear that a number of bloggers have fallen victim to the same confusion as Mr. Eschenbach. Hence, we decided to prepare this official reply in an effort to set the record straight. This will be the only time that we respond to such confused criticism, since we believe that the climate science community has much more serious issues to discuss.

### Demystifying the Mysteries of Equations 7 and 8

We begin with the most amusing claim by Mr. Eschenbach, which he calls 'the sting in the tale'. First, some background: in our original paper, we use 3 principal equations that form the backbone of our new 'Greenhouse' concept. For consistency, we use here the same formula numbering as adopted in the original paper. Equation (2) calculates the mean surface temperature (Tgb) of a standard Planetary Gray Body (PGB) with no atmosphere, i.e.

$$T_{gb} = \frac{2}{5}\left[\frac{(S_o + c_s)(1 - \alpha_{gb})}{\epsilon\sigma}\right]^{1/4} \qquad (2)$$

where So is the solar irradiance (W m-2), αgb = 0.12 is the PGB shortwave albedo, ϵ = 0.955 is the PGB's thermal emissivity, σ = 5.6704×10-8 W m-2 K-4 is the SB constant, and cs = 0.0001325 W m-2 is a small constant, the purpose of which is to ensure that Tgb = 2.725K when So = 0.0. The derivation and validation of this formula is discussed in more detail elsewhere.

We redefine the 'Greenhouse Effect' as a near-surface Atmospheric Thermal Enhancement (ATE) measured by the non-dimensional ratio (NTE) of a planet's actual mean near-surface temperature (Ts) to the temperature of an equivalent PGB at the same distance from the Sun, i.e. NTE = Ts / Tgb (where Tgb is computed by Eq. 2). We then use observed data on surface temperature and atmospheric pressure (Ps) for 8 celestial bodies to derive an empirical function relating NTE to Ps employing non-linear regression analysis. The result is our Eq. (7), which describes all planetary data points with a high degree of accuracy:

$$\mathrm{NTE}(P_s) = \frac{T_s}{T_{gb}} = \exp\!\left(t_1 P_s^{t_2} + t_3 P_s^{t_4}\right) \qquad (7)$$

(here t1 through t4 are the four fitted regression coefficients; their numerical values are given in the original paper).

The key conceptual implication of Eq. (7) is that, across a broad range of atmospheric planetary conditions, the ATE factor is completely explained by variations in mean surface pressure. In Section 3.3 of our original paper, we specifically point out that NTE has no meaningful relationship with other variables such as the total solar radiation absorbed by planets or the amount of greenhouse gases in their atmospheres. In other words, pressure is the only accurate predictor of NTE (i.e. ATE) we found. This fact appears to have completely escaped Mr. Eschenbach's attention.
From Eq. (7) we derive our Equation 8 (the subject of Eschenbach's analysis) in the following manner. First, we solve Eq. (7) for Ts, i.e.

$$T_s = T_{gb}\, \exp\!\left(t_1 P_s^{t_2} + t_3 P_s^{t_4}\right) \qquad (7a)$$

Secondly, we substitute Tgb for its actual expression from Eq. (2) to obtain:

$$T_s = \frac{2}{5}\left[\frac{(S_o + c_s)(1 - \alpha_{gb})}{\epsilon\sigma}\right]^{1/4} \exp\!\left(t_1 P_s^{t_2} + t_3 P_s^{t_4}\right) \qquad (7b)$$

Thirdly, we combine the fixed parameters 2/5, αgb, ϵ and σ in Eq. (7b) into a single constant, i.e.

$$\frac{2}{5}\left[\frac{1 - \alpha_{gb}}{\epsilon\sigma}\right]^{1/4} = 25.3966$$

Fourth, we use the newly computed constant along with the symbol NTE(Ps) representing the EXP term of Eq. (7b) to write our final Eq. (8):

$$T_s = 25.3966\,(S_o + c_s)^{1/4}\,\mathrm{NTE}(P_s) \qquad (8)$$

Basically, Eq. (8) is Eq. (7b) expressed in a simplified and succinct form, where NTE(Ps) literally means the ATE factor as a function of pressure!

Let's now look at how Mr. Eschenbach interprets Eq. (7) and its relationship to Eq. (8). He correctly identifies that Eq. (7) has 4 'tunable parameters' (the correct term is regression coefficients, but never mind this minor terminological inaccuracy for now). He then espouses:

"Amusingly, the result of equation (7) is then used in another fitted (tuned) equation, number (8)."

This is the first demonstration of misunderstanding in his analysis (with far-reaching consequences, as discussed below), where he fails to grasp that Eq. (8) follows simply and directly from Eq. (7) after a few straightforward algebraic rearrangements, and that it contains no additional tunable parameters! Instead, Mr. Eschenbach smugly informs our fellow bloggers that the constant 25.3966 is yet another tunable parameter, which he labels t5 (his Eq. 8sym)?! We point out that the fixed parameters used to produce this constant have been defined and set prior to carrying out the regression analysis that yielded Eq. (7). Indeed, it could not have been any other way, because these parameters are required to estimate the PGB temperatures (Tgb) used in the calculation of NTE values, which are subsequently regressed against observed pressure data. Thus, Eschenbach now leads the readers astray, telling them that we use 5 tunable parameters instead of 4. Fascinating!

Next, in a state of total confusion, he makes the following stunning proposition:

"We can also substitute equation (7) into equation (8) in a slightly different way, using the middle term in equation 7. This yields: Ts = t5 * Solar^0.25 * Ts / Tgb (eqn 10)"

What middle term? This twisted line of reasoning is astounding, because it reveals an utter misunderstanding of basic algebra compounded with an inability to follow content, thus leaving the reader literally speechless! This error leads Eschenbach to his central false claim that our Eq. (8) simply means Ts = Tgb * Ts / Tgb, and therefore reduces to Ts = Ts!? One can only stand in disbelief before such nonsense! This is what Mr. Eschenbach jubilantly calls 'the sting in the tale'. It is a big sting, all right, but in his tail, not ours! He proudly reiterates this 'finding' once again in the Conclusion section of his article, leaving no doubt in the reader's mind about his analytical 'skills'.

Blinded by a profound misunderstanding, Mr. Eschenbach pompously concludes in regard to the constant 25.3966 that what we have done is "estimate the Stefan-Boltzmann constant by a bizarre curve fitting method". He further states: "And they did a decent job of that. Actually, pretty impressive considering the number of steps and parameters involved". Wow! Hands down, such a conclusion could easily qualify for the Guinness Book of Records on Miscomprehension!

The rest of Eschenbach's 'revelations' in regard to our Equations (7) and (8) are less flamboyant but equally amusing. He argues that the small constant cs in Eq. (2) is pointless, while failing to understand the physical realism it brings to the new model (Eq. 8).
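Both fixed numbers in that derivation are easy to check. The following few lines (a sketch, not code from the paper; it simply plugs the constants quoted above into the reconstructed Eq. 2, with illustrative names) reproduce the 25.3966 constant and the 2.725 K deep-space limit discussed next:

```python
# Sketch: check the combined constant and the So = 0 limit of Eq. (2),
# using only the constants quoted in the text above.
ALBEDO_GB  = 0.12        # PGB shortwave albedo
EMISSIVITY = 0.955       # PGB thermal emissivity
SIGMA      = 5.6704e-8   # Stefan-Boltzmann constant, W m-2 K-4
CS         = 0.0001325   # small constant, W m-2

def t_gb(s0):
    """Gray-body mean surface temperature, Eq. (2) as reconstructed above."""
    return 0.4 * ((s0 + CS) * (1.0 - ALBEDO_GB) / (EMISSIVITY * SIGMA)) ** 0.25

print(0.4 * ((1.0 - ALBEDO_GB) / (EMISSIVITY * SIGMA)) ** 0.25)  # 25.3966...
print(t_gb(0.0))   # 2.725 K, the deep-space (cosmic background) limit
```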
Since the goal of our research was not just to derive a regression equation, but to develop a new physically viable model of the 'Greenhouse Effect', this constant is important in two ways: (a) it does not allow the PGB temperature to fall below 2.725K, the irreducible temperature of Deep Space, when So approaches zero; and (b) it enables Eq. (8) to predict increasing temperatures with rising pressure even in the absence of solar radiation. Indeed, if we set cs = 0.0, then Eq. (8) would always predict Ts = 0.0 when So = 0.0, regardless of pressure, which is physically unrealistic due to the presence of cosmic background radiation.

A major portion of Eschenbach's criticism focuses on the 'accusation' that all we had done is just 'curve fitting' devoid of any physical meaning. In an Update to his article, Eschenbach attempts to prove that he can do a better job in fitting a curve through our planetary NTE values using an equation with fewer free parameters. His simplified version of our Eq. (8) has 3 regression parameters (instead of 4).

Figure 1. Absolute errors of predicted planetary mean surface temperatures by Eschenbach's simplified equation and by N&Z's Equation (8). Errors are assessed against the observed mean surface temperatures listed in Table 1 of Nikolov & Zeller's original paper.

Note that his expression is in a sense more empirical than our Eq. (8), because the coefficient in front of So has been erroneously treated as a tunable (regression) parameter, hence distorting our PGB Eq. (2). Figure 1 compares the absolute deviations of predicted planetary surface temperatures from their true values (listed in Table 1 of our original paper) using Eschenbach's regression equation and our Eq. (8). It is obvious to the naked eye that Eschenbach's formula produces far less accurate results than our Eq. (8). This was also recently quantified statistically by Dan Hunt in an article published at the Tallbloke's Talkshop. For example, Eschenbach's equation predicts Earth's mean temperature to be 295.2K, which is 7.9K higher than observed. This is not a small error, because the last time our Planet was 7.9K warmer than present, some 40M years ago, the Earth's surface was ice-free and Antarctica was covered by subtropical vegetation! Of course, being a construction manager, Mr. Eschenbach likely has a limited understanding of Earth's climate history and what a 7.9K warmer surface actually means. However, the fact that he claims aloud a superior accuracy of his simplified equation over ours is puzzling, to say the least. His exact words were:

"Curiously, my simplified version actually has a slightly lower RMS error than the N&Z version, so I did indeed beat them at their own game. My equation is not only simpler, it is more accurate"

This statement blatantly contradicts the evidence. Mr. Eschenbach does not know that we had extensively experimented with exponential functions containing various numbers of free parameters many months before he became aware of our theory, and we have found that it takes a minimum of 4 parameters to accurately describe the highly non-linear relationship between NTE and surface pressure (Eq. 7). The basic implication of Eschenbach's analysis is that one could indeed use a 3-parameter exponential function to predict planetary temperatures from solar irradiance and surface pressure, but with far less accuracy. Truly enlightening! By the way, curve fitting is an integral part of the classic science method.
When dealing with an unknown process or phenomenon, taking measurements and using the data to fit curves is the only feasible approach to understanding and developing a theory about the phenomenon. This method was extensively used throughout the 18th and 19th centuries and a good part of the 20th century to extract the so-called first principles in physics we currently employ to describe the World. However, arguing about curve fitting really misses the main point of our study.

### Focusing on the Big Picture

What Mr. Eschenbach and a number of others have totally failed to grasp is the highly significant fact that the enhancement factor NTE (i.e. the Ts / Tgb ratio) is indeed closely related to pressure, and that no other variable can explain the interplanetary variation of NTE so completely. As Dr. Zeller pointed out in a recent blog post, given the simplicity of Eq. (8), it is a 'miracle' how accurately it predicts surface temperatures of planets spanning a vast range of environmental and atmospheric conditions throughout the solar system! This cannot be a coincidence! Rather, it suggests the presence of a real physical mechanism behind the regression Equation (7) related to the thermal enhancement effect of pressure. This effect is physically similar (although different in magnitude) to the relative adiabatic heating observed in the atmosphere and described by the well-known Poisson formula derived from the Gas Law (see discussion in Section 3.3 and Fig. 6 in our original paper).

Even the mistaken analysis of Mr. Eschenbach could not manage to negate the above truth. He vigorously criticized our Eq. (8) using all sorts of faulty technical arguments, only to arrive himself at a similar (albeit less accurate) equation that predicts planetary temperatures as a function of the same two variables – solar insolation and pressure! His argument that one could arbitrarily use air density instead of pressure is groundless, because pressure as a force is the primary independent variable in the isobaric thermodynamic process of planetary atmospheres. Ground pressure depends solely on the mass of the air column above a unit surface area and gravity, while air density is a function of temperature and pressure. In other words, density cannot exist without pressure. For a given pressure, the near-surface air density varies on a planetary scale in a fixed proportion with temperature, so that the product Density*Temperature = const. on average, i.e. higher temperature causes lower density while lower temperature brings about higher density, according to the Charles/Gay-Lussac Law for an isobaric process.
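Spelled out, that constancy claim is a one-line rearrangement of the gas law in the molar form that appears later in this thread as N&Z's Eq. (6) (here M is the mean molecular weight of the air and R the universal gas constant):

$$P_s = \rho\,\frac{R}{M}\,T_s \;\Longrightarrow\; \rho\,T_s = \frac{P_s M}{R} = \text{const. at fixed } P_s,$$

so for an isobaric process the product of density and temperature is pinned by pressure, molecular weight and the gas constant alone.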
We now draw attention to a key logical contradiction in Mr. Eschenbach's approach. In the main text of his article, he makes the central claim that our Eq. (8) represents mathematical nonsense, since according to his logic it reduces to Ts = Ts (the TA-DA! moment). Yet, in the Update section, he uses data from Table 1 in our original paper to derive a very similar equation, which he calls a 'simplified version' of Eq. (8). So, according to Mr. Eschenbach, our Eq. (8) is numerically meaningless, while his equation based on the same data is mathematically sound. This raises the question: how poor do one's reasoning skills have to be in order for one to contradict himself in such a ridiculous manner? We will let you be the judge …

### Conclusion

We have shown in this reply that all criticism of our Equations (7) and (8) by Mr. Eschenbach is without merit. We emphasized the need for better understanding of, and focusing on, the big picture that our theory conveys. We propose to shift the discussion from meaningless argumentation about the number of regression coefficients or the number of significant digits of the constants used, to how pressure as a force controls temperature and climate. In this regard, we would like to issue an appeal to all of you who are capable of carrying out an intelligent discussion at a decent academic level to stop engaging in pseudoscientific, beside-the-point, fruitless debates. We are here to discuss and offer a resolution to the current climate science debacle, and we welcome everyone who shares that goal. We are not here to promote or engage in endless circular talks, or to teach laymen 'skeptics' basic math and high-school level physics. Hence, we will no longer participate in dialogs of the kind that prompted this reply. We urge all sound-thinking readers to do the same. Thank you!

Comments

1. Stephen Wilde says: "The distribution of solar energy as it moves dynamically through the system is such that temperature is enhanced in the near surface atmosphere relative to higher altitudes. This makes the conduction of heat from ocean to air slower than it would otherwise be. So heat accumulates in the ocean until it is in thermal equilibrium with the air above it."

That is similar to the AGW contention that downwelling IR makes the ocean skin warmer and so heats up the oceans by making the flow of energy from ocean to air slower than it otherwise would be. We need to distinguish between those two propositions. I can accept that pressure could achieve that outcome, but not so-called downward IR, because downward IR would just cause more evaporation, whereas the energy flow through the system, and hence ocean temperature, would already have come into balance with the surface pressure, so that without a change in pressure there would be no change in ocean temperature. I don't believe there is any downward IR anyway. All that is recorded by the sensors is the warmer air near the surface and directly in front of the sensor. Any IR comes from that warm air near the surface and not from up in the sky. Furthermore, the evidence from Earth and Venus is that anything other than pressure is irrelevant.

2. tallbloke says: Phil: You raise two issues. On the first, Ned has been working with a NASA scientist on the Diviner data and the N&Z grey body calc is within 6K of the integrated empirical data. On the second, the grey body calc for Earth gives nearly the same result (to within a couple of Kelvin) because an airless Earth wouldn't have an ocean either. [co-mod: Phil, try these backlinks Ned 6th Jan and same date a search will find more. Try google (or bing or whatever) like this site:tallbloke.wordpress.com search_text --Tim]

3. BenAW says: tallbloke says: February 13, 2012 at 10:37 am "Ben AW: Thanks, we can start disagreeing agreeably again now. 'Earth is a "BB" that is already at ~275K without any radiation falling on it (oceans bulk temp).' How does it get from 2.75K to 275K? If you mean that's the temperature the bulk of the ocean stays at overnight after 3.5 billion years of being warmed by the Sun every day, we can move forward."

I assume the bulk of the oceans do NOT interact with either the atmosphere or the hot core. They just sit there being oceans.
(Forget about upwelling etc. for the moment; these are disruptions of the big picture.) The oceans' temperature came either from their creation, billions of years ago (remember the hot core?), or from a later warming event, like the breaking up of Pangaea, with some of Earth's internal heat escaping, or a major meteorite crash, or whatever. Point is, only the surface layer (a few hundred meters) is DIRECTLY influenced by solar and cools during the night, having enough heat capacity to have a relatively steady temp over a 24 hr period. I assume the depth of the thermocline will vary slightly over a day, and will certainly vary over the seasons.

4. tallbloke says: Ben AW: I think the oceans have a large degree of thermal inertia, which may enable an oscillation at the length of Milankovitch cycles. I don't accept your "they just got created that way" argument though. What about all the paleo evidence for thermohaline overturning, internal tides, meridional circulation, coriolis forces, and all the rest of their motions? If there was no overturning, the whole of the deep ocean would be anoxic, and no fish could live in the deep. No. The oceans have the temperature/depth profile they do, and are teeming with life, because they are dynamic, not because they are static and stagnant. Sorry, I don't buy it.

5. Phil. says: Tallbloke, the point I raised regarding heat capacity applies for a rocky planet; the existence of an ocean is not relevant. Ned's calculation does not agree with Diviner: the temperature on the dark side of the moon does not get close to 3K, Ned's model assumption. Ned's basic assumption is unrealistic, far more so than the conventional one of uniformity. Why doesn't Ned do the calculation using a realistic value for surface heat capacity? His enhancement parameter will be smaller and his compensation for the error by excessive pressure terms will be less.

6. Thefordprefect says: davidmhoffer says: February 13, 2012 at 11:40 am; "spitter cirquit". I assume you mean a voltage multiplier, as this takes AC and gives you higher voltage DC out. Remember that R and D in this sort of circuit need to be used sensibly, since there is no loss: what goes in is still in till it comes out! http://www.linear.com/designtools/software/

7. tallbloke says: Phil: The point is that they are not striving to make a perfect model of surface temperature, but to do a calculation using a relatively simple equation which gets the right answer for the average temperature of the grey body. Which it does, to within 6K. This is substantially better than the classic misapplication of S-B, which gets the wrong answer, ~100K out. On your second point, it might make a very small difference, but I doubt it would make Robert Brown happy. That issue will be dealt with in N&Z's 'Reply to comments on the UTC part 2', which will be published here at the Talkshop when they are good and ready.

8. Phil. says: [snip] Quote what Tim said and give the timestamp link to when and where he said it, and then try again. Thanks – TB.

9. Phil. says: Tallbloke, I'm not going to be able to do that from my phone, so you'll have to wait. Regarding your comparison of the conventional and Ned's models, it's inappropriate because they are doing different things. The conventional model addresses the influence of radiatively active gases in the atmosphere (the filtering effect of the atmosphere), whereas Ned is trying to model the effect of having an atmosphere. By neglecting surface heat capacity in his model he makes a significant error.
By Hölder's inequality it's not possible to get the right average temperature by using the correct integrated flux and the wrong temperature distribution.

1) I stepped through Tim's comments to this thread and he has said nothing about Hölder's Inequality, so whatever your reply to him is, put it on the thread where he said whatever it is you are replying to, not here.
2) You were discussing Diviner data, which measured the Moon's surface temperatures, not Earth's, so stop wriggling.
3) By neglecting heat capacity, Ned's calc comes out 6K low, whereas the classic misapplication of the S-B equation comes out 100K high. You should be able to work out which is better.
4) Whatever the 'conventional model' is trying to do, it is wrong, because its application of the S-B equation is demonstrably incorrect. As you said earlier, if a model fails at the first step, everything afterwards will be wrong too. If the 'conventional modelers' want to provide some other rationale for a ghg-free atmosphere leading to a 255K Earth surface, tell them to bring it on.

10. davidmhoffer says: thefordprefect; "spitter cirquit. I assume you mean voltage multiplier as this takes AC and gives you higher voltage DC out">>> Yes, same thing. thefordprefect; "Remember that using R and D in this sort of circuit need to be used sensibly since there is no loss what goes in is still in till it comes out!">>> See next comment. The spitter gave me the idea, but a much simpler circuit is a better description of the actual use case.

11. BenAW says: tallbloke says: February 13, 2012 at 1:11 pm "Ben AW: I think the oceans have a large degree of thermal inertia, which may enable an oscillation at the length of Milankovitch cycles. I don't accept your 'they just got created that way' argument though. What about all the paleo evidence for thermohaline overturning, internal tides, meridional circulation, coriolis forces, and all the rest of their motions? If there was no overturning, the whole of the deep ocean would be anoxic, and no fish could live in the deep. No. The oceans have the temperature/depth profile they do, and are teeming with life, because they are dynamic, not because they are static and stagnant. Sorry, I don't buy it."

Of course these things are real. Let's stick to the basics first, and worry about secondary effects later. If it makes you happy, fine: the oceans have accumulated their present profile over billions of years. See: http://er.jsc.nasa.gov/seh/Ocean_Planet/activities/ts2ssac4.pdf second page. Also: http://earthguide.ucsd.edu/earthguide/diagrams/woce/ Pacific ocean. Only the top layer of the oceans is DIRECTLY heated by solar, with a nice fall of the temp. towards the poles. Around the polar circles the cold deep ocean "surfaces". The "band" of warm water extending from equator to both poles buffers the incoming solar over the day. This is the basic picture. Continents, ocean currents, upwelling etc. just disrupt this basic picture.

12. davidmhoffer says: OK, here's the brief explanation before I run off to engage in income generation: Consider a capacitor in parallel with an AC voltage. Assume an AC voltage with a peak of 200 volts and an "effective" voltage of 120 volts. (Engineering class was too long ago and there's not enough time to figure out the right numbers at the moment, so all you EEs out there just live with it; I'm illustrating a concept, not a perfectly valid physical model.) "Average" voltage across the capacitor is zero. Slap a diode in series with the capacitor.
The voltage across the capacitor will build to peak voltage, which is 200 volts. Now make it real world and put a resistor in parallel with the diode. If the value of the resistor is infinity, the voltage reaches a limit of 200 volts. If we adjust the resistor value downward, the voltage that the capacitor reaches goes downward as well. The warmists would have us believe that the maximum voltage the capacitor can reach is 120 volts. In their model, there is another resistor in the circuit, in series with the diode. This "charge" resistance exactly equals the resistance of the "discharge" resistor. If that were the case, then the voltage across the capacitor would in fact reach exactly 120 volts as a maximum. But as soon as those two resistances change such that the "charge" resistance is lower than the "discharge" resistance, the voltage across the capacitor will increase above 120. The higher the resistance of the discharge resistor, the closer to the peak voltage of 200 volts the capacitor will get. SW goes into the system with nearly no resistance at all. LW comes out of the system fighting high resistance every step of the way. Heat capacity is equivalent to capacitance. Insolation is just like an AC voltage, except that it is a half wave with a flat line between half waves. This is what BenAW was alluding to. This is why surface temperature can get above 255K with nary a GHG in sight. Observational evidence to support this is in Figure 5 of Doug Proctor's article, and in his detailed explanations.

13. davidmhoffer says: I'm an idiot. The resistor in my comment above goes in parallel with the DIODE, not the cap. [Fixed... Maybe ]

14. tallbloke says: Ben AW: I think the reason I'm fighting you is because although your model might be sufficient for your understanding of the overall 'big picture' energy balance, it conflicts with my understanding of the solar cycle in relation to ENSO, and other oceanic oscillations. So maybe we can compromise. If you can agree with me that the dynamic aspects of the oceans are vital to our understanding of shorter term climatic variation and not mere 'secondary effects', I'll agree we can put them aside for the sake of the elegant simplicity of your particular Gedanken experiment. Agreed?

15. Phil. says: Close, David, but the diode should be on the input side so that charging is only half the time; the average voltage will not be zero. There will be a charging half-cycle and a discharging half-cycle. N&Z's model has a zero capacity.

16. Robert Brown says: "I was looking forward to having breakfast with Robert Brown yesterday. He did not show up and he has ignored my recent emails. Now that I have had time to read the exchange between him and N&K, my guess is that he is ticked off at me but does not want to hurt my feelings. Don't worry Robert, I still hold you in the highest regard; thank you again for all the help you gave me during my 12 years in the Duke University physics department."

Dearest Camel, I didn't reply to your emails because I could not go on Saturday, and you phrased your invitation in such a way that I didn't feel a need to decline, only accept. I am not in any way ticked off at you, only insanely busy. I'm teaching a double load of recitation sections this spring — six of them — and have a lot of family stuff on my plate as well.
Perhaps another time I will be less busy, although (did I mention that I'm CTO of a small startup as well, and it may be getting ready to take off and consume the little sleep that I get now) I can't see any time soon — maybe lunch on a Monday or Wednesday.

I would have enjoyed meeting Dr. Scafetta, although it is certainly true that I have somewhat similar reservations about his work as I have about Nikolov and Zeller's. However, I can think of a number of ways for various coincidences between planetary periods and local climate fluctuations to occur, both as "coincidences" — similar periods but no causal relationship — and as highly indirect causal influences. One of the first things I read about in my initial foray into the climate was the apparent coincidence of solar variation and climate variation, which led through a discussion of Maunder minima, Gleissberg cycles, the Sun's erratic orbit around the center of mass of the solar system, and much more. Even as I'm still critical of its "numerological" character — Scafetta has IIRC compared his work to early heuristics concerning the tides, but the comparison is not apropos in an era when we know physics well enough to do far, far better — at least in the case of his work I can observe the coincidence and imagine at least one or two plausible explanatory causal chains, chains that I think it was his responsibility to investigate and quantify before publishing.

"I guess it is not a stretch of the imagination that perhaps Willis and Anthony Watts may also imbibe. (and Robert Brown provably so, although surprisingly, Tim Folkerts and some 'other suspects' seem to be absent)"

I'm curious, just what is it that I "imbibe" other than the beer I spent all night last night making (it's the only time I can run my personal brewing operation, see "overcommitted and too damn busy" above :-)? If this is yet another form of dim ad hominem to avoid having to make a substantive comment on Nikolov and Zeller's absurd equation 7, why not have a beer instead and save the blogosphere from yet another zero-information-content remark.

"I cannot help but see an interesting connection / analogy between Scafetta's research and ours (N&Z). Both studies focus on new and unknown mechanisms, and use correlations and empirical relationships to quantify them. Scafetta's work is already published in the peer-reviewed literature, and no one has objected that he did not explain the highly significant correlations he found via 'first principles'. The lack of physical 'first principles' is very typical when studying new phenomena, and is part of the standard scientific inquiry. I'm mentioning this in regard to Dr. Brown's criticism of our work, where he argues against the physical significance of our Eq. 7 simply because he could not explain values of the regression coefficients through known laboratory-derived physical constants.
Such a critique is unwarranted for the above reasons …"

Actually, I'm certain that a lot of people have objected — myself among them — but as noted just above, because the Sun is itself a complex system with chaotic internal dynamics — instantly visible in a plot of the solar cycle over the Holocene, for example, as captured in radiometric proxies — and because Jupiter and Saturn are both powerful drivers of the Sun's erratic internal orbit around the center of mass of the solar system, an orbit which doubtless drives internal resonances that can be lagged by as much as 100,000 years, one can at least imagine a causal process with an effect on the Earth's climate, and with resonance phenomena the physical forces involved need not be very large if they have a very long time to operate. With that said, I think that pointing out the numerical coincidence without analyzing the causality associated with it does little to advance our knowledge, especially when there are decadal oscillations already known to have an effect on our climate that have similar periods. That is, there are confounding explanations that cannot readily be separated from the data. I'd be happy to work through the usual "correlation is not causality" argument with you, and why this really does matter in general epistemology, lest we use statistics to prove that smoking causes pregnancy (example from one of my favorite statistics books) or other nonsense. This kind of "numerology" is rife in the medical profession, where it is used to "prove" that high voltage power lines, or cell phones, or failing to eat your oatmeal, all cause cancer. It isn't terribly good science there (as it is an open invitation to cherrypick and engage in confirmation bias and other forms of Feynman's "Cargo Cult Science") and it only ends up being decent science if it is rapidly followed by a quantitative and consistent causal analysis that includes a full-disclosure discussion of the possibly confounding causes and where the observation fails. I'm awaiting this in the case of Scafetta's paper, but I'm not holding my breath.

However, the difference between Scafetta's reported coincidence and your Equation 7 is that there is no possible way that your Equation 7 can have the slightest bit of physical meaning. You have obscured this — quite possibly from yourselves — by writing it in a way that hides its internal dimensional scaling and the scale pressures involved — but I've helped you (or rather forced you) to confront it. [MEGA SNIP]

1) Hi Robert. I won't tolerate accusations of dishonesty here, since Ned Nikolov stated earlier that he and Karl Zeller will address your points in their upcoming 'Reply to comments on the UTC part 2'. The policy here is that everything from the offending remark onwards goes in the bit bucket. Think of it as aversion therapy. As a special favour, since it was such a humongous snip, I saved it in a text file. Let me know if you want it emailing. I might put our new menu system to the test with a new sub-page for 'Rants'.

2) N&Z know what your objections are, so you don't need to club them over the head with them in 14-foot-long comments. I won't allow you to dominate with such lengthy bombastic diatribe either, so cool it if you want to get your (hopefully shorter) comments posted here in future. Thanks – TB.

PS. Why not simply wait until they publish 'Reply to comments on the UTC part 2' and then see what pertains? Just a thought.
17. tallbloke says: Phil: I thought capacitors did the charging and diodes blocked two-way traffic? Anyway, you still don't understand N&Z's 'model'. They calculate the grey body temperature, and the actual temperature as derived from the surface pressure and the TOA insolation. The ATE factor is then the ratio of these two numbers. Theirs is the more realistic model, because without an atmosphere and its consequent pressure, there is no ocean to spread heat around. Now, the conventional modelers say that Earth with no ghg's would be 255K at the surface. This is arguable, though from N&Z's point of view it's a fruitless argument anyway. This is because according to their theory, the surface temperature is a result of atmospheric mass and TOA insolation, and albedo is a result of temperature and pressure.

18. davidmhoffer says: Phil, Yeah, yeah, I'm trying to get to work and the last time I drew a circuit was a different millennium. I sent Rog a drawing, such as it is. There are two resistors: one in series with the diode that limits the charge rate, and one in parallel with the whole thing that limits the discharge rate. If the charge and discharge rates are equal, the maximum voltage across the cap is the effective voltage, which is 120. That's the model that the radiative xfer models use, and that is why they get 255K as a max. If the charge resistance is zero, and the discharge resistance is infinity, the cap will charge to 200V. That's unrealistic of course. But given a "low" charge resistance and a "high" discharge resistance, the voltage that the capacitor reaches is somewhere above 120 but below 200. Or, in climate terms, where resistance to incoming SW is very, very low, but resistance to outgoing LW is very high: the temperature that results is above the effective BB of 255K, but below the peak of whatever 1000 W/m2 comes out to. If someone wants to suggest 288K as a good approximation, I'd be willing to accept that.

19. Phil. says: David, the voltage is the analog of the energy flux; I guess the level of charge in the capacitor is the analog of T? To be accurate, the discharge rate would need to be proportional to charge^4. I don't know if this simple a circuit can be a true analog, maybe with op-amps?

20. davidmhoffer says: Phil. says: February 13, 2012 at 3:51 pm "David, the voltage is the analog of the energy flux, I guess the level of charge in the capacitor is the analog of T? To be accurate the discharge rate would need to be proportional to charge^4. I don't know if this simple a circuit can be a true analog, maybe with op-amps">>>

If Rog or Tim can post the drawing I sent them, it will be a lot more clear. Actually, the drawing I sent them is missing a diode on the discharge side; I'll fix it when I have a moment. But essentially:
SW = charge voltage. Resistance to SW is LOW.
LW = discharge voltage. Resistance to LW is HIGH.
Capacitance = heat capacity.
Voltage across capacitor = Temperature.
No need for op amps. In an AC circuit, V(effective) = RMS = Root Mean Square. In an insolation circuit, P(effective) = 4th-root Mean 4th-power. The two are 100% analogous; just one uses the square root and the other the 4th root. Other than that, the equations for both are identical and so are the concepts. Take a look at the top graphic in Figure 5 of Doug Proctor's paper. What that is showing you is that R to SW is LOW and R to LW is high. If I configured this circuit with resistances, voltages and capacitances of the right order of magnitude, I'd get a voltage curve exactly like that one.
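To put numbers on the analogy in comments 12 and 18–20, here is a minimal time-stepping sketch (my own construction; the component values are illustrative, not taken from davidmhoffer's drawing): a half-wave-rectified sine charges a capacitor through a small resistance while the capacitor leaks through a much larger one.

```python
import math

# Minimal sketch of the diode/capacitor analogy (illustrative values only:
# 200 V peak sine, so RMS ~ 141 V; small charge resistance, large leak).
V_PEAK, FREQ = 200.0, 50.0           # volts, hertz
R_IN, R_OUT, CAP = 1.0, 100.0, 0.05  # ohms, ohms, farads
DT = 1e-4                            # integration time step, seconds

v = 0.0                              # capacitor voltage ("temperature")
for step in range(int(30.0 / DT)):   # 30 s: several leak time constants
    vs = V_PEAK * math.sin(2.0 * math.pi * FREQ * step * DT)
    if vs > v:                       # diode conducts only when the source is higher
        v += (vs - v) / (R_IN * CAP) * DT  # fast charge: "SW in, low resistance"
    v -= v / (R_OUT * CAP) * DT            # slow leak:   "LW out, high resistance"

print(round(v, 1))  # settles near ~180 V: above the ~141 V RMS, below the 200 V peak
```

The settled value depends only on the resistance ratio and the waveform, which is the point of the analogy: asymmetric resistance to energy in and energy out pushes the stored level above the "effective" (RMS) value. Per Phil's comment 19, a closer radiative analog would leak in proportion to the fourth power of the stored level, but the asymmetry argument is unchanged.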
21. BenAW says: tallbloke says: February 13, 2012 at 3:03 pm "Ben AW: I think the reason I'm fighting you is because although your model might be sufficient for your understanding of the overall 'big picture' energy balance, it conflicts with my understanding of the solar cycle in relation to ENSO, and other oceanic oscillations. So maybe we can compromise. If you can agree with me that the dynamic aspects of the oceans are vital to our understanding of shorter term climatic variation and not mere 'secondary effects', I'll agree we can put them aside for the sake of the elegant simplicity of your particular Gedanken experiment. Agreed?"

Of course; I assumed it would be blazingly obvious that this is not the complete picture. Do you accept that the oceans have a temp of ~275K and that this is above any BB or GB approach, making these numbers meaningless for waterplanet Earth? And that this kills the whole GHE, because their model assumes a deficit of 33K AFTER the sun has heated the earth? So let's get this in a post and move on from there. Imo this finding should make even a politician or journalist understand the enormous mistake in the GHE theory.

22. tallbloke says: Ben AW: I suspect their counterargument is that the back radiation warms the near surface air, and this slows down oceanic cooling, and that is why the ocean is at 275K rather than frozen at 255K. How do you reply to them?

23. BenAW says: tallbloke says: February 13, 2012 at 4:32 pm Backradiation in the GHE is supposed to increase the surface temps from 255K to 288K (33K difference). Adding 33K to 275K gives 308K as average earth temp. Not even close.

24. rgbatduke says: "1) Hi Robert. I won't tolerate accusations of dishonesty here, since Ned Nikolov stated earlier that he and Karl Zeller will address your points in their upcoming 'Reply to comments on the UTC part 2'."

It's your blog, censor as you like. I was utterly respectful in those accusations — if there was any irritation that showed through, it was strictly due to the fact that it is dead on topic for this thread and yet N&Z refuse to address it on thread. Or am I mistaken, and is this not all about "Reply to: 'The Mystery of Equation 8' by Willis Eschenbach", which — note well — contains Equation 7 as its sole "interesting" input? Equation 8 is after all just a rewriting of Equation 7 with the substitution of an uninteresting if oversimplified description of $T_{gb}$. Why, exactly, do we need yet another thread for them to reply in?

"2) N&Z know what your objections are, so you don't need to club them over the head with them in 14 foot long comments. I won't allow you to dominate with such lengthy"

Excellent! Then perhaps they can reply to them instead of starting a thread in which they wish to claim that Willis was completely wrong in his criticism of Equation 7/8, and then ignoring the only post in it that shows that Willis was completely correct in his criticism of Equation 7/8, only his criticism wasn't strong enough. Or starting yet another thread just to reply to them. Or stating that they don't need to reply to them, because this thread and their paper are about something else entirely, something that presumably survives the death of their "miracle equation" (their words, not mine).
"bombastic diatribe either, so cool it if you want to get your (hopefully shorter) comments posted here in future. Thanks – TB."

Two remarks — one is that the post was long because I was replying to three different posts above and avoiding the posting of a fourth. It's more efficient for me that way — I'm in a hurry and am really too busy to be participating at all, but it strikes me that preventing questionable science — for surely Equation 7 is questionable, given my very specific and thoroughly supported questions — from being taken too seriously is a worthwhile cause. Including when I do it. Let he who has never made an error in a printed paper cast the first stone — I certainly have, and as a consequence I rather welcome it when people point out my errors. Even students catch me in errors.

Second, I am busy and I am a physicist. I'm not a climate scientist, and if anything I am a skeptic (probably technically a "non-catastrophic lukewarmist" if skepticism has to come in flavors these days). All I care about in my replies and the issues I'm pursuing is whether the science and the math/statistics are done well, or at least plausibly. If you want to boot all of the physicists who are actually critical of mistakes like Equation 7, it's your call, but think about the probable consequences.

Finally, I will apologize on general principles, since I was not trying to offend anyone with my post. I do admit that I have gotten frustrated by the manifest fact that the authors of the paper refuse to actually reply to and address the very, very simple points that I raise, supported by both computations and figures, and instead say that I'm missing some sort of "big picture". Perhaps I am — I'm completely uninterested in a "big picture" supported by hand waving, heuristics, the production of arbitrary curves with impossible dimensioned numbers and exponents that don't quite fit even the highly idealized data. But this thread isn't about the big picture, it is about equation 7/8! If they want to address the "big picture" (again), perhaps that might be a good new toplevel post, as long as that picture doesn't rely on a still-undefended Equation 7/8.

It doesn't have to be Nikolov or Zeller. I'd be thrilled to hear you personally or anyone explain how 54,000 bar can reasonably appear in and dominate the fit of five out of the eight planets with atmospheric surface pressures ranging from "zero" to a tiny fraction of a bar or atmosphere. It does seem as though defending equation 8 requires an actual explanation of this here and now, not in the future and yet another thread on equation 8. We're up to four or five threads that I know of so far — top posts on the original paper here and on WUWT, criticism on WUWT, this rebuttal here — do we really need to shoot for number 5 or number 6 to hear an actual explanation?

"PS. Why not simply wait until they publish 'Reply to comments on the UTC part 2' and then see what pertains? Just a thought."

If I notice, I will. That doesn't stop me from being frustrated here and now. If I had just said (as Willis did) "Look, your fit isn't physically motivated and statistically it isn't that impressive to fit 8 points with four parameters, look, I can do it too" that would be one thing. I did not just do this, I went far beyond this, both here and on the WUWT thread.
Just for my second post on this thread, I spent several hours doing the arithmetic, writing code, building the plots, inserting their own data from their table of planetary temperatures and pressures to be sure that I was using the same numbers they used. Their sole reply is that I'm missing the big picture. What big picture? I'm addressing Equation 8, the topic of this thread. Do I need to make that a top post on the blog myself to get their focused attention? Nothing else that they can say in defense of Equation 8 matters in the least until they address my quantitative and specific objections, and no big-picture arguments can be taken seriously with the guts (Equation 8) kicked out. rgb

[Reply] Well, according to you anyway. I'll happily wait to see what Ned and Karl have to say in 'Reply to comments on the UTC part 2'. They've blown Willis' math away here on this thread, and they intend to deal with your objections in their next paper. I think it's quite legitimate, given the length, detail and repetition of your points by you, that they choose to keep their powder dry until they set everything down that they want to in one place on a new headline post. Thank you again for your learned input, and spare us any further "utterly respectful accusations" if you don't mind. Thanks – TB. PS. Friendly advice: when in a hurry, type less, because less is more when people read to the end.

25. BenAW says: tallbloke says: February 13, 2012 at 4:32 pm Backradiation is invented to explain the missing 33K in the GHE. Imo it's not a physical reality. See: http://principia-scientific.org/publications/New_Concise_Experiment_on_Backradiation.pdf If my theory holds, I doubt an AGW'er will have much to ask, after seeing that they missed a 275K base in their assumptions. If the sun is capable of keeping a BB earth 255K above its base temp of 0K, why wouldn't it be capable of keeping our waterplanet 15K above its base temp of 275K?

26. Phil. says: tallbloke says: February 13, 2012 at 3:23 pm "Anyway, you still don't understand N&Z's 'model'. They calculate the grey body temperature, and the actual temperature as derived from the surface pressure and the TOA insolation."

I think I understand it fairly well, actually. They calculate the grey body temperature for a rocky, atmosphere-less planet with zero surface heat capacity (this gives the minimum average temperature for a given input flux). They then calculate the 'actual average temperature' for those planets with an atmosphere from the surface pressure and near-surface gas density using the ideal gas equation of state. (This assumes that the gas density is better known than the surface temperature, which seems a questionable assumption to me, and it is only applicable to 4 of the planets anyway; the application of the ideal gas law to the super-critical atmosphere of Venus is questionable too.) They then fit the ratio of those two temperatures to an arbitrary curve with P as the variable (see Brown's critique of that process, with which I agree). [Reply] Read their papers Phil. I challenge you to find any reliance on density in their equations.

"The ATE factor is then the ratio of these two numbers. Theirs is the more realistic model, because without an atmosphere and its consequent pressure, there is no ocean to spread heat around."

The assumption of zero heat capacity is unrealistic, as indicated by the temperature distribution on the moon. That the Earth has an ocean makes it a poorer model, since the heat capacity of the ocean is a major factor.
[Reply] Which part of 'there would be no ocean without the atmospheric mass and pressure' do you not understand? Are you aware of how small the energy distribution difference is between their lunar GB calc assuming no heat capacity and the Diviner measurements? They get the actual average surface temperature correct to within 6K. The method used by 'conventional models' was 100K+ too hot. I don't think they need to take any lessons in heat distribution from you or them.

"Now, the conventional modelers say that Earth with no ghg's would be 255K at the surface. This is arguable, though from N&Z's point of view it's a fruitless argument anyway."

It is a fruitless argument because it's an apples-and-oranges comparison. The conventional view models an earth with an atmosphere and ocean and surface heat capacity and tries to remove the effect of GHGs. The N&Z approach attempts to model a rocky planet without an atmosphere or ocean and with no surface heat capacity; in the case of Earth this leaves out the effect of atmosphere, GHE, ocean and surface heat capacity, so the rather large deficit due to these missing terms is assumed to be a fitted function of pressure. For the other planets with an atmosphere the term due to the ocean is absent. If you want to follow this approach then you'd have to account for the surface heat capacity, as there's no reason to assume that it's a function of pressure. Even then you have two effects of the pressure of the atmosphere, due to its heat capacity and GHE (filtering effect), both of which are functions of pressure, so how do you disentangle them? [Reply] What 'term due to the oceans'? I challenge you to identify it in their equations. No more posts from you until you find it or admit you are wrong about that and density.

"This is because according to their theory, the surface temperature is a result of atmospheric mass and TOA insolation, and albedo is a result of temperature and pressure."

Their theory assumes constant albedo; if they want it to be a function of P then their constant 25.3966 should in fact be a function of P! [Reply] Their theory does nothing of the sort. It uses the empirically measured lunar grey body albedo to obtain the grey body temperature for all rocky planets, from which they calculate ATE as a ratio between that and the actual surface temperature derived from atmospheric mass and distance from the Sun. Do us a favour and read their papers before you come back to admit you were wrong.

27. Phil. says: "[Reply] Read their papers Phil. I challenge you to find any reliance on density in their equations."

Answer to Tallbloke: the Ideal Gas Law is P = ρRT, where ρ is the gas density. They say in their paper: "This can be written in terms of the average air density ρ (kg m-3) as ρTs = const. = Ps M/R (6)". The whole of section 3.1 deals with this, and the Ts in Table 1 is calculated using density! Satisfied? I take it the ban is lifted and an apology will be forthcoming?

[Reply] Read it again Phil. Eq 7 shows that Ts/Tgb is equivalent to their exponential function, which involves pressure only. The discussion is about Eq 8, which is Eq 7 transposed. So their theory does not rely on density as I said, and you are the one who needs to admit you're wrong (no apology needed). You are not banned, but you won't be having further comments published until after you step up and do the right thing. You said:

"This is because according to their theory, the surface temperature is a result of atmospheric mass and TOA insolation, and albedo is a result of temperature and pressure."
When I pointed out that they actually use a constant albedo, and that if their theory in fact uses an albedo which is a function of pressure then that should be included in their equation (8), you remarked:

"[Reply] Their theory does nothing of the sort. It uses the empirically measured lunar grey body albedo to obtain the grey body temperature for all rocky planets, from which they calculate ATE as a ratio between that and the actual surface temperature derived from atmospheric mass and distance from the Sun. Do us a favour and read their papers before you come back to admit you were wrong."

I have read the papers, and my remark is correct.

[Reply] Their theory doesn't use an albedo which is a function of pressure. An atmospheric albedo which is a function of pressure (and temperature) is a logical outcome derived from their theory. So once again Phil, man up and admit you are incorrect.

28. davidmhoffer says: rgbatduke; "Excellent! Then perhaps they can reply to them instead of starting a thread in which they wish to claim that Willis was completely wrong in his criticism of Equation 7/8 and then ignoring the only post in it that shows that Willis was completely correct in his criticism of Equation 7/8, only his criticism wasn't strong enough.">>> I read Willis' criticism, and if you consider substituting an equation into the equation it was derived from, to arrive at a variable that equals itself, as correct criticism… I don't even have words to describe that. Your first comments at WUWT in regard to calculating surface temperature properly were informative and valuable. Unfortunately, I for one have seen nothing informative or of value from you since.

29. [...] SB Law of 255K), these figures have very little practical value. If you go to the links, I have two additional comments that illustrated this. The first one shows a very simple "step" function [...]

30. Bob_FJ says: Robert Brown @ February 13, 3:12 pm I guess you are fairly new to blogging, but you may have to accept the fact that many threads wander into fringe areas around the lead article, and even off-topic. The exchange I was having with David Hoffer concerned some unusual and interesting aspects of how the N&Z hypothesis was being addressed or not addressed in various areas of the blogosphere. (It was on-topic.) I'm sorry if you felt an ad hominem innuendo in my use of the word imbibe. Where I come from, the word has several meanings in addition to your wrong interpretation.

31. tallbloke says: Bob, don't worry about it. Robert was looking for any handy peg to hang his shirtiness on.

32. Robert Brown says: "Bob, don't worry about it. Robert was looking for any handy peg to hang his shirtiness on."

Actually, I didn't understand what he was trying to say to, or about, me (or Anthony or Willis or whoever). I still don't. I wasn't really accusing you of ad hominem; rather asking what you meant. That's why I put the smiley face at the end of the sentence. rgb

33. tallbloke says: Robert, I think Bob just meant the use of the word in the old-fashioned sense of 'partake of', but 'drinking in' the written word rather than one of the many beverages available. By the way, I'm trying a Belgian dark beer brew kit for the first time, and considering using some molasses instead of some of the glucose. I don't really know what quantity gives me 'measure for measure' though. Any experience to share?
34. gallopingcamel says: rgb said: “It isn’t terribly good science there (as it is an open invitation to cherrypick and engage in confirmation bias and other forms of Feynman’s “Cargo Cult Science”) and it only ends up being decent science if it is rapidly followed by a quantitative and consistent causal analysis that includes a full disclosure discussion of the possibly confounding causes and where the observation fails. I’m awaiting this in the case of Scafetta’s paper, but I’m not holding my breath.” Nicola Scafetta is working with models but he is well aware that the correlations they show need to be backed up by plausible mechanisms if they are to be taken seriously. He sees that as “the hard part”. He already has some ideas that make sense to me. You would probably be a much tougher audience so why not ask Nicola to sample some of your home brew. That might take the sharp edge off criticism. While it is desirable that causal analysis should “rapidly follow”, it took over 30 years for Wegener’s “Continental Drift” hypothesis to be vindicated. I think it is fair to say that the same applies to N&Z, although their analysis did start with a series of physics equations which they boiled down to the dimensionless one you don’t like. 35. Nick Stokes says: “So their theory does not rely on density as I said” It’s here in this post: “For a given pressure, the near-surface air density varies on a planetary scale in a fixed proportion with temperature, so that the product Density*Temperature = const. on average, i.e. higher temperature causes lower density while lower temperature brings about higher density according to the Charles/Gay-Lussac Law for an isobaric process.” 36. gallopingcamel says: rgb, You mentioned the possibility of getting together today or Wednesday. Unfortunately I am teaching in Tennessee through to February 20 with no days off. Thanks to a certain university cancelling my contract, I no longer have any courses scheduled in North Carolina. It may be time to retire. For real this time. It was a thrill to find Dukies such as you and Nicola getting involved in climate science issues. It would have been a blast to be a participant in some small way. I was hoping that the Physics department was showing the spirit to resist being engulfed by the Nicholas School of the Environment. 37. Dan in Nevada says: I’m trying hard to understand Dr. Brown’s objection to equation 7. As I understand it, he’s saying essentially that the equation was empirically derived (a best-fit regression), instead of being derived by a logical progression of real-world observations. I can understand that logic and it’s what I initially wondered about when reading N&Z’s first paper. I agree it would be much more satisfying to see an argument that derived an equation from first-principles if-then sort of logic. However, I’m lost regarding the argument that if certain parts of an equation imply non-real results, then the formula must be wrong. For example, people that look at demographics will say that the replacement rate for a given society must be 1.9 children per female or higher in order to have a stable population. Dr. Brown seems to be arguing that since there is no such thing as a real-world 9/10 child, this statistic is meaningless. I’m not claiming that is what he’s saying; it’s just all that I can gather from what I’m reading; i.e. since we don’t have any observable planetary bodies with an atmosphere at 54,000 bar, then N&Z’s result must obviously be wrong. Can somebody help me here? I believe Dr.
Brown is trying to say something important to math-challenged people like me, but I’m not getting it. 38. Bob_FJ says: ALL, Am I getting irrational in my septuagenarianisticalific afflictions, when I expound the following? I can see that the mathematical derivations of N&Z will be variously critiqued by those that step forward to oppose the new theory. (which is a normal part of science). This then boils down to acceptance of the arguments from whomever one might alternatively prefer as an authority. (camp or consensus culture annat)….. He said, she said, we said, they said, and I like her because she has [self snip] !!! So, whilst the N&Z mathematical derivations may be “difficult”, it does not mean that they don’t work, even if why it is so, is not fully understood. However, if their hypothesis is correct it should be possible to obtain a series of correlating data with a range of parameters in the lab. If this is successful in outcome, then it should precede the maths in any paper, which should then be offered as a possible mathematical solution. Of course, see earlier threads where low budget Konrad is still exploring such data. And, I wonder if deep mineshafts with geothermal energy source might also reveal another piece of supporting empirical data. 39. tallbloke says: Nick Stokes says: February 14, 2012 at 2:15 am Tallbloke said: “So their theory does not rely on density as I said” It’s here in this post: “For a given pressure, the near-surface air density varies on a planetary scale in a fixed proportion with temperature, so that the product Density*Temperature = const. on average, i.e. higher temperature causes lower density while lower temperature brings about higher density according to the Charles/Gay-Lussac Law for an isobaric process.” Yes Nick, but this is additional explanation. They do not need to rely on density because Eq 7 shows that Ts/Tgb is equivalent to their exponential function which involves pressure only. This is why there are two ‘equals’ signs in Eq 7. Immediately above Eq 8 they say “Equation (7) allows us to derive a simple yet robust formula for predicting a planet’s mean surface temperature as a function of only two variables – TOA solar irradiance and mean atmospheric surface pressure, i.e. [Eq 8]” No doubt accurate density measurements for Earth could have assisted them in calibrating their pressure function, but density is not required for the other celestial bodies they then go on to calculate surface temperatures for. This is an important distinction. I had to point this out to Willis Eschenbach when he demonstrated his own ignorance of it in the post this present discussion addresses. I’m somewhat shocked to see you making the same elementary error. I thought WUWT troll ‘Phil’ was being disingenuous with his comment here and willfully misinterpreting N&Z in an attempt to cast doubt on their work. Seeing you make the same mistake has me wondering if a lot of the argument over N&Z’s work is a result of people simply failing to read what they wrote. Having said that, I had to correct Willis Eschenbach on his claim that they were using the atmospheric albedos inside equation 2 and thus ‘tuneable parameters’ as well. Not that he accepted that he had made a ‘mistake’ even after I pointed it out. Phil has also parroted a variation of this error. N&Z use a single empirically measured grey body albedo (our Moon’s) in their Tgb for all rocky planets in Eq 2, which seems reasonable to me.
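For anyone who wants to check the shape of the calculation rather than argue about it, here is a minimal sketch of the structure of Eq 8 as I read it. The NTE coefficients below are placeholders, not N&Z’s published values; the grey-body part is the standard spherical integration of the SB law that their Eq 2 is built on, with their small heat-storage term Cs ignored:

```python
# Minimal sketch of the Eq 8 structure: Ts = Tgb(S0) * NTE(Ps).
# The NTE coefficients are PLACEHOLDERS, not N&Z's published values.
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_gb(s0, albedo=0.11, emissivity=0.955):
    """Mean temperature of an airless, non-rotating sphere:
    (2/5) * [S0*(1 - albedo) / (emissivity * sigma)]**0.25,
    the spherical integration behind Eq 2 (heat-storage term ignored)."""
    return 0.4 * (s0 * (1.0 - albedo) / (emissivity * SIGMA)) ** 0.25

def n_te(p_surf, c1=0.2, e1=0.065, c2=0.002, e2=0.385):
    """Eq 7-style enhancement factor exp(c1*P^e1 + c2*P^e2), P in Pa.
    c1 and c2 are placeholders; e1 and e2 echo the exponents
    discussed in this thread."""
    return math.exp(c1 * p_surf ** e1 + c2 * p_surf ** e2)

print(t_gb(1362.0))                  # ~154.7 K, the grey-body Earth/Moon value
print(t_gb(1362.0) * n_te(98500.0))  # Ts estimate; meaningful only with the real coefficients
```

The point of the sketch is that nothing on the right-hand side is a property of the atmosphere’s composition: insolation, one measured grey-body albedo, and surface pressure are the only inputs.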
I think he Willis-fully Hash’n'baulk’ed the theory and then trash talked it. Whatever his reasons, it ain’t science so far as I can see. Robert Brown says Willis’ criticisms of Eq’s 7 and 8 are correct, despite N&Z’s elegant demolition of his faulty algebra in the headline post here. This and his errors on the Loschmidt effect in his criticism of Hans Jelbring’s paper leads me to doubt the value of his other criticisms too. Then there is Ira Glickstein’s misdirection of discussion of N&Z’s theory with the spurious discussion of the heat of initial compression, which is simply irrelevant to the discussion of the dynamic throughput of solar energy in an atmosphere subject to a gravitational field. All in all, WUWT hasn’t handled discussion of Nikolov and Zeller’s theory at all well, to put it kindly. 40. Robert Brown says: No doubt accurate density measurements for Earth could have assisted them in calibrating their pressure function, but density is not required for the other celestial bodies they then go on to calculate surface temperatures for. This is an important distinction. For what it is worth, density and pressure are not independent variables. The pressure at any given height is $P = g \int_{z}^{\infty} \rho(z) dz$. Atmospheric pressure has to support the weight of all of the atmosphere above any given height. Any fluid, atmosphere or not, in (even approximate) hydrostatic equilibrium must satisfy the relation: $\frac{dP}{dz} = - \rho g$ (for $z$ positive “up”). If one knows the thermal profile of the atmosphere (and assumes that it is e.g. an ideal gas) it is a straightforward mathematical exercise, although one that may or may not be easy to do analytically, to convert a function in one variable to a function in the other. [Reply] True but irrelevant to my point, which is that having calibrated their Eq 8, which they could have done with temperature rather than density, they are able to correctly calculate the surface temperatures of the other celestial bodies using only their pressure function and the TOA insolation. Furthermore the pressure at any given height can be calculated by using the mass and the gravitational constant rather than the density. This is related to why Nikolov and Zeller need two distinct exponential forms to fit the planetary data. The physics that describes the surface temperature of the extreme low pressure/density planets is different from the physics that describes/predicts the surface temperature of the four planets on their list with substantial atmospheres. In particular, the last four planets have atmospheres that consist of optically thick greenhouse gases. The first four planets have atmospheres that are optically thin. These are two completely different regimes, so the functional form of the surface warming changes. If you like, the exosphere begins at the surface of the first four planets — they lack meaningful convective transport and have only a tiny split between the direct radiation temperature from the surface and the radiation temperature of the extremely diffuse atmospheric gas. [Reply] Yes, I agree it’s quite remarkable that their equation holds good across such a diverse set of temperature and pressure regimes. Robert Brown says Willis’ criticisms of Eq’s 7 and 8 are correct, despite N&Z’s elegant demolition of his faulty algebra in the headline post here. This and his errors on the Loschmidt effect in his criticism of Hans Jelbring’s paper leads me to doubt the value of his other criticisms too.
Willis is correct when he asserts that taking some data ($T_s$), normalizing it with a computed number $T_{gb}$ to form $N_{TE} = T_s/T_{gb}$, fitting the $N_{TE}$ data to a mathematical form $N_{TE,fit}$ that is neither derived nor heuristically justified, and then multiplying out the normalization to get: $T_s = T_{gb} N_{TE,fit}$ (equation 8) is hardly a “derivation”. If I have data points $\{(x_i,y_i)\} = (x_1,y_1), (x_2,y_2), (x_3,y_3)...$ and a smooth function $f(y)$, and I use this function to convert the data to $(x_1/f(y_1), y_1), (x_2/f(y_2),y_2), (x_3/f(y_3),y_3)...$, empirically fit the data to a smooth function $g(y)$, and then assert that: $x(y) = g(y)*f(y)$ this is an identity, not a derivation. If I define $g = x/f$, fit $g$, then assert $x = g*f$ I haven’t “derived” anything at all. If I fit $g$ with a functional form that is not justified by any physical argument and that has enough free parameters, I can find alternative descriptions of $x(y)$ that are all equally meaningful, given that $0 = 0$ (lack of meaning is conserved). [Reply] All very snarky, but not what Willis said at all. As for my “errors” concerning the Loschmidt effect in my criticism of Jelbring’s paper, Jelbring’s paper does not refer to any such effect. It quotes a single textbook that derives the DALR for an atmosphere in which there are parcels in convective motion. It asserts without proof that this lapse rate applies to an isolated ideal gas that is not being driven and is in true static equilibrium, even though an ideal gas (being ideal) has the thermal conductivity of an ideal gas and it is the work of a few seconds to see that the DALR atmosphere is not a state of maximum entropy. It then concludes that the atmosphere in question will have a lapse rate and hence be warmer at the bottom than at the top. I do not question that dynamical atmospheres have a lapse rate. I do not question that the lapse rate is important in determining surface temperatures. I question — indeed, I categorically reject — the assertion that a lapsed atmosphere is a state of true thermodynamic equilibrium for an isolated ideal gas. I offer considerable proof that this is so, including the straight-up observation that if it were true it would violate the second law of thermodynamics. [Reply] What you fail to appreciate here Robert is that this is a 120 year old paradox which has not been resolved. If you had taken the trouble to read the Loschmidt thread on this site you’d be better informed, and if you had any science sensibility, a good deal less categorical too. Whether or not you ultimately agree with my proofs, with the explicit statement of the author of a textbook on physical climatology on the thread (Caballero) that I am correct, with the statistical mechanical computation cited on the thread that concludes that I am correct, whether or not you take note that the DALR is always derived in the context of slowly moving parcels of air in a dynamical atmosphere and that any sort of additional mixing e.g. turbulence destroys it and restores isothermal equilibrium (as does conduction, but much more slowly) — it is difficult to assert that Jelbring dealt with any of this in his paper. He takes a well-known result of atmospheric dynamics, moves it out of context, and makes an entirely circular argument that it applies to the static case as well. It does not.
The actual dynamics even of moving air parcels is “adiabatic” only to the extent that you neglect conduction, but an ideal gas has an ideal, easily computed, thermal conductivity that is not zero. Most textbooks point that out when they derive the DALR. Caballero’s certainly does. [Reply] See the Loschmidt thread for where Caballero gets it right, where he gets it wrong, and where he gets it muddled. It would also have behooved you to have taken a bit more notice of WUWT commenter ‘Trick’ on your impossibly long thread. He tried to alert you to another equally eminent author and textbook which sits on the other side of the paradox. You ignored him of course. To conclude — I am certain that you are aware that your “argument” that I am not to be believed because I once stated something that you disagree with and that is disproved by something I’ve never heard of and therefore I must be wrong now is a textbook case of logical fallacy. This makes it all the more ironic when you conclude by stating: All in all, WUWT hasn’t handled discussion of Nikolov and Zeller’s theory at all well, to put it kindly. If by this you mean that Anthony hasn’t (to my knowledge) stepped into the discussion to defend a theory that he’s fond of not by addressing specific algebraic points of concern, the actual physics of the theory, but by http://en.wikipedia.org/wiki/Poisoning_the_well, I suppose you are correct. [Reply] Anthony is not so much at fault as those who took advantage of the fact that he’s too busy to keep an eye on what they’re up to. In the meantime, I’m perfectly happy to wait for Nikolov and Zeller’s actual derivation of equation 7 to continue the discussion, aside from answering specific questions about my specific objection to equation 7 in the comments above. rgb 41. davidmhoffer says: Dan in Nevada; I’m trying hard to understand Dr. Brown’s objection to equation 7. As I understand it, he’s saying essentially that the equation was empirically derived >>> His complaint relates to the number of free parameters which have been assigned as constants by N&Z. Essentially, N&Z have come up with an equation that successfully calculates the surface temperature of various planets from their insolation and their surface pressure. But, did they come up with an equation that is right simply because of the variables and constants they chose? We cannot say for certain because we don’t have enough data points (planets) to compare to. It could be that for a broader number of use cases, their formula will break down. N&Z believe that their formula will hold up for any planet that one can get the data for. RB believes they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets. 42. Robert Brown says: Can somebody help me here? I believe Dr. Brown is trying to say something important to math-challenged people like me, but I’m not getting it. Sure. The point is that one can learn, or estimate, a lot about any physical quantity from a knowledge of its dimensions, its units. This is particularly true for exponential functions. Take exponential decay, for example, which describes things as disparate as the population of radioactive atoms and the charge on a discharging capacitor. The idea there is that every decay event is independent, and occurs with the same probability per unit time. Suppose the probability of a single atom decaying in some small time interval $\Delta t$ is $\Delta P = R \Delta t$.
This is easy to understand — think of it as rolling dice once a second, and if snake eyes comes up then the decay happens, but in a way that works for smaller and smaller intervals so that the probability per unit time is constant in the limit of very small times. Then the number of decays we expect in a population of $N$ atoms in that small time $\Delta t$ is just the probability of decay per atom times the number of atoms: $\Delta N = - N \Delta P = - N R \Delta t$ Physicists usually start thinking about finite times $\Delta t$, but they want to be able to use calculus to find the result so they assume things like “$N$ is really big” and “$\Delta t$ can be made arbitrarily small” so that the discretization error associated with turning this expression into a continuous expression can be ignored. Note that these assumptions won’t work at all well for $N = 10$ atoms and very short times, because an atom can’t fractionally decay — either it does or it doesn’t. This sort of process is called “coarse graining” the derivatives — choosing intervals large compared to the scale where discrete events matter, yet small enough to use calculus. If we coarse grain our decay problem we get: $dN = -N R dt$ and we do basic calculus. Don’t worry about understanding it if you’ve never had calculus, but if you have had calculus you should recognize this: $\frac{dN}{N} = - R dt$ $\int \frac{dN}{N} = - \int R dt$ $\ln N(t) = \int \frac{dN}{N} = - \int R dt = - R t + C$ and exponentiating both sides and defining $N_0$ as the number of particles one has at time $t = 0$, one gets: $N(t) = N_0 e^{-Rt}$ There are some very general things about this derivation — the exponential function is the function that is directly proportional to its own derivative, and exponentials in physics therefore must describe this sort of differential relationship. However, this sort of relationship is common as dirt in science — physics and chemistry in particular — and statistics in general, making exponential functions very important. This particular example is exponential decay, but very similar reasoning applies to e.g. compound interest investments and exponential growth, trigonometry (the sine and cosine functions are parts of a complex exponential), oscillations and waves and ever so much more. Physicists learn early on that when one introduces functions like an exponential into a theory — in particular any nonlinear function that has a power series expansion, such as $e^x = 1 + x + x^2/2! + x^3/3! + ...$ — the arguments of the exponential must be dimensionless. This is easy to understand. Suppose that $x$ in this expression were a length and had units of length. Length squared is an area. Length cubed is volume. Length to the 28th power is God knows what. Then the expansion for $e^x$ would have us adding a pure number (1) to a length to an area to a volume… which is nonsense. I don’t know what it could possibly mean to add a liter to a meter. That means that in our example above, $Rt$ must be dimensionless! We know that the units of $t$ are units of time, say seconds, so the units of $R$ must be inverse time! There is really no choice here. It cannot be otherwise. Even if we didn’t know where $R$ came from (you can see above that it has units of “probability per second” and since “probability” has no dimensions this is inverse seconds) we would know its units because we know $Rt$ must be dimensionless. This suggests that instead of using $R$ at all, it might be better to use the time implicit in it: $\tau = 1/R$.
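A quick numerical aside, for anyone who prefers to see the dice rolled rather than take the calculus on trust: a toy sketch of my own construction, not anything from N&Z’s papers.

```python
# Monte Carlo check of the decay law derived above: each atom decays
# independently with probability R*dt per step, so the surviving
# population should track N0 * exp(-R*t) = N0 * exp(-t/tau), tau = 1/R.
import math
import random

N0, R, dt = 100_000, 0.5, 0.01   # atoms, decay rate (1/s), time step (s)
tau = 1.0 / R
n = N0
for step in range(1, 401):
    # every surviving atom "rolls the dice" once per step
    n -= sum(1 for _ in range(n) if random.random() < R * dt)
    if step % 100 == 0:
        t = step * dt
        print(t, n, round(N0 * math.exp(-t / tau)))  # simulated vs analytic
```

The two columns agree to within statistical noise, with no fit and no adjustable constants: $R$ (equivalently $\tau$) is fixed by the physics of the process, which is exactly the point being developed here.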
This time is called the “exponential decay time” and we can write: $N(t) = N_0 e^{-t/\tau}$ as a manifestly dimensionless form to describe the number of still-undecayed radioactive atoms as a function of time. Writing it with the $R$ was OK, but it hid the true dimensions and characteristic time associated with the process from us. The second form is much more revealing, because now we can interpret the time that has appeared. $\tau$ is the time required for the population of undecayed atoms to drop to $1/e$ of its original population. This is closely related to things like the “half life” of the decay process, the time required for half of the atoms in any initial sample to decay. It is even more than that — it states that in any time interval $\tau$, $1 - 1/e$ of the atoms will decay, no matter how many there were at the beginning! The point of this is that $\tau$ isn’t a random number. It has a physical meaning. It has to be connected to a physical process — the one responsible for the decay occurring. This time has to appear naturally when considering the units and sizes of the actual components of the system in question. To understand this, one has to learn a bit about Fermi estimation. Enrico Fermi was famous for his ability to take a physical process and, by considering the units and “reasonable” dimensions of the system in question, produce remarkably accurate estimates of the physics involved, although the idea applies to many things. I’ll include this link: http://en.wikipedia.org/wiki/Fermi_problem and quote from it: “Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results: where the complexity of a precise calculation might obscure a large error, the simplicity of Fermi calculations makes them far less susceptible to such mistakes. (Performing the Fermi calculation first is preferable because the intermediate estimates might otherwise be biased by knowledge of the calculated answer.)” In other words, Fermi estimates are invaluable as sanity checks. They reveal results that, however much we are biased to believe in them, are in the end unbelievable. You can learn a lot more about Fermi estimation online — it is literally a part of most introductory physics courses. I’ll offer a single example of Fermi estimation and dimensional analysis here. It is not inapropos to my objection. Students often are asked to compute the moment of inertia of things like spheres, rods, cylinders, grandfather clock pendula, in physics courses. Doing so involves formulating and evaluating an often-complicated integral and perhaps using something like the parallel axis theorem. It is easy to make purely algebraic errors. How can a student tell if their end answer makes sense, at least enough sense that it might be correct? By checking its units and making sure that the answer satisfies Fermi! The former requires that they look at the size and mass involved. The units of a moment of inertia are $ML^2$ — mass times length squared. The units of the algebraic answer had better contain the mass of the object, to the first power, and its characteristic size to the second power, or it is wrong out of hand. Then one can look at everything else. Most moments of inertia about the center of mass of an object have the form $\beta M L^2$ where $\beta$ is a pure (dimensionless) number between 0 and 1.
It cannot be negative, and I can’t think offhand of a case where it could be greater than 1 if $L$ is the maximum radius of the system relative to its center of mass. If a student somehow ends up with $\beta \approx 100$ in their answer, even if it otherwise has the right units, they probably divided instead of multiplied somewhere, or made some other error in their algebra or arithmetic. That’s it — in physics the arguments of exponentials must be dimensionless or they are nonsense. If a dimensioned variable appears in the exponential there must be a similarly dimensioned variable that cancels its units. Finally, in order for the expression to make sense, the actual dimensioned variable that provides the “characteristic” length, or time, or pressure, in the expression has to be physically reasonable. It is this characteristic pressure that dominates the exponential behavior. It is the signpost towards the important physics, and vice versa. Like $\beta$ in the previous example, we should be very worried if it is much more than order unity away from the range of mundane values we expect for the actual physical quantities we are trying to describe. All I did is take Nikolov and Zeller’s empirical equation 7 and put it in manifestly dimensionless form. This is unique — there is no other way to do it, any more than there is for the radioactive decay example above. This reveals that their coefficients are actually dimensioned pressures $P_i$, the characteristic pressures where $e^{P/P_i}$ has an argument of order unity, where the “shape” of the exponential is important. I argue that it is unreasonable for a characteristic pressure of 54,000 atmospheres to describe the actual physics of a gas at a pressure of $10^{-7}$ atmospheres or even less. It can’t even reasonably describe a gas in the pressure range from 1 to 100 atmospheres. The second characteristic pressure that appears is 202 bar/atmospheres (at this level of Fermi-estimate description the difference doesn’t matter). This isn’t as bad as the 54,000, but it is still worrisome. It is still a “$\beta > 1$” answer, given that the largest pressure being fit is 92 atmospheres. The last area of concern in their result is the very, very odd exponents that appear within the exponentials. One of them is $0.065 \approx 1/15$. Again, in physics, one expects there to be very ordinary relationships between connected quantities in a physical theory, especially when one is considering an idealized theory like an ideal gas (or a normal gas far away from critical points where its behavior is expected to be “nearly” ideal). $PV = NkT$ has fairly straightforward exponents — they are all 1! It’s true that other exponents can appear — an ideal gas that is confined to a container and adiabatically expanding follows a different curve, one where $PV^\gamma$ is a constant. This means that the exponent 0.385 in the second term is not inconceivable — it is difficult for me to see how it could arise from any simple dimensional analysis of the problem — it is close to but not equal to $\gamma - 1$ for an ideal diatomic gas, but the atmospheric gases of the planets in question do not all have the same $\gamma$ — Mars is mostly triatomic CO2 and the Earth is mostly diatomic N2 and O2, for example, and Triton is a complicated brew of all sorts of non-diatomic molecules.
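To spell out the arithmetic behind those characteristic pressures (a worked step added for clarity, using only the numbers already quoted in this thread): a term of the form $a P^b$ inside an exponential can always be rewritten as $(P/P_i)^b$ with $P_i = a^{-1/b}$, because $a P^b = (a^{1/b} P)^b$. With $b = 0.065$ and a coefficient of about $0.233$ on a pressure measured in pascals (roughly the size needed to reproduce the figures quoted here), this gives $P_i = 0.233^{-1/0.065} \approx 5.4 \times 10^9$ Pa, i.e. the ~54,000 bar above; the same rewriting of the second term yields the ~202 bar. The rewriting adds nothing and assumes nothing beyond the fit coefficients themselves; it only makes the hidden dimensioned constants visible.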
However, the particular exponents of exponents and characteristic pressures in the second term of the fit depend in detail on the values of the first exponential term with its unphysical 54000 and 0.065. If one simply fit (say) the last four planets all by themselves, one might get a functional form that wouldn’t send Fermi (or me, channelling his and Feynman’s ghosts in this discussion) running screaming from the room — or rather, have him gently saying “no, that cannot be a physically meaningful description of the phenomena”. Hopefully this explains why Nikolov and Zeller’s empirical fit almost certainly is not physically meaningful as it stands, in terms even a non-math lay person can understand. There are many ways one might fit radioactive decay data with combinations of functional forms, but only one of them is going to be rationally derivable and it will contain a characteristic time that is directly characteristic of the physics of the process, not e.g. the age of the Universe or the period of the Earth around the Sun. At 54000 bar and the surface temperatures in question, the atmospheres would no longer be gases in the case of all of the colder planets. I don’t know the coexistence curves offhand for the components of Venus’ atmosphere, but I’m guessing even its atmosphere would liquefy at this pressure and its ambient temperatures. rgb 43. Robert Brown says: His complaint relates to the number of free parameters which have been assigned as constants by N&Z. Essentially, N&Z have come up with an equation that successfully calculates the surface temperature of various planets from their insolation and their surface pressure. But, did they come up with an equation that is right simply because of the variables and constants they chose? We cannot say for certain because we don’t have enough data points (planets) to compare to. It could be that for a broader number of use cases, their formula will break down. N&Z believe that their formula will hold up for any planet that one can get the data for. RB believes they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets. This is also true, but does not actually answer Dan’s question. I just did that above. The problem with fitting eight points of data with four free parameters was actually originally raised by Willis. All I did here is refine it and plot the actual fit against the actual data so one can see that it is not, in fact, a good fit of all eight points, but rather fits four points well and four points poorly, three at one end and one at the other. Where “poorly”, in an arbitrary nonlinear functional fit to data without error bars, with no $\chi^2$ or objective measure of quality of fit even theoretically possible, is in the eye of the beholder, to be sure. As for predictive value: Suppose one simply connected the data points with a cubic spline, with any number of parameters you like. If one supposes that there is a monotonic increasing function that describes the data, then this spline might well have predictive value of that monotonic increasing function, as might a line you just draw with a pencil so that it passes smoothly through all of the points. However, there isn’t any physics in the interpolating spline, the line you draw with your pencil, or in Nikolov and Zeller’s equation 7.
The coefficients of the spline are not simply related to the actual physical processes that govern and establish the hypothetical relationship, and one gets zero physical insight or knowledge from knowing them. The four parameters of Nikolov and Zeller’s fit are manifestly not related to the actual physical processes that govern the surface temperatures, and one gets zero physical insight or knowledge from knowing them, even though they, too, might have just as much predictive power as a spline. Would they fit surface temperatures on the gas giants? Highly doubtful. Do they fit surface temperatures on the moon as they are now? Only if you are generous about what constitutes “a fit”. There is a lovely paper written by a couple of Greek guys who are analyzing e.g. rainfall that I have squirrelled away somewhere that illustrates the problem of taking a small finite section of data and extrapolating it vs interpolating it. Interpolation is generally “easy” — lots of functions will fit/interpolate any small data set, especially when one is willing to use any nonlinear function with any form to do so without regard for any sort of justification or reason to think it is a correct or relevant form. They illustrate this by taking IIRC a cosine function plus white noise and then analyzing how fits to this function might well proceed. If you take a very small interval and fit it, your best fit will be a linear function, and you are tempted to say “Aha, I’ve discovered that $x$ is linearly dependent on $y$” and then use that to predict the end of the world, if $x(y)$ exceeding some threshold will bring it about. This argument is, in fact, familiar to us all in CAGW “science”. Of course, eventually one gets more data, and — Aha! — now the data turns up. In fact, it looks a lot like the true dependence of $x$ on $y$ is quadratic! Eureka! Surely now we can use it to predict our catastrophe. Only we, gifted with God-knowledge, know that this is an illusion. They only learn of their error when they get still more data and their quadratic function fails to extrapolate and they now have to add e.g. a cubic term, or perhaps some bright lad then tries to fit an exponential to it, fails, and tries — just saying, you know — an exponential with its argument itself a nonlinear exponential function, all with adjustable parameters. Well, the function is still smooth — there are an infinity of smooth nonlinear functions that correspond within the fit domain, all with the same first $n$ terms in their power series expansion (for example) and all of which differ completely beyond this point. Even the cosine could be modulated with e.g. a long time exponential decay (or other modulating function) and you couldn’t fit it or observe it until you had tracked many cycles of oscillation and identified the apparent cosine. This is why simply fitting arbitrary functional forms to a small data set, however successful it might be at interpolating the data, however predictive it might be of new data within the interpolated domain, is not useful or meaningful. One can always perform such fits many different ways, and I haven’t even gotten to overcomplete bases yet where even the fit in terms of a given set of functions is not necessarily unique. All of this is well known in functional analysis. In order for the results of a fit to be anything other than heuristic and descriptive, the numbers in the fit and the functions themselves have to have some physical basis.
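The cosine-plus-noise trap is trivial to reproduce; here is a minimal toy sketch of the idea (my own construction for illustration, not the Greek authors’ actual analysis):

```python
# Fit a short observed window of cos(t) + noise with low-order polynomials,
# then compare inside and outside the window: interpolation looks great,
# extrapolation fails, and nothing in the fit itself warns you.
import numpy as np

rng = np.random.default_rng(0)
t_fit = np.linspace(0.0, 1.0, 20)                  # the short observed window
x_fit = np.cos(t_fit) + 0.02 * rng.standard_normal(t_fit.size)

line = np.polynomial.Polynomial.fit(t_fit, x_fit, 1)
quad = np.polynomial.Polynomial.fit(t_fit, x_fit, 2)

for t in (0.5, 2.0, 4.0):                          # inside, then beyond, the window
    print(t, np.cos(t), float(line(t)), float(quad(t)))
# Near t = 0.5 both fits track cos(t) closely; by t = 4 both are nowhere
# close, even though each "explained" the observed window perfectly well.
```

Both fits are excellent descriptions of the data in hand and worthless as physics, which is the entire point being made above.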
Willis pointed out the problem of fitting from the infinitely large set of all possible nonlinear functions — hell, one might well find a one parameter fit out of that set — without any sort of physical argument supporting it or criterion for judging goodness of fit, and a claimed “derivation” of equation 8 that was really just restating the definition of the function empirically fit via equation 7. I pointed out that it is worse than that — the fit they obtained does contain hidden physics (whether they like it or not) [snip]. [Reply] Now you’re getting the hang of it! Leave the insult to the end so you don’t lose so much of the post. rgb 44. Ned Nikolov says: Robert Brown (February 14, 2012 at 3:10 pm): Dr. Brown, I do not quite understand the need for all that twisted reasoning, when the reality of our derivation is simple and can be summarized in the following commonsense points: 1) We define the ‘Greenhouse Effect’ as a ratio of the actual surface temperature to the planet’s equivalent gray-body temperature, since such a ratio expresses the integrated thermal effect of the atmosphere in a single non-dimensional number (non-dimensional numbers as you know are widely used in fluid dynamics to describe turbulence and other phenomena). This definition also has a physical meaning one can call relative Atmospheric Thermal Enhancement (ATE). 2) Our gray-body temperature model is not arbitrary, but based on proper integration of the SB law over a sphere and uses values for regolith albedo and emissivity that are representative of values measured for Moon and Mercury. Even the average albedo of the earth surface (0.12) is very close to that of the Moon’s rocky surface (0.11). The data suggest that short-wave substrate albedo and emissivity of airless planets are quite conservative quantities, i.e. A ~ 0.11, and e ~ 0.95. 3) Our analysis revealed that mean surface total pressure (Ps) is the only parameter that nearly completely explains the ATE values for all 8 planets. No other parameters such as ‘greenhouse-gas’ concentrations or their partial pressures, or the actual absorbed radiation by planets (accounting for observed albedos) came even close to describing the ATE variation. Hence, the derivation of Eq. 7. Again, NTE(Ps) was derived using non-linear regression! 4) Eq. 8 is simply a solution of Eq. 7 for the surface temperature (Ts). This is legitimate and simple (high-school level) math, and it’s really puzzling to me why it prompts any questions at all. Willis made a silly algebraic error of substituting Eq. 7 into itself, thus arriving at the nonsensical and false conclusion that Ts = Tgb * (Ts / Tgb). This is a demonstration of his ignorance in math, not a deep thought! With respect to your comment that pressure and density are not independent variables – we never claimed that they were! However, at the surface, the mean atmos. pressure is only a function of the average weight of the atmospheric column above a unit surface area and gravity. Surface air density, on the other hand, depends on surface pressure and temperature. Hence, the mean pressure at the surface is independent of near-surface temperature and density! That is because the average thermodynamic process at the surface is isobaric in nature. In summary, the tight exponential relationship between NTE and pressure is real, and the fact that it is described by a function whose coefficients cannot be easily interpreted in terms of known physical quantities does not invalidate that relationship!
This is because it is a higher-order emergent relationship, which summarizes the net effect of countless atmospheric processes including the formation of clouds and cloud albedo. This relationship might not be precisely reproducible in a lab, simply because it may require a planetary scale to manifest. However, a lab experiment should be able to validate the overall shape of the curve defining the thermal enhancement effect of pressure over an airless surface. BTW, this shape is already supported by the response function of relative adiabatic heating defined by Poisson’s formula (Fig. 6 in our paper). 45. Ned Nikolov says: davidmhoffer says (February 14, 2012 at 3:58 pm): RB believes they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets. I think you are right about Dr. Brown’s belief. However, if those planets span a really broad range of conditions as they do, it is very unlikely that this relationship will break. The tightness of the relationship suggests that this is NOT an accident, but a real physical phenomenon … Read my comments in the previous post addressing Brown’s arguments… This whole conversation about regression coefficients is really meaningless, as it reveals a lack of understanding about the fact that we are dealing with a HIGHER-ORDER EMERGENT RELATIONSHIP that is rooted in the Gas Law, but embeds complexities that are beyond the simple gas law equation and not necessarily observable in a lab, such as cloud dynamics and cloud albedo. In a sense we are dealing with an interplanetary manifestation of the Gas Law, which may be a higher-level fractal expression of the simple gas law equation … For those who are not familiar with fractal structures, please see http://en.wikipedia.org/wiki/Fractal http://en.wikipedia.org/wiki/Fractal_dimension Fractals as an organizational principle in Nature occur not only in physical structures, but in the hierarchy of processes as well. 46. davidmhoffer says: Ned, I had the oddest thought. What would the result be if you were to derive all your equations and constants, but limit yourself to only three planets for doing so? Say Earth, Venus, and Mars. If you did that, and made it clear that ONLY data from those three were used, then we’d have the use case that RB demands. An equation that is built upon a very small set of data, and then it either extrapolates to other planets… or doesn’t. RB, Am I on the right track here? Would you choose three different planets? dmh PS – btw RB, I got a lot of value out of your last two comments, thanks. I’m not saying that I agree 100%, just that there’s a lot of value to be had in a constructive discussion of the issues which is what I saw in your last two comments. thanks! 47. tallbloke says: Please could Robert explain the physical basis of the imaginary number ‘i’ (or ‘j’ in engineering), which when multiplied by itself gives minus 1, and which is used extensively in electronics design and control engineering? Presumably any competent Duke physicist at the time of the invention of this imaginary quantity which defies the laws of mathematics would have rejected it out of hand for being “absurd nonsense” and therefore of no possible use? Thanks. Wiki ref: http://en.wikipedia.org/wiki/Imaginary_number 48. Ned Nikolov says: davidmhoffer says (February 14, 2012 at 7:14 pm): I had the oddest thought.
What would the result be if you were to derive all your equations and constants, but limit yourself to only three planets for doing so? Say Earth, Venus, and Mars. If you did that, and made it clear that ONLY data from those three were used, then we’d have the use case that RB demands. An equation that is built upon a very small set of data, and then it either extrapolates to other planets… or doesn’t. David, We have already done this! In fact, the regression constants in our Eq.7 were derived from a plot of ln(Ts/Tgb) vs. ln(Ps) that did NOT include Titan, Moon and Mercury (we have not explicitly stated this in the paper). You can reduce the number of points (planets) and still get a very similar response function as long as the planets included in the regression span more or less the whole environmental range. I think using Venus, Earth, Triton and Europa will produce a function that very closely predicts the mean temperatures of Mars, Titan, Moon and Mercury. Try it … 49. Stephen Wilde says: davidmhoffer said: “N&Z believe that their formula will hold up for any planet that one can get the data for. RB believes they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets.” I think that helpful summary from davidmhoffer is right. So, we can tell from the Gas Laws and observations that planets with atmospheres are very different from those without. We can see from Venus and Earth that despite their very different atmospheres there is an observed match (approximately) between temperatures on both Earth and Venus at the same atmospheric pressure after adjusting for distance from the sun. N & Z are doing their best to ascertain whether the same relationship applies on other planets within the solar system and so far it is looking good although the precision of the data is weak. In the process N & Z have put forward some equations that fit the observations but to my mind it is early days because we just don’t have enough planets or enough variations between planets or accurate enough data to provide absolute proof. N & Z acknowledge those limitations but aver that what they do have is enough to demonstrate a surprising similarity from one planet to another regardless of the vast differences in their atmospheric compositions. Then rgb comes along and from an ivory tower says that because it isn’t all perfect and cannot (yet) be shown to be an absolute proof applicable always and everywhere, it has no significance or value at all. Well, excuse me, but I think that given the Gas Laws and the observations we do have, it is perfectly reasonable and indeed valuable for N & Z to announce that they have created a formula that could extend the Gas Laws as observed on Earth to other planets and thereby say something useful about the climates on all planets everywhere. My personal opinion is that in due course they will be found to be correct and that for every planet with an atmosphere it is atmospheric mass, planetary gravity and solar input that ultimately defines every aspect of the atmospheric circulation and that nothing else affects total system energy content. All other factors simply redistribute energy differently within the system. I have no doubt that any planet which fails to configure its atmosphere according to atmospheric mass, planetary gravity and solar input will simply have no atmosphere.
Either it will have boiled off into space or be frozen and congealed on the surface. However given the wide range of ‘successful’ planetary atmospheres already found within the solar system it is clear that planets without any atmosphere at all are extraordinarily rare, so one must assume that the relationships which N & Z are endeavouring to describe are very robust. rgb’s time would be better spent accepting that N & Z are attempting something novel here and doing the best possible with the data currently available. 50. Lucy Skywalker says: My understanding of Equation 7 is (a) that it is empirically derived (b) unfortunately I don’t know what “exp” means so I cannot check myself (c) there are four six-figure numbers measured / calculated / tuned from this empirical fit that produces the curve with all the planets on it. My question is, how many planets are needed to define this curve requiring those four very precise numbers extracted from the curve-fit? My first thought was, this is a logarithmic curve that is defined from just three fixed points. But here I am not so sure. How many planets are surplus to definition requirements, and therefore constitute hard evidence? Is it possible to rescale the graph so that a straight line appears, using logarithmic scales? Seems this would help a lot to convince, if it is possible. 51. Ned Nikolov says: To All, Please realize that this entire discussion about regression coefficients and their ‘physical meaning’ is pointless, because it does nothing to refute or negate the very EXISTENCE of the relationship. This relationship is not a coincidence, because: (1) There are no other atmospheric parameters (besides pressure) that can explain (describe) so accurately and beautifully the variation of the empirically based NTE factor (the relative ATE) across planets; and (2) The shape of this relationship matches the response of the relative adiabatic heating to pressure changes described by the Gas-Law based Poisson formula… And that is the BIG PICTURE! 52. Ned Nikolov says: Lucy, See my post above for answer to your question: 53. Ned Nikolov says: Lucy, It was a log/log plot we used to derive Eq. 7. A log/log plot does not make the NTE – pressure relationship linear. It makes it somewhat less ‘exponential’ and less non-linear. We will present this plot in our Reply Part 2 … 54. B_Happy says: Dr Nikolov says above “We have already done this! In fact, the regression constants in our Eq.7 were derived from a plot of ln(Ts/Tgb) vs. ln(Ps) that did NOT include Titan, Moon and Mercury (we have not explicitly stated this in the paper). You can reduce the number of points (planets) and still get a very similar response function as long as the planets included in the regression span more or less the whole environmental range. I think using Venus, Earth, Triton and Europa will produce a function that very closely predicts the mean temperatures of Mars, Titan, Moon and Mercury. Try it …” but this makes no sense – Mercury and the Moon have no atmosphere, and therefore do not contribute to the pressure dependence fitting anyway, so to say that excluding them makes the predictions more robust seems unlikely. By the way, it would be helpful if the experimental sources for the temperature, pressure and density were given in detail. Half the planets being considered have never had any sensors landed, so all measurements come from remote spectroscopic techniques – it would be interesting to see which methods were used to determine the data.
55. Ned Nikolov says: B_Happy says (February 14, 2012 at 9:43 pm): but this makes no sense – Mercury and the Moon have no atmosphere, and therefore do not contribute to the pressure dependence fitting anyway, so to say that excluding them makes the predictions more robust seems unlikely. Think, B_Happy! The regression curve describes a CONTINUUM from an airless surface to whatever pressure. Also, technically Moon and Mercury have some small pressure: 1E-9 Pa … 56. B_Happy says: Dr Nikolov I can think and I know that a planet with zero pressure is taken care of purely by the exponential form of your equation. No fitting is needed, therefore they do not contribute. It does not matter what parameters you have in your NTE factor, you will get the same answer for Mercury and the Moon no matter what. So can you explain again how they contribute to the fitting process, and thus how leaving them out proves anything? 57. davidmhoffer says: Ned, I see your point, but B_Happy’s also. I would suggest putting the airless and near airless bodies aside. Split the remaining ones into two groups and use the data from one to predict the data from the other and vice versa. It isn’t that the airless bodies don’t have value in the larger analysis, it is just that taking them out removes what will otherwise be a major objection. Wars are won one battle at a time. 58. Lucy Skywalker says: davidmhoffer February 9, 2012 at 6:39 pm refers to a past WUWT article that found no evidence that increases in CO2 correlated to increases in temperature. What they found was that a change in CO2 caused a “ripple effect” that then settled back into the exact same equilibrium state there was before. That study in my mind confirms N&Z. N&Z are pointing out that the change in CO2 concentration doesn’t change the equilibrium temperature, but that doesn’t mean there isn’t a “greenhouse” effect. There is. Doubling of CO2 is like throwing a rock into a lake… Got a reference? 59. davidmhoffer says: Lucy; http://wattsupwiththat.com/2010/02/14/new-paper-on/ This is the paper I was referring to. “non stationary effects” are the ripples on the surface of the lake. 60. Ned Nikolov says: Fellows, There is NO experimental evidence from the free atmosphere that increasing CO2, water vapor or any other so-called ‘greenhouse gases’ has ever caused an increase in temperature. We have proxy records of CO2 and global temperature going back more than 65M years. These data sets show that CO2 has ALWAYS lagged temperature changes. The CO2 time lag increases exponentially with the time scale of the data set considered. Thus, we find an 800-1,000-year lag in the ice core data covering past 1M years, and a 12.25M-year lag in the ocean sediment records covering past 65M years … The whole notion that CO2 changes can affect global climate comes from models and models ONLY! Such effect is predicted by the climate models due to decoupling of radiative transfer from convective heat exchange in their code. In other words, the CO2 warming effect is a result of a physical algorithmic error in climate models; it’s a model artifact with no physical equivalence!! 61. gallopingcamel says: Ned points out that it is nonsense to say that CO2 is a major factor controlling global temperature. The physical evidence shows that the temperature is a major factor determining the CO2 concentration. So why is it that the “Scientists” who are prepared to sell their souls claiming that CO2 rules are showered with money while their opponents find it hard to get funding?
62. Lucy Skywalker says: I’m having a very enlightening experience, re-re-reading material here. I can only take so much science and formulae at a time. But repeated study here is like slowly clearing a frost-covered windscreen. Early on with N&Z my instincts said YES!!! I was lucky to have just read about Jericho, below sea level, being seriously hotter than nearby Jerusalem, and to have thought about the flat snow line on the hills and the cloud underside flat lines. And I was highly upset with Willis. So I was in the mood to study; always my solution to emotional upset is to re-examine the evidence. I was thus ready to take on the huge, under-our-noses paradigm shifter that atmospheric pressure is the major determinant of temperature. So obvious in hindsight. Taking on the full power of N&Z, and the full weight of the maths and their fivefold paradigm shift, is taking much longer. But each time I re-read, the misty surface over comments here has cleared a little, patch by patch, as it were, and every time it’s been reinforcing N&Z, and suggesting to me that most of the commenters here are having similar experiences to my own. For some, the frost over N&Z has simply never lifted at all – especially those who feel emotionally uncomfortable with the presence of significant correlation paired with maths factors raised to strange fractions of powers and a lack of recognizable patterns of causation. Heck, this is how every major scientific discovery is made. We’re at the exciting moment when it’s clear the hunt is up, so let’s go looking for the causation. And on reflection, I suspect that N&Z suspect the fractionality has to do with things like convection – and this is why they consider convective influence in their equations. At one point I thought that Huffman was right to criticize N&Z about albedo. But now that I see where N&Z are coming from on this paradigm shift too, I can finally see that Huffman is wrong in every point he makes here. And now that I can see it, it doesn’t even look that difficult to see! Ah, this is the problem. Communication. Especially when there are so many commenters one has to skim. Now that I understand N&Z better and better, even their communication sounds clearer and clearer. But I have to remember what it was like when I was a dummy, when all the words here were simply covered with white frost…. ************************************************ Looks like I’ve finally found my elevator speech. 63. Ned Nikolov says: Lucy, Great to hear that our concept is coming nicely into focus for a non-scientist and a math-shy person such as yourself. This means that hopefully other people will start getting it too … It’s really not a difficult paradigm to understand, but it does require a shift in perception. Once the shift is made, it becomes self-evident. Now, go ahead and present your ‘elevator speech’ to Willis … you may have to do it in the elevator of the Empire State Building, though … 64. Ned Nikolov says: To gallopingcamel (February 15, 2012 at 1:23 am): The CO2-based ‘theory’ of climate change might enter the Guinness Book of Records one day as the one supported by the least amount of empirical evidence, while violating the most fundamental laws of physics, yet being the longest lasting and most funded misconception in modern science … When all the ‘dust’ settles down in 10-15 years from now, a major lesson learned from this gigantic Greenhouse blunder would be that any absurdity can be sold as solid science for decades given the right amount of money invested in it.
65. Roger Clague says: Ned Nikolov says: February 15, 2012 at 12:48 am “The whole notion that CO2 changes can affect global climate comes from models and models ONLY! Such effect is predicted by the climate models due to decoupling of radiative transfer from convective heat exchange in their code. In other words, the CO2 warming effect is a result of a physical algorithmic error in climate models, it’s a model artifact with no physical equivalence!!” Observations show we can ignore radiative effects such as IR absorption. Mass, not composition, determines the temperature of an atmosphere. That is what you say. Also the thermodynamic theory of gases (gas laws) confirms this. Within the atmosphere radiative transfer is decoupled from heat exchange (conduction and convection). Radiation (light) is a property of space; heat and all other forms of energy are properties of matter. How is this possible? Maybe the total matter (mass) of the atmosphere absorbs and emits radiation such that the heat energy and also the radiation energy each stay the same. 66. Ned Nikolov says: TO: Roger Clague (February 15, 2012 at 9:47 am) I would like to clarify something important. I am NOT saying that “within the atmosphere radiative transfer is decoupled from heat exchange (conduction and convection)”… On the contrary, in the REAL atmosphere radiative transfer is coupled to (happens simultaneously with) convection! Since convection is MUCH more efficient than radiation in transferring heat, globally, it completely offsets on average the warming effect of back radiation. So, the long-wave back radiation does NOT heat the surface in reality… In climate models, however, radiative transfer is NOT solved simultaneously with convection. As a result, changes in atmospheric emissivity (due to an increase of CO2 concentration for example) lead to the calculation of positive heating rates (degrees per day). These rates are produced by the radiative transfer code due to the fact that it is solved independently (outside) of convective processes. The heating rates predicted by the radiative transfer code are then passed on to the thermodynamic (convective/advective) portion of the model, and get distributed around the globe, causing the projected warming. So, it is this ARTIFICIAL decoupling between radiative transfer and convection in climate models that is responsible for the non-physical prediction of rising surface temperatures with increasing atmospheric CO2 concentration. 67. wayne says: Ned: So, it is this ARTIFICIAL decoupling between radiative transfer and convection in climate models that is responsible for the non-physical prediction of rising surface temperatures with increasing atmospheric CO2 concentration. That is the trouble with modeling processes, is it not? Systems such as the climate are constrained and driven by so many physics equations and laws, ALL occurring simultaneously and all inter-related, each affecting the others’ ruling parameters recursively. Let’s face it, we will never match nature’s calculations; her ‘computer’ has hundreds of digits of precision and a Δt better than yocto-seconds… that is the core reason that predicting the future in such a system more than days or maybe weeks ahead is pure fantasy. 68. B_Happy says: Wayne, I would not take Dr Nikolov’s word for it that convection and radiative transfer are decoupled.
These models treat the earth’s atmosphere/ocean/ice caps as a 3D grid (i.e. a set of boxes) and each box influences its neighbours both spatially and in time, i.e. the calculations of pressure and temperature etc. in one box at one time are fed into the calculations of the pressure etc. of both that box and its neighbours at the next time step. Does that sound to you as if they are decoupled? Now you could argue that they have the magnitudes of some of the couplings (a.k.a. feedbacks) wrong, and that makes the models inaccurate (and I would probably agree with that), but that is an entirely different assertion to claiming that the couplings are missing entirely. 69. sergeimk says: N&Z have not responded to my post so I will try it again – I think it shows up some serious errors in their application of Hölder’s inequality. Their Equation 2 seems to sum TSI and Cs then spread it round the globe. However Cs is already global so should not be so spread; it must be constant over the surface. Inconsequential but physically wrong. I do not see where the continuous downwelling radiation is handled in the equations. Like Cs this is day and night 200+ watts, so it should not be spread equatorially, although it does taper off polewards. The 200 watts was measured here: SGP Central Facility, Ponca City, OK, 36° 36′ 18.0″ N, 97° 29′ 6.0″ W, altitude 320 meters. Eq. 3 uses ap = Earth’s planetary albedo (≈0.3). Eq. 2 uses agb = Earth’s albedo without atmosphere (≈0.125). Why the difference? Both assume an atmosphereless planet. [co-mod: I think the answer will come with Part 2 which N&Z have not yet posted. They are trying not to get too distracted at this stage, hence some patience is needed. --Tim] 70. davidmhoffer says: B_Happy; each box influences its neighbours both spatially and in time, i.e. the calculations of pressure and temperature etc. in one box at one time are fed into the calculations of the pressure etc. of both that box and its neighbours at the next time step. Does that sound to you as if they are decoupled?>>> To be fair, I don’t really know how the models work, I have never dug into it in detail. That said, I have a simple question: Given that the models get it wrong, have no hindcast capability, no predictive capability, and have repeatedly been shown not just wrong, but way wrong, if Dr Nikolov’s explanation of why isn’t correct, then what IS the reason? 71. tchannon says: Keep this in mind as far as GCMs are concerned. I read it as the models are less than 3D http://declineeffect.com/?page_id=189 72. B_Happy says: “Given that the models get it wrong, have no hindcast capability, no predictive capability, and have repeatedly been shown not just wrong, but way wrong, if Dr Nikolov’s explanation of why isn’t correct, then what IS the reason?” Well that is probably a bit of an exaggeration, but I agree that the models are not satisfactory. I am not a climate scientist by the way – I work in a branch of physical chemistry which also uses a lot of computer time and shares a few techniques, but I deal with systems about 10^12 times smaller! I would say that the current GCMs are interesting as scientific explorations, but do not have the accuracy needed to justify the kind of political and economic decisions that they are being used to support. As for what I think is wrong, well that was alluded to above. They are trying to use the Navier-Stokes equations, which model flow in gases, and solve them using a multi-grid, multi-timestep approach.
Multi-grid because they need different size ‘boxes’ for atmosphere and ocean, and multi-timestep because these evolve at different rates. So they have a whole set of coupled partial differential equations that they are evolving in time, but not all of the couplings (feedbacks) are known accurately. The trouble with this is that the errors can (in fact do) build up over time, i.e. if there are inaccuracies on a particular time step, then these are fed into the next time step along with the parts that are right. Eventually you can end up with nonsense. The particular couplings that are worst described are those linking temperature, humidity and albedo, which are called clouds by most people… However saying that the couplings are wrong is not the same as saying that there are no couplings – the latter statement is incorrect. 73. Ned Nikolov says: B_Happy says: February 15, 2012 at 10:56 pm Wayne, I would not take Dr Nikolov’s word for it that convection and radiative transfer are decoupled. B_Happy, are you a climate modeler, or have you worked with climate models at all? This is not my word, but a fact! Allow me to know my field, please! Yes, climate models are 3D models, but that refers ONLY to the thermodynamic part of the models. Radiative transfer (RT) code works in 1D (along the vertical axis) only, and RT calculations are performed NOT at every time step, but at every OTHER time step of these models. Also, RT is not solved in the same iteration with convection. Rather it is solved independently at a given time step, and its results in terms of heating rates are then passed to the 3D thermodynamic portion of the model … 74. B_Happy says: “but at every OTHER time step of these models” In other words they are coupled… do you not know what this means? 75. wayne says: B_Happy, Hoffer beat me to it… no hindcast capability. I have not taken the time to dig into climate simulations either, but I have written multiple solar system simulators where you have something as simple as the 15 most massive bodies all interacting simultaneously. That’s 210 3-D ODE projections per small dt of a sixth-order integrator (position and its successive derivatives: velocity, acceleration, jerk, snap, crackle, and pop), and the accumulating round-off error will always get you in the end. I do know the problem nice and personal. Just as a dreamed-up example to illustrate… to me a simulator is only marginally ‘correct’ if you can run it backward, let’s say 600 years, and tell within a few arc-minutes that an occultation of x-star by y-planet matches the monks’ records in England, adjusted to France’s Julian calendar, for 1413-Apr-7 at 3:10 am local time. And not relying on one exact confirmation but hundreds. Then, and only then, when you reverse the integration, do you know you can somewhat trust it to accurately predict positions and times in the near future. Climate simulations have a long, long way to go, if they are ever even possible. That is why I will not spend my time on climate simulations. Far, far too many assumptions, too much questionable data, too many inter-tangled physics limits and processes; some of the physics is questionable itself… mother nature always knows reality… we never will. It is a fantasy and I don’t have time for pure fantasies. I’d rather write a fantasy game, at least then I would understand that it is only fiction.
Once climate models can, in reverse, match the monthly records backwards for something like 20 years tit-for-tat, I’ll have more confidence in their ability to possibly predict the near future. So far, as of two years ago, they couldn’t even match last year’s temperature records. That’s how I see it. 76. B_Happy says: Wayne, That is more or less what I said. I was not claiming the models were accurate, just that they were not inaccurate for the reason Dr Nikolov was stating, since he is actually wrong on that specific point. 77. Roger Clague says: From the Sun to the top of the atmosphere radiation rules; we ignore matter. However climate models include both radiative transfer and heat transfer, coupled or not. Observations show that thermodynamic gas laws alone explain the properties of atmospheres. We assume at TOA that radiation in is equal to radiation out. Radiation produces heat and hot things radiate, but the effects cancel each other. Climate models should be purely thermodynamic, not a mix of radiation in space and heat transfer in matter. 78. wayne says: B_Happy, OK. I’d still pay heed to what Ned was saying. I believe he is correct, and I myself have a major nit to pick with the radiation code within the models and with Trenberth’s, among others’, handling of radiation – the one-dimensional aspect. But it will take a while to compose a proper explanation so check back here later, like tomorrow evening, maybe even Friday. The handling of radiation in all of climate science is messed up and everyone can properly feel it; the numbers never jibe, and I think I have found the reason. 79. tallbloke says: I’d be grateful if Ned would add any further clarification he feels is needed to this post – I didn’t get a reply from him in time to add it here. Joel Shore says in an unapproved comment: tallbloke: Congratulations on [snip] so that Nikolov’s [snip]. This statement by Nikolov is [snip]: “The whole notion that CO2 changes can affect global climate comes from models and models ONLY! Such effect is predicted by the climate models due to decoupling of radiative transfer from convective heat exchange in their code. In other words, the CO2 warming effect is a result of a physical algorithmic error in climate models, it’s a model artifact with no physical equivalence!!” In fact, nobody (including N&Z) has challenged my content[ion], which is[;] the reason why N&Z got rid of the radiative greenhouse effect by adding convection is that they added it in totally incorrectly. We know that because they tell us they added it in incorrectly when they say, “Equation (4) dramatically alters the solution to Eq. (3) by collapsing the difference between Ts, Ta and Te and virtually erasing the GHE (Fig. 3).” I.e., they tell us that they added in convection in a way that leads to the completely unphysical result of an atmosphere isothermal with height, i.e., with zero lapse rate. And, any elementary climate science book would tell them that this would indeed eliminate the radiative greenhouse effect. I guess you are trying to make your site the place on the internet where [snip] [Reply] Hi Joel, thanks for vindicating my reasons for preventing you from turning my blog into a ruckus of inflammatory comment, misdirection and unjustified accusation. N&Z say: “Pressure by itself is not a source of energy!
Instead, it enhances (amplifies) the energy supplied by an external source such as the Sun through density-dependent rates of molecular collision.” So the temperatures nearer the surface, where the atmosphere is under greater pressure and the density of the compressible air is higher, are higher than those at high altitude, in a proportion which approximates the observed lapse rate. The pressure and consequent thermal gradient is not included in equations 3 and 4 because it is not required for the purposes of demonstrating the inadequacy of radiative activity to account for the GHE (or the lapse rate); this is why the conceptual system under consideration is isothermal. Karl Zeller adds: “Rog, we are only showing the impact of adding convection on a ‘thin-slice’ one-dimensional model to demonstrate the effect. We go on to say: ‘These results do not change when using multi-layer models. In radiative transfer models, Ts increases with ϵ not as a result of heat trapping by greenhouse gases, but due to the lack of convective cooling [in the radiative transfer models], thus requiring a larger thermal gradient to export the necessary amount of heat.’ Cheers” I’ll add the publishable parts of Joel’s next reply plus any further response necessary after the weekend. 81. Robert Brown says: Dear Dr. Nikolov, Thank you for the courtesy of a serious reply. Allow me to address your points one at a time. 1) I have no problem with your expressing the GHE or ATE as a dimensionless ratio. 2) I do not mean to suggest that $T_{gb}$ in your paper is arbitrary. However, in computing it you use a single $\alpha$ for the Earth and for the Moon and for Europa and for Venus, but this number bears no resemblance to their actual Bond albedo. Unless you consider the solid high-albedo “ice” (in the case of Triton, N_2 ice) coating nearly atmosphere-free Europa and extremely-diffuse-atmosphere Triton, this doesn’t make the slightest bit of sense. The entire point of the insolation computation is to determine the fraction of solar energy that heats the planet and must ultimately be lost through radiation. The whole point of the albedo in this computation is that it is a direct measure of the fraction of energy reflected away without causing heating. Why bother with albedo in the first place if you’re going to do this? I’ll tell you why. Because by doing so, it becomes an irrelevant scale factor — you’ve eliminated a source of variability for the planets. You can indeed factor $(1 - \alpha_{gb})^{1/4}$ (and all the other constants) out from under the integral in your equation (2), and write $T_{gb}(p) = C\, S_0(p)^{1/4}$, where $C$ is a constant for all the planets and $S_0(p)$ is the TOA TSI for the planet $p$. When you form the dimensionless ratio $N_{TE}$, the constant simply doesn’t matter as it no longer contributes to the variability and you’ve made $S_0$ the only variable. This is just a projection technique, in other words. It introduces significant errors into your Table 1 numbers for $N_{TE}$. The data in this table has other problems.
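[Note: to make the factoring argument concrete, the grey-body algebra can be written out schematically; this is a restatement of Robert's point, not N&Z's own derivation. Absorbing the Stefan-Boltzmann constant, the emissivity, and the geometric factors of the spherical integral into a single planet-independent constant:

$T_{gb}(p) \propto \left[ (1-\alpha_{gb})\, S_0(p) \right]^{1/4} \equiv C\, S_0(p)^{1/4}$

Once the same $\alpha_{gb}$ is imposed on every body, $C$ is identical for all of them, so $N_{TE} = T_s / T_{gb} = T_s / (C\, S_0^{1/4})$ and the only planet-to-planet inputs left are $T_s$ and the TOA TSI.]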
When I look for the Bond albedo of Venus (for example) in actual publications such as: http://www.sciencedirect.com/science/article/pii/S0019103505005105 I get 0.9, not 0.75, and you do not provide anything but “multiple references using cross-referencing”, which makes it hard for me to assess whether your number is likely to be better. In this case $((1 - 0.9)/(1 - 0.1))^{0.25} \approx 0.57$ and the error associated with ignoring the true Bond albedo in favor of an artificial one that turns $T_{gb}$ into a direct proxy for $S_0$ could be as high as 43%. In the end, the reason it doesn’t matter much is the forgiving nature of 1/4 powers, and yes, I understand that $T_{gb}$ is always an artificial measure, but it doesn’t help to have you do a better job of computing it by accounting for spherical geometry and $S_{microwave}$ and then a worse job of handling the albedo without any estimation of errors! Still, I appreciate what you are trying to do, so let’s just let this go for the moment and concentrate on the rest of it. Bear in mind that I am being critical but I am not hostile to your efforts. Indeed, I agree that the “33 degree warming” number is bullshit, and think that in general your improved formula for computing $T_{gb}$ is a step forward, although it would be improved still more if you didn’t insist on making every planet into the Moon as far as albedo is concerned. I also think there is still more work to be done here, because I do not agree with some of your remarks (made in other papers of yours I’ve grabbed) on just how to compute an average surface temperature for the purposes of considering outgoing radiation. But we can discuss this (if you like) another time. 3) “Our analysis revealed that mean surface total pressure (Ps) is the only parameter that nearly completely explains the ATE values for all 8 planets. No other parameters such as ‘greenhouse-gas’ concentrations or their partial pressures, or the actual absorbed radiation by planets (accounting for observed albedos) came even close to describing the ATE variation. Hence, the derivation of Eq. 7. Again, NTE(Ps) was derived using non-linear regression!” Before I start on the substance of this, let’s get one thing straight. One does not “derive” an equation using regression. One fits data to a presumed functional form using regression. One derives an equation by using the laws of physics, algebra, calculus, geometry, things like that. If you want to be picky, one derives theorems from axioms, but in physics a “derivation” invariably means proceeding from the axioms of physics — the laws of nature, or accepted idealized empirical formulae that themselves may or may not be derived — to a result. For contrast, you arguably derived your variant (2) of the usual $T_e$ formula — you could have provided a lot more detail, but what you provide is enough for me to see what you are doing, and since I already have a good idea of where $T_e$ comes from I can at least assess whether or not I agree with your derivation, whether you did your spherical integrals correctly, and I can identify where I do not agree, e.g. using a one-size-fits-all albedo that completely defeats the purpose of this idealized measure of the integrated absorbed power. In my opinion, of course. The sunlight directly reflected from Europa’s shiny white surface does not contribute to its surface temperature.
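[Note: a quick numerical check of the albedo sensitivity just described, a minimal sketch using only the figures quoted in this thread (0.9 from the cited publication, 0.75 from N&Z's table, 0.1 as the one-size-fits-all stand-in); this is not N&Z's code:

# Grey-body temperature scales as (1 - albedo)^(1/4), so the ratio of
# temperatures computed with two different albedos is:
def t_ratio(a_true, a_assumed):
    return ((1.0 - a_true) / (1.0 - a_assumed)) ** 0.25

print(t_ratio(0.90, 0.1))   # ~0.577: published Venus Bond albedo vs 0.1
print(t_ratio(0.75, 0.1))   # ~0.726: N&Z's 0.75 vs 0.1

With an albedo of 0.9 the grey-body temperature comes out roughly 42% lower than the one-size-fits-all value, which is the "as high as 43%" error quoted above.]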
On the other hand, you did not even justify the form of the function(s) used in your fit using physical laws, and when I point out that it contains implicit physical constants (the pressures required to make the arguments dimensionless) that cannot possibly be justified — again, in my opinion, but feel free to prove me wrong — you loudly ignore this. So please, do not assert that you have derived equation 7 or 8. It isn’t even semi-empirical, it is purely empirical and ad hoc. Next let’s think about your $T_s$ data in Table 1. You provide it, and $N_{TE}$, and just about everything in your table, to absurd precision. Do you seriously mean to assert that the Earth’s mean temperature is exactly 287.6 K? Has this temperature historically been constant? Even for the Earth, surely the best measured body in the Solar system, there is considerable argument over just what the average surface temperature is (much of it on this very blog) and it varies by at least 10K (3%) over time scales as short as a few thousand years. [Parenthetically, if your equation 7 were truly predictive, how would it predict this? Are the ice ages caused by the Earth losing atmosphere and hence surface pressure? Do they end because the pressure goes up?] [Reply] Don’t forget the other half of the equation – insolation, the distribution of which changes considerably with the changes of obliquity, precessional orientation and orbital eccentricity the earth undergoes on these timescales. – TB. It has varied by over a degree on a timescale of a mere 100-150 years. Surely “288 K” would do for the temperature, and “287 ± 2 K” would be a better descriptor still on the timescale of centuries, given that we are probably at a local high point. Or perhaps not. Perhaps you have a source that you rely on to give you more than an uncertain measure of the Earth’s temperature, one with error bars and without tenths of a degree. If we assume (reasonably) that the order of uncertainty in the Earth’s temperature is 1%, surely the order of uncertainty in all of the other planets, with the possible exception of the Moon, is an order of magnitude greater. Maybe you disagree. Maybe you can cite references to support the temperatures you give in this table, and a claim that they are known right down to that last 0.1 degree, although I can’t imagine that our fundamental sources are different for the outer moons, just about all of which are known only from one or two flybys of satellites. However, you have not given any references at all to support the data in Table 1, so I cannot assess this. Wikipedia has better referential support than this paper for its data. To pick just one more entry in your Table 1, Europa, Wikipedia indicates that it has an equatorial average temperature around 110K and a polar average temperature around 50K. You indicate an average surface temperature of 73.4K. Yet one does not have to do the integrals to see that this is inconsistent with Wikipedia’s result. A straight arithmetic average of the two is higher than this, but there is much more surface area at the equator, due to the Jacobian you so ably included in your improved integral in equation (2). Guesstimating the integral, the mean surface temperature should be closer to 90K, although this would still have a large error estimate, would it not, and there is no chance it could actually be accurate to 0.1 degree K.
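[Note: the cosine-of-latitude area weighting can be checked in a few lines. This is a toy estimate that assumes, purely for illustration, a linear fall-off from the ~110 K equatorial average to the ~50 K polar average quoted above; it is not N&Z's calculation:

import numpy as np

phi = np.linspace(0.0, np.pi / 2, 10001)          # latitude, equator to pole
T = 110.0 + (50.0 - 110.0) * phi / (np.pi / 2)    # assumed linear profile (K)
w = np.cos(phi)                                   # band area goes as cos(latitude)
print(np.trapz(T * w, phi) / np.trapz(w, phi))    # prints ~88 K

The weighting pulls the mean toward the warm equatorial bands, landing near 90 K rather than the straight average of 80 K, consistent with the guesstimate above and well above the 73.4 K in Table 1.]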
Before I or anyone can consider the goodness, or uniqueness, of your fit to your data, surely one needs to have the probable or possible sources of error accounted for and error bars included in the numbers for use in the regression program. One can get truly horrible errors fitting a set of noisy data with a single one-size-fits-all error bar (especially one that is too small, so it places too much weight in the fit on data that is actually not known particularly accurately, even more so when one is fitting a small set of data with a large set of parameters). In the meantime, as I said, fit the data with a cubic spline — it is just as meaningful. What you’ve done is no different from Roy Spencer’s “cubic fit” presented on his lower troposphere temperature curve — presented a curve that smoothly interpolates the data, sure, but that is physically unmotivated and hence meaningless except as a guide to the eye. Spencer openly acknowledges it. You’ve written a paper on it, claiming that your arbitrary fit is “derived” by virtue of roughly interpolating the data. Now let’s talk about Equation 7 itself. You yourself in figure 6 plot “potential temperature”. Potential temperature is a dimensionless quantity like the one you hope to understand in the form of $N_{TE}$ — I get it. Note well that in the case of potential temperature, because it is based on and indeed actually derived from some fundamental physics, the two numbers that appear, $P_0 = 1\,\mathrm{atm}$ and the exponent $0.285$, are both entirely physical! The one is a reference pressure that not only is relevant but sets the scale of pressure-temperature relationships for the entire atmosphere; the other is related to $\gamma$ and the atmosphere’s actual molecular composition. This is characteristic of “good physics”, or at least of plausible physics. The quantities make physical sense even before one digs into and learns to understand where they come from. For some reason you presented Equation 7, the result of your nonlinear regression fit, in a form that was not as manifestly dimensionless as potential temperature in figure 6, after claiming it as inspiration. I have helped you out there by filling in the characteristic pressures that go with your choice of exponents. These pressures are clearly absurd, are they not? Unlike $P_0$ in potential temperature, 54,000 atmospheres is a pressure that appears nowhere in the physics describing ideal gases, in physical processes that might possibly be relevant on the surface of Europa or Triton or Mars or Venus. I’ve played the “fitting nonlinear functions” game myself, for years, as part of finding critical exponents from scaling computations, and in the process I learned a thing or two. One thing I learned is that it is often possible to get more than one fit that “works”, and that the fit that works best may not be the one you are seeking, the one that makes physical sense. Often this is a matter of the error bars or lack thereof. Too-small error bars will often “constrain” the best fit away from the true trend hidden in the data. The problem is compounded when one is fitting data with multiple independent trends, such as a fast decay mixed with a slow decay (multiple exponentials). Your data clearly has such multiple trends with completely distinct physics — you misrepresent it as a single fit, but presenting it in dimensionless form clearly shows that you are really proposing two different physical processes occurring at the same time with completely different characteristic dimensions.
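[Note: the overfitting point is easy to demonstrate with a toy sketch. The numbers below are made up (they are NOT N&Z's Table 1 values), just a monotonic stand-in with a similar shape:

import numpy as np

# invented (pressure [bar], enhancement) pairs, monotonic like the real data
P = np.array([1e-12, 1e-7, 6e-3, 1.0, 92.0])
N = np.array([1.00, 1.08, 1.20, 1.60, 2.50])
x, y = np.log10(P), np.log(N)

# Five points, five coefficients: a quartic in log-pressure goes through
# every point to rounding error, yet its coefficients mean nothing physically.
quartic = np.polyfit(x, y, 4)
print(np.max(np.abs(y - np.polyval(quartic, x))))

# Four free parameters (a cubic) already track the points closely; that is
# the same parameter count as the double-exponential form criticized above.
cubic = np.polyfit(x, y, 3)
print(np.max(np.abs(y - np.polyval(cubic, x))))

Interpolation with as many knobs as points proves nothing about mechanism, which is exactly the "cubic spline" point made above.]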
I think this is as clear a signal as you will ever see that you are overfitting the information content of the data, and would do far better to just fit the larger planets on your list with a single dimensionless form, preferably after putting error estimates into all of the data in your Table 1 and using the correct Bond albedo for the planets in question, and adding references. “In summary, the tight exponential relationship between NTE and pressure is real, and the fact that it is described by a function whose coefficients cannot be easily interpreted in terms of known physical quantities does not invalidate that relationship! This is because it is a higher-order emergent relationship, which summarizes the net effect of countless atmospheric processes including the formation of clouds and cloud albedo. This relationship might not be precisely reproducible in a lab, simply because it may require a planetary scale to manifest. However, a lab experiment should be able to validate the overall shape of the curve defining the thermal enhancement effect of pressure over an airless surface. BTW, this shape is already supported by the response function of relative adiabatic heating defined by Poisson’s formula (Fig. 6 in our paper).” Actually, as I’ve pointed out very precisely above, equation 8 is just an algebraic restatement of your definition of $N_{TE}$. You’ve simply inserted an empirical heuristic fit to your data to replace the data itself. This isn’t a derivation of anything at all, it is curve fitting, which is a game with rules. Mann, Bradley and Hughes tried to play this game and broke the rules when they built the infamous Hockey Stick. McKitrick and McIntyre called them on it. I’m trying to keep you from making the same sort of mistake. You fit the data with the product of two exponentials of ratios of the surface pressure to arbitrary powers. Why? Well, exponentials are functions that are 1 when their argument is zero, so you fit two of your data points (badly, if you leave out error bars or use the actual data in your table) for free without using a fit parameter, and come damn close to a third, close enough that — lacking error bars and given a monotonic relationship — you can count it as “well fit” whatever the error really is. You are then really “fitting” five data points with four free parameters. Skeptics often quite rightly mock the warmist crowd for their global climate models with highly nonlinear behavior and enough free parameters that they can be tuned to fit past temperature data, accurate or not, as nicely as you please, and we are not surprised when those fits of past data turn out to be poor predictors of either future trends or even earlier past data (hindcast). We mock them because it is well known in the model building business that with enough free parameters and the right choice of functional shapes you can fit anything, but unless you treat error in the data with the respect it deserves and include some actual physics in the choice of functions being fit, the result is unlikely to actually capture the physics. Listen, in fact, to your own argument. There is a dazzling amount of physics involved in the processes that establish the surface temperatures on the planets in your list.
One can split the planets up into completely distinct groups — two airless planets near the sun with no surface ice; two nearly airless planets that are completely coated in high-albedo ice, one water ice, one frozen N_2, one of which is heated by a tidal process that still isn’t well understood, the other of which is hypothesized to have a greenhouse trapping of heat by the semi-transparent N_2 ice that replenishes its atmosphere. Of the four planets with substantial atmospheres, all of them have an optically thick greenhouse gas content and all of them therefore have tropospheres and stratospheres and lapse rates driven by vertical convection across the temperature differential between the surface and the tropopause. Yet somehow none of this matters? Calling it a “high-order emergent relationship” is just fancy talk for “we found a fit and have no idea what it means”, but it isn’t surprising that you can fit the data with an arbitrary form with four free parameters, especially without error bars or any criterion for judging goodness of fit. How is your fit more informative than fitting the data with a spline, or with a polynomial, or with anything else one might imagine? I’ve already pointed out that your figure 6 is precisely why one should not believe your result. In it, $P_0$ means something, and so does the exponent. There is nothing “emergent” about it, it is really a derived result, and when it turns out to approximately describe actual atmospheres we gain understanding from it. What does the 54,000 bar in your fit mean? What does the 202 bar in your fit mean? What does the exponent 0.065 mean? You cannot answer any of these questions because you have no idea. How could you? They are all completely irrelevant to the pressures present on the planets in question. They have precisely as much meaning as the arbitrary coefficients of a cubic spline or any other interpolating function or approximate fit function that could be used to approximate the data, quite possibly as well or better than the fit that you found, if you actually add in error bars. [Reply] In fact Ned has addressed your concerns regarding your oft-repeated assertion; please review his recent reply to you again. I’d also like you to answer my question which I’ll repost here: “Please could Robert explain the physical basis of the imaginary number ‘i’ (or ‘j’ in engineering), which when multiplied by itself gives minus 1, and which is used extensively in electronics design and control engineering? Presumably any competent Duke physicist at the time of the invention of this imaginary quantity which defies the laws of mathematics would have rejected it out of hand for being “absurd nonsense” and therefore of no possible use?” – Thanks – TB. So far the total information content of your paper is: * We do a better job of defining/computing a baseline greybody temperature $T_{gb}$ for the planets. Yes and no. Yes to the integral, no to ignoring the Bond albedo, especially in the case of Europa and Triton where there is no conceivable justification for doing so. * We define a dimensionless ratio between empirical $T_s$ and $T_{gb}$. We tabulate this computed ratio for the data, forming an empirical $N_{TE}$ dataset with eight objects. Sure. * We heuristically fit a four-parameter functional form. The fit works. It is unique. It must be meaningful. Lacking error bars on your data, you cannot possibly assert that it is unique.
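[Note: for readers following the statistics, this is how error bars would enter the assessment. A minimal sketch with invented placeholder numbers (obs, mod and err below are NOT N&Z's data):

import numpy as np

obs = np.array([1.00, 1.20, 1.60, 1.75, 2.50])   # "measured" N_TE (invented)
mod = np.array([1.00, 1.18, 1.63, 1.74, 2.48])   # model values at the same pressures
err = np.array([0.03, 0.05, 0.08, 0.09, 0.12])   # assumed 1-sigma uncertainties

chi2 = np.sum(((obs - mod) / err) ** 2)
dof = len(obs) - 4          # five data points minus four fit parameters
print(chi2, chi2 / dof)

A reduced chi-squared near 1 is what a sane fit to noisy data looks like; a fit that threads every point exactly (chi-squared near 0) is the "impossibly good" symptom Robert raises later in this thread, suggesting data chosen or tuned to the curve rather than fit with honest uncertainties.]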
There could be dozens of functional forms, some of them with fewer free parameters, that produce comparable Pearson’s $\chi^2$ for the fit once you add in error bars. I rather expect that there will be, especially if you correctly treat the Bond albedo for planets with almost no atmosphere and no exposed regolith, which reflect away over half their incident insolation without being heated by it. The fit you obtain is not meaningful. If you disagree, give me a physical argument for the 54,000 bar, the 202 bar, and the exponent of 0.065. The only parameter of your four-parameter fit that is plausible is the 0.385, although even that number would need to be connected to some actual physics in order to obtain meaning. * The real meaning is that only surface pressure explains surface temperature, because we were able to fit a functional form to $T_s(P_s)$. Excuse me? I can fit any set of data pairs with any sufficiently large basis. If the data is monotonic I can almost certainly fit it with fewer free parameters than there are data points, especially if I completely ignore the error estimates for the data points! Lacking the error bars, you cannot even compute $R^2$ and plausibly reject the null hypothesis of no trended correlation at all! I’m not suggesting that this is reasonable for your particular data set, only that you are far away from presenting a plausible argument for uniqueness or correlation that implies causality. In two of the four planets in your list, it’s rather likely the case that surface temperature implies surface pressure, not the other way around! The chemical equilibrium pressure of N_2 over a thick layer of N_2 ice, or O_2 over water ice, is far more likely to be the self-consistent result of surface temperature, not its cause. In the end, you are left where you started — that there is a monotonic trend to the data that you cannot explain or derive, and because of flaws in your statistical analysis you cannot even resolve differences between competing explanations, including the simplest one: that the last four planets have surface temperatures dominated by the greenhouse effect and their albedo, the first two are greybodies to a decent approximation (that somehow turned into 1.000 to four presumed significant digits in your Table 1), and two are special cases described by a completely different physics than the others (dominated by the incorrectly used albedo), and to some extent different even from each other. Nothing in your analysis rejects this as a null hypothesis. You cannot even assert that it does without including an error analysis in your data and fit. To conclude, you have two choices. You can ignore my objections above and plow ahead with your paper as is. You might get it past a referee, although I somewhat doubt it. You can in the process continue to get all sorts of uncritical positive feedback on it on the pages of this blog and have it trumpeted as “proof” that there is what, no actual GHE? That gravity alone heats atmospheres? I’ve heard all sorts of absurd punchlines bandied about, and your result can be used to support any or all of them if one ignores the statistical and methodological flaws. Or, you can fix your paper. Include references, for example. Use the correct Bond albedos. Here’s a small challenge for you. Apply your formula to Callisto, to Ganymede, to other planetary bodies. Callisto is an excellent case in point.
It has an albedo almost twice that of the Moon. It is the warmest of Jupiter’s moons — warmer in particular than Europa, for good reason given the difference in their albedos. It has an atmosphere with a surface pressure around 0.75 microPa; it will fit right in there on your table. It puts the immediate lie to any assertion that your fit is either predictive or universal, as its surface pressure is lower than Europa’s and its surface temperature is higher than Europa’s, and if you use your “universal” $T_{gb}$ formula for it, the lower albedo will further raise $N_{TE}$ for it relative to Europa. Your nice monotonic curve won’t be monotonic any more, and you can see some of the consequences of ignoring albedo, atmospheric composition (Callisto’s is mostly CO_2, hmmm), error estimates, and using cherry-picked data to increase the “miraculous” impact of your result. I honestly hope that you fix your paper. There may well be something worth reporting in there in the end, once you stop trying to prove a specific thing and start letting the data speak. I actually rather like what you are trying to do with $T_{gb}$, but if you want to actually improve this you can’t just leave physics out at will, especially not when looking only at the temperature of moons tells you that your assumptions are incorrect even before you get to actual planets with actual atmospheres. Also, if you do indeed do your statistical fits correctly, you might find something useful — a less “miraculous” fit that is still good given the error bars and that has characteristic pressures and exponents with some meaning. Best regards, rgb 82. Robert Brown says: Oops, Tallbloke, please insert my missing $. Sorry. rgb 83. wayne says: Robert Brown, you have not read N&Z’s paper correctly. N&Z’s Tgb has nothing to do with the actual atmospheric Bond albedo. Tgb is defined in the paper using the albedo and emissivity of that planet or body with NO atmosphere… no ice… no oceans… no clouds, possibly no rotation, though in the definition that matters little. You start off incorrect in your point two from the very beginning. Now I’ll read the rest of your lengthy comment. 84. Crom says: Tallbloke, you may want to find a better analogy. Asking about the physical basis of ‘i’ is the same as asking what the physical basis of the number 42 is. It is just a number unless it is being used in a specific context to represent something physical. In this case, as Dr. Brown has repeatedly noted, the construction of the N&Z equations puts the regression coefficients in a context that gives them a physical meaning (they have units of pressure). Also, I would suggest to you that ‘i’ does not defy the laws of mathematics at all. It is just another abstract mathematical concept that is useful in solving some physically meaningful problems. [Reply] Know of any other ‘numbers’ which when multiplied by themselves give a negative number? 85. Ned Nikolov says: Dear Dr. Brown: You have asked legitimate questions and we plan to address them all in our Reply 2. This would be better than addressing them here, because readers can then link those to the bigger picture discussed in Part 2. BTW, some of the answers have already been provided in the papers and in our blog posts on this and other threads, but we will elaborate on those once again since we understand that, being a new paradigm, this theory has details that can easily escape one’s attention on a first read … For example, to your question: “… if your equation 7 were truly predictive, how would it predict this?
Are the ice ages caused by the Earth losing atmosphere and hence surface pressure? Do they end because the pressure goes up?”. The answer is partially contained in Section 5 of our first paper, and specifically in Fig. 10. Ice ages are NOT caused by pressure changes; they are caused by orbital variations (the so-called Milankovitch cycles). Earth’s atmospheric pressure has been relatively stable for the past 1.8M years. Pressure changes typically occur (and control global temperature) on a time scale of millions to tens of millions of years … 86. Ned Nikolov says: wayne says (February 17, 2012 at 12:01 am): Robert Brown, you have not read N&Z’s paper correctly. N&Z’s Tgb has nothing to do with the actual atmospheric Bond albedo. Tgb is defined in the paper using the albedo and emissivity of that planet or body with NO atmosphere… no ice… no oceans… no clouds, possibly no rotation, though in the definition that matters little. You start off incorrect in your point two from the very beginning. Thank you, Wayne! You made quite a correct observation! … As I mentioned previously, a lot of details are not being picked up (understood) by many bloggers, including physicists, on the first read. That’s because people always look through the glasses they are used to wearing, while a new paradigm requires a new pair of glasses … 87. Bob_FJ says: N&Z propose a change of paradigm in “climate science”. One of my favorite paradigm shifts was proposed by Alfred Wegener concerning his unproven “continental drift” (tectonics), for which he was scorned by his contemporaries, only to be accepted relatively recently. A controversial blogger “Myrrh” at WUWT has cited extensive links that convincingly explain WHY people living in areas of low exposure to sunlight by latitude have evolved to have pale skins, whilst being descendants of black peoples in Africa. The evidence is strong that vitamin D is multiply essential for health, and that much more D is generated by UV in pale skin. However, this flies in the face of the medical and governmental church, who collectively insist that we should not expose our skin to sunshine. See Myrrh’s post: http://wattsupwiththat.com/2012/02/03/monckton-responds-to-skeptical-science/#comment-895283 And my following response, but there is a lot of reading in the links which may not be time-effective for N&Z to follow. 88. Robert Brown says: Thank you, Wayne! You made quite a correct observation! … As I mentioned previously, a lot of details are not being picked up (understood) by many bloggers, including physicists, on the first read. That’s because people always look through the glasses they are used to wearing, while a new paradigm requires a new pair of glasses … Dear Dr. Nikolov, I assure you that I have not missed this point. However, it is completely irrelevant. I have just completed applying your hypothesis, with your own numbers for T_gb per object, to the actual commonly accepted numbers for T_s for the planets in question. Curiously, with the exception of the last three points, not a single planet lies on your curve. I have also applied your formula, with the T_gb you supply for objects orbiting Jupiter (e.g. Europa), to Io, Ganymede, and Callisto, all of which have even more atmosphere than Europa and all of which are considerably warmer — but not in the right direction — Io has the greatest surface pressure by three orders of magnitude but Callisto has the greatest mean temperature.
None of them — including Europa, whose mean temperature you underestimate by over 30% — lies remotely near your curve using your own T_gb. However, the warmer temperature of Callisto is instantly understandable given its low albedo. This forces me to ask the question — exactly how did you come by the numbers in your Table 1 for T_s for the planetary bodies in question? When I look at the goodness of the fit to your model, it appears to me to be impossibly good. Literally impossibly. If one ascribes even modest error bars to the T_s and P_s in question, your Table 1 would seem to put each and every point dead on the curve. Surely you realize that this is extremely unlikely in any fit involving real-world data. You do not provide any references for the numbers in your Table 1 so I cannot check them against the references you actually used, but they are in significant disagreement with the numbers that I found in every instance but Titan, the Earth, and Venus. One critical aspect of science is reproducibility. I am endeavoring to reproduce your results, but find myself unable to. Please help me by explaining the sources of your data and how you arrived at the numbers in your Table 1. I’d be happy to provide the table of numbers I used, a description of their provenance, and the octave/matlab code I used to perform the comparison, or if you would prefer I can just publish the graph itself on this blog, but before I do I would really like to see where your numbers come from and how it happens that they lie so perfectly on your curve. For example, in your Table 1 you find that Mercury and the Moon both exactly have $N_{TE} = 1.000$ — to four digits, presumably. This is all by itself simply not the case. Your estimate of Mercury’s temperature is egregiously low, and its albedo is not (according to most published work) equal to that of the Moon. Neither of them has a significant atmosphere, so one would expect their mean temperature to be determined by their actual albedo according to your own reasoning! Yet somehow they end up having exactly the right surface temperature to have the same $N_{TE}$ in spite of the fact that, physically, this is quite impossible by your own arguments. How did that work out, exactly? rgb 89. B_Happy says: I have also asked about the Galilean satellites, and have as yet received no reply to my question regarding the temperatures. Repeating Robert Brown’s point – where did the experimental data come from? As a further complication, even calculating S0 for these satellites is problematic, unless allowance is made for a) tidal heating, b) radiation from Jupiter itself, and c) the time the satellites spend in Jupiter’s shadow – were any of these allowed for? 90. Anything is possible says: Robert Brown says: February 19, 2012 at 10:18 pm Yup. I have a similar problem with this aspect of the theory. My thinking is that the formula shouldn’t work on planets with tenuous atmospheres for the very simple reason that the ideal gas law appears to break down under low pressures. The clue is in the way that the temperature/height relationship, as defined by a stable adiabatic lapse rate, breaks down at the top of the troposphere (the tropopause) on every planet with a “mature” atmosphere. It happens on Venus, it happens on Earth, it happens on Titan, and it also appears (according to wiki) to happen on the gas giants – Jupiter, Saturn, Uranus and Neptune.
Even more significantly perhaps, it seems to happen at a similar atmospheric pressure (200-250 mb) on ALL these planets. I’d be very interested to hear the thoughts of all you professional physicists on this… 91. Robert Brown says: B_Happy says: February 20, 2012 at 12:15 am I have also asked about the Galilean satellites, and have as yet received no reply to my question regarding the temperatures. Repeating Robert Brown’s point – where did the experimental data come from? As a further complication, even calculating S0 for these satellites is problematic, unless allowance is made for a) tidal heating, b) radiation from Jupiter itself, and c) the time the satellites spend in Jupiter’s shadow – were any of these allowed for? Good points B. However, those corrections seem as though they are in the wrong direction given that Io is very close to Jupiter, covered with a relatively dense atmosphere (that is still quite close to hard vacuum, of course), and yet cooler than Callisto, which is so distant that tidal heating is surely ignorable. Then there is the conceptual difficulty of pretending that Europa — in an atmosphere that is the most tenuous of the four — has a surface temperature (given in the table as 73.4K but given on its Wikipedia page as 102K) that is completely unaffected by the fact that 2/3 of its insolation is reflected back to space without heating anything at all. Just FYI, Wikipedia has the following data:

Moon       T_s    P_s
Callisto   134 K  7.5 pbar
Io         110 K  300-3000 pbar
Ganymede   110 K  2-25 pbar
Europa     102 K  1 pbar

Given a common T_gb, this data alone completely confounds the “miracle” of equation 7. Io should be the warmest of the planets by far, and all of them should be much, much colder (commensurate with N&Z’s 73.4K for Europa) in order to fall close to their curve. Wikipedia actually provides references for their numbers. Perhaps Nikolov and Zeller would be so good as to do the same? I’d like an idea of the uncertainties of those numbers as well — if the error bars are as large as it seems they really have to be (given the disparity between their numbers and the published/accepted numbers) then one can actually compute a p-value or chi-squared for their fit and see if it is reasonable. rgb [Reply] N&Z give full information on how they calculate the T_gb for airless bodies in their last paper. FYI Wikipedia has removed the reference to Mercury’s average surface temperature calculated from the ‘classic’ S-B method, since we pointed out that it was physically impossible for it to be higher than the simple average of the equatorial max and min empirical data. 92. Robert Brown says: Even more significantly perhaps, it seems to happen at a similar atmospheric pressure (200-250 mb) on ALL these planets. I’d be very interested to hear the thoughts of all you professional physicists on this… My own thoughts are that the usual greenhouse effect is determined by the height (and, via the DALR, the temperature) at which the atmosphere becomes optically thin to radiation from greenhouse gases. It isn’t unreasonable that many gases would become optically thin at similar pressures. This also determines the tropopause — below the tropopause there is significant vertical convection from the differential heating at the surface and cooling at the top of the troposphere.
So for the last four planets they attempt to fit — Mars through Venus — they all have a troposphere and a stratosphere and hence have a surface temperature that is related to a DALR, although I personally don’t find them lying on a single N_TE curve (Mars is well off) with N&Z’s T_gb. I haven’t tried computing N_TE with a physically plausible T_gb that uses the actual albedo of the planets involved, but I’m a bit doubtful that it will produce a particularly compelling fit. That’s on my agenda of future work to do with my little octave program. [Reply] Both N&Z and Hans Jelbring agree Mars’ atmosphere is too thin to show a pressure effect on surface temperature. Using actual albedo rather than T_gb won’t work in N&Z’s equations. Willis didn’t understand this either. The other moons, as you note, have very tenuous atmospheres indeed — atmospheres that are already far thinner than the pressure at the tropopause on most planets — and IIRC only Triton has a troposphere at all, and it lacks a stratosphere. There isn’t any reason in the world to think that the physics that dominates their mean surface temperature in any way resembles the physics that dominates the surface temperature of Venus. For one thing, even those moons with greenhouse gases are optically thin and unsaturated and have very little greenhouse effect compared to a planet with an optically thick, saturated GHE. And as I have noted and do not wish forgotten, N&Z’s equation 7 has completely unphysical reference pressures in it. How the reference pressure of 54,000 bar or 202 bar can arise in any sane way from the consideration of physical principles very much remains to be demonstrated. Personally, I reject it out of hand as evidence that the entire result is wrong unless and until most rigorously proven otherwise. [Reply] I notice you still haven’t answered the question I last asked five days ago. I’ll be snipping any further repetition of your argument about ‘reference pressure’ since you are unwilling to engage in a discussion of its relevance or applicability. Perhaps it is worthwhile to go ahead and show the results of applying N&Z’s own T_gb per planet to independently obtained estimates of planetary T_s to form N_TE. http://www.phy.duke.edu/~rgb/loglong-nandz.jpg It’s a log-log plot so that one can see the data (spread out over many orders of magnitude in pressure, otherwise). The leftmost x is the Moon, then Mercury; the four circles are the Jovian moons; the remaining x’s are Triton, Mars, Earth, Titan and Venus. Note that I do not pretend that my numbers are certain, only that I can tell you where, and how, I got them, and that I think that they are pretty good ones, unlikely to be more than maybe 10% off (less in the case of the Earth, Mars and Venus, which all have more or less permanent weather satellites and good data; more in the case of planets known only from single flybys and very-long-range Hubble studies). In many cases the error ranges I did find were easily 10% for the pressure alone (probably measured indirectly from pressure broadening of spectral lines, something with fairly large uncertainties given a signal from the entire atmosphere and not just the surface). The +’s are N&Z’s own data from Table 1 in their paper/poster. The curve absolutely precisely goes through the +’s — even a deviation of a few percent in either T_s or P_s would be enough to move them well off, as indeed occurs for all of the points but three in my refit. How can this be?
The + signs are presumably experimental data with uncertainties! There is a “miracle” here indeed! rgb [Reply] Did you use surface temperatures calculated with the Stefan-Boltzmann method N&Z have comprehensively shown to be wrong with empirical data for our Moon? 93. B_Happy says: “Both N&Z and Hans Jelbring agree Mars’ atmosphere is too thin to show a pressure effect on surface temperature. Using actual albedo rather than T_gb won’t work in N&Z’s equations. Willis didn’t understand this either.” Eh????? If Mars’s atmosphere is too thin to show a pressure effect, then why is it in their training set, and why do N&Z quote it as a success? Are you claiming that their NTE values work, but actually have nothing to do with the pressure? And if the Martian atmosphere is too thin, why are Europa and Triton in there? [Reply] Because although the atmosphere is too thin to warm the surface, their equations still work. “Are you claiming…” No, I’m not. “Did you use surface temperatures calculated with the Stefan-Boltzmann method N&Z have comprehensively shown to be wrong with empirical data for our Moon?” I would say they have asserted it to be wrong, not that they have shown it. Do you really think the temperature drops to 3K when the sun is not shining? [Reply] The N&Z method assumes a zero heat capacity for the surface, but gets the average surface temperature right to within 6K for the Moon. The ‘classic S-B method’ gets the Moon’s average surface temperature wrong (too warm) by over 100K. 94. Crom says: I’ll be snipping any further repetition of your argument about ‘reference pressure’ since you are unwilling to engage in a discussion of its relevance or applicability. Tallbloke, did you miss Dr. Brown’s rather extensive comments regarding dimensional analysis and Fermi estimation in this thread? Or did they just go over your head? [Reply] Robert has still not responded to my simple question, which is logically prior to his more rarefied analysis. I notice you haven’t responded to the question I asked you when you interceded on his behalf either. Neither of you will be posting more of the same repetitive verbiage here until they have been properly responded to. Ignoring pertinent questions and instead repeating your own claims is merely a politician’s rhetorical technique which has no place in scientific discourse. 95. Nick Stokes says: But could we have a reference for N&Z’s data? Several people have asked for it. The success of the theory rests on their ability to match temperature and pressure data, but where does it come from? [Reply] Thanks Nick, noted. I hope N&Z will address this in their next paper. Meanwhile, Robert has offered, and I have taken him up on his offer. 96. Stephen Wilde says: I said this elsewhere but I think it valid here: Neither the Earth nor the Earth’s atmosphere are black bodies. To give black body status to Earth you have to take a point beyond the atmosphere as the ‘surface’ and only then apply SB. Treating Earth and its atmosphere as two black bodies separated by a vacuum is wholly inappropriate because the Earth and its atmosphere are a single unit interacting primarily via non-radiative processes, which is where the Gas Laws come in. So, for planetary bodies separated by a vacuum, apply SB but only at a point outside any atmospheres, where radiative processes do indeed dominate exclusively. For bodies not separated by a vacuum, such as a planet and its atmosphere, apply the Gas Laws because non-radiative processes dominate.
AGW has applied radiative physics to a non-radiative scenario and the outcome is garbage. 97. Stephen Wilde says: Applying the SB equations at the contact point between a planet and its atmosphere is no better than applying them at the junction between the Earth’s mantle and the crust. The SB equations are only of relevance at the junction of atmosphere and space. They can never predict the temperature at the contact point between two differing materials within a planetary system. 98. Robert Brown says: Tallbloke, I think that you are really grasping at straws if you are trying to assert that the use of the complex unit “i” in mathematics or physics — or for that matter the use of general geometric division algebras of arbitrary grade, since complex numbers are just a step on an infinite series of geometries and number systems — has anything whatsoever to do with Fermi estimation and the appearance of a truly absurd reference pressure in a fit. [snip] [Reply] I didn’t. I asked you to explain the physical basis for it. Can you do that? 99. Robert Brown says: Treating Earth and its atmosphere as two black bodies separated by a vacuum is wholly inappropriate because the Earth and its atmosphere are a single unit interacting primarily via non-radiative processes, which is where the Gas Laws come in. I completely agree. But it is also completely off topic to the discussion at hand. Radiative balance only makes sense beyond the atmosphere, or if one wants to be very picky, one should really consider a sphere around e.g. the Earth at (say) $2R_E$ (well beyond the atmosphere) and compute the simple flux conservation equation: $\frac{dU}{dt} + \int \vec{S} \cdot \hat{n}\, dA = 0$ (or in words, the integrated outgoing flux of the Poynting vector equals the rate of decrease of the total internal energy inside) averaged over a sufficient time and assuming that one can neglect work and stored energy, at least on average. It isn’t even this simple — the notion of “average” has to be coarse-grained, we cannot really track the ultimate disposition of all of the retained energy when the one doesn’t balance the other — but in general this sort of equation describes the energy flux in and out of the Earth via radiation. rgb 100. Robert Brown says: [Reply] The N&Z method assumes a zero heat capacity for the surface, but gets the average surface temperature right to within 6K for the Moon. The ‘classic S-B method’ gets the Moon’s average surface temperature wrong (too warm) by over 100K. 6K is over 2% error. Yet $N_{TE} = 1.000$ to four digits in their Table 1 data for both the Moon and for Mercury. Curious, don’t you think? rgb [Reply] I find it a good deal more curious that the climate science mainstream, and apparently, Duke physicists, still defend the classic S-B planetary equation when it has been demonstrated to be at odds with empirical data of planetary surfaces not by a couple of percent, but by over 50% (!?) 101. rgbatduke says: Tallbloke, I think that you are really grasping at straws if you are trying to assert that the use of the complex unit “i” in mathematics or physics — or for that matter the use of general geometric division algebras of arbitrary grade, since complex numbers are just a step on an infinite series of geometries and number systems — has anything whatsoever to do with Fermi estimation and the appearance of a truly absurd reference pressure in a fit. [snip] [Reply] I didn’t. I asked you to explain the physical basis for it. Can you do that? OK, I’m done. Best of luck and all that.
rgb [Reply] Robert has been defeated by an imaginary number. I never imagined that would happen. It’s easy to see that N&Z could recast their equations to include a signifying algebraic letter to represent their multiplication factor for their equation 7, just as engineers and electricians use ‘j’ in their equations for real world solutions that really work, even though they, and Robert, are unable to explain the physical basis for it. Robert’s complaint about “absurd reference pressure” is therefore itself absurd. Of course, this only gets N&Z’s equation to the level of a heuristic, so this small victory doesn’t close the issue of the underlying physical basis. In the meantime, we’ll await N&Z’s log-log plot so we can compare it with Robert’s, and further ahead, more planetary data their heuristic can be tested against. 102. tallbloke says: Ned Nikolov says: The actual temperature profile in Earth’s atmosphere is more complex because of differential absorption of spectral solar radiation by air constituents with increasing altitude. The temperature rise in the stratosphere is due to an increased absorption of UV radiation by ozone (oxygen) molecules with height. Higher levels in the stratosphere absorb MUCH more energetic UV light than lower levels. The pressure effect in terms of relative thermal enhancement is still there and pressure falls with height, but the UV absorption by the higher levels is so much more than that at lower levels that the lapse rate becomes actually inverted. This is similar to the situation we have with Earth and Titan. The NTE factor for Titan is larger than that for Earth (due to higher pressure on Titan), but Titan’s surface is much colder than Earth because it receives/absorbs much less solar radiation … The temperature increase with height in the stratosphere has NOTHING to do with the proposed effect of GH gases to slow down or reduce infrared cooling to space. There is no such reduction of cooling! Instead, there is an increased absorption of UV radiation with height. Reported temperatures of up to 2,500C in the thermosphere reflect a bit of a confusion, because these are temperatures (energy states) of INDIVIDUAL molecules, not for the gas as a whole. For comparison, the temperatures typically measured and reported in the troposphere refer to the energy state of the entire gas volume. The latter are palpable temperatures as opposed to individual-molecule temperatures. Due to an extremely low air density in the thermosphere, the palpable temperature there is quite low! In other words, if you stick a normal thermometer into the thermosphere, it will measure a temperature that is WAY lower than 2,500C … see this Wikipedia page: http://en.wikipedia.org/wiki/Thermosphere Palpable temperatures are basically not compatible with temperatures reflecting the energy state of individual molecules. So, comparing thermospheric temperatures with tropospheric temperatures is a bit like comparing apples and oranges … Not many people realize that fact! - Ned 103. Stephen Wilde says: Ned makes a good point about the difficulty of comparing temperatures at different levels. That is what makes it virtually impossible to demonstrate with our current knowledge and sensing systems that the energy flow from ground to space for a given planet with an atmosphere does actually match the rate of flow that results from the dry adiabatic lapse rate.
Logic, however, suggests that it must be so if the atmosphere is to be retained at all or kept in gaseous form for billions of years. I agree completely that from tropopause upward there are different thermal responses to solar irradiation at different levels. Indeed that is what alters the vertical temperature profile from above so as to affect the air circulation patterns below the tropopause whilst, at the same time, changes in sea surface temperatures are trying to achieve the same effect from the bottom up. Climate change is primarily a consequence of the ever changing balance between the top down solar and bottom up oceanic influences on the rate of energy flow through the system in place of any change in the energy content of the system. 104. Crom says: Tallbloke, I find it curious that people who disagree with you are required to answer your seemingly arbitrary questions under penalty of snip. Curious. To answer your question to me; 2i would be another number that, when multiplied by itself, would result in a negative number. I suspect that you won’t find that answer satisfactory. I’m not really sure what you’re getting at, though. I wasn’t even certain that your question was serious. And to be clear, I wasn’t trying to “intercede” on Dr. Brown’s behalf. I was hoping that you would reconsider your question and perhaps find a better way to express it. As stated, your question to him does not make sense. Mathematics is abstract and none of it has any physical meaning unless it is being applied to a physical system. Since you did not provide any context (e.g. a particular physics formula), of course ‘i’ has no physical meaning. How could it? [Reply] When I find time, I’ll make a post out of it so it can be presented more completely, and discussed more thoroughly. Robert misrepresented what I was asking for, and used that misrepresentation as the platform for (yet another) lengthy rant about N&Z. Nearly all of it was repeating earlier comments he made. I had already stated (twice) my intention to curtail further rants which didn’t address my question. Robert chose to answer a different question of his own invention and then rant. That’s why I snipped it early and put him straight about what I was asking for. 105. tallbloke says: Joel Shore says: Robert Brown is someone who you ought to have been able to keep on your side if you were at all reasonable given that he seems strongly pre-disposed toward a skeptical position on AGW. Unfortunately for you, however, he also knows enough physics not to believe silly pseudoscientific nonsense like you and Nikolov and Zeller are peddling. By the way, you really seem to have no real clue about the science that you are attacking, which means you spend most of your time attacking strawmen. [Reply] Hi Joel, You set ‘em up, I knock ‘em down. So is that your considered opinion on the vertical temperature profile of the troposphere? That N&Z’s idea that the main cause of it is the Sun’s energy interacting with the gradient in air pressure caused by gravity, and the consequent higher near surface density with its higher heat capacity, is “silly pseudoscientific nonsense”? It’s very noticeable that you won’t confirm or deny this basic point. I think you are being evasive and unresponsive. Neither of these traits is conducive to proper scientific discourse, so answer the question please. For myself, I think the ocean has a lot to do with the reason surface T (and consequently marine surface air temp) is what it is.
This is because the Sun heats it faster with shortwave radiation which penetrates it than it can cool overnight by evaporation, convection, conduction and long wave radiation (which has a tough time escaping). That is, until it gets up to a temperature where those processes removing heat from it (and all of them do on the average) can work at a rate which sets a rough equilibrium. That temperature seems to be around 275K judging by the bulk of the ocean. This also depends on near surface air temperature to a small extent (but only small, since on average the ocean is warmer than the air, and ‘back radiation’ can’t penetrate the surface anyway) but I think N&Z are right in the wider sense that if the mass of the atmosphere wasn’t exerting pressure on it, the ocean would have boiled off into space. So I’ll continue to provide a venue where the details and premises of their theory, and its strengths and weaknesses, can be calmly discussed without being trash talked by you and Willis, and gish-galloped into the ground like you and he and Robert did at WUWT. As for the quality of Robert’s science, the news that the laws of thermodynamics have been considered and defined in terms of energy rather than heat since the 1880′s doesn’t seem to have reached Duke yet. He is knowledgeable in some specialist areas, but at the end of the day he’s just another person with a false sense of infallibility and a propensity to talk (much) more than listen. He should himself have paid more attention to the Feynman lecture he berated N&Z with IMO. I don’t see much in the way of Feynmannian humility in Robert, or you, or Willis for that matter. All of you need to read the Loschmidt thread. 106. wayne says: “[Reply] I find it a good deal more curious that the climate science mainstream, and apparently, Duke physicists, still defend the classic S-B planetary equation when it has been demonstrated to be at odds with empirical data of planetary surfaces not by a couple of percent, but by over 50% (!?)” I double agree Roger. I am shocked too at the general attitudes of men of science that should be inquisitive. No wonder science is in the shape it is in. I never heard Ned or Karl claim that those four best-determined parameters were set in stone. There is one point where I do agree with Dr. Brown, at some point in the future the physical process needs to be found, explained, and those parameters take on proper form in the units. Heck, the entire equation may even end up in a different form for what anyone knows at this point, but that curve stays. It is too consistent, well formed, and smooth to not drive the future investigations to find out why it exists. 107. Stephen Wilde says: “empirical data of planetary surfaces not by a couple of percent, but by over 50% (!?)” Well, you would get that if you define the SB ‘surface’ as being within the system beneath an atmosphere and then fail to apply the Gas Laws. Might as well apply it between the crust and the mantle !!! 108. Tenuc says: “Robert has been defeated by an imaginary number. I never imagined that would happen.” Nor me! Being a physicist he should be used to dealing with the imaginary stuff which has been invented to protect the ‘standard model’ and has led us to today’s brand of unphysical physics. No wonder climate science can’t understand what is actually going on when the underpinnings are collapsing. Time to go back to mechanical explanations for what we see. 109. j.j.m.gommers says: About the impact of rotation on an airless planet.
I did numerical calculations for the entire equator for two cases: a. non-rotating b. infinitely fast rotating a. Tmean(GB) = 0.5 Te b. Tmean(GB) = Te Conclusion: temperatures converge with rotation, as postulated by A. Smith (2008). When I looked at the results it made sense to me; half of the surface is not used in case a. I made a brief post with calculated temperatures and a brief explanation and mailed it to WUWT. If there is interest in an extensive post with more details I will submit it. 110. tallbloke says: Willis Eschenbach says: April 26, 2012 at 2:21 am Lucy, you never did understand the problems I exposed in Nikolov and Zeller’s work at “The Mystery of Equation 8“. In fact, in that thread you said: I get the feeling that there are a number who can see Willis’ limitations who are no longer coming here to post. … to which another poster replied about why some people, including Nikolov and Zeller, were no longer posting on that thread Yes, their goose has been well and truly cooked by Willis’s article, their fox has been shot. Anyone with a basic knowledge of science, or in this case, just basic mathematics, is aware that when the number of ‘fudge factors’ exceeds the number of unknowns then any ridiculous proposition can be formalised. It isn’t really a ‘Miracle’. Well done Willis – that’s what I call a game-changer. Lucy, have you ever thought that you and Tallbloke do harm to the sceptic cause by promoting nonsense? Indeed, the poster was right, you do harm … I note you haven’t attempted to reply at WUWT to their rebuttal here of your serial maths errors in your vicious and vacuous attack piece: In that post, you also spoke highly of Hans Jelbring and his cockamamie hypothesis that you can get ongoing energy from gravity, a hypothesis that I discussed in Perpetuum Mobile, and that Dr. Robert Brown totally blew out of the water with a formal proof in Refutation of Stable Thermal Equilibrium Lapse Rates. Jelbring’s hypothesis was obviously and glaringly wrong. But you, you thought Jelbring’s hypothesis was good, solid science. It rests on which side you take in the debate over the Loschmidt paradox, still unresolved after 120 years. I doubt you have the finesse to understand such nuanced issues in the history of science. Heck, even Nikolov and Zeller wouldn’t answer my questions. They refused to reply, to defend their work, or to even discuss their work, they ran like vampires at sunrise from the huge problems I pointed out in their work … and now you want me to listen to you explain their brilliant science? Really? They didn’t get the right of reply at WUWT, so they demolished your disgusting ad hominem attacks and appalling maths errors on this thread instead. Likewise, Lucy has responded here: http://tallbloke.wordpress.com/2012/04/24/the-connection-to-evolution-is-a-culmination-of-this-work-dtu-director-on-svensmarks-new-paper/#comment-24093 You, Willis, are a disgrace to the principle of fairly and courteously conducted scientific debate. 111. ozzieostrich says: Tallbloke, I wonder if you agree with the proposal that maximum radiative transfer of energy between non contiguous bodies occurs in a vacuum. If you agree with this, it should take you about ten seconds to realise that interposing anything at all – CO2, pixie dust, whatever – cannot possibly result in anything other than a drop in received energy, and hence a drop in temperature in an already cooling body such as the Earth. Anyone who calculates the Earth to be warmer than it is, is a fool.
The Earth probably (nobody was there at the time) had a surface temperature in excess of 5000K at the time of its creation. The surface has demonstrably cooled to the present time. Has it stopped cooling yet? Highly unlikely, given that most of the Earth by mass is still molten. Sitting in a vacuum, receiving insufficient insolation to hold the temperature any higher than it is now, the Earth should continue to cool. So, a rise in the Earth’s store of energy may be caused temporarily by the mechanism which results in the creation of CO2 – oxidation of carbon. This is radiated away. I note that many people seem to be enthralled by analogies – so here’s one. Heat a more or less spherical chunk of steel up to white heat. Don’t record the initial temperature. Wait until it has cooled a fair bit. Don’t record the length of time it has been cooling. Now calculate the temperature using SB or any other equation you like. Measure the actual temperature. Explain the difference between the calculated and “real” temperature using words like “back radiation”, “forcings”, “sensitivity”, “radiative transfer”, or any buzz words that mean whatever you want them to mean. I think I am right. I will change my views if I am wrong. Nobody seems to be able to discuss my initial understanding about radiative transfer of energy. Thanks, Mike Flynn. 112. tallbloke says: Hi Mike, and welcome. “Interposing anything at all – CO2, pixie dust, whatever, cannot possibly result in anything other than a drop in received energy,” CO2 is largely transparent to incoming wavelengths from the Sun, but absorbs at some of the longer outgoing wavelengths. The cloud albedo is a much bigger blocker of incoming sunlight than the absorption direct into the atmosphere. “Sitting in a vacuum, receiving insufficient insolation to hold the temperature any higher than it is now, the Earth should continue to cool.” So far as I can tell, it receives just enough radiation to hold a steady surface temperature. It is thought that only around 0.1W/m^2 is escaping from under the crust on land. I suspect it loses a bit more than that into the ocean though. 113. ozzieostrich says: Tallbloke, Thank you for your response. What I want to know is whether there is any known material in the universe which allows transmission of EMR better than a vacuum. I gather from your answer there is not, but you haven’t specifically answered me. As far as I am aware, there is no such thing as a one way insulator. In any case, it matters not whether the wavelengths are long, short, or in between. If a body absorbs energy of any wavelength, its energy content will increase, with the inevitable consequences. In the case of CO2 etc., the energy absorbed raises the temperature of the CO2, which then radiates the increased energy away. I believe this is why NASA is able to take IR photos of CO2 distributions within the atmosphere, amongst other things. So, I wonder if you agree about the vacuum thing? Regardless of re-radiating, reflecting, refracting, or otherwise attempting to create something from nothing, we can’t seem to achieve Earth surface temperatures anything like those on the Moon. The main difference to me seems to be the atmosphere reducing the efficiency of insolation. I should perhaps also point out (and I mean no offence), that people talking about radiation from the Earth’s surface forget that when energy radiates away from the Earth’s surface, the temperature of the surface drops by precisely as much as it rose after absorbing the same amount of energy.
So any re-radiation, “back radiation”, or whatever you want to call it, can never replace that temperature drop in reality. Of course, a perfect insulator would ensure that the body’s temperature would remain the same, but such things do not exist. Anyway, the whole thing is certainly fascinating. If I am correct, and about the only man made warming is the warming created by Man in the process of oxidising carbon in the main to create CO2 (I know there are a whole lot of other methods of heat generation), then worrying about “climate change” in the sense that we can affect the outcome by reducing GHGs is the waste of a good worry! Sorry to be so long winded, but I sometimes (usually) find difficulty in getting a definite answer to a simple question. I didn’t finish high school, and I find a lot of the information on the Internet contradictory. Mike Flynn 114. tallbloke says: Hi Mike: I agree a vacuum transmits EMR best. CO2 radiates in all directions, not just ‘away’ in the sense of ‘to space’. “we can’t seem to achieve Earth surface temperatures anything like those on the Moon. The main difference to me seems to be the atmosphere reducing the efficiency of insolation.” Neither as hot, nor as cold at the extremes. The atmosphere and coupled ocean spreads heat, creating a higher average temperature than the Moon’s (due to Hölder’s inequality) even though clouds make our albedo three times higher than the Moon’s – reflecting more of the incoming solar radiation directly back to space, “reducing the efficiency of insolation” as you note. Additionally, atmospheric mass enhances surface temperature by other mechanisms due to surface pressure. ‘Back radiation’ is a minor bit player in our estimate, so we agree about that, although we agree for different reasons. There is back radiation, and it is absorbed by the surface, but as you point out, the surface is cooled by evaporating, emitting radiation and convecting, before the back radiation returns, and 7/10ths of the surface (the ocean) absorbs all the back radiation in the first few nm, where it mostly promotes evaporation that cools the ocean surface. So the surface temperature is raised by something else. N&Z say it’s pressure enhancing the lower atmospheric temperature. I say it may be partly that, plus the limit surface pressure places on the rate of evaporation from the ocean surface. Cheers TB 115. Stephen Wilde says: “plus the limit surface pressure places on the rate of evaporation from the ocean surface.” Actually the energy cost of a given amount of evaporation. Not the rate. The rate is governed by lots of other factors. Pressure governs the value of the latent heat of evaporation. At 1 bar it is currently around 5 units of energy taken up by the phase change for every 1 unit of energy input. Reduce pressure and it will be more than 5 to 1. Increase pressure and it will be less than 5 to 1 (Hope I’ve got that the right way round – in a hurry at the moment) Those relationships set the system equilibrium temperature for our water planet. The atmosphere just follows on. 116. ozzieostrich says: Hi Tallbloke, I notice that, in general, it seems to be taken as gospel that the Earth is somehow “warmer” than it should be, and that this needs an explanation. Terms such as “indisputable”, “irrefutable”, “well proven” and the like abound in supposedly serious discussions. The physical observations seem to indicate otherwise.
The solidified crust of the Earth (on which we live) would represent about a millimeter or so, on a globe of molten rock some 200 mm in diameter (assuming my mental arithmetic and memory are reasonable). This combination of molten and solid(ish) material we call the Earth, is cooling. You imply that the rate is not significant, at an average of 0.1 W/m^2. It is worth considering that if incoming energy is to rise by that amount, the Earth would cease to cool, and would melt the crust in due time, as the temperature gradient between the core and the surface became zero (apart from effects due to turbulent flow within the liquid blob). So whatever the total loss of energy is, I would prefer it to be higher rather than lower. I have no wish to fry due to a minor rise in the Sun’s output. In any case, as you rightly state, the Earth’s surface loses energy at some rate not easily quantifiable. This results in less energy within the Earth, and a subsequent fall in temperature. This loss is not made up by insolation, as the best that the Sun can do is to warm small portions of the surface, or things on the surface, to no more than about 90C. As you agree, the amount of insolation reaching the Earth’s surface per unit area is less than that of the Moon. If the average Earth surface temperature is higher, might not the fact that the Earth is >99% molten, and losing heat through the crust, thereby raising the crustal temperature well above what the Sun can account for, be the cause? N&Z say it’s pressure enhancing the lower atmospheric temperature, and they may be right. An actual experiment or two would help. At the moment, I am happy with the molten earth slowly cooling. I’m not sure whether it is at all relevant, but pressure doesn’t seem to warm the abyssal depths much. Maybe it only applies to gases, and this should be easy enough to demonstrate. Finally, may I point out that the Moon’s interior appears to be considerably colder than the Earth’s. Once again, it seems logical that the difference in surface temperatures is depressed rather than raised by the presence of an atmosphere. Any observed surface temperature differential between the Earth and the Moon, is purely due to the fact that the Earth has a hotter interior closer to the surface. Live well and prosper. Mike Flynn. PS Sorry to be a pain, but I think people who believe the atmosphere somehow “warms” the Earth suffer from collective infectious delusionalism. The luminiferous ether, phlogiston, phrenology, gastric ulcer causation, circular planetary orbits – take your pick. All a part of the same continuum. Pity about the wasted money, but I suppose we need more poverty and starvation. We certainly try hard enough.
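A numerical footnote to close the thread: j.j.m.gommers' two equatorial limits (comment 109) and the Hölder's-inequality point in comment 114 can be reproduced in a few lines. The sketch below is my own toy model, not N&Z's or anyone else's calculation: it assumes an airless equator in instant local radiative equilibrium, with zero heat capacity and unit emissivity, so treat the printed numbers as illustrative only.

```python
import numpy as np

# Toy airless equator: absorbed flux S*cos(theta) on the day side, zero at night.
S, sigma = 1361.0, 5.670e-8            # solar constant (W/m^2), Stefan-Boltzmann
theta = np.linspace(-np.pi, np.pi, 200001)
flux = S * np.clip(np.cos(theta), 0.0, None)

# Infinite-rotation limit: the flux is averaged first, then converted to T.
T_fast = (flux.mean() / sigma) ** 0.25

# Non-rotating limit: every point sits at its own local equilibrium T.
T_slow = ((flux / sigma) ** 0.25).mean()

# The power-mean (Holder/Jensen) inequality mean(T) <= (mean(T^4))^(1/4)
# guarantees T_slow <= T_fast: spreading the heat raises the average temperature.
print(T_fast, T_slow, T_slow / T_fast)
```

On this toy model the uniformly heated (fast-rotating) equator comes out markedly warmer on average than the non-rotating one, which is the direction of the effect both comments describe.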
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 117, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461051225662231, "perplexity_flag": "middle"}
http://gmc.yoyogames.com/index.php?showtopic=528045&pid=3883214&st=0
# My path finding algorithm. Started by , Dec 29 2011 09:24 PM 3 replies to this topic ga05as GMC Member • GMC Member • 876 posts Posted 29 December 2011 - 09:24 PM Hello, I am attempting to develop an algorithm which finds the shortest path between two points on a square grid, whilst avoiding obstacles. No diagonal movements are allowed, only up, down, left and right. The algorithm is split into two parts: -The labeling procedure, -The actual path finding algorithm. At the moment this is it: ----Labeling procedure---- 1) Label the start box 0. 2) Label all boxes surrounding the start box that are not filled with an obstacle with 1. If no such box exists and the end box is not yet labeled, then no path is available and stop. 3) Label all the boxes surrounding the boxes labeled 1 that are not filled with an obstacle (and not yet labeled) with 2. If no such box exists and the end box is not yet labeled, then no path is available and stop. 4) Label all the boxes surrounding the boxes labeled 2 that are not filled with an obstacle (and not yet labeled) with 3. If no such box exists and the end box is not yet labeled, then no path is available and stop. 5) Continue this procedure until all boxes that are not filled with an obstacle have a label. ----Path finding algorithm---- 1) Start at box 0, let the current path be P. 2) If this is the end box then stop, otherwise go to step 3. 3) Let n be the label of the current box, and add the current box to P. 4) At random choose a box connected to the current box that is labeled with n+1 and is not redundant. - If such a box exists, make that the current box, and go to step 2. - If no such box exists then treat the current box as redundant, remove it from P and never return to it, go back to the previous box and return to step 2. ---------------------------------------------------------------------------------------- Here are two animations showing the two algorithms in action... Labeling procedure: Path finding algorithm: As you can see from the second animation, because the next box is chosen randomly (when there is a choice) it is possible to go the "wrong way" and have to back track. Can you think of a "smart" way of picking which box to go to next so as to minimize the amount of back tracking? Edited by ga05as, 29 December 2011 - 10:17 PM. • 0 ga05as GMC Member • GMC Member • 876 posts Posted 29 December 2011 - 11:15 PM Improved it, I think. Instead of the path finding algorithm, if you work backwards from the end it will remove the randomness and should find the optimum solution. • 0 tangibleLime Lunatic • Global Moderators • 2520 posts • Version:GM:HTML5 Posted 30 December 2011 - 04:35 AM To me it looks like you're going for a Markov Decision Process (MDP) model (http://en.wikipedia....ecision_process). Using MDPs, you can use something like SARSA or Q-Learning to determine the utilities of each state (essentially the desirability to be in each box) to determine an optimal policy. A policy (usually denoted by the Greek letter $\pi$) is not exactly an actual path - it is a function that tells the agent what to do in each state. This way, regardless of starting position, an optimal path can be implemented. Without getting too far into it..
You can actually (even by hand) calculate the utility of each state using Bellman equations - one for each box: $U(s)= R(s) + \gamma \sum_{s'} P(s'|s,\pi(s)) U(s')$ Where, U(s) = Utility of state s R(s) = Reward of being in state s $\gamma$ = Learning discount (should be between 0 and 1), which allows you to tell the agent how much to consider future states $P(s'|s,\pi(s))$ = Probability of moving from state s to s' given the optimal policy The idea is to find a good reward assignment to make the agent want to reach the goal as quickly as possible. Changing the reward values of states can radically alter the optimal path. For example, if all states that are not a goal have a reward of 1 and the goal has a reward of 0.9, the optimal policy will force the agent to never reach the goal. I don't like step 4 of your pathfinding algorithm - how it chooses randomly. True, it is good to occasionally choose randomly, but it would drastically increase efficiency to use some sort of heuristic. Branching off what I said before, there are methods that are $\epsilon$-greedy. $\epsilon$ is a value between 0 and 1 that denotes the probability of taking a random action ($\epsilon$) or of taking the action defined by the current optimal policy ($1-\epsilon$). Use this with SARSA and you'll get an efficient method of computing an optimal policy and therefore an optimal path in any environment, around any obstacles, starting from any box. • 0 paul23 GMC Member • Global Moderators • 3355 posts • Version:GM8 Posted 31 December 2011 - 02:16 AM The above looks like a (slightly differently implemented) version of Dijkstra's algorithm for pathfinding (http://en.wikipedia....tra's_algorithm), which grows in every direction equally. (If you let this algorithm run in an "open" field it would give a diamond-shaped growth.) In pathfinding there are often 2 numbers which are named: the heuristic "cost" & the move "cost" (called h & g). g is the cost in time/difficulty/whatever you value to reach the point you are analyzing from point A (the start). The heuristic cost h is a rough estimate of the cost to point B (the end). In what you showed above, the "label number" would be the g. Optimization is done by reducing the number of nodes to check. This is done with a heuristic: instead of a simple label number, each cell gains an h & g (label number) cost. Then you always take the cell with the lowest sum of both numbers. With a slight change to the algorithm you get A* (open and closed lists are simple terms for lists, where the open list is always sorted by "h+g" and the closed list is simply a quick-lookup table): • add the start cell to the open list • UNTIL open list is empty OR end node is added to the closed list • remove cell from open list, add it to closed list • look at all walkable cells (called new cell) around the cell (called old cell) • IF new cell IS NOT in closed list • Calculate g for new cell (basically old cell + 1) • IF new cell IS NOT in open list OR calculated g is SMALLER than previously calculated g (this can happen if you have an object blocking the way and you can move in 2 ways around it) • Calculate h for the new cell (Manhattan distance, diagonal distance, Pythagorean distance, or use an adapted distance calculation) • Add (or replace the value of) new cell to open list: with the sorting value of g + h.
• set old cell as "parent" to the new cell (so you can "walk it back") - store the parent, g & h in the cell • IF end node IS NOT added to the closed list • No path can be found • ELSE • Take the parent node of the end node, and add it to the path • WHILE parent node IS NOT start node • Take the parent of the parent node and add it to the path (you will basically walk the path backwards here, using the stored parents) Now the path is the shortest path "in reverse". Using A* will yield a typical "rectangle"-shaped growth when used in an open field. - As I can't really explain it well in a short space here, I recommend reading this site. It is a bit of a read, but very understandable & covers a lot of pathfinding problems. • 0
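For readers who want to run the thread's ideas directly, here is a short Python sketch (my own translation; no GML appears in the thread and all names below are mine) of the labelling procedure from the opening post combined with the improved backward walk from the follow-up, which removes the randomness of step 4:

```python
from collections import deque

def shortest_path(grid, start, end):
    """BFS flood fill (the 'labeling procedure'), then walk back from the end
    box to any neighbour whose label is one lower; this never backtracks.
    grid[r][c] is True where an obstacle fills the box."""
    rows, cols = len(grid), len(grid[0])
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))    # no diagonal movements
    label = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in label):
                label[(nr, nc)] = label[(r, c)] + 1
                queue.append((nr, nc))
    if end not in label:
        return None                                # no path is available
    path = [end]
    while path[-1] != start:                       # backward walk: label n -> n-1
        r, c = path[-1]
        for dr, dc in steps:
            if label.get((r + dr, c + dc)) == label[(r, c)] - 1:
                path.append((r + dr, c + dc))
                break
    return path[::-1]

grid = [[False] * 5 for _ in range(3)]
grid[1][1] = grid[1][2] = True                     # two obstacle boxes
print(shortest_path(grid, (0, 0), (2, 3)))
```

Because the labels come from a breadth-first sweep, any backward walk along decreasing labels is automatically a shortest path, which matches the second post's observation that working from the end removes the randomness.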
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9092004299163818, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47704/scattering-amplitudes-in-centre-of-mass-frame?answertab=oldest
# Scattering Amplitudes in Centre of Mass Frame I'm reviewing page 59 of the QFT notes here and am a little confused by a reference frame argument. You can compute the second order probability amplitude term for nucleon-nucleon scattering to be $$-ig^2\left[\frac{1}{(p_1-p_1')^2-m^2+i\epsilon}+\frac{1}{(p_1-p_2')^2-m^2+i\epsilon}\right](2\pi)^4\delta(p_1+p_2-p_1'-p_2')$$ in a scalar field theory approximation. Now the author argues that we may remove the $i\epsilon$ terms by moving to the centre of mass frame. Here he says that $p_1=-p_2$ in this frame and that $|\vec{p_1}|=|\vec{p_1'}|$ by conservation of momentum. He continues to claim that the four-momentum of the meson is hence $k=(0,\vec{p}-\vec{p'})$ so $k^2<0$. Quite what $p,p'$ are I don't know exactly. I don't understand this argument at all. Surely in the centre of mass frame, the sum of all momenta $p_i$ and $p_i'$ is zero (.)? Also where has the second constraint come from? I don't see how morally you could get more than my claim (.). Could someone explain this argument to me? Very many thanks in advance. - Be careful about distinguishing three- and four-vectors. Using boldface letters for 3-vectors, we have in the CM frame $\mathbf{p_1} + \mathbf{p_2} = 0$ (as 3-vectors), but $p_1^0, p_2^0 > 0$ (you can compute these using the on-shell condition), so $p_1 \neq -p_2.$ If you write $p := |\mathbf{p_1}| = |\mathbf{p_2}|,$ $p' := |\mathbf{p'_1}| = |\mathbf{p'_2}|$, then you can show that $p = p'$ due to conservation of energy (please do this carefully). – Vibert Dec 27 '12 at 14:02 ## 1 Answer The transferred 4-momentum $k = p - p'$ is a difference (not a sum) and is the momentum of the meson. In the CM reference frame it has only space components, so its square is negative: $k^2=0^2-(\vec{p}-\vec{p}')^2<0$ (the extreme value $-(2\vec{p})^2$ is attained for backscattering). - I think my problem is that I don't know how $p$ is related to $p_i$ and $p_i'$. Could you expand on that? And why does it have only space coordinates in the centre of mass frame? I understand if we're talking about the centre of mass for the meson itself, but aren't we interested in the centre of mass for the whole system? Sorry if I'm missing something simple! Many thanks. – Edward Hughes Dec 27 '12 at 13:56 The system is a couple of nucleons with the same mass. 4-momentum conservation law reads: $p_1+p_2=p'_1+p'_2$. The transferred momentum is by definition $k=p_1-p'_1$. In the CMRF $\vec{p}_1=-\vec{p}_2$. – Vladimir Kalitvianski Dec 27 '12 at 14:03 So in his exposition and your answer there is a subscript 1 missing from the $p$ then? Or have I missed something. Thanks a lot. – Edward Hughes Dec 27 '12 at 14:06 Yes, you can add a subscript 1 to $p$. Omitting the subscript is due to the fact that nearly the same expression occurs for $p_2$ too because of 4-momentum conservation. – Vladimir Kalitvianski Dec 27 '12 at 14:10 Ah right that makes much more sense now. I still don't see why the first components of $p_1$ and $p_1'$ are necessarily the same though. I presume it's energy conservation, but why are you allowed to just do that locally? – Edward Hughes Dec 27 '12 at 14:13
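To spell out the step that Vibert's comment leaves to the reader, here is the short derivation, written with $M$ for the common nucleon mass to keep it distinct from the meson mass $m$ in the propagator:

```latex
% In the CM frame \vec p_2 = -\vec p_1 and \vec p_2' = -\vec p_1', so the
% on-shell energies pair up, and energy conservation gives
2\sqrt{|\vec p_1|^2 + M^2} = 2\sqrt{|\vec p_1'|^2 + M^2}
\quad\Longrightarrow\quad |\vec p_1| = |\vec p_1'| .
% Hence the exchanged four-momentum k = p_1 - p_1' has zero time component:
k = \bigl(E_1 - E_1',\ \vec p_1 - \vec p_1'\bigr)
  = \bigl(0,\ \vec p_1 - \vec p_1'\bigr),
\qquad
k^2 = -\,|\vec p_1 - \vec p_1'|^2 = -\,2\,|\vec p_1|^2\,(1-\cos\vartheta) \le 0 ,
% with equality only for exactly forward scattering. So k^2 - m^2 \le -m^2 < 0
% and the i\epsilon in the propagator can be dropped.
```

This also answers the last comment: energy conservation is applied to the whole two-nucleon system in the CM frame, not "locally", and it is what forces the time component of $k$ to vanish.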
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937515914440155, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/classical-electrodynamics?sort=frequent&pagesize=15
# Tagged Questions The classical-electrodynamics tag has no wiki summary. 6 answers 1k views ### Do Maxwell's Equations overdetermine the electric and magnetic fields? Maxwell's equations specify two vector and two scalar (differential) equations. That implies 8 components in the equations. But between vector fields $\vec{E}=(E_x,E_y,E_z)$ and ... 5 answers 688 views ### Does GR provide a maximum electric field limit? Does GR provide a limit to the maximum electric field? I've gotten conflicting information regarding this, and am quite confused. I will try to quote exactly when possible so as not to confuse ... 1 answer 279 views ### Noether theorem and classical proof of electric charge conservation How to prove conservation of electric charge using Noether's theorem according to classical (non-quantum) mechanics? I know the proof based on using Klein–Gordon field, but that derivation use ... 4 answers 2k views ### Does a magnetic field do work on an intrinsic magnetic dipole? When you release a magnetic dipole in a nonuniform magnetic field, it will accelerate. I understand that for current loops (and other such macroscopic objects) the magnetic moment comes from moving ... 2 answers 457 views ### Does a static electric field and the conservation of momentum give rise to a relationship between $E$, $t$, and some path $s$? For a static electric field $E$ the conservation of energy gives rise to $$\oint E\cdot ds =0$$ Is there an analogous mathematical expression the conservation of momentum gives rise to? 2 answers 298 views ### Why do electrons around nucleus radiate light according to classical physics As I navigate through physics stackexchange, I noticed Electron model under Maxwell's theory. Electrons radiate light when revolving around nucleus? Why is it so obvious? Note that I do not know ... 1 answer 185 views ### Non-linear dynamics of classical hydrogen atom I'd like to know if there have been attempts in solving the full problem of the dynamics of a classical hydrogen atom. Taking into account Newton equations for the electron and the proton and Maxwell ... 3 answers 1k views ### What is the answer to Feynman's Disc Paradox? [This question is Certified Higgs Free!] Richard Feynman in Lectures on Physics Vol. II Sec. 17-4, "A paradox," describes a problem in electromagnetic induction that did not originate with him, but ... 3 answers 536 views ### Trouble with the Lorentz law of force: Incompatibility with special relativity and momentum conservation? In Physical Review Letters, there was a paper recently published: Masud Mansuripur, Trouble with the Lorentz Law of Force: Incompatibility with Special Relativity and Momentum Conservation, Phys. ... 0 answers 130 views ### Semiclassical QED and long-range interaction I'm interested in the (very) low energy limit of quantum electrodynamics. I've seen that taking this limit does not yield Maxwell equations, but a quantum corrected non-linear version of them. If ... 2 answers 561 views ### Pseudoscalar action in classical field theory I was reading Landau and Lifschitz's "Classical Field Theory" and came across a comment that the action for electromagnetism must be a scalar, not a pseudoscalar (footnote in section 27). So I was ... 2 answers 543 views ### What is the conserved canonical momentum for a relativistically moving charge in a static Coulomb electric field? The canonical momentum is a fundamental conserved quantity from Noether's theorem for translational invariance of the Lagrangian.
Yet I'm finding it very difficult to see its derivation, or even a ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8957457542419434, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/263193/find-number-of-ways?answertab=active
# Find number of ways.. The number of ways in which we can post 5 letters in 10 letter boxes is..... I already have the answer given to me but I am not able to get there (only the answer is given, not the solution). Please answer with an appropriate solution for understanding. - ## 1 Answer If you had only one letter, there would be $10$ ways to post it. With two letters you’d have $10$ ways to post the first letter, and no matter which letter box you used, you could still post the second letter in any of the $10$ letter boxes. Thus, you’d have $10^2$ possible choices. If this isn’t clear, imagine that the letter boxes are labelled A through J. If you post the first letter in box A, you can post the second in any of the $10$ boxes; that’s a total of $10$ different ways to post the two letters. If you post the first letter in box B, you can again post the second in any of the $10$ boxes; that’s another $10$ different ways to post the two letters. Thus, there are $10$ different ways to post the two letters for each of the $10$ ways to post the first letter, for a grand total of $10\cdot10=10^2$ ways to post the two letters. Now just extend the reasoning. Each of those $10^2$ ways to post the first two letters can be combined with any of $10$ different ways to post the third letter, so there are $10^2\cdot10=10^3$ ways to post the first three letters. Two more arguments of the same kind lead to the conclusion that there are $10^5$ different ways to post the $5$ letters. Note that I’m assuming that you’re allowed to post more than one letter in the same letter box. If that’s not the case, the reasoning is similar, but the answer is quite different. If you can post at most one letter in a box, there are still $10$ different ways to post the first letter. After you’ve done that, however, there are only $9$ ways to post the second letter, since you can’t use the first letter box again. Thus, you get only $10\cdot9$ different combinations of letter boxes for the first two letters. Similarly, after they’ve been posted you must pick one of the $8$ remaining letter boxes for the third letter, so each of the $10\cdot9$ ways of posting the first two letters gives you only $8$ ways of posting the first three. As a result, there are $10\cdot9\cdot8$ ways of posting the first three letters. Two more arguments of the same kind lead this time to the conclusion that there are $10\cdot9\cdot8\cdot7\cdot6=30,240$ different ways to post the letters if you can use each letter box at most once. - Yes you are very right, and the answer is correct. What's more interesting is the solution, which after seeing I am thinking how could I miss that. Thank you very much. Sorry for the delay in accepting the answer; I was reading the wiki entry on stars and bars :D – Master Chief Dec 21 '12 at 12:55 @MasterChief: No problem: that’s a very good use of your time. :-) And you’re welcome. – Brian M. Scott Dec 21 '12 at 13:12
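As a quick numerical check of both counts in the answer, here is a two-case Python snippet (my own illustration; `math.perm` needs Python 3.8+):

```python
from math import perm

boxes, letters = 10, 5
# Repetition allowed: each of the 5 letters independently picks one of 10 boxes.
print(boxes ** letters)       # 100000 = 10^5
# At most one letter per box: ordered selection without repetition.
print(perm(boxes, letters))   # 30240 = 10*9*8*7*6
```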
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356698989868164, "perplexity_flag": "head"}
http://mathoverflow.net/questions/31058/the-vanishing-of-ramanujans-function-taun/31070
## The Vanishing of Ramanujan’s Function tau(n) This is a problem I had a look at some years ago but always had the feeling that I was missing something behind its motivation. D.H. Lehmer says in his 1947 paper, “The Vanishing of Ramanujan's Function τ(n),” that it is natural to ask whether τ(n)=0 for any n>0. My question is: Why is it natural to wonder whether τ(n)=0 for any n>0? Are there any particular arithmetic properties among the many satisfied by τ(n) that would lead one to ponder its vanishing? The problem is mentioned here, where it's stated that it was a conjecture of Lehmer, although it's not actually presented as a conjecture in his paper, more a curiosity. Maybe there is no deep reason to ponder the vanishing of τ(n), in which case that would be a satisfactory answer too. - 2 From the viewpoint of Hecke eigenforms, the vanishing of $\tau(p)$ for prime $p$ seems more interesting than for general $n$; consider the analogy with elliptic curves over $\mathbb{Q}$ (for which $a_p = 0$ encodes supersingularity, except maybe for some issue when $p = 2, 3$). It ties in with the whole story of slopes of modular forms. But on a more concrete/classical level, doesn't $\tau(n)$ arise as the "error term" in one of those Ramanujan formulas for counting something related to a quadratic form, so vanishing means "no error" for that $n$? Perhaps that was Lehmer's motivation? – Boyarsky Jul 8 2010 at 15:15 1 @Boyarsky: If there exist any $n\geq1$ for which $\tau(n)$ vanishes, then the smallest such $n$ will be prime. This is not immediately obvious but is proved in Lehmer's paper IIRC. @DerekJ: because of this, one could look at Lehmer's question in the following way. CM elliptic curves have $a_p=0$ for 50% of primes. Non-CM elliptic curves have much sparser, but still infinitely many, $p$ with $a_p=0$. But by the time you get to weight 12 the $\Delta$ function is a candidate for a modular form with $a_p=0$ never happening at all! – Kevin Buzzard Jul 8 2010 at 20:01 @Kevin: Yes, that's a good point about the least $n$ (if any) for which $\tau(n) = 0$. In view of the possible failure of $a_p = 0$ to hold when $p = 2, 3$ is a supersingular prime for an elliptic curve over $\mathbb{Q}$, it's nice that Lehmer's argument adapted to apply in general to eigenforms of any weight (using basics about quadratic fields generated by roots of unity in place of his trigonometric language) always requires a special calculation (which may fail...) for $p = 2, 3$. – Boyarsky Jul 8 2010 at 20:57 ## 3 Answers The key to your question is lacunarity in modular functions. The tau function, as we know, occurs as the coefficients of the Discriminant function, which in turn is the 24th power of the Eta function. The Eta function was known to be lacunary (having gaps or zero coefficients). Therefore it was natural for Lehmer in 1947 to wonder if coefficients of powers of eta are also zero. See the opening passage of the following paper MR0021027 (9,12b) Lehmer, D. H. The vanishing of Ramanujan's function $\tau(n)$. Duke Math. J. 14, (1947). 429--433. http://projecteuclid.org/euclid.dmj/1077474140 - Thanks, particularly for the lacunarity link. – Derek Jennings Jul 9 2010 at 7:28
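Since the answer above ties $\tau(n)$ to the 24th power of the eta function, it is easy to poke at Lehmer's question numerically. The Python sketch below (my own, purely illustrative) expands $q\prod_{k\ge1}(1-q^k)^{24}$ to a chosen order and reads off $\tau(n)$, which makes it easy to confirm the non-vanishing for small $n$:

```python
# Coefficients of Delta = q * prod_{k>=1} (1 - q^k)^24, truncated at degree N.
N = 30
coeffs = [0] * (N + 1)
coeffs[0] = 1
for k in range(1, N + 1):
    for _ in range(24):                      # multiply by (1 - q^k), 24 times
        for i in range(N, k - 1, -1):
            coeffs[i] -= coeffs[i - k]
tau = {n: coeffs[n - 1] for n in range(1, N + 1)}   # shift by the leading q
print(tau[1], tau[2], tau[3], tau[4])        # 1, -24, 252, -1472
```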
A simple reason: this is a function of $n$ satisfying significant congruences. If it vanishes, that is further congruence information. - I'll remark that Charles' observation can be used to yield fairly big lower bounds on the smallest $n$ for which $\tau(n)$ can possibly be zero. Known congruences for $\tau(n)$ modulo small powers of small primes imply that $n$ has to satisfy certain congruences if you want it to vanish. Serre's paper on lacunarity of powers of the eta function gives an explicit lower bound which is $O(10^{10})$ or so (I don't have his paper to hand). – Kevin Buzzard Jul 9 2010 at 9:24 I don't know if this helps but you can put $D=24$ in (13) and (14) of my paper to get an explicit formula for $\tau(n)$. MR2218820 (2007c:17009) Westbury, Bruce W. Universal characters from the Macdonald identities. Adv. Math. 202 (2006), no. 1, 50--63. doi:10.1016/j.aim.2005.03.013 Since $\mathfrak{sl}(5)$ is a simple Lie algebra of dimension 24 this also relates $\tau(n)$ to the affine root system of type $A_4$. I doubt Lehmer would have had this in mind. Addendum: I started this project with the following problem. Let $\mathfrak{g}$ be a simple Lie algebra whose dimension is $D$. Normalise the Casimir so it acts as 1 on $\mathfrak{g}$. Now consider the subspace of the exterior power $\wedge^k \mathfrak{g}$ on which the Casimir acts by $k$. This is a representation of $\mathfrak{g}$ but obviously does not make sense for $k>D$. Taking $\mathfrak{g}=\mathfrak{sl}(5)$ we have that $\tau(k)$ is the dimension of a representation for small $k$ (certainly no more than 24). I doubt this is interesting. The conclusion of the project was that for all $k$ there is a complex of representations of $\mathfrak{g}$. Then the Euler characteristic is a virtual representation. This can be written as a sum (with signs) of representations of $\mathfrak{g}$ using the MacDonald identities for affine $\mathfrak{g}$. This gives $\tau(k)$ as the dimension of a virtual representation of $\mathfrak{sl}(5)$ for all $k$. Because of the signs this does not give an immediate solution to Lehmer's question. However it is a different way of looking at the problem. I also give an explicit formula for $\tau(k)$ in terms of partitions and hooklengths. I believe this is new. - Thanks, I'll take a look at your paper. – Derek Jennings Jul 8 2010 at 15:41 Bruce, but what does "for each sufficiently large $N$" mean in your paper? Is $N=5$ sufficiently large? You interpret $\tau(n)$ as dimensions of certain representations. How does the sign of $\tau(n)$ appear there? What is wrong with having $\tau(n)=0$? I am wondering about the last question, since I am wondering whether your approach can be used in proving some partial results towards Lehmer's question (mathoverflow.net/questions/32620). – Wadim Zudilin Jul 22 2010 at 0:49
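For reference, the prime-power structure behind Kevin Buzzard's remark in the comments above (that the least vanishing index, if any, must be prime) comes from the Hecke relations for $\Delta$. The block below records the standard identities; the final deduction is Lehmer's, from the paper cited above, and I only sketch it:

```latex
% tau is multiplicative, so a least n with tau(n) = 0 would be a prime power p^k.
% Hecke recursion at a prime p in weight 12:
\tau(p^{k+1}) = \tau(p)\,\tau(p^k) - p^{11}\,\tau(p^{k-1}), \qquad \tau(1) = 1 .
% Writing \tau(p) = 2 p^{11/2}\cos\theta_p, the recursion solves to
\tau(p^k) = p^{11k/2}\,\frac{\sin\bigl((k+1)\theta_p\bigr)}{\sin\theta_p} .
% So tau(p^k) = 0 with k >= 2 would force (k+1)\theta_p into \pi\mathbb{Z}, and
% Lehmer's trigonometric/algebraic argument shows the resulting constraint on
% 2\cos\theta_p is incompatible with \tau(p) being a rational integer unless
% \tau(p) = 0 itself, i.e. unless the vanishing already happens at the prime
% p < p^k. Hence the least vanishing index is prime.
```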
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439030289649963, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/102687/list
## Return to Answer Masser and Wüstholz have given an effective proof that the representation $\bar{\rho}_{E,\ell}\colon G_K \to \mathrm{GL}(E[\ell])$ is irreducible for all $\ell$ greater than some constant $c_E$, see their paper Some effective estimates for elliptic curves. They use isogeny bounds coming from transcendence theory to prove Shafarevich's Theorem without Siegel's theorem. They show that $c_E$ can be chosen to be less than $C h^4$ where $h$ is some naive height attached to $E/K$ and $C$ is a constant that can in principle be computed. (The isogeny bounds have since been repeatedly improved. The state of the art might be the paper Théorème des périodes et degrés minimaux d'isogénies of Gaudron and Rémond.) Added afterwards: The surjectivity of $\bar{\rho}_{E,\ell}$ for $\ell$ sufficiently large is also discussed by Masser and Wüstholz in Galois properties of division fields of elliptic curves. It is effective and again does not require Siegel's theorem.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924340546131134, "perplexity_flag": "head"}
http://mathoverflow.net/questions/53346?sort=oldest
## An extension of the Hardy-Littlewood-Polya inequality? Let $x,y$ be vectors in $\mathbb{R}^n$ and let's use the notation $\hat x$ for the vector $x$ with its components sorted in increasing order. The Hardy-Littlewood-Polya inequality states that $$x\cdot y \leq \hat x\cdot \hat y.$$ Let us also use the notation $xy\in\mathbb{R}^n$ to denote the coordinate-wise product of $x$ and $y$. I conjecture that $$\frac{ ||xy||_p ||xy||_r}{||xy||_q} \le \frac{ ||\hat x\hat y||_p ||\hat x\hat y||_r}{||\hat x\hat y||_q}$$ for all $1\le p\le q\le r$. For $q=p$ and $q=r$, my conjectured inequality is true by the HLP inequality. Any ideas for a proof? UPDATE: thank you for the quick answers. The counterexamples indeed work when negative coordinates for x and y are allowed. However, when all the coordinates of x and y are required to be positive, the conjecture seems to hold. UPDATE 2: so the conjecture is totally false; see below for counterexamples. - Really, with my code you don't get a counterexample? Am I doing something wrong? For $n=3$ I get counterexamples right away. – S. Sra Jan 26 2011 at 14:05 ## 2 Answers Here is a Matlab script that will generate a quick counterexample for you:

````
function [x,y]=testIneq(n, p, q, r)
% x and y are length n vectors
% Try: [x,y]=testIneq(2,1,2,3) to get a counterexample!
flag = 1;
iter = 0;
while (flag)
    iter = iter + 1;
    x = randn(n,1);
    y = randn(n,1);
    xh = sort(x);
    yh = sort(y);
    xy = x .* y;
    xyh = xh .* yh;
    lhs = norm(xy,p) * norm(xy,r) / norm(xy,q);
    rhs = norm(xyh,p) * norm(xyh,r) / norm(xyh,q);
    if (rhs < lhs)
        flag = 0;
        fprintf('Found counterexample after %d tries\n', iter);
    end
end
end
````

Example: $x =[-2.1384,-0.8396]$, $y =[1.3546,-1.0722]$, with $p=1$, $q=2$, $r=3$. - See counterexamples in comment to fedja's answer above for the case where we restrict to positive vectors only. – S. Sra Jan 26 2011 at 13:48 Rather for a counterexample. Let's say all coordinates are positive. The inequality is equivalent to the claim that $f(t)=\frac{\|xy\|_t}{\|\hat x\hat y\|_t}$ satisfies $f(s)f(t)\le f(1)$ for $s\le 1\le t$ ($1\le p$ is not really a restriction due to the possibility to raise to positive powers inside and outside, so only the ratios $p/q$ and $r/q$ really matter). Also sums can be replaced by averages. Now, as $s\to 0$, we have the geometric means in the limit, which do not feel the rearrangements, so $f(0+)=1$. Also, $f(\infty)=1$ if only the maxima match in the original arrangements. But $f(1)<1$ unless the orderings are exactly the same. - can you give a counterexample with all positive coordinates? – Aryeh Kontorovich Jan 26 2011 at 13:30 I don't follow your argument that my claim is equivalent to $f(s)f(t)\le f(1)$ -- the latter is indeed easily falsifiable. – Aryeh Kontorovich Jan 26 2011 at 13:42 Just try my code below with testIneq(10,1,2,10) and you will get a quick counterexample; replace the 'randn' by 'rand' to get all positive coordinates. – S. Sra Jan 26 2011 at 13:45 1 another simpler example is: $x =[0.0062,0.5198,0.4350]$, $y =[0.0515,0.9148,0.5404]$, with $p=1$, $q=2$, $r=10$. – S. Sra Jan 26 2011 at 13:47 When I replace randn by rand, I never get a counterexample. Would you be kind enough to provide one?
– Aryeh Kontorovich Jan 26 2011 at 13:49 show 4 more comments
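For anyone re-running the experiment, here is a NumPy port of the search above restricted to positive coordinates (the case debated in these comments). The function names and stopping rule are mine, and the search is not guaranteed to terminate quickly; treat it as a sketch of the same random experiment, not a verified counterexample.

```python
import numpy as np

def ratio(v, p, q, r):
    # ||v||_p * ||v||_r / ||v||_q for a vector v
    norm = lambda t: np.sum(np.abs(v) ** t) ** (1.0 / t)
    return norm(p) * norm(r) / norm(q)

def search(n=10, p=1, q=2, r=10, max_iter=100_000, seed=0):
    rng = np.random.default_rng(seed)
    for i in range(max_iter):
        x, y = rng.random(n), rng.random(n)            # positive entries only
        lhs = ratio(x * y, p, q, r)
        rhs = ratio(np.sort(x) * np.sort(y), p, q, r)  # the "hat" pairing
        if lhs > rhs:
            return i, x, y                             # conjecture violated
    return None                                        # nothing found

print(search())
```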
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.846859335899353, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/compactification
Tagged Questions

The compactification tag has no wiki summary.

1 answer, 98 views: Current operators for compactified CFTs
Intuitively I feel that if you compactified open bosonic strings on a product of $n$ circles such that each radius is fine-tuned to the self-dual point then the CFT of these $n$ world-sheet fields ...

0 answers, 121 views: Compactifying on a circle and the exchange of R and NS sectors
I've noticed a general phenomenon in compactifying on a circle where if you start with, say, an NS field, then the KK fields with an index along the circle will be in the R sector, and those without ...

0 answers, 224 views: How can two time theories be compactified to 3+1 without any Kaluza-Klein remnants
I have recently been looking into the two-time theories and the implied concepts. For me this seems slightly hard to grasp. How can I see the basic concept in this theory in a fundamental way based ...

0 answers, 83 views: Can decompactification explain the inflation of the early universe?
I've just reread chapter 11 of this book where it is explained, among other things, that our four dimensional universe could be unstable concerning a decompactification transition, since potential ...

1 answer, 102 views: Disappearance of moduli for condensate of open strings
Consider a Dp-brane. Compactify $d$ spatial dimensions over a torus $T^d$. Suppose $d\geqslant p$, and that the Dp-brane is completely wrapped around the compactified dimensions. Look at the open ...

0 answers, 81 views: Examples of manifolds and fluxes coming from generalized complex geometry
The paramount object in generalized complex geometry is the Courant algebroid $TM\oplus T^\star M$, where the manifold $M$ is called the background geometry I think (I am not sure). More generally this ...

0 answers, 54 views: Folded and/or compacted dimensions in M-theory?
I've heard on many occasions that there are various numbers of 'extra' dimensions above the 4th. However, I've heard that they are 'compacted' or 'folded' tightly and unimaginably small. Now, as I ...

2 answers, 162 views: Why would a particle in an extra dimension appear not as one particle, but a set of particles?
I was reading an article in this month's issue of Physics World magazine on the three main theories of extra dimensions and stumbled across something I didn't quite understand when the author began ...

4 answers, 833 views: Is spacetime discrete or continuous?
Is the spacetime continuous or discrete? Or better, is the 4-dimensional spacetime of general relativity discrete or continuous? What if we consider additional dimensions like string theory ...

2 answers, 587 views: Why does string theory require 9 dimensions of space and one dimension of time?
String theorists say that there are many more dimensions out there, but they are too small to be detected. However, I do not understand why there are ten dimensions and not just any other number? ...

0 answers, 84 views: Calabi-Yau compactification based on U(1) charges
In Green-Schwarz-Witten Volume 2, chapter 15, it is argued (roughly) that we need 6-dimensional manifolds of $SU(3)$ holonomy in order to obtain 1 covariantly constant spinor field. And it turns out ...

1 answer, 251 views: CY moduli fields
When one does string compactification on a Calabi-Yau 3-fold, the parameters in Kähler moduli and complex moduli give the scalar fields in 4 dimensions. It is claimed that the Kähler potentials of ...

1 answer, 116 views: Why do Calabi-Yau manifolds crop up in string theory, and what is their most useful and suggestive form?
Why do Calabi-Yau manifolds crop up in String Theory? From reading "The Shape of Inner Space", I gather one reason is of course that Calabi-Yaus are vacuum solutions of the GR equations. But are there ...

2 answers, 293 views: How can one imagine curled up dimensions?
Actually I'm learning String Theory, and one of its proposals is that there are actually 25+1 dimensions of which only 3+1 are visible to us -- and the remaining are curled up. However, superstring ...

2 answers, 204 views: Measuring extra-dimensions
I have read and heard in a number of places that extra dimensions might be as big as $x$ mm. What I'm wondering is the following: How is length assigned to these extra dimensions? I mean you can ...

2 answers, 145 views: What is the relation between extra dimensions and unification of theories?
One of the most used methods in unification of theories is the use of higher dimensions. How does it actually work? If these dimensions are extremely small curled up, how does it affect the universe. ...

1 answer, 37 views: Interplay between the cosmological constant and "microscopic" properties of string vacua
As far as I understand, string phenomenology is usually concerned with compactifications of string theory, M-theory or F-theory in which the uncompactified dimensions form a 4-dimensional Minkowski ...

1 answer, 66 views: what compactifications of the Poincare group have been studied?
As we know the Poincare group is non-compact. Poincare invariance has been observed in velocities and energies up to $10^{20}$ eV in cosmic rays. The other day I was thinking about how $SU(2)$ ...

1 answer, 74 views: Scherk-Schwarz and other compactifications?
I have been thinking about various types of compactifications and have been wondering if I have been understanding them, and how they all fit together, correctly. From my understanding, if we want ...

0 answers, 33 views: What is the importance of studying degeneration on $M_g$
Let $M_g$ be the moduli space of smooth curves of genus $g$. Let $\overline{M_g}$ be its compactification, the moduli space of stable curves of genus $g$. It seems to be important in physics to study ...

1 answer, 34 views: Are lens spaces classified via a Weinberg angle?
I am thinking about Kaluza-Klein theory in the 3-dimensional lens spaces. These have an isometry group SU(2)xU(1), generically, and in some way interpolate between the extreme cases of manifolds $S^2$ ...

1 answer, 231 views: Why is Compactification restricted to Toroids, Calabi-Yau & Co.?
I think I've missed this point somehow. I've just started with Compactification and so far, I don't really see why it is restricted to the above mentioned types of manifolds? I have to admit, when ...

1 answer, 104 views: Time dilation and dimensional compactification
Is time dilation a form of dimensional compactification? As a probe approaches a black hole, toward a point on the equator of the event horizon, does general relativity predict that the time ...

7 answers, 1k views: Why are extra dimensions necessary?
Some theories have more than 4 dimensions of spacetime. But we only observe 4 spacetime dimensions in the real world, cf. e.g. this Phys.SE post. Why are the theories (e.g. string theory) that ...

1 answer, 192 views: Could extra dimensions be or become clustered?
String theory - for example - requires extra spatial dimensions. Say for example in 10 dimensional string theory, what theoretically prevents clustering of the extra 6 dimensions in 2 timeless 3 ...

1 answer, 120 views: equivalence principle and nontrivial compactifications
It is commonly argued that the equivalence principle implies that everything must fall locally in the same direction, because any local variation of accelerations in a small enough neighbourhood is ...

3 answers, 316 views: Why (in relatively non-technical terms) are Calabi-Yau manifolds favored for compactified dimensions in string theory?
I was hoping for an answer in general terms avoiding things like holonomy, Chern classes, Kahler manifolds, fibre bundles and terms of similar ilk. Simply, what are the compelling reasons for ...

1 answer, 275 views: Measurement of Kaluza-Klein radion field gradient?
I've been very impressed to learn about Kaluza-Klein theory and compactification strategies. I would like to read more about this but in the meantime I'm curious about 2 different points. I have the ...

4 answers, 913 views: Shape of the universe?
What is the exact shape of the universe? I know of the balloon analogy, and the bread with raisins in it. These clarify some points, like how the universe can have no centre, and how it can expand ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209301471710205, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/208733-basic-recurrence-relation-problem-print.html
# Basic Recurrence Relation Problem

• November 29th 2012, 04:53 PM
Walshy
Basic Recurrence Relation Problem

A factory makes custom sports cars at an increasing rate. In the first month only one car is made, in the second month two cars are made, and so on, with n cars made in the nth month.
(a) Set up a recurrence relation for the number of cars produced in the first n months by this factory.
(b) How many cars are produced in the first year?

a. I think this would be $a_n = a_{n-1} + 1$, but something tells me it might be $a_n = a_{n-1} + n$, I am not sure.
b. Would this just be 1+2+3+4+5+6+7+8+9+10+11+12? Or, plugging in 12 for the above equation? Thanks.

• November 29th 2012, 05:24 PM
Stephen347
Re: Basic Recurrence Relation Problem

Hi Walshy,
Let's make a little diagram that we can check all of our answers with.

| Month | Cars Made This Month | Cars Made to Date |
|-------|----------------------|-------------------|
| 1     | 1                    | 1                 |
| 2     | 2                    | 3                 |
| 3     | 3                    | 6                 |
| 4     | 4                    | 10                |
| 5     | 5                    | 15                |
| ...   | ...                  | ...               |
| n     | $a_n$                | $S_n$             |

A. Following the pattern, $S_n = S_{n-1} + a_n$. Yet $a_n = n$, so the number of cars made to date in the nth month is $S_n = S_{n-1} + n$. But $S_{n-1} = S_{n-2} + (n-1)$, so $S_n = S_{n-2} + (n-1) + n$. Thus, continuing its recursive definition, $S_n = S_{n-3} + (n-2) + (n-1) + n$. Continuing this until we get down to the first month, this is $S_n = 1 + 2 + \cdots + n$, which can be shown to equal (by the sum of an arithmetic sequence):
$S_n = \frac{n(n+1)}{2}$
With a quick check of our table, we can see that this gives us the third entry in every row.
$\frac{1\cdot 2}{2} = 1$
$\frac{2\cdot 3}{2} = 3$
$\frac{3\cdot 4}{2} = 6$
...
B. After the first year (n = 12), we can either substitute n = 12 into the formula above or do 1 + 2 + ... + 12. Either way, we get 78 cars produced.
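Since the thread is about checking a recurrence against a closed form, a few lines of Python make the cross-check mechanical (the function name here is mine):

```python
def cars_to_date(n):
    # the recurrence S_n = S_{n-1} + n, unrolled as a running total
    s = 0
    for month in range(1, n + 1):
        s += month
    return s

for n in range(1, 13):
    assert cars_to_date(n) == n * (n + 1) // 2   # matches the closed form

print(cars_to_date(12))   # 78 cars in the first year
```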
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8832105994224548, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/28601/proof-of-existence-of-lowest-temperature-0-k/29511
# Proof of existence of lowest temperature $0 K$

In mathematics there is a concept of infinity, meaning that whenever you pick a number and say that it is the smallest/largest, there is a way to further reduce/increase that number by subtracting/adding any other number. But in physics/chemistry I see that the absolute temperature does not have a negative reading and the lowest temperature is $0 K$. What is the proof that temperatures below zero cannot exist? - 2 – Qmechanic♦ May 19 '12 at 12:11

## 3 Answers

In physics, temperature and other concepts in "thermodynamics" (which was known for centuries from macroscopic analyses of heat engines and similar systems) are given by a more fundamental theory, the so-called "statistical mechanics". According to statistical mechanics, the thermal phenomena are explained by the motion of the atoms and the various states in which the atoms may be found (and the number of these states). In particular, the probability $p_k$ of a state $k$ (in classical physics, the state is described e.g. by the location and velocity of each particle) is given by $$p_k = C \exp(-E_k/k_BT)$$ where $E_k$ is the energy of the state $k$, $k_B$ is Boltzmann's constant converting kelvins to joules, and $T$ is the absolute temperature in kelvins. The coefficient $C$ is a "normalization factor" that is $k$-independent and chosen so that the sum of $p_k$ over $k$ is equal to one (the total probability).

This form makes it clear that $T\lt 0$ isn't allowed: the exponential would be growing with $E_k$, and because there are infinitely many states with ever larger values of $E_k$ (the kinetic energy may grow arbitrarily high, in particular), the probabilities would be getting larger and their sum would diverge: it couldn't be normalized to one.

Before this statistical explanation involving Boltzmann's constant was known, the temperature was a phenomenological quantity measured by a thermometer. One was actually uncertain about any redefinition $T\to f(T)$ where $f(T)$ is a monotonically increasing function. In principle, one may relabel $T$ so that zero kelvins gets mapped to $-\infty$ in another convention for the temperature, for example; try $T_\text{new convention} = \ln (T)$. However, the ideal gases obeyed $pV = nRT$, so at a fixed pressure, the volume of some gas was proportional to the absolute temperature – the same one as in statistical mechanics, without any redefinition by a function $f$. So people knew how to measure the "right absolute temperature" even well before statistical mechanics was understood. The usual thermometers relied on the expansion of liquids etc. which are not ideal gases but they're close enough.

For ideal gases, where the absolute temperature is proportional to the volume, the statement that $T\gt 0$ is equivalent to the statement that the volume of the ideal gas cannot be negative. You cool it down and it shrinks, but it can't shrink below zero.

Volume is about the "shape", but the underlying reason for the positivity of temperature isn't about locations; it is about the motion. Any physical object with quadratic degrees of freedom will carry $k_BT/2$ of kinetic energy per degree of freedom. Again, because the quadratic kinetic energy of the type $mv_x^2/2$ can't be negative, the absolute temperature can't be negative, either.

In lasers and similar devices, one may formally find negative absolute temperatures when the number of atoms at a higher energy level is greater than the number of atoms at a lower energy level.
However, this negative temperature can't be brought to equilibrium with all degrees of freedom in a larger object because the number of high-energy states is always divergent. In lasers, one kind of abuses the fact that the energy of the "interesting degrees of freedom" is bounded both from below and from above (we only allow two or few levels for each atom). - 4 Spot on physics +1, but no good history--- the recognition of absolute temperature (no reparametrization freedom) came with Carnot, who defined the absolute temperature as the T in $dS={dQ\over T}$, the identification of this with ideal-gas temperature was instantaneous and I think around the 1820s (maybe 1830s). The recognition of conservation of energy dates to the 1840s, and the statistical interpretation dates to the 1860s-1870s when Boltzmann starts to write. Boltzmann's statistical interpretation is not accepted universally until 1910 or so, and some debate lingers into the 1920s. – Ron Maimon May 19 '12 at 4:28 Slight wrinkle: this assumes energy levels unbounded from above, which is not true for many condensed matter systems. In those cases, one can certainly have negative temperature states. Of course one can wave "it's only a meta-stable state", but when there is a good separation of timescales it doesn't matter. – genneth May 19 '12 at 11:39 Thanks, Ron. Genneth, isn't that what the last paragraph of my text is about? – Luboš Motl May 19 '12 at 18:09 Sorry; somehow I completely missed that the first time round. All good then :-D – genneth May 20 '12 at 16:06

Zero kelvin is the temperature at which there is no thermal motion. Since temperature by definition is the average thermal motion (really kinetic energy) of an ensemble of molecules, it is a matter of definition that there cannot be a lower temperature than zero K -- because there is no such thing as "negative" motion. A good analogy is a batting average. You can't have a negative batting average because there are no "negative" hits. Or, as they told me when I was an intern -- you can't get less than zero sleep. - 3 "temperature by definition is the average thermal motion (really kinetic energy) of an ensemble of molecules" this works only in the classical case; a Fermi gas actually has very high average kinetic energy even at 0K. – C.R. May 19 '12 at 2:26 Can you explain? – Richardbernstein May 19 '12 at 3:49 Downvoted because this is just too limited a definition. Temperature is the gradient of internal energy to entropy for equilibrium states, nothing else. – genneth May 19 '12 at 11:38

I'd complete @Luboš Motl's answer with the fact that the relevant quantity in thermodynamics is the coldness $\beta=\frac1{k_BT}$, in ${\mathrm J}^{-1}$. (You can see this answer by Arnold Neumaier for references.) It basically comes from the definition of thermodynamic temperature: for systems where the entropy $S$ is a function of the energy $E$, the temperature $T$ can be defined as $$\frac1T=\frac{dS}{dE}$$ For systems at thermal equilibrium, $\beta$ varies from $+\infty$ for very cold systems down to $0$ for very hot systems. Your question can be rephrased as "What is the proof that there is no coldness above $+\infty$?", which seems obvious ... The next question is about coldness below 0. The short answer is: they exist in non-equilibrium systems, as discussed in this question. They are logically hotter than systems at equilibrium (with $\beta>0$).
Including those systems, $\beta$ can take any real value, with the hotter systems corresponding to the smaller $\beta$. This is of course also true for the temperature $T\propto\frac1\beta$, but the fact that any system with $T<0$ is hotter than any system with $T>0$ becomes less intuitive. -
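The divergence argument in the first answer is easy to see numerically. The sketch below is my own toy model, not anything from the thread: it uses an unbounded ladder of levels $E_k = k$ in units where $k_B = 1$. The partial sums of the Boltzmann weights settle down for $T > 0$ but explode for $T < 0$, so no normalization constant $C$ can exist there.

```python
import math

def partial_sum(T, terms):
    # partial sum of the Boltzmann weights exp(-E_k / T) with E_k = k
    return sum(math.exp(-k / T) for k in range(terms))

for T in (1.0, -1.0):
    print(T, [partial_sum(T, n) for n in (10, 20, 40)])
# T = +1.0: sums approach 1/(1 - e^{-1}) ~ 1.582, so C exists
# T = -1.0: sums grow like e^{terms}, so the probabilities cannot sum to 1
```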
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9495666027069092, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/140606-another-geometry-construction.html
# Thread:

1. ## Another Geometry Construction...

Construct a triangle, given the length of one side, and the lengths of the median and the altitude to that side.

2. Originally Posted by MATNTRNG
Construct a triangle, given the length of one side, and the lengths of the median and the altitude to that side.
I've attached a sketch of the construction. Keep in mind that the construction is only possible if $|h| \leq |m|$.

3. Originally Posted by MATNTRNG
Construct a triangle, given the length of one side, and the lengths of the median and the altitude to that side.
Construct a right triangle with the given altitude as its altitude, and the median as its hypotenuse. Now mark off a segment of the given side length along the third side, centred on the point where the hypotenuse (median) meets the base.
CB
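For anyone wanting to check the feasibility remark in reply 2 numerically, here is a coordinate version of the construction (the variable names are mine): put the given side on the x-axis with its midpoint at the origin; the apex must then sit at height $h$ and at distance $m$ from the midpoint, which is possible exactly when $h \le m$.

```python
import math

def triangle(a, m, h):
    # side a on the x-axis, midpoint at the origin; apex at height h and
    # distance m from the midpoint, so its x-coordinate is +/- sqrt(m^2 - h^2)
    if h > m:
        raise ValueError("impossible: altitude exceeds median")
    x = math.sqrt(m * m - h * h)
    return (-a / 2, 0.0), (a / 2, 0.0), (x, h)

A, B, C = triangle(a=4.0, m=2.5, h=1.5)
print(math.hypot(C[0], C[1]), C[1])   # median length 2.5, altitude 1.5
```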
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.878836989402771, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/147246/does-int-0-infty-frac-cos2xx25x11dx-converge-or-diverge
# Does $\int_0^\infty\frac{\cos^2x}{x^2+5x+11}dx$ converge or diverge?

When I'm learning convergence, my teacher just showed me conditions for whether something converges or not. But I haven't met a function that contains both a trigonometric part and an ordinary polynomial. When I asked one of my friends, he told me to use a Taylor expansion of $\cos x$, but I'm afraid that is not a good solution. $$\int_0^\infty\frac{\cos^2(x)}{x^2+5x+11}dx$$ Thanks :) -

## 3 Answers

This is one of those questions that looks far worse than it really is. So we consider $\displaystyle\int_0^\infty\frac{\cos^2x}{x^2+5x+11}dx$ Note immediately that the denominator $x^2 + 5x + 11 > 11$ over the domain, and the numerator is bounded above by $1$. So on any finite subinterval, this integral is bounded. More to the point, we might consider only $\displaystyle \int_1^\infty\frac{\cos^2x}{x^2+5x+11}dx$, as the integral from $0$ to $1$ is bounded. But $\displaystyle \int_1^\infty\frac{\cos^2x}{x^2+5x+11}dx < \int_1^\infty\frac{1}{x^2+5x+11}dx < \int_1^\infty\frac{1}{x^2}dx$ And there we have it. -

Let $$I = \int_0^{\infty} \frac{\cos^2(x)}{x^2 + 5x+11} dx$$ The integrand is non-negative and since $\cos^2(x) \in [0,1]$, $\forall x \in \mathbb{R}$, we get that $$0 \leq I = \int_0^{\infty} \frac{\cos^2(x)}{x^2 + 5x+11} dx \leq \int_0^{\infty} \frac1{x^2 + 5x+11} dx = \int_0^{\infty} \frac{dx}{(x+5/2)^2 + 19/4}$$ Now make use of the identity $$\int \frac{dx}{x^2 + a^2} = \frac{\arctan (x/a)}a$$ and bound the integral $I$. Hence, we get that $$0 \leq I \leq \left. \frac{2}{\sqrt{19}} \arctan \left(\frac{2x+5}{\sqrt{19}} \right) \right \rvert_{0}^{\infty} = \frac{2}{\sqrt{19}} \left( \frac{\pi}{2} - \arctan \left(\frac{5}{\sqrt{19}}\right) \right) = \frac{\pi - 2 \arctan \left( \dfrac{5}{\sqrt{19}} \right)}{\sqrt{19}} \approx 0.328984$$ EDIT Another quick argument is along the lines of what mixedmath has done but there is no need to split the integral over two regions $$I = \int_0^{\infty} \frac{\cos^2(x)}{x^2 + 5x+11} dx \leq \int_0^{\infty} \frac1{x^2 + 5x+11} dx \leq \int_0^{\infty} \frac{dx}{(x+5/2)^2} = \left. \left(-\frac1{x+5/2} \right) \right \rvert_{0}^{\infty} = \frac25$$ -

You can see that $\displaystyle \int_0^\infty \dfrac{\cos^2 x}{x^2 + 5x + 11} dx \le \int_0^\infty \dfrac{1}{x^2+5x+11}dx$ while the RHS expression converges, thus your integral is convergent too. When it comes to using its Taylor series: after expanding, you can go on by proving that the expansion converges uniformly (there may be other ways than this). After that, you can integrate the series term by term; the problem usually comes when you have to evaluate the resulting series. -
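As a sanity check on the bounds above, one can also integrate numerically (SciPy assumed). Cutting the integral off at a finite point and bounding the tail by $\int_M^\infty x^{-2}\,dx = 1/M$ avoids asking the quadrature routine to track the oscillation out to infinity:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.cos(x) ** 2 / (x ** 2 + 5 * x + 11)

value, err = quad(f, 0, 100, limit=400)   # main part of the integral
tail = 1 / 100                            # integrand <= 1/x^2 beyond x = 100
print(value, "+ tail <", tail)            # finite, well below the 0.329 bound
```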
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496804475784302, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51449?sort=votes
## Homogeneous linear stochastic DE with noncommuting coefficients

The system I am studying can be reduced to a Stratonovich vector stochastic differential equation $dX = A X \; dt + \sum B_k X \circ dW_k$ with $W_k$, $k=1,\dots,m$ the Brownian motion in $m$ dimensions, $X$ the unknown process in $n$ dimensions, and $A$ and $B_k$ matrices that in the simple case are constant. However, they do not commute, so we cannot express the solution as a simple exponential $X = \exp(A t + \sum_k B_k W_k)$.

Are there any general methods to solve such systems, or to prove interesting properties of the solutions, or perhaps to express solutions in terms of some other standard processes (rather than $W_k$)? Any pointers to books or papers would be appreciated. In particular, if an explicit solution is not possible, are there any techniques to compute/write an ODE for the expectation $E X_1/X_2$ (i.e. the ratio of two components of the process)?

I have briefly looked through the book "Stochastic Flows and Stochastic Differential Equations" by H. Kunita; from what I could understand it seems that an explicit solution in terms of exponentials of combinations of $W_k$ is possible when the Lie algebra corresponding to the $A$ and $B_k$ is solvable. Unfortunately in my case it is not, and the book does not seem to comment on the general case.

As a simple illustration of the problem consider the following equation on the unit sphere in 3d: $dX = a \times X \; dt + b \times X \circ dW$ where $a$ and $b$ are some non-collinear constant vectors, and $\times$ is the vector cross product. Informally, at each time point we rotate $X$ around $a$ proportionally to $|a| dt$ and around $b$ proportionally to $|b| dW$. What can be said about the resulting solution $X$? Can the resulting random walk be expressed explicitly in some form? -

## 1 Answer

It may be useful to look at the Magnus series for the solution, especially if you're in a Lie algebra setting such as your example on the sphere. This series writes the solution as $X = \exp(\Omega(t))$ where $\Omega(t)$ is an infinite series which starts with the terms you wrote down: $\Omega(t) = At + \sum_k B_k W_k(t) + \cdots$. I don't know whether this approach is useful in your setting and I don't have time at the moment, but you can find more in the recent review paper Blanes et al., "The Magnus expansion and some of its applications" and references therein. - Thanks for pointing out the review paper - the references dealing with numerical integration schemes are indeed quite relevant. – demonc Jan 8 2011 at 14:36
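In the absence of a closed form, the sphere example is at least easy to explore numerically. The sketch below is my own construction (SciPy's `expm` assumed), not something from the thread: it uses a geometric step that applies the rotation $\exp([a]_\times\,dt + [b]_\times\,\Delta W)$ over each interval, i.e. the first terms of the Magnus series mentioned in the answer, which keeps $|X|=1$ exactly. Treat it as a simulation sketch, not a solution.

```python
import numpy as np
from scipy.linalg import expm

def hat(v):
    # cross-product matrix: hat(v) @ x == np.cross(v, x)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def simulate(a, b, X0, T=10.0, steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    A, B = hat(a), hat(b)
    X = np.array(X0, dtype=float)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        X = expm(A * dt + B * dW) @ X    # an exact rotation each step
    return X

X = simulate(a=[0.0, 0.0, 1.0], b=[1.0, 0.0, 0.0], X0=[0.0, 1.0, 0.0])
print(X, np.linalg.norm(X))              # the norm stays 1 up to rounding
```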
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408268928527832, "perplexity_flag": "head"}
http://mathoverflow.net/questions/60633/is-there-any-relation-between-deformation-and-extension-of-lie-algebras
## Is there any relation between deformation and extension of Lie algebras?

In a paper of A. Weinstein on the geometry of Poisson manifolds, he relates the formal linearization around a zero, p, of the Poisson bivector to extensions of the Lie algebra induced by the bivector on the tangent space over p. I wanted to know if this is part of a big picture, possibly relating deformations of Lie brackets to some extensions of Lie algebras. -

## 3 Answers

In fact the picture is extremely simple and works indeed for any type of algebra, as follows. Let $\mu$ be a Lie algebra structure on a vector space $V$, and let $c$ be a two-cocycle $c\in CE^2(V;M)$ (Chevalley-Eilenberg cohomology), where $M$ is a module over the Lie algebra. The extension of $\mu$ by $c$ is nothing else than a deformation of $\mu$, but in the space of Lie algebras on the vector space $V\oplus M$. The deformed Lie algebra has a bracket $\mu'$ given by $\mu'(v+m,\,v'+m')=(\mu(v,v'),\; v\cdot m'-v'\cdot m+c(v,v'))$. One can easily check that the Jacobi condition for the deformed algebra $\mu'$ is equivalent to the data of the Jacobi condition for $\mu$, the module structure of $M$ and the cocycle condition for $c$. From this point of view one can also view $\mu'$ as the semi-direct product of $\mu$ and $M$ when the cocycle $c$ is null. -

There are big pictures that I'll let others describe. Here's a little picture which cogeneralizes the Weinstein remark. (To "cogeneralize" is to make more specific, rather than less.) Recall that a Lie bialgebra is a vector space $\mathfrak g$ with a "Lie bracket" $\mathfrak g^{\wedge 2} \to \mathfrak g$ satisfying Jacobi, a "Lie cobracket" $\mathfrak g \to \mathfrak g^{\wedge 2}$ satisfying Jacobi, and such that the two structures satisfy a compatibility condition which has lots of equivalent formulations: one of them is that the cobracket is a 1-cochain for the Chevalley-Eilenberg complex of $\mathfrak g$ with values in $\mathfrak g^{\wedge 2}$ (diagonal adjoint action).

Then the first result you prove about these things is: Any such structure defines (and is equivalent to) an "extension" (although it's not a short exact sequence), called the double of $\mathfrak g$. As a vector space, the double is the sum $\mathfrak g \oplus \mathfrak g^\ast$ (where $\mathfrak g^\ast$ is the dual vector space, and is a Lie algebra by turning around the Lie cobracket), and indeed each of the summands $\mathfrak g,\mathfrak g^\ast$ inside the double is a Lie subalgebra. The two terms do interact: they interact in the unique way making the canonical pairing $(\mathfrak g \oplus \mathfrak g^\ast)^{\otimes 2} \to \mathbb k$ ad-invariant. For various equivalent descriptions, and if you want to see this all in pictures, I have a short expository note on Lie bialgebras at http://math.berkeley.edu/~theojf/GraphicalLanguage.pdf .

Anyway, why is this a cogeneralization of what Alan's doing? There is a generalization of Lie algebra to Lie algebroid, which I can define if you like, but I would assume that it's in Alan's paper, and one example of a Lie algebroid is that the tangent bundle of a manifold has a canonical Lie algebroid structure. A Lie algebroid structure on the cotangent bundle is precisely the same as a Poisson bivector.
So a Poisson manifold is (almost) an example of a "Lie bialgebroid", because the tangent bundle is both an algebroid and a coalgebroid. I say "almost", because a priori there is no cocycle condition. But the linearization of a Poisson structure near a zero thereof I think should satisfy a cocycle condition --- I haven't worked out the details, so take this paragraph with a grain of salt.

Anyway, having not read this paper, I'm not sure if I've answered the question you asked, or a related one. - $\otimes 2$ should be an exponent, right? – darij grinberg Apr 5 2011 at 15:32 @darij: yes, fixed. – Theo Johnson-Freyd Apr 22 2011 at 3:16

Hinich says in "Deformation theory and Lie algebra homology" that Grothendieck says that "to each deformation problem we can assign a sheaf of Lie algebras over X; the sheaf of infinitesimal automorphisms". - @cdm80: Thank you! But where does the Lie algebra extension show up? Is it possible to associate such an extension to that sheaf of infinitesimal automorphisms? I will take a look at that Hinich paper. – Feri Apr 5 2011 at 2:50
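To spell out the "easy check" in the first answer above: evaluating the Jacobi identity for $\mu'$ on elements with zero $M$-component separates into a $V$-part and an $M$-part (sign conventions for the Chevalley-Eilenberg differential vary by author, so take the labels below with that grain of salt). On $(v,0),(v',0),(v'',0)$ one has $$\mu'\big(\mu'(v,v'),v''\big) = \Big(\mu\big(\mu(v,v'),v''\big),\; c\big(\mu(v,v'),v''\big) - v''\cdot c(v,v')\Big),$$ so the cyclic sum vanishes iff the $V$-component gives Jacobi for $\mu$ and the $M$-component gives $$\sum_{\mathrm{cyc}}\Big(c\big(\mu(v,v'),v''\big) - v''\cdot c(v,v')\Big) = 0,$$ which is (up to those sign conventions) the cocycle condition $dc = 0$ in $CE^2(V;M)$. The mixed terms with nonzero $M$-components reduce in the same way to the module axioms for $M$.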
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917255163192749, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/171945/what-does-the-variance-sd-of-a-set-signify?answertab=votes
# What does the variance/SD of a set signify?

Assuming I have a huge data set and the only attribute I know about it is the variance (or SD, since SD = $\sqrt{\text{Variance}}$). What conclusions can I make about the set with reasonable certainty? I don't know the mean, median, mode, presence of outliers or any other information. Can I make any conclusions beyond "how much the values deviate from the mean"? - 1 See the chebyshev inequality. – PeterR Jul 17 '12 at 14:37

## 1 Answer

The Chebyshev inequality tells you how much probability content is guaranteed to be within k standard deviations of the mean for any distribution with a finite variance. The bound is $1 - 1/k^2$. So for $k \le 1$ nothing is guaranteed; for $k=2$ it guarantees 0.75, and for $k=3$, 0.89 approximately (actually 8/9). This can be contrasted to the normal distribution, which includes probability 0.68 within 1 standard deviation, 0.954 within 2 standard deviations, and 0.9973 within 3 standard deviations. Chebyshev will always give a value less than or equal to the actual probability of any particular distribution because it has to hold for every distribution with finite variance. -
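For a side-by-side view of how conservative the Chebyshev bound is, here is a small comparison against the exact normal-distribution probabilities (SciPy's normal CDF assumed):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    chebyshev = 1 - 1 / k**2              # guaranteed mass within k SDs
    exact_normal = norm.cdf(k) - norm.cdf(-k)
    print(k, round(chebyshev, 4), round(exact_normal, 4))
# k=1: 0.0    vs 0.6827
# k=2: 0.75   vs 0.9545
# k=3: 0.8889 vs 0.9973
```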
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915240466594696, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/198451/what-is-wrong-with-this-proof-of-wedderburns-little-theorem/198472
# What is wrong with this proof of Wedderburn's little theorem? Wedderburn's little theorem $\quad$ every finite domain $A$ is a field. Proof $\quad$ Let $x$ be a nonzero element of $A$. Because $A$ is finite, there exist positive integers $n$, $k$ such that $x^n = x^{n + k}$. It is easy to see by induction that the set $E = \left \{x^i : i \in \mathbf{N}^*\right\}$ does not contain $0$; it follows therefore from $x^n\left(1 - x^k\right) = 0$ that $x^k = 1$. Thus, $x^{k - 1}$ is the inverse of $x$ (when $k = 1$, $x$ has inverse $1$). All the proofs I have seen of this result are much more sophisticated than mine. Hence, I am doubting its correctness and could use a second opinion. - 4 Your proof only shows that every commutative domain is a field. Wedderburn's theorem states moreover that your $A$ is commutative. – martini Sep 18 '12 at 10:15 4 @martini: You could turn that comment into an answer so the question can be marked resolved. – joriki Sep 18 '12 at 10:21 1 I think the word "domain" is ambiguous. It should be called "not necessarily commutative domain" though it's awkward. – Makoto Kato Sep 18 '12 at 11:20 2 The existence of inverses can also be proved by considering the map $y \mapsto xy$ and arguing that it is injective and hence surjective. – lhf Sep 18 '12 at 11:28 ## 1 Answer Wedderburn's little theorem states as you wrote above, that a finite domain is a field. A field is a commutative domain $A$, such that every nonzero $x \in A$ has a multiplicative inverse. Your proof only shows that any finite domain is a skew field. You must also prove that $A$ is commutative, which needs more sophisticated arguments. - Well, this is embarrassing. I keep thinking that "field" is the translation for "corps" (French), which are not assumed to be commutative. Thank you. – user25784 Sep 18 '12 at 11:22 @user25784 Indeed I believe Fields were originally not assumed to be commutative, especially by french mathematicians. Andre Weil's number theory text follows that convention. – Ragib Zaman Sep 18 '12 at 11:46
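The pigeonhole step in the question's proof is easy to watch in action in the familiar finite domain $\mathbb{Z}/p$ (a toy illustration; the variable names are mine):

```python
# Track powers of a nonzero x in Z/11 until one repeats: x^n = x^(n+k),
# and cancelling gives x^k = 1, so x^(k-1) is the inverse of x.
p, x = 11, 7
seen = {}
power, i = 1, 0
while True:
    i += 1
    power = power * x % p
    if power in seen:
        n, k = seen[power], i - seen[power]
        break
    seen[power] = i

print(f"x^{n} = x^{n + k}, so x^{k} = 1 and x^{k - 1} is the inverse:",
      pow(x, k, p), pow(x, k - 1, p))
assert x * pow(x, k - 1, p) % p == 1
```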
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517985582351685, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/36359/gentle-introduction-to-twistors/36391
# Gentle introduction to twistors

When reading about the twistor uprising or trying to follow a corresponding Nima talk, it always annoys me that I have no clue about how twistor space, the twistor formalism, or twistor theory works. First of all, are these three terms some kind of synonyms, or what is the relationship between them? Twistors are just a deep black gap in my education. I've read the Road to Reality but I just did not get it from the relevant chapter therein, maybe because I could not understand the one or two chapters preceding it either ... :-/ So can somebody point me to a gentle, but nevertheless slightly technical source that explains twistors step by step (similar to a demystified book...) such that even I can get it, if something like this exists? Since I think I'd really have to "meditate" about it a bit, I'd prefer something written I can print out, but nevertheless I would appreciate video lectures or talks too. - – Dilaton Sep 13 '12 at 19:55

## 2 Answers

:-) The best gentle introduction to basic twistor theory that I know of is the book by Huggett and Tod. If you don't have access to that book and some other answers don't surface in the meantime, I'm happy to write a few bits and pieces here, but will have to wait until the weekend. (I may be biased, but I think it's well worth learning, as the MHV amplitude applications are extremely interesting.)

Edit: Here are a few paragraphs to give a flavor of twistor theory:

Twistor theory makes extensive use of Weyl spinors, which form representations of $SL(2;\mathbb{C})$ - the double cover of the (restricted) Lorentz group. These come in two varieties – unprimed spinors $\omega_A$ transforming according to the fundamental representation, and primed spinors $\omega_{A'}$ transforming according to the conjugate representation. (Note in much of the modern literature, primed and unprimed are denoted by dotted $\lambda_{\dot{a}}$ and undotted.) Spinor indices are raised and lowered using the antisymmetric spinor $$\epsilon_{AB}=\epsilon_{A'B'}=\epsilon^{AB}= \epsilon^{A'B'} = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right)$$ Minkowski-space vectors $x^a$ can be put into correspondence with two-index unprimed/primed spinors by writing $$x^{AA'} = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} x^0+x^1 & x^2+ix^3 \\ x^2-ix^3 & x^0-x^1 \end{array} \right)$$ Now if we take an unprimed/primed spinor pair $(\omega^A, \pi_{A'})$, then the set of Minkowski vectors which satisfy $$\omega^A=ix^{AA'}\pi_{A'} \ \ \ (1)$$ is a null line in Minkowski space provided we impose the reality condition $$\omega^A{\bar{\pi}}_{A}+{\bar{\omega}}^{A'}\pi_{A'}=0$$ The pair of spinors is referred to as a twistor $Z^{\alpha} = (\omega^A, \pi_{A'})$. The space of such four-component objects is "twistor space" $\mathbb{T}$, upon which we define a Hermitian form via the conjugation operation $${\bar{Z}}_0 = \bar{Z^2} = {\bar{\pi}}_0$$ $${\bar{Z}}_1 = \bar{Z^3} = {\bar{\pi}}_1$$ $${\bar{Z}}_2 = \bar{Z^0} = {\bar{\omega}}^{0'}$$ $${\bar{Z}}_3 = \bar{Z^1} = {\bar{\omega}}^{1'}$$ The reality condition above is then expressible as $Z^{\alpha}{\bar{Z}}_{\alpha}=0$, and twistors satisfying this condition are called null twistors. The locus of points in Minkowski space satisfying (1) is unchanged if we multiply the twistor $Z^{\alpha}$ by any non-zero complex number. In fact it proves extremely useful to impose this as an equivalence relation on $\mathbb{T}$ and work with its projective version $P\mathbb{T}$.
Projective null twistors, then, correspond to light rays in Minkowski space. The correspondence between (projective) twistor space and Minkowski space is made more complete if we attach to Minkowski space its conformal boundary (light cone at infinity) and if we complexify it. We are then dealing with complexified, compactified Minkowski space $\mathbb{C}M$ and twistors (we will always mean projective twistors) correspond to totally null two-planes (called alpha planes) in $\mathbb{C}M$. The alpha planes corresponding to null twistors (such objects live in a subspace of $P\mathbb{T}$ called $PN$) will intersect the real slice of $\mathbb{C}M$ in null rays. Conversely a point x in real Minkowski space defines a set of null rays – the ones defining the null cone at that point. There is a two-sphere’s worth of such rays (the celestial sphere), and the set of twistors defining these rays defines a subset of $PN$ having the topology of a two-sphere, but more importantly having the complex structure of a $\mathbb{C}P^1$, and known as a projective line (or just “line”). Figure 1 shows a point x in Minkowski space and the corresponding line $L_x$ in $PN$, and also a pair of twistors $Z$ and $W$ on $L_x$ and the null rays $\gamma_Z$ and $\gamma_W$ they correspond to. Now the fun starts when you consider functions on twistor space. Suppose we consider a function homogeneous of degree zero (i.e. $f(\lambda Z^{\alpha}) = f(Z^{\alpha}); \lambda \in \mathbb{C}^*$). We then define the field on spacetime: $$\phi_{AB}(x) = \oint{\rho_x(\frac{\partial}{\partial \omega^A} \frac{\partial}{\partial \omega^B}f(\omega^A, \pi_{A’}))\pi_{C’}d\pi^{C’}}$$ where $\rho_x$ means “impose the restriction (1)”. To get a non trivial field, the function f needs to have singularities on twistor space, i.e it mustn’t be holomorphic everywhere. For example it can have poles. The contour used is on the projective line $L_x$ and avoids the singularities of f. The field defined in this way satisfies $$\nabla^{AA'} \phi_{AB} = 0 \ \ \ (2)$$ Where $$\nabla_{AA’} = \frac{\partial}{\partial x^{AA’}}$$ We can decompose an antisymmetric electromagnetic field tensor into its anti self-dual and self-dual parts respectively as $$F_{ab} = F_{AA'BB'} = \phi_{AB}\epsilon_{A'B'} +{\tilde{\phi}}_{A'B'}\epsilon_{AB}$$ Then (2) represents the (source free) Maxwell equations (for anti self dual Maxwell fields). The correspondence between twistor functions and anti-self-dual solutions of the Maxwell equations is not unique. However, treating the twistor functions as representatives of certain sheaf cohomology classes does give a unique correspondence. Choosing twistor functions with other homogeneities gives rise to other types of field (symmetric spinors with other numbers of primed or unprimed indices satisfying equations similar to (2)). For example the equations for self dual Maxwell fields $$\nabla^{AA'} \phi_{A’B’} = 0$$ are given by a (slightly different) contour integral involving twistor functions of homogeneity -4: $$\phi_{A'B'}(x) = \oint{\rho_x(\pi_{A'}\pi_{B'}f(\omega^D, \pi_{D'}))\pi_{C'}d\pi^{C'}}$$ Other ways of using the twistor correspondence exist, for example a correspondence can be set up for fields on a real space with Euclidean signature. This programme led to the construction of self dual solutions of the Yang Mills equations on $S^4$ (the compactification of $\mathbb{R}^4$). 
In this case, the correspondence is between self dual Yang Mills fields on $S^4$ and holomorphic bundles on twistor space which are (holomorphically) trivial on projective lines in twistor space (and which have various other conditions depending on the structure group of the Yang Mills theory you’re interested in). Both twistor space and Minkowski space can be “thickened” by adding Grassmannian coordinates and in this way supersymmetric versions of the twistor correspondences of the type illustrated above can be given. This has been used in the treatment of Supersymmetric Yang Mills theory. - Hi Twistor :-))), thanks a lot. From afar the book looks very nice and it seems to contain many things I always wanted to know and to be quite accessible to me :-). I would appreciate it if you could write some further more detailed information as you find time for it. Cheers – Dilaton Sep 13 '12 at 21:23 I accept this answer because I think it is easier for me to start with this book before I read Nair's lecture notes. – Dilaton Sep 15 '12 at 17:03 1 @Dilaton Yes, I think it is an easier space to start. Although I've seen Nair's work, I hadn't seen that set of notes, identified by David, before. It's probably a good idea to get a feel for "conventional" twistors before flipping to their supersymmetric versions. – twistor59 Sep 16 '12 at 7:29 1 @Dilaton: for example you might see the anticommutator of two supercharges written with dotted indices as $\{Q_{\alpha},{\bar{Q}}_{\dot{\alpha}}\} = 2\sigma^{\mu}_{\alpha \dot{\alpha}}P_{\mu}$ This would, in the notation of this post, be written $\{Q_A, {\bar{Q}}_{A'}\} = 2\sigma^{\mu}_{AA'}P_{\mu}$ – twistor59 Sep 17 '12 at 12:42 1 In some circumstances, people are using a spinor labelled with a barred symbol to mean an independent entity from the unbarred spinor, and in some circumstances they use the barred spinor to mean the conjugate of the unbarred one, if the former, most authors will state this explicitly. Applying conjugation converts the spinor index undotted->dotted or vice versa. – twistor59 Sep 17 '12 at 12:48 show 1 more comment I would like to recommend to you the following lecture notes by V.P. Nair. These lecture notes contain a very concise chapter about twistors, their relation to massless wave equations and their use in the construction of Yang-Mills amplitudes. The importance of this work to me is that, here, Nair connects these two applications to another (may be less famous) application of twistors in the theory of quantization on geometrically nontrivial manifolds (such as the quantization problem of a particle moving on the two sphere in the presence of a monople). - Thanks, this seems to explain the things I finally want to know ... :-) – Dilaton Sep 14 '12 at 8:47
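One small fact worth extracting from the long answer above: the two-index correspondence $x \mapsto x^{AA'}$ introduced near its start turns the Minkowski norm into a determinant, $\det x^{AA'} = \tfrac12(x_0^2 - x_1^2 - x_2^2 - x_3^2)$, so null vectors are exactly the decomposable (rank-1) spinor matrices. A symbolic spot-check (SymPy assumed):

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', real=True)
X = sp.Matrix([[x0 + x1, x2 + sp.I * x3],
               [x2 - sp.I * x3, x0 - x1]]) / sp.sqrt(2)

# det x^{AA'} = (x0^2 - x1^2 - x2^2 - x3^2) / 2, half the Minkowski norm
print(sp.simplify(X.det() - (x0**2 - x1**2 - x2**2 - x3**2) / 2))   # 0
```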
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156327247619629, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/268002/gelfand-naimark-theorem
Gelfand-Naimark Theorem The Gelfand–Naimark Theorem states that an arbitrary C*-algebra $A$ is isometrically *-isomorphic to a C*-algebra of bounded operators on a Hilbert space. There is another version, which states that if $X$ and $Y$ are compact Hausdorff spaces, then they are homeomorphic iff $C(X)$ and $C(Y)$ are isomorphic as rings. Are these two related anyway? - 2 Answers The first result that you stated is commonly known as the Gelfand-Naimark-Segal Theorem. It is true for arbitrary C*-algebras, and its proof employs a technique known as the GNS-construction. This technique basically allows one to construct a Hilbert space $\mathcal{H}$ from a given C*-algebra $\mathcal{A}$ such that $\mathcal{A}$ can be isometrically embedded into $B(\mathcal{H})$ as a C*-subalgebra. The Gelfand-Naimark Theorem, on the other hand, states that every commutative C*-algebra $\mathcal{A}$, whether unital or not, is isometrically *-isomorphic to ${C_{0}}(X)$ for some locally compact Hausdorff space $X$. When $X$ is compact, ${C_{0}}(X)$ and $C(X)$ become identical. Note: The assumption of commutativity is essential for stating the Gelfand-Naimark Theorem. This is because we cannot realize a non-commutative C*-algebra as the commutative C*-algebra ${C_{0}}(X)$, for some locally compact Hausdorff space $X$. What follows is a statement of the Gelfand-Naimark Theorem, with the utmost level of precision. Gelfand-Naimark Theorem Let $\mathcal{A}$ be a commutative C*-algebra. If $\mathcal{A}$ is unital, then $\mathcal{A}$ is isometrically *-isomorphic to $C(X)$ for some compact Hausdorff space $X$. If $\mathcal{A}$ is non-unital, then $\mathcal{A}$ is isometrically *-isomorphic to ${C_{0}}(X)$ for some non-compact, locally compact Hausdorff space $X$. This result is often first established for the case when $\mathcal{A}$ is unital. One basically tries to show that the compact Hausdorff space $X$ can be taken to be the set $\Sigma$ of all non-zero characters on $\mathcal{A}$, where $\Sigma$ is equipped with a special topology. Here, a character on $\mathcal{A}$ means a linear functional $\phi: \mathcal{A} \to \mathbb{C}$ satisfying $\phi(xy) = \phi(x) \phi(y)$ for all $x,y \in \mathcal{A}$. A rough outline of the proof is given below. • Show that every character has sup-norm $\leq 1$. Hence, $\Sigma \subseteq {\overline{\mathbb{B}}}(\mathcal{A}^{*})$, where ${\overline{\mathbb{B}}}(\mathcal{A}^{*})$ denotes the closed unit ball of $\mathcal{A}^{*}$. • Equip ${\overline{\mathbb{B}}}(\mathcal{A}^{*})$ with the subspace topology inherited from $(\mathcal{A}^{*},\text{wk}^{*})$, where $\text{wk}^{*}$ denotes the weak*-topology. By the Banach-Alaoglu Theorem, ${\overline{\mathbb{B}}}(\mathcal{A}^{*})$ then becomes a compact Hausdorff space. • Prove that $\Sigma$ is a weak*-closed subset of $\left( {\overline{\mathbb{B}}}(\mathcal{A}^{*}),\text{wk}^{*} \right)$. Hence, $\Sigma$ becomes a compact Hausdorff space with the subspace topology inherited from $\left( {\overline{\mathbb{B}}}(\mathcal{A}^{*}),\text{wk}^{*} \right)$. • For each $a \in \mathcal{A}$, define $\hat{a}: \Sigma \to \mathbb{C}$ by $\hat{a}(\phi) \stackrel{\text{def}}{=} \phi(a)$ for all $\phi \in \Sigma$. We call $\hat{a}$ the Gelfand-transform of $a$. • Show that $\hat{a}$ is a continuous function from $(\Sigma,\text{wk}^{*})$ to $\mathbb{C}$ for each $a \in \mathcal{A}$. In other words, $\hat{a} \in C((\Sigma,\text{wk}^{*}))$ for each $a \in \mathcal{A}$. 
• Finally, prove that $a \longmapsto \hat{a}$ is an isometric *-isomorphism from $\mathcal{A}$ to $C((\Sigma,\text{wk}^{*}))$. Let us now take a look at the following theorem, which the OP has asked about. If $X$ and $Y$ are compact Hausdorff spaces, then $X$ and $Y$ are homeomorphic if and only if $C(X)$ and $C(Y)$ are isomorphic as C*-algebras (not only as rings). One actually does not require the Gelfand-Naimark Theorem to prove this result. Let us see a demonstration. Proof • The forward direction is trivial. Take a homeomorphism $h: X \to Y$, and define $h^{*}: C(Y) \to C(X)$ by ${h^{*}}(f) \stackrel{\text{def}}{=} f \circ h$ for all $f \in C(Y)$. Then $h^{*}$ is an isometric *-isomorphism. • The other direction is non-trivial. Let $\Sigma_{X}$ and $\Sigma_{Y}$ denote the set of non-zero characters of $C(X)$ and $C(Y)$ respectively. As $C(X)$ and $C(Y)$ are isomorphic C*-algebras, it follows that $\Sigma_{X} \cong_{\text{homeo}} \Sigma_{Y}$. We must now prove that $X \cong_{\text{homeo}} \Sigma_{X}$. For each $x \in X$, let $\delta_{x}$ denote the Dirac functional that sends $f \in C(X)$ to $f(x)$. Next, define a mapping $\Delta: X \to \Sigma_{X}$ by $\Delta(x) \stackrel{\text{def}}{=} \delta_{x}$ for all $x \in X$. Then $\Delta$ is a homeomorphism from $X$ to $(\Delta[X],\text{wk}^{*})$ (this follows from the fact that $X$ is a completely regular space). We will be done if we can show that $\Delta[X] = \Sigma_{X}$. Let $\phi \in \Sigma_{X}$. As $\phi: C(X) \to \mathbb{C}$ is surjective (as it maps the constant function $1_{X}$ to $1$), we see that $C(X)/\ker(\phi) \cong \mathbb{C}$. According to a basic result in commutative ring theory, $\ker(\phi)$ must then be a maximal ideal of $C(X)$. As such, $$\ker(\phi) = \{ f \in C(X) ~|~ f(x_{0}) = 0 \}$$ for some $x_{0} \in X$ (in fact, all maximal ideals of $C(X)$ have this form; the compactness of $X$ is essential). By the Riesz Representation Theorem, we can find a regular complex Borel measure $\mu$ on $X$ such that $\phi(f) = \displaystyle \int_{X} f ~ d{\mu}$ for all $f \in C(X)$. As $\phi$ annihilates all functions that are vanishing at $x_{0}$, Urysohn's Lemma implies that $\text{supp}(\mu) = \{ x_{0} \}$. Hence, $\phi = \delta_{x_{0}}$, which yields $\Sigma_{X} \subseteq \Delta[X]$. We thus obtain $\Sigma_{X} = \Delta[X]$, so $X \cong_{\text{homeo}} \Sigma_{X}$. Similarly, $Y \cong_{\text{homeo}} \Sigma_{Y}$. Therefore, $X \cong_{\text{homeo}} Y$ because $$X \cong_{\text{homeo}} \Sigma_{X} \cong_{\text{homeo}} \Sigma_{Y} \cong_{\text{homeo}} Y.$$ We actually have the following general categorical result. Let $\textbf{CompHaus}$ denote the category of compact Hausdorff spaces, where the morphisms are proper continuous mappings. Let $\textbf{C*-Alg}$ denote the category of commutative unital C*-algebras, where the morphisms are unit-preserving *-homomorphisms. Then there is a contravariant functor $\mathcal{F}$ from $\textbf{CompHaus}$ to $\textbf{C*-Alg}$ such that (1) $\mathcal{F}(X) = C(X)$ for all $X \in \textbf{CompHaus}$, and (2) $\mathcal{F}(h) = h^{*}$ for all proper continuous mappings $h$. If $h: X \to Y$, then $h^{*}: C(Y) \to C(X)$, which highlights the contravariant nature of $\mathcal{F}$. Furthermore, $\mathcal{F}$ is a duality (i.e., contravariant equivalence) of categories. The role of the Gelfand-Naimark Theorem in this result is to prove that $\mathcal{F}$ is an essentially surjective functor, i.e., every commutative C*-algebra can be realized as $\mathcal{F}(X) = C(X)$ for some $X \in \textbf{CompHaus}$. 
-

where can I find the proof of the last statement. – K.Ghosh Dec 31 '12 at 6:41
3 – Qiaochu Yuan Dec 31 '12 at 6:49
@K.Ghosh: I apologize that I took so long to reply. I was actually trying to make my answer more complete. – Haskell Curry Dec 31 '12 at 9:51
@Qiaochu: If I had seen that you had already posted the link, I would have spared myself the pain of having to provide so much detail in my posted solution. Anyway, thanks for providing the link! – Haskell Curry Dec 31 '12 at 9:53
2 – Martin Dec 31 '12 at 10:49

The second theorem you describe is the Banach-Stone theorem. The commutative Gelfand-Naimark theorem says something stronger, namely that every commutative (unital) C*-algebra is of the form $C(X)$ for some compact Hausdorff space $X$. The strongest version of the theorem says that the functor $X \mapsto C(X)$ is a contravariant equivalence of categories. I don't know the history here, but both Gelfand-Naimark theorems are "Cayley theorems" for C*-algebras, one saying that commutative C*-algebras can be represented faithfully as function spaces and the other saying that noncommutative C*-algebras can be represented faithfully as spaces of operators. -

2 – Martin Dec 31 '12 at 10:35
@Martin: thanks for the correction. Admittedly I did not read the Wikipedia article too closely... – Qiaochu Yuan Dec 31 '12 at 11:37
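As a concrete illustration of the commutative theorem and the Gelfand transform outlined above, here is the simplest example worked out; the choice $\mathcal{A} = \mathbb{C}^n$ is an assumption made purely for illustration and is not part of the original exchange.

```latex
% For A = C^n with pointwise operations and the sup norm, the non-zero
% characters are exactly the n coordinate evaluations, so the character
% space Sigma is an n-point discrete (hence compact Hausdorff) space:
\varphi_k(a_1,\dots,a_n) = a_k \quad (k = 1,\dots,n), \qquad
\Sigma = \{\varphi_1,\dots,\varphi_n\}.
% The Gelfand transform a -> a-hat, with a-hat(phi_k) = phi_k(a) = a_k,
% identifies C^n with the continuous functions on an n-point space, and
% this identification is an isometric *-isomorphism, as the theorem predicts:
\hat{a}(\varphi_k) = a_k, \qquad \mathbb{C}^n \;\cong\; C(\{1,\dots,n\}).
```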
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 135, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252782464027405, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Talk:Euclidean_vector
Talk:Euclidean vector

This article is within the scope of WikiProject Engineering, WikiProject Physics (rated B-Class, High-importance), and WikiProject Mathematics (rated B-Class, Top-importance; Field: Algebra; one of the 500 most frequently viewed mathematics articles). It has also been rated B-Class by the Wikipedia Version 1.0 Editorial Team.

Archives
• Archive 1 (Sept 2001–June 2005)
• Archive 2 (June 2005–Feb 2008)
• Archive 3 (February 2008)
• Archive 4 (April 2007–Feb 2008)
• Archive 5 (Dec 2006–Dec 2009)

Coordination among articles

The article linear algebra has been demoted to "start class". Several people are trying to fix it. But this article and the articles linear map, vector space, linear algebra, and matrix seem to have been written without reference to one another. A good goal would be to have all of these articles agree in terminology and style, and this article seems to be the place to start. There are probably other articles that should also be included in this project. The first thing to consider is whether the title "Euclidean vector" is the best title for this article, leaving no article on the more general subject "Vector". Rick Norwood (talk) 16:57, 9 February 2010 (UTC)

Considering the length of the disambiguation pages at Vector and Vector (mathematics and physics), I think the scope and title of this article -- for physical vectors in 2 and 3 real-world dimensions -- are not badly chosen. There was extensive discussion on the title in /Archive 5. IMO the present title is better than any alternatives that were being canvassed at that time. Jheald (talk) 17:49, 9 February 2010 (UTC)

I don't have any problem with an article titled "Euclidean vector". My problem is with the lack of an article titled "vector (mathematics)". I haven't checked, but I suspect every mathematical encyclopedia has such an article. For example, this article at MathWorld http://mathworld.wolfram.com/Vector.html. At one point, Wikipedia explicitly wanted an article on every subject on MathWorld. Rick Norwood (talk) 14:20, 10 February 2010 (UTC)

Right now, vector (mathematics) redirects to vector space. I think that's a good solution: After all, a vector (strictly speaking) is just an element of a vector space, so you can't really discuss one without the other.
Ozob (talk) 14:32, 10 February 2010 (UTC)

Consider the intelligent layperson who hears the word "vector" and wants to know what it means. To say that a vector is an element in a vector space is not helpful. I've been trying to find a good definition that will include vectors over an arbitrary field of scalars, and still be something a layperson can understand. Something like "a vector is a mathematical object that has both magnitude and direction, though in abstract mathematics the concepts of magnitude and direction may also be abstract." Rick Norwood (talk) 14:55, 10 February 2010 (UTC)

If you can think of a good article to put at vector (mathematics), then go for it. Myself, I don't know what could be put there; but I'll be interested to see what you come up with. Ozob (talk) 15:25, 10 February 2010 (UTC)

Sometimes it's easier to define what something is by defining what it isn't. It's hard to find an equationally defined class used in practice that wasn't originally motivated by its concrete instances. A Boolean algebra can be defined concretely as any structure (with the appropriate operations) isomorphic to a subalgebra of a power set, or more abstractly as any model of the equational theory of the two-element Boolean algebra. (That this theory has a finite basis, or finite axiomatization, is convenient but not the main point in the concrete-abstract distinction.) The article Introduction to Boolean algebra begins with the finite axiomatization, but from the point of view of elementary algebra rather than abstract algebra. Section 5 on Boolean algebras (necessarily plural to avoid confusion) initially ignores the axioms and begins with concrete Boolean algebras (a) because they arise naturally and (b) to make the point that one can speak about at least the concrete kind without reference to any axiomatic definition of the concept. The same should apply to vector spaces, with the parallels being strikingly clear when one considers that a vector space over GF(2), when equipped with a second constant 1 as the complement of the origin 0, is equivalent to a Boolean algebra via the evident translations in each direction between their respective languages. In particular, just as there is one finite concrete Boolean algebra 2^n for each natural number n, so (given any field k) is there one finite-dimensional concrete vector space k^n for each n. In both cases these are, up to isomorphism, the only finite/finite-dimensional such. (That the only non-free algebras here are some of the Boolean ones is an interesting but not central point.) The benefit of the abstract definition, that it does not commit to a basis, can be had almost as well in the concrete case by defining an isomorphism of a concrete n-dimensional vector space to be a non-singular n×n matrix and pointing out that the resulting automorphism group elegantly links all bases in a way that makes the concrete concept basis-independent. One can then ask whether there might be an even neater approach to basis independence, which then leads naturally to the notion of an abstract vector space. The problem with starting with the abstract definition is that it comes with no intuition. The point that is often lost is that concrete vector spaces are still vector spaces, despite not being defined equationally. It is enough for an object merely to satisfy the equations for a vector space, however the object was defined.
--Vaughan Pratt (talk) 20:20, 13 July 2010 (UTC)

I think there ought to be an article about vector spaces in general, and one about vectors as used in classical mechanics and engineering, with a short mention of other kinds of vectors used in physics. The first is what is currently at vector space, and either the current title or vector (mathematics) would be fine for it. For the latter, the current Euclidean vector is a good start but vector (physics) would be a better title. (Note also that physical vectors are not mathematical vectors, rather they are described by mathematical vectors: in modern mathematics all elements of all sets -- hence all vectors -- must be sets themselves, but the gravitational force acting on me right now in my frame of reference is not a set... A. di M. (formerly Army1987) (talk) 12:45, 14 July 2010 (UTC))

History of vectors reduced to just one line?

The history, as we can see from the article, is only ONE LINE. Would someone help expand the history? Thank you. KaliumPropane (talk) 09:05, 4 November 2010 (UTC)KaliumPropane

Formal definition

I have removed the "formal definition" from the first paragraph of the article:

More formally, a Euclidean vector is any element of a Euclidean vector space, i.e. a vector space that has a Euclidean norm. A Euclidean vector space is automatically a type of normed linear space and a type of inner product space.

For one thing, this is rather at odds with the way the article introduces vectors as directed line segments in the usual Euclidean space (which is more properly speaking an affine space, not a vector space). This is typical of how most mathematical treatments deal with geometric vectors (see, for instance, the EOM entry). It's also important to observe WP:NPOV. When most people consider geometric vectors, e.g., in mechanics, they are usually not thinking of the "element of a Euclidean space" viewpoint, but rather are thinking of a vector in the sense described in this article: a directed line segment in a (naive) Euclidean space. It might be worth having more discussion somewhere to disambiguate the naive vectors described here and the elements of a Euclidean vector space, i.e., an inner product space. A perusal of the archive shows that there is substantial confusion over what the scope of this article is, with formalists often trying to impose the "rigorous" definition (which is not even mathematically the same notion that the rest of the article is talking about). Sławomir Biały (talk) 13:52, 15 January 2011 (UTC)

Indeed: once upon a time this page was titled (IIRC) vector (physics) and it was the counterpart to scalar (physics); look for example at the 500th-oldest revision. Then the mathematicians took over and completely messed up both the scope and the title of the article. :-) Now it's about any three-dimensional vector space over the real numbers with a positive-definite inner product, regardless of its relationship to physical space. I once even proposed to keep this article with its "new" scope and "new" title and to start another article which would then be the new counterpart to scalar (physics) (and wrote a draft of it), but there were too few physicists (or engineers) around :-) so no-one saw the need for such an article. --A. di M. (talk) (formerly Army1987) 14:45, 15 January 2011 (UTC)

I'm getting sick of it

I know Wikipedia is written for different readers, but this is just ridiculous.
"The sum of the null vector with any vector a is a (that is, 0+a=a)" is way too obvious and useless to be here. I don't mind if we remind readers that 1 + 1 = 2, but 0 + 1 = 1 is a little extreme. I mean I can't understand 90% of the mathematics on Wikipedia, and even I think this is too basic. 173.183.79.81 (talk) 03:10, 30 March 2011 (UTC)

I disagree. The existence and behavior of the null vector is central to the notion of a vector space. Without it and its admittedly trivial-seeming behavior that 0+a=a, you don't have a linear space, you have an affine space. Plenty of times in advanced mathematics, seemingly trivial things are very important and need to be mentioned. —Ben FrantzDale (talk) 12:18, 31 March 2011 (UTC)

History

The history of vectors focuses entirely on quaternions. A brief mention of quaternions is fine, but the section simply describes the history and properties of quaternions and leaves out the history of vectors entirely. The section obviously does not satisfy quality standards, and if an experienced, knowledgeable editor doesn't revise it, the section should be removed. — Preceding unsigned comment added by 174.109.94.64 (talk) 02:19, 11 September 2011 (UTC)

This is absolutely correct. I've removed the section. Someone who wants to add a relevant history section is welcome to do so. The removed text is copied below. --JBL (talk) 21:44, 17 April 2013 (UTC)

I disagree. We are an encyclopaedia, not a text book, so we need to cover the history of the subject. To properly discuss how the concept of vectors came about we need to discuss what came before; complex numbers and quaternions were important precursors. Also note this section shows where the word "vector" first appears, as part of a quaternion. Yes, it could use some editing; Grassmann's Calculus of Extension needs a mention. Crowe's A History of Vector Analysis [1] seems a good basis for extending the section. --Salix (talk): 07:00, 18 April 2013 (UTC) (copied from my talk page) --Salix (talk): 22:07, 18 April 2013 (UTC)

Yes, of course the history of a topic is important in an encyclopedia, and it's always nice to see historical information in math articles. But we have here a history section that is the history of a different topic than the subject of the article: there were 4 out of 5 or 6 paragraphs written to emphasize quaternions, with vectors an afterthought at best. (I've now re-removed one of these paragraphs, a comparison of quaternion and complex multiplication.) Three such paragraphs remain. If you'd like to keep them, please rewrite them so that they are about vectors, not quaternions. (Or move content over to the quaternion article as appropriate.) Right now the section is extremely misfocused and misleading. --JBL (talk) 14:13, 18 April 2013 (UTC)

The history of vectors is quite convoluted. Many of the important features of vectors, like the dot and cross products and the del operator, were arrived at through studying quaternions. There was also a parallel development following Grassmann which is closer to what we would recognise as vectors; however, at the time this was very marginal. It was not until the 1880s that Gibbs and Heaviside both published works which we would recognise as vector analysis. --Salix (talk): 23:07, 18 April 2013 (UTC)

Removed section

The concept of vector, as we know it today, evolved gradually over a period of more than 200 years.
About a dozen people made significant contributions.[1] The immediate predecessor of vectors were quaternions, devised by William Rowan Hamilton in 1843 as a generalization of complex numbers. Initially, his search was for a formalism to enable the analysis of three-dimensional space in the same way that complex numbers had enabled analysis of two-dimensional space, but he arrived at a four-dimensional system. In 1846 Hamilton divided his quaternions into the sum of real and imaginary parts that he respectively called "scalar" and "vector":

The algebraically imaginary part, being geometrically constructed by a straight line, or radius vector, which has, in general, for each determined quaternion, a determined length and determined direction in space, may be called the vector part, or simply the vector of the quaternion.[2]

Whereas complex numbers have one number $(i)$ whose square is negative one, quaternions have three independent imaginary units $(i, j, k)$. Multiplication of these imaginary units by each other is anti-commutative, that is, $ij \ = \ -ji\ =\ k$. Multiplication of two quaternions yields a third quaternion whose scalar part is the negative of the dot product and whose vector part is the cross product.

Peter Guthrie Tait carried the quaternion standard after Hamilton. His 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator. In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth.

Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis.[1] In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, banishing any mention of quaternions in the development of vector calculus.

Why is there an overview section?

Right now, this article has an excellent introduction, followed by an overview section that mostly repeats the same content, but with less clarity and a variety of issues (like the idea that "an arrow" is the definition). I suggest simply removing the "overview" part of the first section (i.e., before the subsection "examples in 1 dimension"). --JBL (talk) 15:55, 28 August 2012 (UTC)

Vectors, pseudovectors, and transformations

Does this section have contravariant and covariant backwards? Not only does the word contravariant seem to imply it should vary in the opposite way, but the description seems to say so too. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, like the co-ordinates, would reduce in an exactly compensating way. Why then does it say, just above that, that they "transform like the coordinates", and then give the same mathematical transformation for both the coordinates $x'=Mx$ and the vector $v'=Mv$?
Also, why, if they are both transformed by the forward transformation, is the need for an inverse to exist mentioned? Combined with the fact that the covariance and contravariance of vectors page gives the contravariant transformation in terms of the inverse, $v[fA] = A^{-1}v[f]$, I think this section got the transformations crossed over for part of it somehow. 207.112.55.16 (talk) 04:50, 5 February 2013 (UTC)

vector subtraction

The article states: "to subtract b from a, place the end points of a and b at the same point, and then draw an arrow from the tip of b to the tip of a. That arrow represents the vector a − b, as illustrated below:". However, the section on representations tells us that the tip and the endpoint are synonymous. I presume that it should read "place the tails of a and b at the same point" or similar (this is what is illustrated). This subject is pretty fresh to me so I will leave it to somebody else to make the change. Kelly F Thomas (talk) 16:59, 1 April 2013 (UTC)

Quite right. I've made everything "head"s and "tail"s in that section. (In general this article is something of a mishmash and needs someone to go through and sort it out. Not volunteering, though.) --JBL (talk) 21:59, 1 April 2013 (UTC)

Please, contribute to these discussions. See also talk: Vector (mathematics and physics) #A CONCEPTDAB article is needed. Incnis Mrsi (talk) 07:58, 25 April 2013 (UTC)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9663336873054504, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/66754-equivalence-relation-equivalence-classes-print.html
# Equivalence relation and Equivalence classes?

• January 4th 2009, 05:04 AM Jason Bourne
Equivalence relation and Equivalence classes?
I'm not sure about the following question:
(a) Define a relation R on Z by $aRb$ if and only if $3|(a+2b)$. Show that R is an equivalence relation, and describe the equivalence classes.
---------------------------------------------------------------------------
I'm particularly interested in how you prove the transitivity part of the equivalence relation and how to describe the equivalence classes. Can anyone help? (Also, any more explanation on equivalence relations and classes would be much appreciated, as I could certainly do with understanding them better.)
• January 4th 2009, 05:30 AM Plato
Quote: Originally Posted by Jason Bourne — (a) Define a relation R on Z by $aRb$ if and only if $3|(a+2b)$. I'm particularly interested in how you prove the transitivity part of the equivalence relation.
$aRb\,\& \,bRc \Rightarrow \quad a + 2b = 3k\,\& \,b + 2c = 3j$
So add together:
$\begin{gathered} a + 3b + 2c = 3k + 3j \hfill \\ a + 2c = 3k + 3j - 3b \hfill \\ \end{gathered}$
• January 5th 2009, 12:21 AM Jason Bourne
Thanks for that; anything on equivalence classes?
• January 7th 2009, 03:39 AM HallsofIvy
I would find symmetry most interesting because the rule defining equivalence does "look" symmetric! You must prove that if aRb, then bRa, which means showing that if a + 2b is a multiple of 3, then so is b + 2a. That is true, but how did you show it?
An equivalence class consists of all those things that are equivalent to one another. Here two integers, a and b, are equivalent if and only if a + 2b is divisible by 3. Now just start looking at integers: If b = 0, then in order to be equivalent, a must satisfy "a + 0 = a is divisible by 3", so one equivalence class is just the multiples of 3. If b = 1, then we must have a + 2 divisible by 3, which is the same as saying a is a multiple of 3 plus 1: 1, 4, 7, -2, -5, etc. If b = 2, then we must have a + 4 divisible by 3: 2, 5, -1, -4, etc. Those are the only three equivalence classes: every integer is in one of those.
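The three classes HallsofIvy describes are just the residue classes mod 3; here is a quick computational check of that (an illustrative snippet, not part of the original thread):

```python
# aRb  <=>  3 | (a + 2b).  Since 2 ≡ -1 (mod 3), a + 2b ≡ a - b (mod 3),
# so the relation partitions Z into the three residue classes mod 3.
def related(a, b):
    return (a + 2 * b) % 3 == 0

classes = {}
for a in range(-6, 7):
    classes.setdefault(a % 3, []).append(a)   # group by residue mod 3

for r, members in sorted(classes.items()):
    # every pair within a class is related, in both directions
    assert all(related(a, b) and related(b, a)
               for a in members for b in members)
    print(r, members)
```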
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9302191138267517, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38437/is-there-a-reset-sequence/38440
## Is there a reset sequence?

There is a question someone (I'm hazy as to who) told me years ago. I found it fascinating for a time, but then I forgot about it, and I'm out of touch with any subsequent developments. Can anyone better identify the problem or fill in the history, and say whether it's still unsolved? It's a challenging question if I've gotten it right. Here it is:

Suppose you have some kind of machine with two buttons, evidently designed by people with poor instinct for UI. The machine has many states in which the buttons do different things. Here are the assumptions:
1. There is no periodic quotient of the state space: no way to label states by an n-cycle so that both buttons advance the label by 1 mod n.
2. It is not reversible: there are situations when two states merge into one.
3. It's ergodic: you can get from any state to any other state by some sequence of buttons.

Now suppose its dinky little LCD is faded or broken, so you can't actually tell what state it's in. Is there necessarily a universal reset code, a sequence that will get you to a known state no matter where you start? (Formally, this is a finite state automaton, or an action of the free 2-generator semigroup on a finite set, and the question is whether some element acts as a constant map.) -

1 "there is no quotient map" from what? – Ricky Demer Sep 12 2010 at 3:06

## 3 Answers

I believe you are referring to the Road coloring theorem. It was solved in this preprint. -

You may be right, which would mean I garbled the statement: perhaps the question as I phrased it is trivial. I'll mull it over a bit. – Bill Thurston Sep 12 2010 at 3:23
4 Synchronizing word at en.wikipedia.org/wiki/Synchronizing_word uses the same diagram as the road coloring theorem wikipedia article and has a few references to Černý's 1964 paper and to Eppstein's article for finding reset sequences for monotonic automata. – sleepless in beantown Sep 12 2010 at 3:29
1 Thanks for helping remove some of my mental haze. I now think I heard the problem from Roy Adler. To clarify: the "Road coloring theorem" tells how, under suitable conditions similar to the above, a digraph with out-degree 2 can be labeled with 2 letters so that there exists a reset code. The problem the way I stated it above is more straightforward to solve and has been known longer, as noted by sleepless in beantown. – Bill Thurston Sep 12 2010 at 3:41
2 Yes, it might be best if you gave "sleepless in beantown" the check mark (and the associated 15 points). – S. Carnahan♦ Sep 13 2010 at 1:18

I've played with this problem in real life with a TiVo, wanting it to go to sleep (a low power consumption state) without having to turn on the monitor to watch as its states changed. The TiVo, or any remote, uses an alphabet size of at least as many buttons as there are on the remote control. However, a little hunting on wikipedia shows that "Synchronizing word" is where "reset sequence" leads to. For $n$-state DFAs over a $k$-letter input alphabet in which all state transitions preserve the cyclic order of the states, an algorithm by David Eppstein finds a synchronizing word in $O(n^3+kn^2)$ time and $O(n^2)$ space. The name of that paper is "Reset Sequences for Monotonic Automata".
Finding and estimating the length of the "reset sequence" for a Deterministic Finite Automaton has been studied since the 1960s. The Černý conjecture posits $(n-1)^2$ as the upper bound for the length of the shortest synchronizing word, for any $n$-state complete DFA (a DFA with a complete state transition graph). The way you've posed your question sets $k=2$, since the transitions can only be labeled by the two buttons as input; thus the Deterministic Finite Automaton underlying your question will have a directed graph with at most two outbound arcs at each state. -

I think this paper by Rob Schapire and Ron Rivest on "homing sequences" in deterministic finite state automata might contain the answer to your question. From the paper: "Informally, a homing sequence is a sequence of inputs that, when fed to the machine, is guaranteed to 'orient' the learner: the outputs produced in executing the homing sequence completely determine the state reached by the automaton at the end of the homing sequence ... Every finite-state machine has a homing sequence." Admittedly, I haven't read the paper carefully enough to be sure that the problem it studies exactly matches your formulation, but at a high level it seems closely related. -
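To make the notion concrete, here is a brute-force breadth-first search for a shortest reset word; the four-state, two-letter automaton below is a made-up example (not taken from the question or the cited papers), and the subset construction used here is exponential in the worst case, unlike Eppstein's specialized algorithm:

```python
# BFS over subsets of states for the shortest reset (synchronizing) word
# of a DFA, given as a dict: delta[state][letter] -> state.
delta = {0: {'a': 1, 'b': 1},
         1: {'a': 2, 'b': 2},
         2: {'a': 3, 'b': 0},
         3: {'a': 3, 'b': 0}}   # illustrative example automaton

def shortest_reset_word(delta, letters=('a', 'b')):
    start = frozenset(delta)                 # uncertainty set: every state
    seen, queue = {start}, [(start, '')]
    while queue:
        states, word = queue.pop(0)          # BFS, so shortest word first
        if len(states) == 1:
            return word                      # all starting states have merged
        for c in letters:
            nxt = frozenset(delta[s][c] for s in states)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + c))
    return None                              # automaton is not synchronizing

print(shortest_reset_word(delta))            # -> 'aaa' for this example
```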
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532369375228882, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=129553
Physics Forums

## Pipe Flow

This came up in the homework help section, and has me scratching my head. Consider a vertical pipe open to atm at one end under steady flow. The velocity in has to be equal to the velocity out because of the continuity equation. The pressure at the bottom is zero (atm), and the datum is at the bottom (z=0). That only leaves
$$P + \gamma h = 0$$
h can't be negative, so that means the pressure must be equal and opposite to the hydrostatic head. But then that means the fluid will flow in the direction of increasing pressure, because the pressure will become less negative as you move down?? <enter confusion> Show me what I did wrong.

Admin: I take it P is the pressure at the top of the pipe? There must be a force applied to have a fluid flow. Either the fluid is given momentum (from a fan or some device) or one puts energy (heat) into it and buoyancy causes a flow. The P at the top has to be less than the P (1 atm) at bottom.

Yeah, everything at the bottom will become zero, and the velocity at the bottom will cancel the velocity at the top, so it drops out as well. "The P at the top has to be less than the P (1 atm) at bottom." That's what the equation is saying. But that would imply reverse flow, because the atm pressure would want to push the water up, and not down. (Water is flowing down and out the tube.)

Admin: The OP didn't mention water. I thought it was an open pipe in air, and it was air flow. The water flows down in gravity. The head of water pushes down. Water displaces air. The atmosphere provides a hydrostatic pressure, distinct from the head of water.

Quote by Astronuc: The OP didn't mention water. I thought it was an open pipe in air, and it was air flow. The water flows down in gravity. The head of water pushes down. Water displaces air. The atmosphere provides a hydrostatic pressure, distinct from the head of water.
Sorry, it can be any incompressible fluid. Eh, I did a bad job in my OP, didn't I! The problem is that if you go a step further and consider the top of the pipe to be pressurized, then you will get negative hydrostatic terms, which makes no sense. Maybe the problem is that you cannot ignore the contraction coefficient?
"The head of water pushes down. Water displaces air." This can't be true in steady flow.
"The water flows down in gravity." Right, but it also flows in the direction of decreasing pressure, which is now a problem?

Quote by cyrusabdollahi: Eh, I did a bad job in my OP, didn't I!
Yeah, you can say that again. I've got no idea what your system is!! Got a diagram? Which end's open, and what's at the other end?

It is a vertical pipe w/constant diameter. The bottom of the pipe is open and the water is flowing out of it. The top of the pipe keeps going on forever. Is that better? I don't have a picture, I just made it up in my head. Just think of a pipe flowing out the bottom. See post #3? Quote by me: "(water is flowing down and out the tube)". Does this help? The top is a section cut, that's all you need to know. What is above it is irrelevant.

The pressure at the exit down section is the atmospheric one. The pressure at the top is also the atmospheric one.
There is no contradiction anywhere, BECAUSE you cannot apply hydrostatic pressure equilibrium in the vertical direction. Hydrostatic means static, and your fluid isn't static at all.

Let's call the pipe section $$A$$, the atmospheric pressure $$P_a$$; we also have the gravity $$g$$, the characteristic height of the pipe $$L$$, the fluid density $$\rho$$ and the dynamic viscosity $$\mu$$. And let's play dimensional analysis. Call the vertical upwards coordinate z, $$r$$ the radial component, and call the vertical velocity component $$u$$. The equations of motion are:
$$\nabla\cdot \overline{v}=0$$
$$\frac{\partial u}{\partial t} +u\frac{\partial u}{\partial z}=-\frac{1}{\rho}\frac{\partial (P+\rho gz)}{\partial z}+\frac{\nu}{r}\frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right)$$
The last equation is the z-momentum equation. Note I could have omitted $$P$$, because its vertical gradient is identically zero in this problem. Note also that this problem is inherently unsteady.

Let's talk about the orders of magnitude. Firstly, one expects to have as much velocity as the one caused by the gravitational force, so that $$U\sim \sqrt{gL}$$. Therefore, one defines the Reynolds number $$Re=\sqrt{g/L}A/\nu$$. Non-dimensionalizing with $$u=u/U$$, $$z=z/L$$ and $$t=tU/L$$, the equations become:
$$\nabla\cdot \overline{v}=0$$
$$\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial z}=-1+\frac{1}{rRe}\frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right)$$
For $$Re\gg 1$$, that is, for very wide pipes, one can assume ideal flow, and the z-momentum equation at leading order (with errors $$O(1/Re)$$) is:
$$\frac{\partial u}{\partial t}+ u\frac{\partial u}{\partial z}=\frac{\partial u}{\partial t}+1/2\frac{\partial u^2}{\partial z}= -1$$
which can be integrated from 0 to h (the level of water):
$$\int_0^h \left\{\frac{\partial u}{\partial t}+1/2 \frac{\partial u^2}{\partial z}+1\right\}dz=h\frac{\partial u}{\partial t}+h=0$$
which is just a nondimensional Bernoulli equation. Observe that I have integrated the unsteady term directly, because the acceleration is homogeneous inside the fluid. Note that there is no variation of kinetic energy. Also note that from the equation of continuity one obtains a relation between u and h: $$u=-\frac{dh}{dt}$$. And a second-order linear differential equation arises for h: $$\frac{d^2h}{dt^2}+1=0$$. Then, $$h(t)=-t^2/2+h_0$$. Does that not sound like the free-fall law for a particle in a gravity field? Yeah, the water is falling as a whole, as a rigid solid.

To sum up, the hydrostatic balance $$\nabla P=\rho \overline{g}$$ does not hold in this system. If there is motion in the direction of the body force, there is no hydrostatic balance. I am still in good shape, am I not?

Quote by Clausius2: The pressure at the exit down section is the atmospheric one. The pressure at the top is also the atmospheric one. There is no contradiction anywhere, BECAUSE you cannot apply hydrostatic pressure equilibrium in the vertical direction. Hydrostatic means static, and your fluid isn't static at all.
Well, I said you could make the pressure at the top end whatever value you like, so it is not necessarily atm. Also, I did not apply hydrostatic equilibrium anywhere. I used the Bernoulli equation, which reduced itself to what I showed. That is a great response, Clausius, but I asked a question about Bernoulli and you replied with the Navier–Stokes equations. Can you give a more explicit answer to the Bernoulli, or if you have, restate it, because I can't see it.
The fluid has to be accelerating. The velocities will not be equal at the two points, so they won't cancel out.

Quote by cyrusabdollahi: Well, I said you could make the pressure at the top end whatever value you like... Can you give a more explicit answer to the Bernoulli?
Hey cyrus. The Bernoulli equation arises automatically from my analysis. I mean, I am not answering you with another class of equations never seen before; the N-S equations suitably simplified give birth to the Bernoulli equation. Look again at my post and find the Bernoulli equation, which in this case is an UNSTEADY Bernoulli equation of the form:
$$\frac{\partial u}{\partial t}+\frac{\partial }{\partial z}\left(\frac{P}{\rho}+\frac{u^2}{2}+gz\right)=0$$
BTW: by not including the unsteady term you were actually assuming hydrostatic equilibrium even though you didn't want to.
Quote by Fred: The fluid has to be accelerating. The velocities will not be equal at the two points, so they won't cancel out.
That's false, and it contradicts the continuity equation for incompressible flow. Your first sentence is true, but your second one not quite. I think you guys didn't understand my post.

Quote by Clausius2: That's false, and it contradicts the continuity equation for incompressible flow.
I guess I'll have to digest it a bit more. What is escaping me right now is how it is violating continuity. We know that the flow area of a stream will decrease as the fluid accelerates. If the fluid is accelerating in a linear fashion, then how can the velocities not be different? I think...I need to look at your post again.

Hi cyrus,
Let me play back to you what I think you're saying and what your confusion is. First, you apply B's eq. to a vertical fluid column. In the case of zero velocity, you find the static pressure in any part of this column is equal to the pressure at the top of the column plus the head pressure. However, you also note that when the fluid is flowing (velocity > 0) and the bottom of the pipe is open to atmosphere, the same equation must hold, and thus if the pressure at the bottom of the column is atmospheric, you are suggesting the pressure above it is lower than atmospheric by the quantity rho*g*h (i.e., head pressure). Thus, you note that the pressure above the opening in the bottom of the pipe is lower than Patm and you don't understand how that can be.

Note that in the static case of there being no flow in the pipe, the fluid is at a higher static pressure at the bottom than the top. Despite this pressure gradient, there is no flow going up the pipe. There is no flow at all. Flow can't be determined simply by saying the pressure is higher at one point so there is flow from a higher pressure location to a lower pressure location. The pressure due to head has no ability to force fluid to flow upwards because this upward pressure is balanced by the differential downward head pressure.
Consider this however: if we remove static pressure head from the equation, we find the resulting pressure gradient is constant for a static condition. The static pressure in the vertical column of fluid when head pressure is neglected is constant. B's equation is misleading, and unfortunately it doesn't include frictional pressure drop. Normally we add that part in, but we're generally not taught to do that in college. Why? Because B's eq. is actually a conservation of energy equation, and frictional losses don't conserve energy per se. B's eq. is simplified to the point it doesn't represent reality. For it to represent reality, it has to include frictional pressure losses in the pipe, which can be calculated by the Darcy–Weisbach equation for example.

Now back to your example. For the case of steady flow (i.e., not accelerating) the frictional pressure loss must be added into the equation. B's equation then becomes:
P1 = ρgh + Pf + P2
where
P1 is the pressure at location 1, which I'll call the pipe outlet;
P2 is the pressure at some location above the outlet;
ρgh is head pressure (density * g * h);
Pf is the frictional pressure loss, which should be a negative term since we lose static pressure due to frictional losses along a pipe.
We know ρgh from the fluid's density and height. We find Pf from the Darcy–Weisbach equation or an equivalent equation, and if we know P1 or P2, we can calculate the other. This doesn't mean that P2 is necessarily lower or higher than P1. For your example, you probably have some ideas in mind as to what's happening, but if you work them out I suspect you'll resolve the problem if you simply realize that P1 and P2 are only related to each other through the fluid head and frictional pressure losses (see my examples below). For a real situation, you need to include frictional pressure losses and you need to identify what pressure you may have going into the pipe, which may be higher or lower than atmospheric pressure.

Examples:
1) If you say the pipe is open to atmosphere at the top and bottom, then you're saying that P1 = P2, and then the head pressure is equal to the frictional pressure loss.
2) If you say the pipe is open to atmosphere at the bottom and head pressure is larger than the absolute value of frictional pressure loss, then the pressure at the top of the pipe (P2) is below atmospheric pressure.
3) If you say the pipe is open to atmosphere at the bottom and head pressure is smaller than the absolute value of frictional pressure loss, then the top of the pipe is above atmospheric pressure.

Quote by FredGarvin: I guess I'll have to digest it a bit more. What is escaping me right now is how it is violating continuity.
The flow is accelerating, but the acceleration $$\frac{\partial u}{\partial t}$$, as I said, is homogeneous through the entire fluid, so it does not depend on z. The equation of continuity says that UA=constant in the whole pipe, IF the density remains constant. IF the acceleration depended on the height, the velocity would too, and the flow wouldn't be incompressible at all (the column of water would behave as a spring). But experimentally that's not the case; it behaves approximately like a brick, so U=constant throughout the fluid but NOT in time.
Moreover, the acceleration in incompressible flow is instantaneously felt by any part of the fluid, because acceleration is propagated via pressure waves, and pressure waves travel at the speed of sound (infinite in incompressible flow).

Quote by Q_Goest: For the case of steady flow
I appreciate your comments, but this is NOT a steady flow, just as the free fall of a brick is not steady. A problem that could be treated as a quasi-steady one is the discharge of water FROM a vessel through a pipeline of diameter much smaller than the vessel diameter, because the time scale of variation of the vessel water height can be approximated as very long compared with the local flow times at the exit of the pipe. Still you can apply Bernoulli, but not the usual Bernoulli. You have to employ the unsteady Bernoulli equation. I did it, solving the problem, and it showed us, as was sensible, that it is consistent with the classical mechanics of a falling body. The static pressure is the atmospheric pressure throughout the entire pipeline, as it couldn't be otherwise.

Hi Clausius.
"but this is NOT a steady flow"
Yes, we can say it's an accelerating flow. As soon as one allows flow to begin, the fluid must accelerate since it begins by standing still. However, the flow doesn't accelerate forever. The flow is likely to come to some equilibrium, unless it undergoes deceleration and oscillates around some nominal value, which is especially likely for the case of gasoline being poured from a small can for example.
In this case, the pressure in the can decays as gasoline is poured into our lawn mower and flow slows down. Air then enters the can by traveling up the nozzle (pipe), which repressurizes the tank, and the gasoline accelerates again. But I don't think this is what the OP is concerned with. The OP doesn't state whether they are concerned with the equilibrium state or the non-equilibrium, transient condition, during which flow must accelerate and potentially decelerate. I've assumed the OP was interested in the steady state condition, not the transient. There is a perfectly valid steady state condition of interest, and that's what I've provided an explanation for.

"Moreover, the acceleration in incompressible flow is instantaneously felt by any part of the fluid, because acceleration is propagated via pressure waves, and pressure waves travel at the speed of sound (infinite in incompressible flow)."
Note that although we might model something as having infinite pressure wave velocity, that isn't the reality of it. Modeling it this way can work if the distances the wave must travel are small, or pressure waves are small compared to gross pressure changes needed to accelerate the fluid, but we must remember these are only simplifying assumptions, just as using B's eq. without frictional pressure losses, or without acceleration pressure losses, are simplifying assumptions.

Quote by Q_Goest: Yes, we can say it's an accelerating flow. [...] But I don't think this is what the OP is concerned with.
The flow accelerates until there is no fluid inside the pipe, which I am assuming as the domain of integration. I told you, there is no way of looking for a steady state in this problem, because there is no disparity of time scales. It is not the same problem as the gasoline one. Well, it could be. Imagine you have a cylindrical can, you open the upper side making a hole of the same diameter as the can, and you instantaneously turn the can upside down. There is no steady regime anywhere. It is the same case as if I throw a brick from a plane: the brick accelerates with the gravitational acceleration. In the pipe problem, the acceleration is constant in time and equal to the gravitational acceleration.

Quote by Q_Goest: Note that although we might model something as having infinite pressure wave velocity, that isn't the reality of it.
Hey Q, we are engineers, and as engineers we need to discard, via dimensional analysis, what is important for my problem and what is a second-order effect for my problem. The compressibility of the fluid here plays no role UNLESS the pipe has a length of order $$L\sim c^2/g$$, where c=1500 m/s in water (speed of sound). Do the numbers and tell me if you know a vertical pipeline of those dimensions. If that is not the case, the time spent by a pressure wave traveling from the bottom to the top of the pipeline is doubtless so small compared with the rest of the time scales that it is not worthwhile to include this time range in the formulation. The unsteadiness here is due to the acceleration of the bulk motion, not due to the propagation of pressure waves.
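The closed-form result derived in the thread, $$h(t)=h_0-t^2/2$$ in nondimensional variables, is easy to sanity-check numerically; the sketch below (an illustration with an arbitrarily chosen step size, not part of the original thread) integrates $$d^2h/dt^2=-1$$ with the Euler method:

```python
# Nondimensional column equation from the thread:  d^2 h / dt^2 = -1,
# i.e. the water column is in free fall.  Euler integration, compared
# against the closed form h(t) = h0 - t**2/2 (all quantities dimensionless).
h, v, t, dt = 1.0, 0.0, 0.0, 1e-5   # h0 = 1, column starts at rest
while h > 0.0:
    v -= dt          # dv/dt = -1 (unit gravitational acceleration)
    h += v * dt      # dh/dt = v
    t += dt

print(f"numerical emptying time: {t:.4f}")
print(f"closed form sqrt(2*h0):  {(2.0) ** 0.5:.4f}")
```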
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 30, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.952284038066864, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/152655/a-well-known-sines-limit/152657
# A well-known sines limit

The following question is related to the answer I've found for this limit, and I'd like to know if it's valid. I need to find the following limit:
$$\lim_{x\rightarrow0} \frac{\sin(kx)}{x}$$
where k is a fixed positive integer.

Proof: Here we're going to appeal to a very well known inequality:
$$\sin(x) < x < \tan(x),\space 0<x<\frac{\pi}{2}$$
Then we have that:
$$\sin(kx) < kx < \tan(kx),\space 0<x<\frac{\pi}{2k}$$
From the above inequality we get that:
$$\cos(kx) < \frac{\sin(kx)}{kx}< 1$$
After multiplying the inequality by k and taking the limit as x goes to $0$ we get that:
$$\lim_{x\rightarrow0}\space k\cos(kx) < \lim_{x\rightarrow0}\frac{\sin(kx)}{x}< k$$
By the Squeeze Theorem the limit is $k$.

For such an answer I received a downvote because in the last inequality I used $"<"$ instead of $"\leq"$. I'd like to know your opinion, and if I'm wrong then I want to correct it. Thanks. -

1 – Joe Jun 1 '12 at 22:02
The hypotheses of the squeeze theorem are still satisfied, as a<b<c implies a≤b≤c. I don't see what the problem is. – Potato Jun 1 '12 at 22:02
I don't have any problem with your solution except for it being beyond the OP's level of math, which is why I did not upvote it. Also, this may better be suited for Meta. – Joe Jun 1 '12 at 22:03
I have no problem with using $"\leq"$, but I only want to know which way is the correct one, to avoid future discussions on this topic. – Chris's wise sister Jun 1 '12 at 22:08
1 It often seems like a really tiny point, but the thing to remember is that the correct deduction when taking limits is to go from $u_n < w_n$ to $\lim u_n \leq \lim w_n$ - then nobody can quibble with it! (Took me a long time to learn to be that precise!). – John Wordsworth Jun 1 '12 at 22:16

## 2 Answers

Technically, I see why the downvote happened: the whole point of the squeeze theorem is that the 'outer' functions are equal at that point. Using strict inequalities means the squeeze theorem wouldn't work. It's a nitpicky point, but such is math I guess. For the record I think it would have been more constructive just to post a comment rather than downvote with a correction that minor. -

The hypotheses of the squeeze theorem are still satisfied, as $a<b<c$ implies $a\le b \le c$. I don't see what the problem is. – Potato Jun 1 '12 at 22:02
If $a<b<c$, then $a=c \implies a=b=c$ is an impossibility, since you're explicitly saying that $a$ is strictly less than $c$. $a<b<c$ does imply $a\leq b\leq c$, yes, but never $a=b=c$, which is the conclusion the squeeze theorem needs to come to. – Robert Mastragostino Jun 1 '12 at 22:28

I think the point is this: we have to be very careful with inequalities when we take limits. For example, for $n \ge 1$, we obviously have $\frac{1}{n+1} < \frac{1}{n}$, but when we let $n\to\infty$, we can only conclude that $\lim\frac{1}{n+1} \le \lim\frac{1}{n}$ and not $\lim\frac{1}{n+1} < \lim\frac{1}{n}$, since both limits are clearly zero. -

He's not concluding any strict inequalities, only equality, so there is no problem. – Potato Jun 1 '12 at 22:05
But if you read his solution, he takes limits and still has strict inequalities in the line including the limits - they really should be $\le$ - a bit nit-picky, but to be precise, his reasoning is at fault. – John Wordsworth Jun 1 '12 at 22:09
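For what it's worth, a quick numerical sanity check of the limit itself (an illustrative snippet, not part of the original exchange):

```python
import math

# sin(k*x)/x should approach k as x -> 0, for any fixed positive integer k.
for k in (1, 2, 5):
    for x in (1e-1, 1e-3, 1e-6):
        print(f"k={k}, x={x:g}: sin(kx)/x = {math.sin(k * x) / x:.9f}")
```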
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9701372981071472, "perplexity_flag": "head"}
http://sciencehouse.wordpress.com/2010/06/
# Scientific Clearing House

Carson C. Chow

## Archive for June, 2010

### MCMC and fitting models to data

June 23, 2010

As I have posted before, I never learned any statistics during my education as a theoretical physicist/applied mathematician. However, it became fairly apparent after I entered biology (although I managed to avoid it for a few years) that fitting models to data and estimating parameters is unavoidable. As I have opined multiple times previously, Bayesian inference and the Markov Chain Monte Carlo (MCMC) method is the best way to do this. I thought I would provide the details on how to implement the algorithm since most of the treatments on the subject are couched in statistical language that may be hard to decipher for someone with a physics or dynamical systems background. I'll do this in a series of posts and in a nonstandard order. Instead of building up the entire Bayesian formalism, I thought I would show how the MCMC method can be used to do the job and then later show how it fits into the grand scheme of things to do even more.

Suppose you have data $D(t_i)$, which consists of some measurements (either a scalar or a vector) at discrete time points $t_i$, and a proposed model, which produces a time series $y(t | \theta)$, where $\theta$ represents a set of free parameters that changes the function. This model could be a system of differential equations or just some proposed function like a polynomial. The goal is to find the set of parameters that best fits the data and to evaluate how good the model is. Now, the correct way to do this is to use Bayesian inference and model comparison, which can be computed using the MCMC. However, the MCMC can also be used just to get the parameters in the sense of finding the best fit according to some criterion. (more…)

Posted in Bayes, Computer Science, Mathematics, Optimization, Pedagogy, Probability | 21 Comments »
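A minimal sketch of the random-walk Metropolis sampler this post is introducing; the exponential model, noise level, flat prior, and step size below are illustrative assumptions, not anything prescribed by the post (whose own details follow in its series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data D(t_i) from a placeholder model y(t | theta) = theta0 * exp(-theta1 * t)
t = np.linspace(0.0, 5.0, 30)
def y(t, theta):
    return theta[0] * np.exp(-theta[1] * t)
D = y(t, np.array([2.0, 0.7])) + 0.1 * rng.normal(size=t.size)

def log_post(theta, sigma=0.1):
    # Gaussian likelihood (a chi-squared cost) with a flat prior on theta
    return -0.5 * np.sum((D - y(t, theta)) ** 2) / sigma ** 2

theta = np.array([1.0, 1.0])       # arbitrary starting point
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)   # symmetric random-walk proposal
    lp_prop = log_post(prop)
    # Metropolis rule: always accept uphill moves, sometimes accept downhill
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

print("posterior mean:", np.mean(samples[5000:], axis=0))  # discard burn-in
```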
The consequence may be that while data-driven systems biology will thrive, classical computational neuroscience could actually slow down.

### Talk at Gatsby

June 13, 2010

I visited the Gatsby Computational Neuroscience Unit in London on Friday. I talked about how the dynamics of many observed neural responses to visual stimuli can be explained by varying just two parameters in a "micro-cortical circuit" at the sub-millimetre level. The circuit consists of recurrent excitation, lateral inhibition and fatigue mechanisms like synaptic depression. Recurrently connected pools of neurons inhibit other pools, and the competition between pools with fatigue leads to all the varied observed responses we see. I also covered my recent paper on autism, where we describe how perturbing the synaptic balance in the micro-cortical circuit can lead to alterations in the performance of simple saccade tasks that seem to match clinical observations. My slides for the talk are here.

### Slides for Talk

June 9, 2010

I'm currently in England at the Dendrites, Neurones, and Networks workshop. The talks have really impressed me. The field of computational neuroscience has really reached a critical mass where truly excellent work is being done in multiple directions. I gave a talk on finite system size effects in neural networks. I mostly covered the work on the Kuramoto model with a little bit on synaptically coupled phase neurons at the end. My slides are here.

Posted in Computational neuroscience, Talks

### Some numbers for the BP leak

June 3, 2010

The Deepwater Horizon well is situated 1500 m below the surface of the Gulf of Mexico. The hydrostatic pressure is approximately given by the simple formula $P_a + g\rho h$, where $P_a = 100 \ kPa$ is the pressure of the atmosphere, $\rho = 1 \ g/ml = 1000 \ kg/m^3$ is the density of water, and $g = 10 \ m/s^2$ is the gravitational acceleration. Putting the numbers together gives $1.5\times 10^7 \ kg/m\, s^2$, which is $15000 \ kPa$ or about 150 times atmospheric pressure. Hence, the oil and natural gas must be under tremendous pressure to be able to leak out of the well at all. It's no wonder the Top Kill operation, where mud was pumped in at high pressure, did not work.

Currently, it is estimated that the leak rate is somewhere between 10,000 and 100,000 barrels of oil per day. A barrel of oil is 159 litres or 0.159 cubic metres. So basically 1600 to 16000 cubic metres of oil are leaking each day. This amounts to a cube with sides of about 11 metres for the lower value and 25 metres for the upper one, which is about the length of a basketball court. However, assuming that the oil forms a layer on the surface of the ocean that is 0.001 mm thick, each day's leakage corresponds to a slick with an area between 1,600 and 16,000 square kilometres. Given that the leak has been going on for almost two months and the Gulf of Mexico is 160,000 square kilometres, this implies that the slick is either very thick, oil has started to wash up on shore, or a lot of the oil is still under the surface.

Posted in Back of the envelope, Environment, Pedagogy, Physics
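To make the back-of-the-envelope arithmetic above easy to check or vary, here is a quick Python sketch reproducing it; the depth, leak rates, and slick thickness are the same rough assumed figures quoted in the post, not measured values.

```python
# Rough Deepwater Horizon numbers, using the round figures assumed above.
P_ATM = 1.0e5       # atmospheric pressure (Pa)
RHO = 1000.0        # density of water (kg/m^3)
G = 10.0            # gravitational acceleration (m/s^2)
DEPTH = 1500.0      # depth of the well (m)

pressure = P_ATM + G * RHO * DEPTH
print(f"hydrostatic pressure: {pressure:.2e} Pa "
      f"(~{pressure / P_ATM:.0f} atmospheres)")

BARREL = 0.159      # cubic metres per barrel
for bbl_per_day in (1e4, 1e5):
    volume = bbl_per_day * BARREL      # m^3 leaked per day
    side = volume ** (1.0 / 3.0)       # side of the equivalent cube (m)
    slick = volume / 1e-6 / 1e6        # km^2 covered per day at 0.001 mm thick
    print(f"{bbl_per_day:.0e} bbl/day -> {volume:.0f} m^3/day, "
          f"cube side {side:.1f} m, slick {slick:.0f} km^2/day")
```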
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.918647050857544, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/184213/irreducibility-of-an-affine-variety-and-its-projective-closure
# Irreducibility of an Affine Variety and its Projective Closure

Volume I of Shafarevich's Basic Algebraic Geometry has the following as an exercise:

Show that the affine variety $U$ is irreducible if and only if its closure $\bar U$ in a projective space is irreducible.

Unfortunately, he doesn't explicitly give a definition of irreducibility for quasiprojective varieties. Based on the definition for (closed) sets in affine space, I assumed that a quasiprojective variety $X$ is reducible if there are closed sets $Z_1, Z_2$ such that neither contains $X$ and such that $$\left ( Z_1 \cap X \right ) \cup \left ( Z_2 \cap X \right ) = X$$ (i.e. $X$ can be written as a nontrivial union of sets that are closed with respect to $X$).

I found a pretty easy proof that $U$ is reducible iff $\bar U$ is, which gives the proposition. However, I'm suspicious because it doesn't rely at all on the fact that $U$ is an affine variety, or even a quasiprojective variety at all (in the sense that if you defined irreducibility for arbitrary subsets of projective space, it would still work). One reason for this could be that a quasiprojective variety is irreducible iff its projective closure is, and the point of the exercise is to notice this, and that the definitions of irreducibility for affine varieties and for affine closed sets agree. The other could be that my solution doesn't work, or that I have the wrong definition. I'm asking which is the case.

My proof is as follows: First assume that $U$ is reducible. Then we have closed sets $Z_1$, $Z_2$, neither of which contains $U$, such that $\left ( Z_1 \cap U \right ) \cup \left (Z_2 \cap U \right ) = U$. Since $Z_1 \cup Z_2$ is a closed set containing $U$, we have $Z_1 \cup Z_2 \supseteq \bar U$, so $\left ( Z_1 \cap \bar U \right ) \cup \left ( Z_2 \cap \bar U \right ) = \bar U$. Moreover, neither of the $Z_i$ contains $\bar U$, since $\bar U \supseteq U$ and neither contains $U$. Hence $\bar U$ is reducible. The proof for the other direction is almost identical.

-

In your proof, should that be $Z_1\cup Z_2\supseteq U$? – Andrew Aug 19 '12 at 7:19

@Andrew Yes, it should. Thanks. – Calvin McPhail-Snyder Aug 19 '12 at 7:28

Added the tag "general-topology". – Joachim Aug 19 '12 at 7:28

## 2 Answers

Indeed, the proof doesn't need the variety structure. All we need is that the affine variety is a nonempty open subset of its projective closure. The definition of irreducible is in fact purely topological: if some space $X$ is the union of two closed sets, one of them has to be the whole space. Note that whether you see an affine variety as affine or open in its projective closure does not change the topology. We have the following:

• Any nonempty open subset of an irreducible space is itself irreducible and dense
• If $Y \subset X$ is irreducible in the subspace topology, the closure of $Y$ in $X$ is again irreducible

Which you should be able to prove. See Hartshorne 1.1.3 and 1.1.4. Hope that this helps.

Edit: By the way, do you know the equivalent definitions of irreducible? The following are equivalent:

• $X$ is irreducible
• Any nonempty open in $X$ is dense
• Any two nonempty opens in $X$ have nonempty intersection

Proving the equivalence is not hard and a good exercise.

-

You're right that irreducibility does not depend on whether your variety is affine or not. It only depends on the topology. For example, in the Zariski topology $\mathbb R$ is irreducible.
But with the usual topology it is not: consider the decomposition $\mathbb R = (-\infty,1]\cup[0,\infty)$ into proper closed subsets. -
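Since the second answer leaves the equivalence of the three definitions as an exercise, here is a minimal LaTeX proof sketch of the first equivalence, written only from the definitions quoted above (the write-up is mine, not from either answer; it assumes the amsthm proof environment):

```latex
\begin{proof}[Sketch: $X$ irreducible $\iff$ every nonempty open subset is dense]
($\Rightarrow$) Let $U \subseteq X$ be nonempty and open. Then
$X = \overline{U} \cup (X \setminus U)$ writes $X$ as a union of two closed
sets. Since $X$ is irreducible and $X \setminus U \neq X$ (because
$U \neq \emptyset$), we must have $\overline{U} = X$, i.e.\ $U$ is dense.

($\Leftarrow$) Suppose $X = Z_1 \cup Z_2$ with $Z_1, Z_2$ closed and
$Z_1 \neq X$. Then $U = X \setminus Z_1$ is a nonempty open set, hence dense.
But $U \subseteq Z_2$, so $X = \overline{U} \subseteq \overline{Z_2} = Z_2$.
\end{proof}
```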
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9563265442848206, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/798/constraint-force-on-a-rod/808
# Constraint force on a rod

I really hope someone will take a quick look at the following, I would just love to better understand it... This exercise is from Arnold's "Mathematical Methods of Classical Mechanics", p. 97 in the chapter on d'Alembert's principle: "A rod of weight P, tilted at an angle of 60° to the plane of a table, begins to fall with initial velocity zero. Find the constraint force of the table at the initial moment, considering the table as (a) absolutely smooth (b) absolutely rough (In the first case, the holonomic constraint holds the end of the rod on the plane of the table, and in the second case, at a given point.)"

I must admit, I am pretty unsure on how to do calculations using this "fancy" mathematical kind of physics. First off, I'm lost with (a). But I'll have a go at (b): As far as I understood, d'Alembert's principle states that if $M$ is a constraining manifold, $x(q)$ is a curve in $M$ and $\xi$ is a vector perpendicular to $T_xM$, then $x$ satisfies Lagrange's equations $\frac{d}{dt} \frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q} \qquad L = \frac{{\dot x}^2}{2} - U(x)$ iff for the following inner product, we have $\left(m \ddot x + \frac{\partial U}{\partial x}, \xi \right) = 0$

I guess that's about right so far? The constraint would in this case essentially be $\mathbb S^1$, since the rod moves on a circle around the point in contact with the table. Can we assume that all the rod's mass is at the center of mass? Would we then have $U(x) = -gx_2$ in this case (where $x_2$ is the vertical component of $x$)? If yes, then $\partial U / \partial x = -ge_2$, where $e_1, e_2$ are the horizontal and vertical unit vectors, respectively. At the initial moment, we have $x = \cos(60°) e_1 + \sin(60°) e_2$, so by d'Alembert's principle we must have $\left(m \ddot x + \frac{\partial U}{\partial x}, \cos(60°) e_1 + \sin(60°) e_2 \right) = 0$ or written differently $m \ddot {x_1} \cos(60°) + m \ddot{x_2} \sin(60°) - \sin(60°)g = 0$

So: Is this correct so far? Or am I way off? Is (a) handled any differently up to this point? If anyone could recommend a good problem book (with solutions), in which this kind of mathematical approach is used (I don't know if this is how physicists would actually compute stuff??), I would greatly appreciate it. Kind regards, Sam

-

## 3 Answers

The first way to consider the problem, as we have a simple rigid body, is to consider two points on it, say one end (the one that will touch the table first) and the center of mass (or the other end). The embedding configuration space is thus $R^2 \times R^2$ (the two coordinates of the end of the rod and the two coordinates of the other) if we consider that the rod can only move in the vertical plane including a line on the table (I hope this is the actual question ...). The action of the surface leads to the [holonomic] constraint equation, which can be written $f(x,y) = 0$; this is the equation of a surface (here a curve) in the configuration space $R^2$. In addition to this constraint equation we have Newton's equation, which is $m \frac{d^2}{dt^2} \vec{r} = \vec{F}+\vec{N}$ (where $\vec{N}$ is the force of constraint). This is for one end. For the other (or for the center of mass) you have two similar equations, with the constraint being that the other end (or the center of mass) is at a given distance from the first end. In that view, we can consider the points as being free, and we have 4 Newtonian equations and 2 constraint equations, thus reducing the degrees of freedom to 2.
We can also consider that the constraint forces do no work and that they are perpendicular to the plane of the table: $\vec{N} = \lambda \vec{\nabla}f$, where $f$ is the function in the constraint equation. Newton's equation can be rewritten as $\langle m \frac{d^2}{dt^2} \vec{r} - \vec{F}, \vec{\xi} \rangle = 0$, where $\vec{\xi}$ is a tangent vector defined by $\langle \vec{\xi}, \vec{\nabla} f \rangle = 0$ (1). This is d'Alembert's principle. As the components of $\vec{\xi}$ are constrained by (1), this leads to 2N (Newton) - 2 (constraints) independent relations.

The next step is to choose a system of coordinates where (1) is automatically satisfied: these are the generalized coordinates. The rod is modeled when, in addition to specifying the coordinate of the end that is on the plane, you also specify the angle that the rod makes with the surface: we can choose two independent generalized coordinates, $x$ and $\theta$. The motion of the system (now the system is not free anymore) will take place in the manifold $M$ where $x$ and $\theta$ are independent coordinates. These $(x, \theta)$ are usually written $q = (q_1, q_2)$. This manifold is the direct product of $R$ and $S^1$ (the angles take values in a 1-torus, a segment whose ends are identified). It can then be derived that the motion in these coordinates follows Lagrange's equations. The Lagrangian in this case has the natural form $L(x, \theta, \dot{x}, \dot{\theta})=T-V$.

For your case (b) this problem is simplified: $x$ is fixed and you only need to care about $\theta$; you have $L = \frac{m}{2} l^2 \dot{\theta}^2 - l m g \sin\theta$, where $l$ is the distance between one end and the center of mass. From Lagrange's equation you obtain $m l \ddot{\theta} = -m g \cos\theta$. From that you can obtain the complete motion of the rod.

For the case (a) you have to consider a more complete Lagrangian: $L = \frac{m}{2}\left( l^2 \dot{\theta}^2 - 2 l \dot{x} \sin\theta\, \dot{\theta} + \dot{x}^2\right) - l m g \sin\theta$. This Lagrangian leads to a more complicated motion; if I did the calculation correctly we have $\ddot{\theta} = - \frac{g}{l} \cos\theta$ and $\ddot{x} = l \sin\theta\, \ddot{\theta}$. To obtain the constraint force, you can solve this and then go back to Newton's equations. In the case (b) you have in addition to enforce $\dot{x} = \ddot{x} = 0$ to obtain the tangential constraint.

-

You have a wrong sign in the equation for (b). $\theta$ must surely be a decreasing function but your equation would imply it's increasing. – Marek Nov 15 '10 at 1:41

@Marek: Right, I corrected. – Cedric H. Nov 15 '10 at 1:45

@Cedric: also, your equations of motion for (a) are wrong. For $\ddot{\theta}$ you forgot to include the term obtained by differentiation of ${1 \over l}\dot{x}\sin(\theta)$ with respect to time, and the RHS for $\ddot{x}$ must be a differentiation of $l\sin(\theta)\dot{\theta}$. – Marek Nov 15 '10 at 2:09

@Marek: right again, I should really sleep (3 am ...) and I'll look at this later. – Cedric H. Nov 15 '10 at 2:15

Thanks for this detailed answer! I think your answer (as well as Marek's) showed me the approach one should take beautifully. Again, thanks a lot! =) – Sam Nov 15 '10 at 5:16

I will not address your first question as I am not really sure.
I've read what you've written and, given enough time, I'd probably be able to understand whether the derivation is correct, but off the top of my head I can't and, more importantly (and this answers your other question): this is not how physicists think about these problems (at least I don't know anyone who does), because once you learn Lagrange's formalism there's no need to go back to this strange principle that, as far as I know, is only used as a motivation on the road to better formalisms (Lagrange's and Hamilton's). So, here's what I'd do:

1. In (b), as you correctly state, the endpoint (or any other point for that matter) of the rod will be constrained to $S^1$. Now this gives us a nice way to parametrize the problem: the angle $\varphi$ between the rod and the table (this is all part of the Lagrange formalism; if you haven't yet learned it, I strongly advise you to do so). Also let us denote by $r$ the distance between the center of mass and the point of contact with the table. We write out the Lagrangian for the center of mass (assuming the rod is homogeneous) $$L(\varphi, \dot{\varphi}) = T(\varphi, \dot{\varphi}) - U(\varphi) = m{r^2\dot{\varphi}^2 \over 2} - mgr\sin(\varphi)$$ Now just apply Lagrange's equations to obtain $$mr^2\ddot{\varphi} = - mgr\cos(\varphi)$$

2. For (a) we should assume a smooth table, which means no friction, which means the rod will slide. It is clear that it will be confined to the half-plane perpendicular to the table in which the rod initially lies. We can parametrize this half-plane by $p$, the position of the point of contact of the rod with the table, and $\varphi$ as before. Again using the Lagrange formalism one obtains $$L(p, \varphi, \dot{p}, \dot{\varphi}) = {m \over 2}(\dot{p}^2 -2r\dot{p}\sin(\varphi)\dot{\varphi} + r^2\dot{\varphi}^2 ) - mgr\sin(\varphi)$$ This is not a particularly nice Lagrangian and one has to wonder whether there are some better coordinates in which the problem simplifies (as in the case (b) above).

-

Thanks a lot Marek. So, I understand d'Alembert's principle is of less practical than theoretical/historical interest, and it should simply be seen as yet another point of view which is mathematically equivalent to Lagrange's and Hamilton's equations / the principle of least action. – Sam Nov 15 '10 at 5:06

I'm just wondering about the possible misuse of the "law of cosines"; I think substituting $\cos(\theta)$ for $\sin(\theta)$ will give a correct solution. : ) -
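As a quick numerical companion to case (b), here is a minimal Python sketch that integrates Marek's equation of motion $mr^2\ddot{\varphi} = -mgr\cos(\varphi)$ from the 60° initial condition. The value of $r$ is an arbitrary placeholder, and the point-mass-at-the-center-of-mass simplification is taken from the answers above, not from Arnold.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81  # gravitational acceleration (m/s^2)
r = 0.5   # assumed distance from the pivot to the center of mass (m)

def rhs(t, y):
    # y = [phi, phi_dot]; from m r^2 phi'' = -m g r cos(phi)
    phi, phi_dot = y
    return [phi_dot, -(g / r) * np.cos(phi)]

# the rod starts at rest, tilted 60 degrees above the table
sol = solve_ivp(rhs, (0.0, 0.5), [np.pi / 3, 0.0], max_step=1e-3)

# angular acceleration at the initial moment (phi = 60 deg, phi_dot = 0)
print("initial phi_ddot:", -(g / r) * np.cos(np.pi / 3))
print("phi at t = 0.5 s:", sol.y[0, -1])
```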
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943911612033844, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Causality_(physics)
# Causality (physics)

For causality in philosophy, see Causality. For the disambiguation page on causality, see Causation.

Causality is the relationship between causes and effects.[1][2] It is considered to be fundamental to all natural science, especially physics. Causality is also a topic studied from the perspectives of philosophy and statistics.

## Cause and Effect in Physics

In physics it is useful to interpret certain terms of a physical theory as causes and other terms as effects. Thus, in classical (Newtonian) mechanics a cause may be represented by a force acting on a body, and an effect by the acceleration which follows, as quantitatively explained by Newton's second law. For different physical theories the notions of cause and effect may be different. For instance, in the general theory of relativity, acceleration is not an effect (since it is not a generally relativistic vector); the general relativistic effects comparable to those of Newtonian mechanics are the deviations from geodesic motion in curved spacetime.[3] Also, the meaning of "uncaused motion" is dependent on the theory being employed: for Newton it is inertial motion (constant velocity with respect to an inertial frame of reference), in the general theory of relativity it is geodesic motion (to be compared with frictionless motion on the surface of a sphere at constant tangential velocity along a great circle). So what constitutes a "cause" and what constitutes an "effect" depends on the total system of explanation in which the putative causal sequence is embedded.

A formulation of physical laws in terms of cause and effect is useful for the purposes of explanation and prediction. For instance, in Newtonian mechanics, an observed acceleration can be explained by reference to an applied force. So Newton's second law can be used to predict the force necessary to realize a desired acceleration.

In classical physics, a cause should always precede its effect. In relativity theory this requirement is strengthened so as to limit causes to the back (past) light cone of the event to be explained (the "effect"); nor can an event be a cause of any event outside the former event's front (future) light cone. These restrictions are consistent with the grounded belief (or assumption) that causal influences cannot travel faster than the speed of light and/or backwards in time.

Another requirement, at least valid at the level of human experience, is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory, and in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than did Descartes' theory.

The empiricists' aversion to metaphysical explanations (like Descartes' vortex theory) weighed heavily against the idea of the importance of causality. Causality has accordingly sometimes been downplayed (e.g., Newton's "Hypotheses non fingo"). According to Ernst Mach,[4] the notion of force in Newton's second law was pleonastic, tautological and superfluous.
Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies,

$$m_1 \frac{d^2 {\mathbf r}_1 }{ dt^2} = -\frac{m_1 m_2 g ({\mathbf r}_1 - {\mathbf r}_2)}{ |{\mathbf r}_1 - {\mathbf r}_2|^3};\qquad m_2 \frac{d^2 {\mathbf r}_2 }{dt^2} = -\frac{m_1 m_2 g ({\mathbf r}_2 - {\mathbf r}_1) }{ |{\mathbf r}_2 - {\mathbf r}_1|^3},$$

as two coupled equations describing the positions ${\mathbf r}_1(t)$ and ${\mathbf r}_2(t)$ of the two bodies, without interpreting the right hand sides of these equations as forces; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times.

The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs—a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way.

The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, considering an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified. Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace determinism (rather than 'Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem. Confusion of causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply by redefining determinism as meaning that probabilities rather than specific effects are determined).

In modern physics, the notion of causality had to be clarified. The insights of the theory of special relativity confirmed the assumption of causality, but they made the meaning of the word "simultaneous" observer-dependent.[5] Consequently, the relativistic principle of causality says that the cause must precede its effect according to all inertial observers.
This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel backward in time. For this reason, special relativity does not allow communication faster than the speed of light.

In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. In quantum field theory, causality is closely related to the principle of locality. However, the principle of locality is disputed: whether it strictly holds depends on the interpretation of quantum mechanics chosen, especially for experiments involving quantum entanglement that satisfy Bell's Theorem.

Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture.

### Distributed causality

Theories in physics like the butterfly effect from chaos theory open up the possibility of a type of distributed causality. The butterfly effect theory proposes: "Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system." This opens up the opportunity to understand distributed causality.

A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In physics, in general, only those conditions that are both necessary and sufficient are (explicitly) taken into account. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospheric energies already present beforehand, rather than in the movements of a butterfly.

### Causal dynamical triangulation

Main article: Causal dynamical triangulation

Causal dynamical triangulation (abbreviated as "CDT"), invented by Renate Loll, Jan Ambjørn and Jerzy Jurkiewicz, and popularized by Fotini Markopoulou and Lee Smolin, is an approach to quantum gravity that, like loop quantum gravity, is background independent.
This means that it does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. The Loops '05 conference, hosted by many loop quantum gravity theorists, included several presentations which discussed CDT in great depth, and revealed it to be a pivotal insight for theorists. It has sparked considerable interest as it appears to have a good semi-classical description. At large scales, it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-d near the Planck scale, and reveals a fractal structure on slices of constant time. Using a structure called a simplex, it divides spacetime into tiny triangular sections. A simplex is the generalized form of a triangle, in various dimensions. A 3-simplex is usually called a tetrahedron, and the 4-simplex, which is the basic building block in this theory, is also known as the pentatope, or pentachoron. Each simplex is geometrically flat, but simplices can be "glued" together in a variety of ways to create curved spacetimes. Where previous attempts at triangulation of quantum spaces have produced jumbled universes with far too many dimensions, or minimal universes with too few, CDT avoids this problem by allowing only those configurations where cause precedes any event. In other words, the timelines of all joined edges of simplices must agree. Thus, maybe, causality lies in the foundation of the spacetime geometry.

### Causal Sets

Main article: Causal Sets

In Causal Set Theory causality takes an even more prominent place. The basis for this approach to quantum gravity is in a theorem by David Malament. This theorem states that the causal structure of a spacetime suffices to reconstruct its conformal class. So knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of Causal Set Theory, which is a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a poset, while the conformal factor can be reconstructed by identifying each poset element with a unit volume.

## References

1. Green, Celia (2003). The Lost Cause: Causation and the Mind–Body Problem. Oxford: Oxford Forum. ISBN 0-9536772-1-4. Includes three chapters on causality at the microlevel in physics.
2. Bunge, Mario (1959). Causality: the place of the causal principle in modern science. Cambridge: Harvard University Press.
3. e.g. R. Adler, M. Bazin, M. Schiffer, Introduction to general relativity, McGraw–Hill Book Company, 1965, section 2.3.
4. Ernst Mach, Die Mechanik in ihrer Entwicklung, Historisch-kritisch dargestellt, Akademie-Verlag, Berlin, 1988, section 2.7.
5. A. Einstein, "Zur Elektrodynamik bewegter Koerper" ["On the Electrodynamics of Moving Bodies"], Annalen der Physik 17, 891–921 (1905).

## Further reading

• Bohm, David. (2005). Causality and Chance in Modern Physics. London: Taylor and Francis.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938666820526123, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/8746/the-density-hex
The density hex

Gale famously showed that the determinacy of n-player, n-dimensional Hex is equivalent to the Brouwer fixed point theorem in n dimensions. We can (and Gale does) view this as saying that if you d-color the vertices of a certain graph (specifically, the graph with vertex set $[n]^d$ and two vertices $v, w$ adjacent iff the max norm of $v - w$ is 1 and all the nonzero components of $v - w$ have the same sign), then there's a certain monochromatic path. Alternatively, you can think of d-coloring a d-dimensional $n \times \ldots \times n$ cube, and the determinacy of Hex/Brouwer fixed-point says that a certain "twisted path" must exist.

Here's what I want to know: Is there a topological proof of the density version of the determinacy of Hex?

The density version ends up following from density Hales-Jewett, since combinatorial lines are paths in the underlying graph. But density Hales-Jewett is hard, and this seems like it should admit a proof along the lines of Gale's.

What I mean by the "density version" is: for any $\delta > 0$ and fixed $n$, for sufficiently large dimension $d$, any choice of $\delta n^d$ moves must connect two opposite sides of the hypercube/d-dimensional Hex board. (I'm fairly sure this is the correct statement, but it's possible I'm wrong. Let me know if this is for some reason utterly trivial or false.)

-

Actually, what is the density version of the determinacy of Hex? – Ilya Nikokoshev Dec 13 2009 at 11:25

"Gale famously showed..." I did not know it. Any link/reference? – Gil Kalai Dec 13 2009 at 12:17

David Gale, The game of hex and the Brouwer fixed-point theorem, American Mathematical Monthly, Dec 1979, 818-827. – Konrad Swanepoel Dec 13 2009 at 12:21

@Harrison: can you move the statement of what you had in mind to the statement of the question? – Mariano Suárez-Alvarez Dec 13 2009 at 17:01

I have found the Gale paper online: cs.cmu.edu/afs/cs/academic/class/15859-f01/www/… – Kristal Cantwell Dec 13 2009 at 21:13

1 Answer

For a closely related question, where you do not insist that all nonzero components of v-w have the same sign, the answer is known. See the following paper: B. Bollobas, G. Kindler, I. Leader, and R. O'Donnell, Eliminating cycles in the discrete torus, LATIN 2006: Theoretical informatics, 202-210, Lecture Notes in Comput. Sci., 3887, Springer, Berlin, 2006. Also: Algorithmica 50 (2008), no. 4, 446-454.

This graph is referred to as $G_\infty$ and there is a beautiful new proof via the Brunn-Minkowski theorem by Alon and Feldheim. For this graph a rather strong form of a density result follows, and the results are completely sharp. The paper by Alon and Klartag http://www.math.tau.ac.il/~nogaa/PDFS/torus3.pdf is a good source and it also studies the case where we allow only a single nonzero coordinate in $v-w$. An even sharper result is given in another paper by Noga Alon. There, there is a $\log n$ gap which can be problematic if we are interested in the case that $n$ is fixed and $d$ large.

See also this post. As Harrison points out, the graph he proposes (that we can call the Gale-Brown graph) is in between the two graphs. So the answer is not known, but we can hope that some discrete isoperimetric methods can be helpful. The statement is an isoperimetric-type result, so this can be regarded as a quantitative version of the topological notion of connectivity.
Two more remarks:

1) The Gale result seems to give an example of a graph where there might be a large gap between the coloring number and the fractional coloring number. This is rare; another important example is the Kneser graph, where analyzing its chromatic number is a famous use (by Lovász) of a topological method.

2) Hex is closely related to planar percolation, and the topological property based on planar duality is very important in the study of planar percolation and of 1/2 being the critical probability. (See e.g. this paper.) It seems that we might have here an interesting high-dimensional extension with some special significance to choosing each vertex with probability 1/d.

-

"Note that the topological proof of the coloring theorem does not rely on assuming n being fixed and d is large. when d is fixed (two) and n is large the coloring result is correct but the density result is far from being correct." This is essentially what motivates the question -- it seems to suddenly change from a topological problem to a combinatorial one, and I'd be very interested to see a way of bridging the gap. Incidentally, the failure of the density version fixing d and varying n is because there are sets with positive measure but lots of connected components. – Harrison Brown Dec 14 2009 at 20:14

The combination of a measure (in the wrong continuous density version) and fixed points (in the coloring version) makes me think of ergodic theory, but I don't know enough about ergodic theory to know if this is meaningful. As to your (#) statement, certainly any two consecutive vertices in the path lie in a combinatorial line, but three consecutive vertices can determine a comb. subspace of large dimension. I guess we can think of combinatorial lines as paths that remain locally comb. lines no matter what adjacency structure we put on them. Can we use this to derive DHJ from density hex? – Harrison Brown Dec 14 2009 at 20:49

I think we can expect for n fixed (say n=3) and d large a simpler combinatorial proof with much better bounds. Anyway, I think the results on separating all cycles in [0,1]^n and their discrete analogues are relevant. Look here (and the links there): gilkalai.wordpress.com/2009/05/27/… – Gil Kalai Dec 14 2009 at 21:02

Huh -- looking at the paper, I realize I heard Alon give a talk on the isoperimetric proof, and I even remember thinking "This might apply to this problem..." Apparently I then proceeded to forget about it entirely! But I don't think it's proved directly in either Bollobas et al or Alon-Klartag; their $G_\infty$ is slightly different from my graph and somewhat larger. But I'd be very surprised if the basic method didn't extend; I'll try it later. (Taking a break from mathematics today. Or so I'm telling myself...) – Harrison Brown Dec 15 2009 at 11:46

Yes, your graph seems to be an intermediate one between the $G_\infty$ considered by BKLO and the $G_1$ considered by AK; and your question for $G_1$ is also interesting and does not seem to be known (and also for the specific graph you study). – Gil Kalai Dec 15 2009 at 17:24
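To make the adjacency rule in the question concrete, here is a small Python sketch of the graph it defines (the function and the tiny sanity check are my own illustration, not from the thread):

```python
from itertools import product

def adjacent(v, w):
    """Adjacent iff the max norm of v - w is 1 and all nonzero
    components of v - w have the same sign."""
    d = [a - b for a, b in zip(v, w)]
    if max(abs(x) for x in d) != 1:
        return False
    signs = {x for x in d if x != 0}
    return signs <= {1} or signs <= {-1}

# sanity checks on the n = 3, d = 2 board, vertices in [3]^2
verts = list(product(range(3), repeat=2))
print(adjacent((0, 0), (1, 1)))  # True: both nonzero steps have the same sign
print(adjacent((0, 1), (1, 0)))  # False: the steps have mixed signs
print(adjacent((0, 0), (2, 0)))  # False: max-norm distance is 2
```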
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367489218711853, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=658591
Physics Forums

Verify about the solution of wave equation of potential.

I read in the book, regarding a point charge at the origin, that $Q(t)= \rho(t)\,\Delta v'$. The wave equation is

$$\nabla^2V-\mu\epsilon\frac{\partial^2 V}{\partial t^2}= -\frac {\rho_v}{\epsilon}$$

For a point charge at the origin, spherical coordinates are used, where:

$$\nabla^2V=\frac 1 {R^2}\frac {\partial}{\partial R}\left( R^2 \frac {\partial V}{\partial R}\right)$$

This is because, for a point charge at the origin, the $\frac {\partial}{\partial \theta}$ and $\frac {\partial}{\partial \phi}$ terms are all zero. My question is this: the book then says that EXCEPT AT THE ORIGIN, V satisfies the following homogeneous equation:

$$\frac 1 {R^2}\frac {\partial}{\partial R}\left( R^2 \frac {\partial V}{\partial R}\right)-\mu\epsilon \frac {\partial^2 V}{\partial t^2}=0$$

The only reason I can think of why this equation has to exclude the origin is because R=0 at the origin and this won't work. Am I correct, or is there another reason?

Thanks

Alan

Yep, you're right. Nothing complicated here.

Thanks

Alan

I have another question, following up on the original wave equation:

$$\frac 1 {R^2}\frac {\partial}{\partial R}\left( R^2 \frac {\partial V}{\partial R}\right)-\mu\epsilon \frac {\partial^2 V}{\partial t^2}=0$$

According to the book, to simplify this equation, let:

$$V_{(R,t)}=\frac 1 R U_{(R,t)}$$

This will reduce the wave equation to:

$$\frac {\partial^2U_{(R,t)}}{\partial R^2}-\mu\epsilon\frac {\partial^2U_{(R,t)}}{\partial t^2}=0$$

One of the solutions is $U_{(R,t)}=f\left(t-R\sqrt{\mu\epsilon}\right)$.

My question is: I want to find the solution when U(R,t) is time harmonic, i.e. a continuous sine wave. The book does not show this. So I am using the regular solution of the time-harmonic wave equation, taking the direction to be $\hat R$. The formula becomes:

$$\frac {\partial^2U_{(R,t)}}{\partial R^2}-\mu\epsilon\frac {\partial^2U_{(R,t)}}{\partial t^2}= \frac {\partial^2U_{(R,t)}}{\partial R^2}+\omega^2\mu\epsilon U= 0.$$

$$δ^2=-\omega^2\mu\epsilon\;\Rightarrow\; \frac {\partial^2U_{(R,t)}}{\partial R^2}-δ^2 U= 0.$$

This is a 2nd-order ODE with constant coefficients, and the solution is:

$$U_{(R,t)}= U_0^+ e^{-δ R}+U_0^- e^{δ R}$$

For a potential radiating out to infinite space there will be no reflection, so the second term disappears, whereby:

$$U_{(R,t)}= U_0^+ e^{-δ R}$$

Do I get this right? This might look very obvious, but I am working on the retarded potential and I am interpreting what the book doesn't say, so I have to be careful to make sure I get everything right.

Thanks

Alan

$\frac {\partial^2U_{(R,t)}}{\partial R^2}-\mu\epsilon\frac {\partial^2U_{(R,t)}}{\partial t^2}= \frac {\partial^2U_{(R,t)}}{\partial R^2}+\omega^2\mu\epsilon U= 0.$

Just from the last equality in it you can write the periodic solution. Don't substitute any δ afterwards.

Quote by andrien: "Just from the last equality in it you can write the periodic solution. Don't substitute any δ afterwards."

Thanks for the reply, I just finished updating the last post, please take a look again.
The reason I use δ is because this will be used in phasor form, which is going to be of the form:

$$U_{(R,t)}= U_0^+ e^{-\alpha R} e^{-j\beta R}\; \hbox { where } δ=\alpha + j\beta$$

A plus sign before ω² in the equation is necessary for a periodic solution. First write the solution in terms of ω and then introduce the other definitions.

Quote by andrien: "A plus sign before ω² in the equation is necessary for a periodic solution. First write the solution in terms of ω and then introduce the other definitions."

This is because for a lossy dielectric medium ε=ε'+jε'', so $δ=j\omega \sqrt{\mu(ε'+jε'')}$ is a complex number, where $δ=\alpha + j \beta$. The solution of

$$\frac {\partial ^2 U}{\partial R^2} -δ^2U=0 \hbox { is }\; U_{(R,t)}= U_0^+ e^{-δR}\;=\; U_0^+ e^{-\alpha R} e^{-j\beta R}$$

This is a decaying periodic function. For lossless media, we use $k=\omega\sqrt{\mu\epsilon}$, where ε is real. Then

$$\frac {\partial ^2 U}{\partial R^2} +k^2U=0 \hbox { is }\; U_{(R,t)}= U_0^+ e^{-jkR}$$

Actually, because you brought this up, I dug deep into this and answered the question of why the books use δ for lossy media and k for lossless media. I never quite got this in all these years until now.

That is why it is better to go with the + sign, but it is OK now. In case ε is complex it is still possible to define things afterwards. But you already got it.

Thanks.
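As a quick symbolic check of the substitution $V = U/R$ and of the retarded solution $U = f\left(t - R\sqrt{\mu\epsilon}\right)$ discussed above, here is a minimal sympy sketch (my own, not from the book or the thread); it should print 0, confirming the homogeneous equation is satisfied away from the origin:

```python
import sympy as sp

R, t = sp.symbols('R t', positive=True)
mu, eps = sp.symbols('mu epsilon', positive=True)
f = sp.Function('f')

# retarded solution U(R, t) = f(t - sqrt(mu*eps)*R), and V = U/R
U = f(t - sp.sqrt(mu * eps) * R)
V = U / R

# spherical radial Laplacian of V minus mu*eps times the second time derivative
lhs = sp.diff(R**2 * sp.diff(V, R), R) / R**2 - mu * eps * sp.diff(V, t, 2)
print(sp.simplify(lhs))  # expect 0 (valid except at R = 0)
```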
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9159168601036072, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/7332/for-what-values-of-a-and-b-is-the-gap-vc-a-b-problem-np-hard
# For what values of A and B is the gap-VC-[A,B] problem NP-HARD?

For which values $A,B$ is the problem $\mathsf{gap\mathord-VC}\mathord-[A,B]$ NP-hard? VC is the vertex cover problem.

I am given three options: $B=\frac{3}{4},A=\frac{1}{2}$ or $B=\frac{3}{4},A=\frac{1}{4}$ or none.

I would like to review what I think I need to do; I'm not sure that the way I think of it is correct. This is what I think: I need to decide if it is NP-hard to approximate the VC to $\frac{1}{2}$, i.e., can I build an NP Turing machine that would return Yes iff, for a given graph, it can guarantee that it has less than $\frac{1}{4}V$ vertices that cover the whole graph? Maybe even for $\frac{1}{2}V$ vertices?

This is a question from a past midterm that I'm solving now in order to prepare myself for my own midterm in a "Computational Complexity Theory" course.

-

Can you define the gap-VC-[$A,B$] problem? – Yuval Filmus Dec 12 '12 at 20:39

## 1 Answer

Vertex cover has an easy $2$-approximation, so gap-VC-[$1/4,3/4$] is easy. The best unconditional hardness result is by Dinur and Safra, which gives $1.3606$ hardness. Since $(3/4)/(1/2) = 1.5 > 1.3606$, their method definitely doesn't imply gap-VC-[$1/2,3/4$]-hardness. Assuming the Unique Games Conjecture, the hardness of vertex cover is $2-\epsilon$, but this doesn't necessarily imply that gap-VC-[$1/2,3/4$] is hard - you'd have to look at the proof to determine that.

In summary, I have no idea what the correct answer is, and I'm not sure that anyone else knows which answer is correct (unconditionally).

-
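The "easy $2$-approximation" mentioned in the answer is the classical maximal-matching heuristic; here is a minimal Python sketch of it (my own illustration, not part of the thread):

```python
def vertex_cover_2approx(edges):
    """Greedily build a maximal matching and take both endpoints of
    every matched edge; the result is at most twice the optimum cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# example: the path 0-1-2-3 has optimum cover {1, 2} of size 2;
# the approximation may return all four vertices (within the factor-2 bound)
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```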
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9689947366714478, "perplexity_flag": "head"}
http://tmont.com/blargh/2011/11/the-eymology-of-lifting
# tommy montgomery / blargh / 2011 / 11

## The Etymology of Lifting

Wednesday November 11, 2011 1:47 AM

"Lifting" is an esoteric term that is used by a few languages (C♯, most notably) to describe a feature usually associated with nullable types. It's used colloquially to signify that a member (or operator) is "lifted" from another object automatically by the compiler. What this actually means is best illustrated with an example.

C♯'s reference types are nullable by default. This means you can do things like `object foo = null;` and not get a compiler error. It also means you can do things like `foo.ToString()` and get the ever-familiar NullReferenceException.

But not all types are this way. Value types like `int` are never null, and have a default value that is retrievable with the default operator: `var zero = default(int);`

The expected behavior becomes less clear when you use nullable value types like `int?`. By using nullable types, you can assign a null value to a value type without a compiler error: `int? nullableInt = null;`

For example, what happens in the following code? Is it a compiler error, a runtime error, or is it valid code that will run without exception?

```int? nullableInt = null;
int nonNullableInt = 2;

int result = nullableInt + nonNullableInt; //what happens?
int? result = nullableInt + nonNullableInt; //what about here?
```

The answer I'll leave as an exercise to the reader, as it's mostly irrelevant to what I want to talk about. That is, the subject of this post is not the peculiarities of the C♯ language specification and the validity (or lack thereof) of nullable reference types. Both subjects have been hotly debated by people far more qualified than I. Rather, the subject of this post is what "lifting" means in the context of a programming language. I gave the example above to illustrate potential ambiguities when performing operations on nullable types. The actual value of result is insignificant, but the ways in which the compiler determines how to perform addition between a potentially null value and a non-null value are a good lead-in to lifting.

## Let's get mathematical

Lifting comes from the prestigious and panty-dropping field of topology, which has to do with "structured space", which is basically a catch-all term in mathematics. You can hunt down the exact definition later.

In algebraic topology, there's a thing called a homotopy. Two functions in two different topological spaces are homotopic if one can be transformed into the other. What that actually means is not really relevant, but homotopic functions (and in particular homotopies in more than two dimensions) played a role in proving the Poincaré Conjecture. So they've served a purpose at some point.

So where does lifting come in? Well, if you have a homotopy on a space X to another space B and a mapping function δ from another topological space E to B, then δ has the homotopic lifting property on X if it satisfies some other conditions. Obviously, that makes no sense, and nor should it, unless you happen to be a grad student studying algebraic topology. The exact meaning isn't important, but a general inkling of what it represents will aid in understanding why programming languages borrowed the term lifting.

So. The δ function above is a map bridging two different spaces. Instead of spaces, we'll call them sets (since that's actually what they are).
Say E is the set of integers `{ 1, 2, 3, 4 }`, and B is the set of integers `{ 2, 4, 6, 8 }`. In this trivial case, δ could be $$f(x) = 2x$$. It should be pretty obvious to see that δ will map each element in E to an element in B.

Now, to say that δ has the lifting property is where it starts to get interesting. Well, more interesting. Whatever. Anyway, to say a function has the lifting property requires a bit of verification. Specifically, it requires several conditions to be true, all of which I'm not going to discuss. You can read about them on wikipedia if you want. The important thing is that if δ has the lifting property on a space X (which, remember, contains a homotopy from X to B), then there exists another function g that maps X to E.

So, continuing with our trivial example, say X is `{ 5, 6, 7, 8 }`. Then we could have $$g(x) = x - 4$$. This would map all of the elements in X to an element in E (which was `{ 1, 2, 3, 4 }`, as was defined above).

Now, bear in mind that this is all completely contrived and dumbed down for the sake of illustrating the concept of lifting in programming languages. This is not a valid mathematical notion. This ain't Wolfram|Alpha. For example, X must be defined on the continuous closed interval $$[0, 1]$$, and there must also exist a map from $$X \times \{0\}$$ to E. But we won't get into that.

So what is happening is that the homotopy from X to B is "lifted" to E by g. What this (approximately) means in layman's terms is that the homotopy from X to B can also take place from X to E. And what THAT means is that a function that can be transformed from X to B (the definition of homotopy) can also be transformed from X to E. Duh. *pushes up glasses*

## Lifting in C♯

So how does all of this relate to C♯ and nullable types? Well, to be able to conveniently perform addition, C♯ "lifts" the `+` operator on `int` to `int?`. If you think of the set of operators on integers as a topological space, and the set of operators on nullable integers as another topological space, the magic that allows addition between nullable integers would be the δ mapping function that lifts addition to the nullable integers.

I think that might be the worst analogy of all time. I don't think taking something simple and rewording it to be more complex is an actual literary technique. It's possible that I just invented it. But I digress.

Using the term "lifting" is a bit of a misnomer, as it has concrete meaning in other fields (namely algebraic topology). "Lifting" in C♯ is a bastardized form of "lifting" that ignores many of the essential properties of homotopy theory (like the fact that it only applies to continuous functions). But honestly, it doesn't really matter. The visualization of literally (well, figuratively) "lifting" an operator from one object and applying it to another object is pretty apt. But sometimes it's good to understand WHY these things are done this way.
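For reference, here is a compact statement of the homotopy lifting property that the middle section paraphrases; this is the standard textbook formulation written out in LaTeX (my addition, assuming amsmath, not part of the original post):

```latex
% \delta : E \to B has the homotopy lifting property with respect to X if,
% for every homotopy h and partial lift \tilde h_0, a full lift \tilde h exists:
\[
\begin{array}{ccc}
X \times \{0\} & \xrightarrow{\ \tilde h_0\ } & E \\
\downarrow & \nearrow \scriptstyle{\tilde h} & \downarrow \scriptstyle{\delta} \\
X \times [0,1] & \xrightarrow{\ \ h\ \ } & B
\end{array}
\qquad \text{with } \delta \circ \tilde h = h .
\]
```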
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553408026695251, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/08/27/elementary-row-and-column-operations/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician

## Elementary Row and Column Operations

Here's a topic that might be familiar all the way back to high school mathematics classes. We're going to use the elementary matrices to manipulate a matrix. Rather than work out abstract formulas, I'm going to take the specific matrix

$\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}$

and perform example manipulations on it to draw conclusions. The effects of elementary matrices are so, well, elementary that it will be clear how they generalize.

So how do we use the matrices to manipulate matrices? Well, we're using the elementary matrices to change the input or output bases of the linear transformation represented by the matrix. So to change the output basis we'll multiply on the left by an elementary matrix, while to change the input basis we'll multiply on the right by the inverse of an elementary matrix — which itself is an elementary matrix of the same kind.

First, the swaps. On the left we have

$\displaystyle\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}=\begin{pmatrix}A&B&C&D\\I&J&K&L\\E&F&G&H\end{pmatrix}$

while on the right we might have

$\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}\begin{pmatrix}0&0&1&0\\0&1&0&0\\1&0&0&0\\0&0&0&1\end{pmatrix}=\begin{pmatrix}C&B&A&D\\G&F&E&H\\K&J&I&L\end{pmatrix}$

On the left, the action of a swap is to swap two rows, while on the right the action is to swap two columns of the matrix.

Next come the scalings. On the left

$\displaystyle\begin{pmatrix}c&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}=\begin{pmatrix}cA&cB&cC&cD\\E&F&G&H\\I&J&K&L\end{pmatrix}$

and on the right

$\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&c&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}=\begin{pmatrix}A&cB&C&D\\E&cF&G&H\\I&cJ&K&L\end{pmatrix}$

On the left, the action of a scaling is to multiply a row by the scaling factor, while on the right the effect is to multiply a column by the scaling factor.

Finally, the shears. On the left

$\displaystyle\begin{pmatrix}1&0&c\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}=\begin{pmatrix}A+cI&B+cJ&C+cK&D+cL\\E&F&G&H\\I&J&K&L\end{pmatrix}$

and on the right

$\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&c&0&1\end{pmatrix}=\begin{pmatrix}A&B+cD&C&D\\E&F+cH&G&H\\I&J+cL&K&L\end{pmatrix}$

On the left, the shear $H_{i,j,c}$ adds $c$ times the $j$th row to the $i$th row, while on the right, the shear adds $c$ times the $i$th column to the $j$th column.

So in general we see that acting on the left manipulates the rows of a matrix, while acting on the right manipulates the columns. We call these the "elementary row operations" and "elementary column operations", respectively. Any manipulation of the form of a matrix we can effect by these operations can be seen as the result of applying a change of basis matrix on the left (output) or right (input) side. And so any two matrices related by these operations can be seen as representing "the same" transformation in two different bases.

Posted by John Armstrong | Algebra, Linear Algebra

## 8 Comments

1. Quit making it look so easy! I just graduated with a degree in physics, and such simple operations never seemed more obvious! Thanks for the great explanation; keep up the good work (the simple stuff is the best stuff!).
Posted by John Armstrong | Algebra, Linear Algebra

## 8 Comments »

1. Quit making it look so easy! I just graduated with a degree in physics, and such simple operations never seemed more obvious! Thanks for the great explanation; keep up the good work (the simple stuff is the best stuff!). Comment by AndrewB | August 29, 2009 | Reply

2. [...] Echelon Form For now, I only want to focus on elementary row operations. That is, transformations of matrices that can be effected by multiplying by elementary matrices on [...] Pingback by | September 1, 2009 | Reply

3. [...] unique. Indeed, we are not allowed to alter the basis of the input space (since that would involve elementary column operations), so we can view this as a process of creating a basis for the output space in terms of the given [...] Pingback by | September 3, 2009 | Reply

4. [...] Matrices Generate the General Linear Group Okay, so we can use elementary row operations to put any matrix into its (unique) reduced row echelon form. As we stated last time, this consists [...] Pingback by | September 4, 2009 | Reply

5. [...] terms of elementary row operations, first we add the third row to the second. Then we add the second to the first, effectively adding [...] Pingback by | September 11, 2009 | Reply

6. Really cool explanation… thanks a lot. Keep it up. Comment by dpak | December 6, 2009 | Reply

7. Can we have column operations in analogy to row operations? Comment by Tahsin | March 18, 2010 | Reply

8. See the last paragraph, Tahsin. Comment by | March 18, 2010 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8997735381126404, "perplexity_flag": "middle"}
http://en.wikisource.org/wiki/Relativity:_The_Special_and_General_Theory/Part_III
# Relativity: The Special and General Theory/Part III

From Wikisource. By Albert Einstein, translated by Robert William Lawson.

## Part III - Considerations on the Universe as a Whole

### Section 30 - Cosmological Difficulties of Newton's Theory

Apart from the difficulty discussed in Section 21, there is a second fundamental difficulty attending classical celestial mechanics, which, to the best of my knowledge, was first discussed in detail by the astronomer Seeliger. If we ponder over the question as to how the universe, considered as a whole, is to be regarded, the first answer that suggests itself to us is surely this: As regards space (and time) the universe is infinite. There are stars everywhere, so that the density of matter, although very variable in detail, is nevertheless on the average everywhere the same. In other words: However far we might travel through space, we should find everywhere an attenuated swarm of fixed stars of approximately the same kind and density.

This view is not in harmony with the theory of Newton. The latter theory rather requires that the universe should have a kind of centre in which the density of the stars is a maximum, and that as we proceed outwards from this centre the group-density of the stars should diminish, until finally, at great distances, it is succeeded by an infinite region of emptiness. The stellar universe ought to be a finite island in the infinite ocean of space.[1]

This conception is in itself not very satisfactory. It is still less satisfactory because it leads to the result that the light emitted by the stars and also individual stars of the stellar system are perpetually passing out into infinite space, never to return, and without ever again coming into interaction with other objects of nature. Such a finite material universe would be destined to become gradually but systematically impoverished.

In order to escape this dilemma, Seeliger suggested a modification of Newton's law, in which he assumes that for great distances the force of attraction between two masses diminishes more rapidly than would result from the inverse square law. In this way it is possible for the mean density of matter to be constant everywhere, even to infinity, without infinitely large gravitational fields being produced. We thus free ourselves from the distasteful conception that the material universe ought to possess something of the nature of a centre. Of course we purchase our emancipation from the fundamental difficulties mentioned, at the cost of a modification and complication of Newton's law which has neither empirical nor theoretical foundation. We can imagine innumerable laws which would serve the same purpose, without our being able to state a reason why one of them is to be preferred to the others; for any one of these laws would be founded just as little on more general theoretical principles as is the law of Newton.

### Section 31 - The Possibility of a "Finite" and Yet "Unbounded" Universe

But speculations on the structure of the universe also move in quite another direction. The development of non-Euclidean geometry led to the recognition of the fact, that we can cast doubt on the infiniteness of our space without coming into conflict with the laws of thought or with experience (Riemann, Helmholtz).
These questions have already been treated in detail and with unsurpassable lucidity by Helmholtz and Poincaré, whereas I can only touch on them briefly here.

In the first place, we imagine an existence in two dimensional space. Flat beings with flat implements, and in particular flat rigid measuring-rods, are free to move in a plane. For them nothing exists outside of this plane: that which they observe to happen to themselves and to their flat "things" is the all-inclusive reality of their plane. In particular, the constructions of plane Euclidean geometry can be carried out by means of the rods, e.g. the lattice construction, considered in Section 24. In contrast to ours, the universe of these beings is two-dimensional; but, like ours, it extends to infinity. In their universe there is room for an infinite number of identical squares made up of rods, i.e. its volume (surface) is infinite. If these beings say their universe is "plane," there is sense in the statement, because they mean that they can perform the constructions of plane Euclidean geometry with their rods. In this connection the individual rods always represent the same distance, independently of their position.

Let us consider now a second two-dimensional existence, but this time on a spherical surface instead of on a plane. The flat beings with their measuring-rods and other objects fit exactly on this surface and they are unable to leave it. Their whole universe of observation extends exclusively over the surface of the sphere. Are these beings able to regard the geometry of their universe as being plane geometry and their rods withal as the realisation of "distance"? They cannot do this. For if they attempt to realise a straight line, they will obtain a curve, which we "three-dimensional beings" designate as a great circle, i.e. a self-contained line of definite finite length, which can be measured up by means of a measuring-rod. Similarly, this universe has a finite area that can be compared with the area of a square constructed with rods. The great charm resulting from this consideration lies in the recognition of the fact that the universe of these beings is finite and yet has no limits.

But the spherical-surface beings do not need to go on a world-tour in order to perceive that they are not living in a Euclidean universe. They can convince themselves of this on every part of their "world," provided they do not use too small a piece of it. Starting from a point, they draw "straight lines" (arcs of circles as judged in three dimensional space) of equal length in all directions. They will call the line joining the free ends of these lines a "circle." For a plane surface, the ratio of the circumference of a circle to its diameter, both lengths being measured with the same rod, is, according to Euclidean geometry of the plane, equal to a constant value π, which is independent of the diameter of the circle. On their spherical surface our flat beings would find for this ratio the value

$\pi\frac{\sin\left(\frac{r}{R}\right)}{\left(\frac{r}{R}\right)}$

i.e. a smaller value than π, the difference being the more considerable, the greater is the radius of the circle in comparison with the radius $R$ of the "world-sphere." By means of this relation the spherical beings can determine the radius of their universe ("world"), even when only a relatively small part of their world-sphere is available for their measurements.
But if this part is very small indeed, they will no longer be able to demonstrate that they are on a spherical "world" and not on a Euclidean plane, for a small part of a spherical surface differs only slightly from a piece of a plane of the same size.

Thus if the spherical surface beings are living on a planet of which the solar system occupies only a negligibly small part of the spherical universe, they have no means of determining whether they are living in a finite or in an infinite universe, because the "piece of universe" to which they have access is in both cases practically plane, or Euclidean. It follows directly from this discussion, that for our sphere-beings the circumference of a circle first increases with the radius until the "circumference of the universe" is reached, and that it thenceforward gradually decreases to zero for still further increasing values of the radius. During this process the area of the circle continues to increase more and more, until finally it becomes equal to the total area of the whole "world-sphere."

Perhaps the reader will wonder why we have placed our "beings" on a sphere rather than on another closed surface. But this choice has its justification in the fact that, of all closed surfaces, the sphere is unique in possessing the property that all points on it are equivalent. I admit that the ratio of the circumference $c$ of a circle to its radius $r$ depends on $r$, but for a given value of $r$ it is the same for all points of the "world-sphere"; in other words, the "world-sphere" is a "surface of constant curvature."

To this two-dimensional sphere-universe there is a three-dimensional analogy, namely, the three-dimensional spherical space which was discovered by Riemann. Its points are likewise all equivalent. It possesses a finite volume, which is determined by its "radius" $\left(2\pi^{2}R^{3}\right)$. Is it possible to imagine a spherical space? To imagine a space means nothing else than that we imagine an epitome of our "space" experience, i.e. of experience that we can have in the movement of "rigid" bodies. In this sense we can imagine a spherical space.

Suppose we draw lines or stretch strings in all directions from a point, and mark off from each of these the distance $r$ with a measuring-rod. All the free end-points of these lengths lie on a spherical surface. We can specially measure up the area ($F$) of this surface by means of a square made up of measuring-rods. If the universe is Euclidean, then $F=4\pi r^{2}$; if it is spherical, then $F$ is always less than $4\pi r^{2}$. With increasing values of $r$, $F$ increases from zero up to a maximum value which is determined by the "world-radius," but for still further increasing values of $r$, the area gradually diminishes to zero. At first, the straight lines which radiate from the starting point diverge farther and farther from one another, but later they approach each other, and finally they run together again at a "counter-point" to the starting point. Under such conditions they have traversed the whole spherical space. It is easily seen that the three-dimensional spherical space is quite analogous to the two-dimensional spherical surface. It is finite (i.e. of finite volume), and has no bounds.
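The claim that a sufficiently small piece of a spherical world is indistinguishable from a plane can be put in numbers with the circumference formula from Section 31. A brief sketch (an illustration added here, not part of Einstein's text):

```python
import math

def ratio(r_over_R):
    """Circumference-to-diameter ratio for a circle of radius r drawn on
    a sphere of radius R: pi * sin(r/R) / (r/R), per the text."""
    x = r_over_R
    return math.pi * math.sin(x) / x

for x in (0.01, 0.1, 0.5, 1.0, math.pi / 2):
    print(f"r/R = {x:5.3f}  ratio = {ratio(x):.6f}  (pi = {math.pi:.6f})")
# At r/R = 0.01 the deficit is about 5e-5, far below any measurement a
# "flat being" confined to a small patch could hope to make.
```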
It may be mentioned that there is yet another kind of curved space: "elliptical space." It can be regarded as a curved space in which the two "counter-points" are identical (indistinguishable from each other). An elliptical universe can thus be considered to some extent as a curved universe possessing central symmetry.

It follows from what has been said, that closed spaces without limits are conceivable. From amongst these, the spherical space (and the elliptical) excels in its simplicity, since all points on it are equivalent. As a result of this discussion, a most interesting question arises for astronomers and physicists, and that is whether the universe in which we live is infinite, or whether it is finite in the manner of the spherical universe. Our experience is far from being sufficient to enable us to answer this question. But the general theory of relativity permits of our answering it with a moderate degree of certainty, and in this connection the difficulty mentioned in Section 30 finds its solution.

### Section 32 - The Structure of Space According to the General Theory of Relativity

According to the general theory of relativity, the geometrical properties of space are not independent, but they are determined by matter. Thus we can draw conclusions about the geometrical structure of the universe only if we base our considerations on the state of the matter as being something that is known. We know from experience that, for a suitably chosen co-ordinate system, the velocities of the stars are small as compared with the velocity of transmission of light. We can thus as a rough approximation arrive at a conclusion as to the nature of the universe as a whole, if we treat the matter as being at rest.

We already know from our previous discussion that the behaviour of measuring-rods and clocks is influenced by gravitational fields, i.e. by the distribution of matter. This in itself is sufficient to exclude the possibility of the exact validity of Euclidean geometry in our universe. But it is conceivable that our universe differs only slightly from a Euclidean one, and this notion seems all the more probable, since calculations show that the metrics of surrounding space is influenced only to an exceedingly small extent by masses even of the magnitude of our sun. We might imagine that, as regards geometry, our universe behaves analogously to a surface which is irregularly curved in its individual parts, but which nowhere departs appreciably from a plane: something like the rippled surface of a lake. Such a universe might fittingly be called a quasi-Euclidean universe. As regards its space it would be infinite. But calculation shows that in a quasi-Euclidean universe the average density of matter would necessarily be nil. Thus such a universe could not be inhabited by matter everywhere; it would present to us that unsatisfactory picture which we portrayed in Section 30.

If we are to have in the universe an average density of matter which differs from zero, however small may be that difference, then the universe cannot be quasi-Euclidean. On the contrary, the results of calculation indicate that if matter be distributed uniformly, the universe would necessarily be spherical (or elliptical). Since in reality the detailed distribution of matter is not uniform, the real universe will deviate in individual parts from the spherical, i.e. the universe will be quasi-spherical. But it will be necessarily finite. In fact, the theory supplies us with a simple connection[2] between the space-expanse of the universe and the average density of matter in it.
1. Proof.— According to the theory of Newton, the number of "lines of force" which come from infinity and terminate in a mass $m$ is proportional to the mass $m$. If, on the average, the mass density $\rho_{0}$ is constant throughout the universe, then a sphere of volume $V$ will enclose the average mass $\rho_{0}V$. Thus the number of lines of force passing through the surface $F$ of the sphere into its interior is proportional to $\rho_{0}V$. For unit area of the surface of the sphere the number of lines of force which enters the sphere is thus proportional to $\rho_{0}\frac{V}{F}$ or to $\rho_{0}R$. Hence the intensity of the field at the surface would ultimately become infinite with increasing radius $R$ of the sphere, which is impossible.

2. For the radius $R$ of the universe we obtain the equation

$R^{2}=\frac{2}{\kappa\rho}$

The use of the C.G.S. system in this equation gives $\tfrac{2}{\kappa}=1.08\times10^{27}$; $\rho$ is the average density of the matter.
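Footnote 2 makes the world-radius a one-line computation. The sketch below (an illustration, not part of the original; the mean density is an assumed placeholder value, since the text leaves $\rho$ unspecified) evaluates $R=\sqrt{2/(\kappa\rho)}$ with the quoted C.G.S. constant:

```python
import math

two_over_kappa = 1.08e27   # C.G.S. value quoted in footnote 2
rho = 1e-28                # assumed mean density in g/cm^3 (placeholder;
                           # the text does not fix a value for rho)

R_cm = math.sqrt(two_over_kappa / rho)   # from R^2 = 2 / (kappa * rho)
LIGHT_YEAR_CM = 9.46e17
print(f"R ~ {R_cm:.2e} cm ~ {R_cm / LIGHT_YEAR_CM:.2e} light-years")
```

With this placeholder density the radius comes out on the order of a few billion light-years, and it scales as $\rho^{-1/2}$.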
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500062465667725, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Spherical_harmonics
# Spherical harmonics

Visual representations of the first few spherical harmonics. Red portions represent regions where the function is positive, and green portions represent regions where the function is negative.

In mathematics, spherical harmonics are the angular portion of a set of solutions to Laplace's equation. Represented in a system of spherical coordinates, Laplace's spherical harmonics $Y_\ell^m$ are a specific set of spherical harmonics that forms an orthogonal system, first introduced by Pierre Simon de Laplace in 1782.[1] Spherical harmonics are important in many theoretical and practical applications, particularly in the computation of atomic orbital electron configurations, representation of gravitational fields, geoids, and the magnetic fields of planetary bodies and stars, and characterization of the cosmic microwave background radiation. In 3D computer graphics, spherical harmonics play a special role in a wide variety of topics including indirect lighting (ambient occlusion, global illumination, precomputed radiance transfer, etc.) and recognition of 3D shapes.

## History

Spherical harmonics were first investigated in connection with the Newtonian potential of Newton's law of universal gravitation in three dimensions. In 1782, Pierre-Simon de Laplace had, in his Mécanique Céleste, determined that the gravitational potential at a point $\mathbf{x}$ associated to a set of point masses $m_i$ located at points $\mathbf{x}_i$ was given by

$V(\mathbf{x}) = \sum_i \frac{m_i}{|\mathbf{x}_i - \mathbf{x}|}.$

Each term in the above summation is an individual Newtonian potential for a point mass. Just prior to that time, Adrien-Marie Legendre had investigated the expansion of the Newtonian potential in powers of $r = |\mathbf{x}|$ and $r_1 = |\mathbf{x}_1|$. He discovered that if $r \le r_1$ then

$\frac{1}{|\mathbf{x}_1 - \mathbf{x}|} = P_0(\cos\gamma)\frac{1}{r_1} + P_1(\cos\gamma)\frac{r}{r_1^2} + P_2(\cos\gamma)\frac{r^2}{r_1^3}+\cdots$

where $\gamma$ is the angle between the vectors $\mathbf{x}$ and $\mathbf{x}_1$. The functions $P_i$ are the Legendre polynomials, and they are a special case of spherical harmonics. Subsequently, in his 1782 memoire, Laplace investigated these coefficients using spherical coordinates to represent the angle $\gamma$ between $\mathbf{x}_1$ and $\mathbf{x}$. (See Applications of Legendre polynomials in physics for a more detailed analysis.)

In 1867, William Thomson (Lord Kelvin) and Peter Guthrie Tait introduced the solid spherical harmonics in their Treatise on Natural Philosophy, and also first introduced the name of "spherical harmonics" for these functions. The solid harmonics were homogeneous solutions of Laplace's equation

$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0.$

By examining Laplace's equation in spherical coordinates, Thomson and Tait recovered Laplace's spherical harmonics. The term "Laplace's coefficients" was employed by William Whewell to describe the particular system of solutions introduced along these lines, whereas others reserved this designation for the zonal spherical harmonics that had properly been introduced by Laplace and Legendre.

The 19th century development of Fourier series made possible the solution of a wide variety of physical problems in rectangular domains, such as the solution of the heat equation and wave equation. This could be achieved by expansion of functions in series of trigonometric functions.
Whereas the trigonometric functions in a Fourier series represent the fundamental modes of vibration in a string, the spherical harmonics represent the fundamental modes of vibration of a sphere in much the same way. Many aspects of the theory of Fourier series could be generalized by taking expansions in spherical harmonics rather than trigonometric functions. This was a boon for problems possessing spherical symmetry, such as those of celestial mechanics originally studied by Laplace and Legendre.

The prevalence of spherical harmonics already in physics set the stage for their later importance in the 20th century birth of quantum mechanics. The spherical harmonics are eigenfunctions of the square of the orbital angular momentum operator

$-i\hbar\mathbf{r}\times\nabla,$

and therefore they represent the different quantized configurations of atomic orbitals.

## Laplace's spherical harmonics

Real (Laplace) spherical harmonics $Y_{\ell}^m$ for $\ell=0$ to $4$ (top to bottom) and $m=0$ to $4$ (left to right). The negative order harmonics $Y_{\ell}^{-m}$ are rotated about the $z$ axis by $90^\circ/m$ with respect to the positive order ones.

Laplace's equation imposes that the divergence of the gradient of a scalar field $f$ is zero. In spherical coordinates this is:[2]

$\nabla^2 f = {1 \over r^2}{\partial \over \partial r}\left(r^2 {\partial f \over \partial r}\right) + {1 \over r^2\sin\theta}{\partial \over \partial \theta}\left(\sin\theta {\partial f \over \partial \theta}\right) + {1 \over r^2\sin^2\theta}{\partial^2 f \over \partial \varphi^2} = 0.$

Consider the problem of finding solutions of the form $f(r,\theta,\varphi) = R(r)Y(\theta,\varphi)$. By separation of variables, two differential equations result by imposing Laplace's equation:

$\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) = \lambda,\qquad \frac{1}{Y}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta \frac{\partial Y}{\partial\theta}\right) + \frac{1}{Y}\frac{1}{\sin^2\theta}\frac{\partial^2Y}{\partial\varphi^2} = -\lambda.$

The second equation can be simplified under the assumption that $Y$ has the form $Y(\theta,\varphi) = \Theta(\theta)\Phi(\varphi)$. Applying separation of variables again to the second equation gives the pair of differential equations

$\frac{1}{\Phi(\varphi)} \frac{d^2 \Phi(\varphi)}{d\varphi^2} = -m^2$

$\lambda\sin ^2(\theta) + \frac{\sin(\theta)}{\Theta(\theta)} \frac{d}{d\theta} \left [ \sin(\theta) \frac{d\Theta}{d\theta} \right ] = m^2$

for some number $m$. A priori, $m$ is a complex constant, but because $\Phi$ must be a periodic function whose period evenly divides $2\pi$, $m$ is necessarily an integer and $\Phi$ is a linear combination of the complex exponentials $e^{\pm im\varphi}$. The solution function $Y(\theta,\varphi)$ is regular at the poles of the sphere, where $\theta=0,\pi$. Imposing this regularity in the solution $\Theta$ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter $\lambda$ to be of the form $\lambda = \ell(\ell+1)$ for some non-negative integer with $\ell \ge |m|$; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables $t = \cos\theta$ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial $P_\ell^m(\cos\theta)$. Finally, the equation for $R$ has solutions of the form $R(r) = Ar^{\ell} + Br^{-\ell-1}$; requiring the solution to be regular throughout $\mathbb{R}^3$ forces $B = 0$.[3]

Here the solution was assumed to have the special form $Y(\theta,\varphi) = \Theta(\theta)\Phi(\varphi)$.
For a given value of $\ell$, there are $2\ell+1$ independent solutions of this form, one for each integer $m$ with $-\ell \le m \le \ell$. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:

$Y_\ell^m (\theta, \varphi ) = N \, e^{i m \varphi } \, P_\ell^m (\cos{\theta} )$

which fulfill

$r^2\nabla^2 Y_\ell^m (\theta, \varphi ) = -\ell (\ell + 1 ) Y_\ell^m (\theta, \varphi ).$

Here $Y_\ell^m$ is called a spherical harmonic function of degree $\ell$ and order $m$, $P_\ell^m$ is an associated Legendre polynomial, $N$ is a normalization constant, and $\theta$ and $\varphi$ represent colatitude and longitude, respectively. In particular, the colatitude $\theta$, or polar angle, ranges from $0$ at the North Pole to $\pi$ at the South Pole, assuming the value of $\pi/2$ at the Equator, and the longitude $\varphi$, or azimuth, may assume all values with $0 \le \varphi < 2\pi$. For a fixed integer $\ell$, every solution $Y(\theta,\varphi)$ of the eigenvalue problem

$r^2\nabla^2 Y = -\ell (\ell + 1 ) Y$

is a linear combination of $Y_\ell^m$. In fact, for any such solution, $r^\ell Y(\theta,\varphi)$ is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are $2\ell+1$ linearly independent such polynomials.

The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor $r^\ell$,

$f(r, \theta, \varphi) = \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell f_\ell^m \, r^\ell \, Y_\ell^m (\theta, \varphi ),$

where the $f_\ell^m$ are constants and the factors $r^\ell \, Y_\ell^m$ are known as solid harmonics. Such an expansion is valid in the ball

$r < R = 1/\limsup_{\ell\to\infty} |f_\ell^m|^{1/\ell}.$

### Orbital angular momentum

In quantum mechanics, Laplace's spherical harmonics are understood in terms of the orbital angular momentum[4]

$\mathbf{L} = -i\hbar\mathbf{x}\times \nabla = L_x\mathbf{i} + L_y\mathbf{j}+L_z\mathbf{k}.$

The $\hbar$ is conventional in quantum mechanics; it is convenient to work in units in which $\hbar = 1$. The spherical harmonics are eigenfunctions of the square of the orbital angular momentum

$\begin{align} \mathbf{L}^2 &= -r^2\nabla^2 + \left(r\frac{\partial}{\partial r}+1\right)r\frac{\partial}{\partial r}\\ &= -{1 \over \sin\theta}{\partial \over \partial \theta}\sin\theta {\partial \over \partial \theta} - {1 \over \sin^2\theta}{\partial^2 \over \partial \varphi^2}. \end{align}$

Laplace's spherical harmonics are the joint eigenfunctions of the square of the orbital angular momentum and the generator of rotations about the azimuthal axis:

$\begin{align} L_z &= -i\left(x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}\right)\\ &=-i\frac{\partial}{\partial\varphi}. \end{align}$

These operators commute, and are densely defined self-adjoint operators on the Hilbert space of functions $f$ square-integrable with respect to the normal distribution on $\mathbb{R}^3$:

$\frac{1}{(2\pi)^{3/2}}\int_{\mathbb{R}^3} |f(x)|^2 e^{-|x|^2/2}\,dx < \infty.$

Furthermore, $\mathbf{L}^2$ is a positive operator. If $Y$ is a joint eigenfunction of $\mathbf{L}^2$ and $L_z$, then by definition

$\begin{align} \mathbf{L}^2Y &= \lambda Y\\ L_zY &= mY \end{align}$

for some real numbers $m$ and $\lambda$. Here $m$ must in fact be an integer, for $Y$ must be periodic in the coordinate $\varphi$ with period a number that evenly divides $2\pi$. Furthermore, since

$\mathbf{L}^2 = L_x^2+L_y^2+L_z^2$

and each of $L_x$, $L_y$, $L_z$ is self-adjoint, it follows that $\lambda \ge m^2$.
Denote this joint eigenspace by $E_{\lambda,m}$, and define the raising and lowering operators by

$\begin{align} L_+ &= L_x + iL_y\\ L_- &= L_x - iL_y \end{align}$

Then $L_+$ and $L_-$ commute with $\mathbf{L}^2$, and the Lie algebra generated by $L_+$, $L_-$, $L_z$ is the special linear Lie algebra, with commutation relations

$[L_z,L_+] = L_+,\quad [L_z,L_-] = -L_-, \quad [L_+,L_-] = 2L_z.$

Thus $L_+ : E_{\lambda,m} \to E_{\lambda,m+1}$ (it is a "raising operator") and $L_- : E_{\lambda,m} \to E_{\lambda,m-1}$ (it is a "lowering operator"). In particular, $L_+^k : E_{\lambda,m} \to E_{\lambda,m+k}$ must be zero for $k$ sufficiently large, because the inequality $\lambda \ge m^2$ must hold in each of the nontrivial joint eigenspaces. Let $Y \in E_{\lambda,m}$ be a nonzero joint eigenfunction, and let $k$ be the least integer such that

$L_+^kY = 0.\,$

Then, since

$L_-L_+ = \mathbf{L}^2 - L_z^2 -L_z$

it follows that

$0=L_-L_+^k Y = (\lambda - (m+k)^2-(m+k))Y.$

Thus $\lambda = \ell(\ell+1)$ for the positive integer $\ell = m+k$.

## Conventions

### Orthogonality and normalization

Several different normalizations are in common use for the Laplace spherical harmonic functions. Throughout the section, we use the standard convention that (see associated Legendre polynomials)

$P_\ell ^{-m} = (-1)^m \frac{(\ell-m)!}{(\ell+m)!} P_\ell ^{m}$

which is the natural normalization given by Rodrigues' formula.

In physics and seismology, the Laplace spherical harmonics are generally defined as

$Y_\ell^m( \theta , \varphi ) = \sqrt{{(2\ell+1)\over 4\pi}{(\ell-m)!\over (\ell+m)!}} \, P_\ell^m ( \cos{\theta} ) \, e^{i m \varphi }$

which are orthonormal

$\int_{\theta=0}^\pi\int_{\varphi=0}^{2\pi}Y_\ell^m \, Y_{\ell'}^{m'*} \, d\Omega=\delta_{\ell\ell'}\, \delta_{mm'},$

where $\delta_{aa} = 1$, $\delta_{ab} = 0$ if $a \ne b$ (see Kronecker delta) and $d\Omega = \sin\theta \, d\varphi \, d\theta$. This normalization is used in quantum mechanics because it ensures that probability is normalized, i.e. $\int{|Y_\ell^m|^2 d\Omega} = 1$.

The disciplines of geodesy and spectral analysis use

$Y_\ell^m( \theta , \varphi ) = \sqrt{{(2\ell+1) }{(\ell-m)!\over (\ell+m)!}} \, P_\ell^m ( \cos{\theta} )\, e^{i m \varphi }$

which possess unit power

${1 \over 4 \pi} \int_{\theta=0}^\pi\int_{\varphi=0}^{2\pi}Y_\ell^m \, Y_{\ell'}^{m'*} d\Omega=\delta_{\ell\ell'}\, \delta_{mm'}.$

The magnetics community, in contrast, uses Schmidt semi-normalized harmonics

$Y_\ell^m( \theta , \varphi ) = \sqrt{{(\ell-m)!\over (\ell+m)!}} \, P_\ell^m ( \cos{\theta} ) \, e^{i m \varphi }$

which have the normalization

$\int_{\theta=0}^\pi\int_{\varphi=0}^{2\pi}Y_\ell^m \, Y_{\ell'}^{m'*}d\Omega={4 \pi \over (2 \ell + 1)}\delta_{\ell\ell'}\, \delta_{mm'}.$

In quantum mechanics this normalization is often used as well, and is named Racah's normalization after Giulio Racah.

It can be shown that all of the above normalized spherical harmonic functions satisfy

$Y_\ell^{m*} (\theta, \varphi) = (-1)^m Y_\ell^{-m} (\theta, \varphi),$

where the superscript * denotes complex conjugation. Alternatively, this equation follows from the relation of the spherical harmonic functions with the Wigner D-matrix.

### Condon–Shortley phase

One source of confusion with the definition of the spherical harmonic functions concerns a phase factor of $(-1)^m$ for $m > 0$, $1$ otherwise, commonly referred to as the Condon–Shortley phase in the quantum mechanical literature. In the quantum mechanics community, it is common practice to either include this phase factor in the definition of the associated Legendre polynomials, or to append it to the definition of the spherical harmonic functions.
There is no requirement to use the Condon–Shortley phase in the definition of the spherical harmonic functions, but including it can simplify some quantum mechanical operations, especially the application of raising and lowering operators. The geodesy[5] and magnetics communities never include the Condon–Shortley phase factor in their definitions of the spherical harmonic functions nor in the ones of the associated Legendre polynomials.[citation needed]

### Real form

A real basis of spherical harmonics can be defined in terms of their complex analogues by setting

$Y_{\ell m} = \begin{cases} {1\over\sqrt2}\left(Y_\ell^m+(-1)^m \, Y_\ell^{-m}\right) = \sqrt{2} N_{(\ell,m)} P_\ell^m(\cos \theta) \cos m\varphi & \mbox{if } m>0 \\ Y_\ell^0 & \mbox{if } m=0\\ {1\over i\sqrt2}\left(Y_\ell^{-m}-(-1)^{m}\, Y_\ell^{m}\right) = \sqrt{2} N_{(\ell,|m|)} P_\ell^{|m|}(\cos \theta) \sin |m|\varphi &\mbox{if } m<0. \end{cases}$

where $N_{(\ell,m)}$ denotes the normalization constant, which, in the physics convention, is

$N_{(\ell,m)} \equiv \sqrt{{(2\ell+1)\over 4\pi}{(\ell-m)!\over (\ell+m)!}}\;.$

The real form requires only associated Legendre polynomials $P_\ell^{|m|}$ of non-negative $|m|$. The harmonics with $m > 0$ are said to be of cosine type, and those with $m < 0$ of sine type. These real spherical harmonics are sometimes known as tesseral spherical harmonics.[6] These functions have the same normalization properties as the complex ones above. The table of spherical harmonics lists the real spherical harmonics up to and including $\ell = 5$. Note, however, that the listed functions do not use the Condon–Shortley phase factor and differ by the phase $(-1)^m$ from the phase given in this article.

## Spherical harmonics expansion

The Laplace spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions. On the unit sphere, any square-integrable function can thus be expanded as a linear combination of these:

$f(\theta,\varphi)=\sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell f_\ell^m \, Y_\ell^m(\theta,\varphi).$

This expansion holds in the sense of mean-square convergence — convergence in $L^2$ of the sphere — which is to say that

$\lim_{N\to\infty} \int_0^{2\pi}\int_0^\pi \left|f(\theta,\varphi)-\sum_{\ell=0}^N \sum_{m=- \ell}^\ell f_\ell^m Y_\ell^m(\theta,\varphi)\right|^2\sin\theta\, d\theta \,d\varphi = 0.$

The expansion coefficients are the analogs of Fourier coefficients, and can be obtained by multiplying the above equation by the complex conjugate of a spherical harmonic, integrating over the solid angle $\Omega$, and utilizing the above orthogonality relationships. This is justified rigorously by basic Hilbert space theory. For the case of orthonormalized harmonics, this gives:

$f_\ell^m=\int_{\Omega} f(\theta,\varphi)\, Y_\ell^{m*}(\theta,\varphi)\,d\Omega = \int_0^{2\pi}d\varphi\int_0^\pi \,d\theta\,\sin\theta f(\theta,\varphi)Y_\ell^{m*} (\theta,\varphi).$

If the coefficients decay in $\ell$ sufficiently rapidly — for instance, exponentially — then the series also converges uniformly to $f$.

A real square-integrable function $f$ can be expanded in terms of the real harmonics $Y_{\ell m}$ above as a sum

$f(\theta, \varphi) = \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell f_{\ell m} \, Y_{\ell m}(\theta, \varphi).$

Convergence of the series holds again in the same sense.
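As a numerical illustration of the orthonormality relation and the coefficient formula above (a sketch, not part of the article): it relies on `scipy.special.sph_harm`, which implements the orthonormalized physics convention but takes the azimuthal angle before the polar angle, the reverse of this article's $(\theta,\varphi)$ ordering.

```python
import numpy as np
from scipy.special import sph_harm

# Gauss-Legendre nodes in cos(polar angle) plus a uniform azimuthal grid
# give an accurate quadrature for band-limited functions on the sphere.
n_polar, n_azim = 64, 128
x, w = np.polynomial.legendre.leggauss(n_polar)   # x = cos(polar angle)
polar = np.arccos(x)
azim = np.linspace(0.0, 2.0 * np.pi, n_azim, endpoint=False)
A, P = np.meshgrid(azim, polar)                   # angle grids
W = np.outer(w, np.full(n_azim, 2.0 * np.pi / n_azim))  # quadrature weights

def inner(l1, m1, l2, m2):
    """Quadrature for the integral of Y_{l1}^{m1} conj(Y_{l2}^{m2}) dOmega."""
    # scipy argument order: (m, l, azimuthal angle, polar angle)
    return np.sum(sph_harm(m1, l1, A, P) * np.conj(sph_harm(m2, l2, A, P)) * W)

print(inner(2, 1, 2, 1))   # ~ 1: orthonormal
print(inner(2, 1, 3, 1))   # ~ 0: orthogonal across degrees

# Coefficient recovery: for f = 3 Y_2^1 + 0.5 Y_4^{-2}, the projection
# integral returns f_2^1 ~ 3, matching the coefficient formula above.
f = 3.0 * sph_harm(1, 2, A, P) + 0.5 * sph_harm(-2, 4, A, P)
print(np.sum(f * np.conj(sph_harm(1, 2, A, P)) * W))
```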
## Spectrum analysis

### Power spectrum in signal processing

The total power of a function $f$ is defined in the signal processing literature as the integral of the function squared, divided by the area of its domain. Using the orthonormality properties of the real unit-power spherical harmonic functions, it is straightforward to verify that the total power of a function defined on the unit sphere is related to its spectral coefficients by a generalization of Parseval's theorem:

$\frac{1}{4 \, \pi} \int_\Omega f(\Omega)^2\, d\Omega = \sum_{\ell=0}^\infty S_{f\!f}(\ell),$

where

$S_{f\!f}(\ell) = \sum_{m=-\ell}^\ell f_{\ell m}^2$

is defined as the angular power spectrum. In a similar manner, one can define the cross-power of two functions as

$\frac{1}{4 \, \pi} \int_\Omega f(\Omega) \, g(\Omega) \, d\Omega = \sum_{\ell=0}^\infty S_{fg}(\ell),$

where

$S_{fg}(\ell) = \sum_{m=-\ell}^\ell f_{\ell m} g_{\ell m}$

is defined as the cross-power spectrum. If the functions $f$ and $g$ have a zero mean (i.e., the spectral coefficients $f_{00}$ and $g_{00}$ are zero), then $S_{f\!f}(\ell)$ and $S_{fg}(\ell)$ represent the contributions to the function's variance and covariance for degree $\ell$, respectively.

It is common that the (cross-)power spectrum is well approximated by a power law of the form

$S_{f\!f}(\ell) = C \, \ell^{\beta}.$

When $\beta = 0$, the spectrum is "white" as each degree possesses equal power. When $\beta < 0$, the spectrum is termed "red" as there is more power at the low degrees with long wavelengths than higher degrees. Finally, when $\beta > 0$, the spectrum is termed "blue". The condition on the order of growth of $S_{f\!f}(\ell)$ is related to the order of differentiability of $f$ in the next section.

### Differentiability properties

One can also understand the differentiability properties of the original function $f$ in terms of the asymptotics of $S_{f\!f}(\ell)$. In particular, if $S_{f\!f}(\ell)$ decays faster than any rational function of $\ell$ as $\ell \to \infty$, then $f$ is infinitely differentiable. If, furthermore, $S_{f\!f}(\ell)$ decays exponentially, then $f$ is actually real analytic on the sphere.

The general technique is to use the theory of Sobolev spaces. Statements relating the growth of the $S_{f\!f}(\ell)$ to differentiability are then similar to analogous results on the growth of the coefficients of Fourier series. Specifically, if

$\sum_{\ell=0}^\infty (1+\ell^2)^s S_{ff}(\ell) < \infty,$

then $f$ is in the Sobolev space $H^s(S^2)$. In particular, the Sobolev embedding theorem implies that $f$ is infinitely differentiable provided that

$S_{ff}(\ell) = O(\ell^{-s})\quad\rm{as\ }\ell\to\infty$

for all $s$.

## Algebraic properties

### Addition theorem

A mathematical result of considerable interest and use is called the addition theorem for spherical harmonics. This is a generalization of the trigonometric identity

$\cos(\theta'-\theta)=\cos\theta'\cos\theta + \sin\theta\sin\theta'$

in which the role of the trigonometric functions appearing on the right-hand side is played by the spherical harmonics and that of the left-hand side is played by the Legendre polynomials.

Consider two unit vectors $\mathbf{x}$ and $\mathbf{y}$, having spherical coordinates $(\theta,\varphi)$ and $(\theta',\varphi')$, respectively. The addition theorem states

$P_\ell( \mathbf{x}\cdot\mathbf{y} ) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^\ell Y_{\ell m}^*(\theta',\varphi') \, Y_{\ell m}(\theta,\varphi) \qquad (1)$

where $P_\ell$ is the Legendre polynomial of degree $\ell$.
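Before discussing the validity of (1), here is a quick numerical check of the addition theorem, again a sketch using the same `scipy` conventions as above rather than part of the article:

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre

l = 3
t1, p1 = 0.7, 1.2   # (polar, azimuthal) angles for x
t2, p2 = 2.1, 4.0   # (polar, azimuthal) angles for y
cos_gamma = np.sin(t1) * np.sin(t2) * np.cos(p1 - p2) + np.cos(t1) * np.cos(t2)

lhs = eval_legendre(l, cos_gamma)                 # P_l(x . y)
rhs = (4.0 * np.pi / (2 * l + 1)) * sum(
    np.conj(sph_harm(m, l, p2, t2)) * sph_harm(m, l, p1, t1)
    for m in range(-l, l + 1))
print(lhs, rhs.real)    # the two sides agree to rounding error

# Unsold's theorem: the m-sum of |Y_l^m|^2 equals (2l+1)/(4 pi) everywhere.
total = sum(abs(sph_harm(m, l, p1, t1)) ** 2 for m in range(-l, l + 1))
print(total, (2 * l + 1) / (4.0 * np.pi))
```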
The expression (1) is valid for both real and complex harmonics.[7] The result can be proven analytically, using the properties of the Poisson kernel in the unit ball, or geometrically by applying a rotation to the vector $\mathbf{y}$ so that it points along the $z$-axis, and then directly calculating the right-hand side.[8]

In particular, when $\mathbf{x} = \mathbf{y}$, this gives Unsöld's theorem[9]

$\sum_{m=-\ell}^\ell Y_{\ell m}^*(\theta,\varphi) \, Y_{\ell m}(\theta,\varphi) = \frac{2\ell + 1}{4\pi}$

which generalizes the identity $\cos^2\theta + \sin^2\theta = 1$ to two dimensions.

In the expansion (1), the left-hand side $P_\ell(\mathbf{x}\cdot\mathbf{y})$ is a constant multiple of the degree $\ell$ zonal spherical harmonic. From this perspective, one has the following generalization to higher dimensions. Let $Y_j$ be an arbitrary orthonormal basis of the space $\mathbf{H}_\ell$ of degree $\ell$ spherical harmonics on the $n$-sphere. Then $Z^{(\ell)}_{\mathbf{x}}$, the degree $\ell$ zonal harmonic corresponding to the unit vector $\mathbf{x}$, decomposes as[10]

$Z^{(\ell)}_{\mathbf{x}}({\mathbf{y}}) = \sum_{j=1}^{\dim(\mathbf{H}_\ell)}\overline{Y_j({\mathbf{x}})}\,Y_j({\mathbf{y}}) \qquad (2)$

Furthermore, the zonal harmonic $Z^{(\ell)}_{\mathbf{x}}({\mathbf{y}})$ is given as a constant multiple of the appropriate Gegenbauer polynomial:

$Z^{(\ell)}_{\mathbf{x}}({\mathbf{y}}) = C_\ell^{((n-1)/2)}({\mathbf{x}}\cdot {\mathbf{y}}) \qquad (3)$

Combining (2) and (3) gives (1) in dimension $n = 2$ when $\mathbf{x}$ and $\mathbf{y}$ are represented in spherical coordinates. Finally, evaluating at $\mathbf{x} = \mathbf{y}$ gives the functional identity

$\frac{\dim \mathbf{H}_\ell}{\omega_{n-1}} = \sum_{j=1}^{\dim(\mathbf{H}_\ell)}|Y_j({\mathbf{x}})|^2$

where $\omega_{n-1}$ is the volume of the $(n-1)$-sphere.

### Clebsch–Gordan coefficients

Main article: Clebsch–Gordan coefficients

The Clebsch–Gordan coefficients are the coefficients appearing in the expansion of the product of two spherical harmonics in terms of spherical harmonics itself. A variety of techniques are available for doing essentially the same calculation, including the Wigner 3-jm symbol, the Racah coefficients, and the Slater integrals. Abstractly, the Clebsch–Gordan coefficients express the tensor product of two irreducible representations of the rotation group as a sum of irreducible representations: suitably normalized, the coefficients are then the multiplicities.

### Parity

Main article: Parity (physics)

The spherical harmonics have well defined parity in the sense that they are either even or odd with respect to reflection about the origin. Reflection about the origin is represented by the operator $P\Psi(\vec{r}) = \Psi(-\vec{r})$. For the spherical angles $\{\theta,\varphi\}$ this corresponds to the replacement $\{\pi-\theta,\pi+\varphi\}$. The associated Legendre polynomials give $(-1)^{\ell-m}$ and from the exponential function we have $(-1)^m$, giving together for the spherical harmonics a parity of $(-1)^\ell$:

$Y_\ell^m(\theta,\varphi) \rightarrow Y_\ell^m(\pi-\theta,\pi+\varphi) = (-1)^\ell Y_\ell^m(\theta,\varphi)$

This remains true for spherical harmonics in higher dimensions: applying a point reflection to a spherical harmonic of degree $\ell$ changes the sign by a factor of $(-1)^\ell$.

## Visualization of the spherical harmonics

Schematic representation of $Y_{\ell m}$ on the unit sphere and its nodal lines. $\text{Re}[Y_{\ell m}]$ is equal to 0 along $m$ great circles passing through the poles, and along $\ell-m$ circles of equal latitude. The function changes sign each time it crosses one of these lines.

3D color plot of the spherical harmonics of degree $n=5$. Note that $n=\ell$.
The Laplace spherical harmonics $Y_\ell^m$ can be visualized by considering their "nodal lines", that is, the set of points on the sphere where $\text{Re}[Y_\ell^m] = 0$, or alternatively where $\text{Im}[Y_\ell^m] = 0$. Nodal lines of $Y_\ell^m$ are composed of circles: some are latitudes and others are longitudes. One can determine the number of nodal lines of each type by counting the number of zeros of $Y_\ell^m$ in the latitudinal and longitudinal directions independently. For the latitudinal direction, the real and imaginary components of the associated Legendre polynomials each possess $\ell-|m|$ zeros, whereas for the longitudinal direction, the trigonometric sin and cos functions possess $2|m|$ zeros.

When the spherical harmonic order $m$ is zero (upper-left in the figure), the spherical harmonic functions do not depend upon longitude, and are referred to as zonal. Such spherical harmonics are a special case of zonal spherical functions. When $\ell = |m|$ (bottom-right in the figure), there are no zero crossings in latitude, and the functions are referred to as sectoral. For the other cases, the functions checker the sphere, and they are referred to as tesseral. More general spherical harmonics of degree $\ell$ are not necessarily those of the Laplace basis $Y_\ell^m$, and their nodal sets can be of a fairly general kind.[11]

## List of spherical harmonics

Main article: Table of spherical harmonics

Analytic expressions for the first few orthonormalized Laplace spherical harmonics that use the Condon–Shortley phase convention:

$Y_{0}^{0}(\theta,\varphi)={1\over 2}\sqrt{1\over \pi}$

$Y_{1}^{-1}(\theta,\varphi)={1\over 2}\sqrt{3\over 2\pi} \, \sin\theta \, e^{-i\varphi}$

$Y_{1}^{0}(\theta,\varphi)={1\over 2}\sqrt{3\over \pi}\, \cos\theta$

$Y_{1}^{1}(\theta,\varphi)={-1\over 2}\sqrt{3\over 2\pi}\, \sin\theta\, e^{i\varphi}$

$Y_{2}^{-2}(\theta,\varphi)={1\over 4}\sqrt{15\over 2\pi} \, \sin^{2}\theta \, e^{-2i\varphi}$

$Y_{2}^{-1}(\theta,\varphi)={1\over 2}\sqrt{15\over 2\pi}\, \sin\theta\, \cos\theta\, e^{-i\varphi}$

$Y_{2}^{0}(\theta,\varphi)={1\over 4}\sqrt{5\over \pi}\, (3\cos^{2}\theta-1)$

$Y_{2}^{1}(\theta,\varphi)={-1\over 2}\sqrt{15\over 2\pi}\, \sin\theta\,\cos\theta\, e^{i\varphi}$

$Y_{2}^{2}(\theta,\varphi)={1\over 4}\sqrt{15\over 2\pi}\, \sin^{2}\theta \, e^{2i\varphi}$

## Higher dimensions

The classical spherical harmonics are defined as functions on the unit sphere $S^2$ inside three-dimensional Euclidean space. Spherical harmonics can be generalized to higher dimensional Euclidean space $\mathbb{R}^n$ as follows.[12] Let $\mathbf{P}_\ell$ denote the space of homogeneous polynomials of degree $\ell$ in $n$ variables. That is, a polynomial $P$ is in $\mathbf{P}_\ell$ provided that

$P(\lambda \mathbf{x}) = \lambda^\ell P(\mathbf{x}).$

Let $\mathbf{A}_\ell$ denote the subspace of $\mathbf{P}_\ell$ consisting of all harmonic polynomials; these are the solid spherical harmonics. Let $\mathbf{H}_\ell$ denote the space of functions on the unit sphere

$S^{n-1} = \{\mathbf{x}\in\mathbb{R}^n\,\mid\, |x|=1\}$

obtained by restriction from $\mathbf{A}_\ell$.

The following properties hold:

• The sum of the spaces $\mathbf{H}_\ell$ is dense in the set of continuous functions on $S^{n-1}$ with respect to the uniform topology, by the Stone–Weierstrass theorem. As a result, the sum of these spaces is also dense in the space $L^2(S^{n-1})$ of square-integrable functions on the sphere. Thus every square-integrable function on the sphere decomposes uniquely into a series of spherical harmonics, where the series converges in the $L^2$ sense.
• For all $f \in \mathbf{H}_\ell$, one has

$\Delta_{S^{n-1}}f = -\ell(\ell+n-2)f.$

where $\Delta_{S^{n-1}}$ is the Laplace–Beltrami operator on $S^{n-1}$. This operator is the analog of the angular part of the Laplacian in three dimensions; to wit, the Laplacian in $n$ dimensions decomposes as

$\nabla^2 = r^{1-n}\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r} + r^{-2}\Delta_{S^{n-1}}.$

• It follows from the Stokes theorem and the preceding property that the spaces $\mathbf{H}_\ell$ are orthogonal with respect to the inner product from $L^2(S^{n-1})$. That is to say,

$\int_{S^{n-1}} f\bar{g}\,d\Omega = 0$

for $f \in \mathbf{H}_\ell$ and $g \in \mathbf{H}_k$ for $k \ne \ell$.

• Conversely, the spaces $\mathbf{H}_\ell$ are precisely the eigenspaces of $\Delta_{S^{n-1}}$. In particular, an application of the spectral theorem to the Riesz potential $\Delta_{S^{n-1}}^{-1}$ gives another proof that the spaces $\mathbf{H}_\ell$ are pairwise orthogonal and complete in $L^2(S^{n-1})$.

• Every homogeneous polynomial $P \in \mathbf{P}_\ell$ can be uniquely written in the form

$P(x) = P_\ell(x) + |x|^2P_{\ell-2} + \cdots + \begin{cases} |x|^\ell P_0 & \ell \rm{\ even}\\ |x|^{\ell-1} P_1(x) & \ell\rm{\ odd} \end{cases}$

where $P_j \in \mathbf{A}_j$. In particular,

$\dim \mathbf{H}_\ell = \binom{n+\ell-1}{n-1}-\binom{n+\ell-3}{n-1}.$

An orthogonal basis of spherical harmonics in higher dimensions can be constructed inductively by the method of separation of variables, by solving the Sturm–Liouville problem for the spherical Laplacian

$\Delta_{S^{n-1}} = \sin^{2-n}\phi\frac{\partial}{\partial\phi}\sin^{n-2}\phi\frac{\partial}{\partial\phi} + \sin^{-2}\phi \Delta_{S^{n-2}}$

where $\phi$ is the axial coordinate in a spherical coordinate system on $S^{n-1}$. The end result of such a procedure is[13]

$Y_{l_1, \dots l_{n-1}} (\theta_1, \dots \theta_{n-1}) = \frac{1}{\sqrt{2\pi}} e^{i l_1 \theta_1} \prod_{j = 2}^{n-1} {}_j \bar{P}^{l_{n-1} - 1}_{l_j} (\theta_j)$

where the indices satisfy $|\ell_1| \le \ell_2 \le \cdots \le \ell_{n-1}$ and the eigenvalue is $-\ell_{n-1}(\ell_{n-1} + n-2)$. The functions in the product are defined in terms of the Legendre function

${}_j \bar{P}^l_{L} (\theta) = \sqrt{\frac{2L+j-1}{2} \frac{(L+l+j-2)!}{(L-l)!}} \sin^{\frac{2-j}{2}} (\theta) P^{-(l + \frac{j-2}{2})}_{L+\frac{j-2}{2}} (\cos \theta)$

## Connection with representation theory

The space $\mathbf{H}_\ell$ of spherical harmonics of degree $\ell$ is a representation of the symmetry group of rotations around a point (SO(3)) and its double-cover SU(2). Indeed, rotations act on the two-dimensional sphere, and thus also on $\mathbf{H}_\ell$ by function composition

$\psi \mapsto \psi\circ\rho$

for $\psi$ a spherical harmonic and $\rho$ a rotation. The representation $\mathbf{H}_\ell$ is an irreducible representation of SO(3).

The elements of $\mathbf{H}_\ell$ arise as the restrictions to the sphere of elements of $\mathbf{A}_\ell$: harmonic polynomials homogeneous of degree $\ell$ on three-dimensional Euclidean space $\mathbb{R}^3$. By polarization of $\psi \in \mathbf{A}_\ell$, there are coefficients $\psi_{i_1\dots i_\ell}$ symmetric on the indices, uniquely determined by the requirement

$\psi(x_1,\dots,x_n) = \sum_{i_1\dots i_\ell}\psi_{i_1\dots i_\ell}x_{i_1}\cdots x_{i_\ell}.$

The condition that $\psi$ be harmonic is equivalent to the assertion that the tensor $\psi_{i_1\dots i_\ell}$ must be trace free on every pair of indices. Thus as an irreducible representation of SO(3), $\mathbf{H}_\ell$ is isomorphic to the space of traceless symmetric tensors of degree $\ell$.

More generally, the analogous statements hold in higher dimensions: the space $\mathbf{H}_\ell$ of spherical harmonics on the $n$-sphere is the irreducible representation of SO($n$+1) corresponding to the traceless symmetric $\ell$-tensors.
However, whereas every irreducible tensor representation of SO(2) and SO(3) is of this kind, the special orthogonal groups in higher dimensions have additional irreducible representations that do not arise in this manner.

The special orthogonal groups have additional spin representations that are not tensor representations, and are typically not spherical harmonics. An exception is given by the spin representations of SO(3): strictly speaking these are representations of the double cover SU(2) of SO(3). In turn, SU(2) is identified with the group of unit quaternions, and so coincides with the 3-sphere. The spaces of spherical harmonics on the 3-sphere are certain spin representations of SO(3), with respect to the action by quaternionic multiplication.

### Generalizations

The angle-preserving symmetries of the two-sphere are described by the group of Möbius transformations PSL(2,C). With respect to this group, the sphere is equivalent to the usual Riemann sphere. The group PSL(2,C) is isomorphic to the (proper) Lorentz group, and its action on the two-sphere agrees with the action of the Lorentz group on the celestial sphere in Minkowski space. The analog of the spherical harmonics for the Lorentz group is given by the hypergeometric series; furthermore, the spherical harmonics can be re-expressed in terms of the hypergeometric series, as SO(3) = PSU(2) is a subgroup of PSL(2,C). More generally, hypergeometric series can be generalized to describe the symmetries of any symmetric space; in particular, hypergeometric series can be developed for any Lie group.[14][15][16][17]

## Notes

1. A historical account of various approaches to spherical harmonics in three-dimensions can be found in Chapter IV of MacRobert 1967. The term "Laplace spherical harmonics" is in common use; see Courant & Hilbert 1962 and Meijer & Bauer 2004.
2. Physical applications often take the solution that vanishes at infinity, making A = 0. This does not affect the angular portion of the spherical harmonics.
3. Heiskanen and Moritz, Physical Geodesy, 1967, eq. 1-62.
4. This is valid for any orthonormal basis of spherical harmonics of degree ℓ. For unit power harmonics it is necessary to remove the factor of $4\pi$.
5. Higuchi, Atsushi (1987). "Symmetric tensor spherical harmonics on the N-sphere and their application to the de Sitter group SO(N,1)". Journal of Mathematical Physics 28 (7).
6. N. Vilenkin, Special Functions and the Theory of Group Representations, Am. Math. Soc. Transl., vol. 22, (1968).
7. J. D. Talman, Special Functions, A Group Theoretic Approach, (based on lectures by E.P. Wigner), W. A. Benjamin, New York (1968).
8. W. Miller, Symmetry and Separation of Variables, Addison-Wesley, Reading (1977).
9. A. Wawrzyńczyk, Group Representations and Special Functions, Polish Scientific Publishers, Warszawa (1984).

## References

Cited references

• Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume I, Wiley-Interscience.
• Edmonds, A.R. (1957), Angular Momentum in Quantum Mechanics, Princeton University Press, ISBN 0-691-07912-9.
• Eremenko, Alexandre; Jakobson, Dmitry; Nadirashvili, Nikolai (2007), "On nodal sets and nodal domains on $S^2$ and $\mathbb{R}^2$", 57 (7): 2345–2360, ISSN 0373-0956, MR2394544.
• MacRobert, T.M. (1967), Spherical harmonics: An elementary treatise on harmonic functions, with applications, Pergamon Press.
• Meijer, Paul Herman Ernst; Bauer, Edmond (2004), Group theory: The application to quantum mechanics, Dover, ISBN 978-0-486-43798-9.
• Solomentsev, E.D. (2001), "Spherical harmonics", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
• Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
• Unsöld, Albrecht (1927), "Beiträge zur Quantenmechanik der Atome", Annalen der Physik 387 (3): 355–393, Bibcode:1927AnP...387..355U, doi:10.1002/andp.19273870304.
• Watson, G. N.; Whittaker, E. T. (1927), A Course of Modern Analysis, Cambridge University Press, p. 392.

General references

• E.W. Hobson, The Theory of Spherical and Ellipsoidal Harmonics, (1955) Chelsea Pub. Co., ISBN 978-0-8284-0104-3.
• C. Müller, Spherical Harmonics, (1966) Springer, Lecture Notes in Mathematics, Vol. 17, ISBN 978-3-540-03600-5.
• E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra, (1970) Cambridge at the University Press, ISBN 0-521-09209-4, See chapter 3.
• J.D. Jackson, Classical Electrodynamics, ISBN 0-471-30932-X.
• Albert Messiah, Quantum Mechanics, volume II. (2000) Dover. ISBN 0-486-40924-4.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.7. Spherical Harmonics", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.
• D. A. Varshalovich, A. N. Moskalev, V. K. Khersonskii, Quantum Theory of Angular Momentum, (1988) World Scientific Publishing Co., Singapore, ISBN 9971-5-0107-4.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 118, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8600735664367676, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/hawking-radiation
# Tagged Questions

Radiation that comes from pair production quantum effects in close vicinity to an event horizon, leading to the potential for eventual evaporation of black holes. Two mirror particles are created, with one falling behind the horizon, becoming causally lost to the rest of the universe, including its ...

### Theoretical physics and education: Does it really matter a great deal about what happens inside a black hole, or about Hawking radiation? [closed]
(5 answers, 276 views) I stumbled across this article http://blogs.scientificamerican.com/cross-check/2010/12/21/science-faction-is-theoretical-physics-becoming-softer-than-anthropology/ It got me thinking. Why do we ...

### Will the black hole evaporate in finite time from an external observer's perspective?
(1 answer, 75 views) There is a problem that is bothering me with black hole evaporation because of Hawking radiation. According to Hawking's theory the black hole will evaporate in finite time because of quantum ...

### Is a black hole a perfect black body?
(2 answers, 163 views) A black body absorbs all light/radiation in its reach. According to basic laws of physics, the more energy a body absorbs the more it can emit. Therefore, a black body absorbs all energy directed at ...

### Magnetic field-pulsed microwave transmission line
(0 answers, 21 views) Here's the reference: The researchers showed that a magnetic field-pulsed microwave transmission line containing an array of superconducting quantum interference devices, or SQUIDs, not only ...

### Time inside a black hole
(4 answers, 203 views) If time stops inside a black hole, due to gravitational time dilation, how can its life end after a very long time? If time doesn't pass inside a black hole, then an event to occur inside a black ...

### The paradoxical nature of Hawking radiation [duplicate]
(1 answer, 66 views) The definition of a classical black hole is when even electromagnetic radiation can not escape from it. Why then can Hawking radiation be emitted from semi-classical black holes? What is difference ...

### Reconstruction of the initial state from Hawking radiation?
(1 answer, 94 views) I hear that unitary evolution and information conservation must imply that information about information content that defines the initial state of matter used to create a black hole can be inferred ...

### Hawking Radiation: how does a particle ever cross the event horizon?
(1 answer, 142 views) The heuristic argument for Hawking Radiation is that a virtual pair-production happens just at the event horizon. One particle goes into the black hole, while the other can be observed as radiation. ...

### An infalling object in a black hole looks "paused" for a far away observer, for how long?
(1 answer, 59 views) As I understand, to an observer well outside a black hole, anything going towards it will appear to slow down, and eventually come to a halt, never even touching the event horizon. What happens if ...

### How does the evaporation of a black hole look for a distant observer?
(2 answers, 205 views) Let's assume an observer looking at a distant black hole that is created by a collapsing star. In the observer's frame of reference time near the black hole horizon asymptotically slows down and he never sees ...

### Is there something like Hawking radiation that makes protons emit component quarks?
(2 answers, 103 views) If Hawking radiation can escape from black holes, could quarks perhaps become separated from protons despite it being "impossible" for that to happen?
0answers 67 views ### Black hole entropy from collapsed entangled pure light Consider the following scenario, very similar to the one proposed in this question, but this time, the pure quantum radiation used for the black hole collapse, is now being split with down-converter ... 1answer 112 views ### How would you detect Hawking radiation? Hawking theorized that a black hole must radiate and therefore lose mass (Hawking radiation). According to classical relativity though, nothing can escape a black hole, the hawking radiation would ... 2answers 400 views ### Extremal black hole with no angular momentum and no electric charge A black hole will have a temperature that is a function of the mass, the angular momentum and the electric charge. For a fixed mass, Angular momentum and electric charge are bounded by the extremality ... 1answer 62 views ### does the background spacetime of a black hole affects its thermodynamic properties? The question is this: will the thermodynamic properties of a black hole (Hawking radiation spectra and temperature, entropy, area, etc.) depend if the black hole sits in a DeSitter or an Anti-DeSitter ... 1answer 138 views ### is this generalized Hawking radiation formula right? Look at equation 11.2.17 in this page. The expression is: $$T = 10^{-5} \text{K m} \frac{\xi}{\frac{GM}{c^2} \lbrace \frac{GM}{c^2} + \xi \rbrace - e^2 }$$ where \xi = (r_s^2 - a^2 - ... 0answers 67 views ### Is the Hawking radiation of a charged black hole thermal? Suppose you have a Schwarzschild black hole of mass $M$ and angular parameter $a = 0$ (no rotation). Question: is it possible to throw a charge $Q$ at a faster rate that it will be reradiated? Will ... 1answer 123 views ### Multipolar expansion profile of Hawking radiation on Kerr black holes I would be very curious if Kerr black holes emit Hawking radiation at the same temperature in the equatorial bulges and in their polar regions. I've been looking some reference for this for a couple ... 0answers 84 views ### Information scrambling and Hawking non-thermal radiation states Could a very small black hole where half of its entropy has been radiated, emit Hawking radiation that is macroscopically distinct from being thermal? i.e: not a black body radiator. Or would the ... 3answers 227 views ### Why can't light escape from inside event horizon of Black Holes? The simple answer: Its because Gravity of Black Hole there doesn't allow it. See also this and this Phys.SE posts. Isn't it a classical answer? When we're unable to connect Gravity with Quantum ... 0answers 87 views ### Hawking radiation for closely orbiting black holes Suppose we have two black holes of radius $R_b$ orbiting at a distance $R_r$. I believe semi-classical approximations describe correctly the case where $R_r$ is much larger than the average black body ... 0answers 43 views ### transition between extremal and nonextremal black hole states Extremal black holes are at zero temperature, hence they do not radiate. my question is twofold: 1) is extremality of micro black holes a stable property? electric charge is quickly emitted from ... 0answers 152 views ### micro black hole forces A black hole would radiate mass optimally for interstellar-travel applications in the range between $10^7$ and $10^8$ kilograms. Assuming a light-only radiation emission spectrum, with a parabolic ... 
1answer 130 views ### Hawking Radiation from the WKB Approximation Reading this paper which is itself an exposition of Parikh and Wilczek's paper, I get to a point where I fail to be able to follow the calculation. Now this is undoubtably because my calculational ... 0answers 71 views ### Hawking Radiation as Tunneling Firstly, I'm aware that Hawking radiation can be derived in the "normal" way using the Bogoliubov transformation. However, I was intrigued by the heuristic explanation in terms of tunneling. The ... 3answers 322 views ### Thermodynamically reversed black holes, firewalls, Casimir effect, null energy condition violations Scott Aaronson asked a very deep question at Hawking radiation and reversibility about what happens if black hole evolution is reversed thermodynamically. Most of the commenters missed his point ... 1answer 134 views ### Formation of a black hole and Hawking radiation From the perspective of an outside observer it takes infinitely long for the black hole to form. But if the black hole is no extremal black hole, it emits Hawking radiation. So the outside observer ... 3answers 327 views ### Hawking radiation and reversibility It's often said that, as long as the information that fell into a black hole comes out eventually in the Hawking radiation (by whatever means), pure states remain pure rather than evolving into mixed ... 1answer 180 views ### What is a virtual photon pair? When describing a black hole evaporation in the hawking black body radiation it is usually said that is due to a virtual photon pair, is it this what happens? And what is virtual photon pair, does the ... 1answer 137 views ### Does cosmological horizon grow or decrease as it radiates? Ron Maimon in many posts claimed that cosmological horizon is like a big black hole. Black holes decrease as they evaporate and their radius decreases as well. So what is with a cosmological ... 2answers 227 views ### Why isn't Hawking radiation frozen on the boundary, like in-falling matter? From the perspective of a far-away observer, matter falling into a black hole never crosses the boundary. Why doesn't a basic symmetry argument prove that Hawking radiation is therefore also frozen on ... 1answer 199 views ### black hole event horizon Given gravitational time dilation, under what conditions will a test particle cross an event horizon before the black hole evaporates? Assume zero background radiation. 2answers 132 views ### event horizons are untraversable by observers far from the collapse? Consider this a followup question of this one In the classical schwarszchild solution with an eternal black hole, the user falls through the event horizon in finite local time, but this event does ... 2answers 259 views ### Hawking radiation from point of view of a falling observer This paper tells that Hawking claimed that the falling to a black hole observer will not detect any radiation. But only because the frequency of the Hawking radiation will be of the order $1/R_s$ so ... 3answers 627 views ### Black holes and positive/negative-energy particles I was reading Brian Greene's "Hidden Reality" and came to the part about Hawking Radiation. Quantum jitters that occur near the event horizon of a black hole, which create both positive-energy ... 1answer 221 views ### Wasn't the Hawking Paradox solved by Einstein? I just watched a BBC Horizon episode where they talked about the Hawking Paradox. They mentioned a controversy about information being lost but I couldn't get my head around this. Black hole ... 
2answers 518 views ### How do we determine the temperature of a Black Hole? How do we determine the temperature of a Black Hole? Since we cannot see a Black Hole, which I presume, is because it absorbs light, would it not also prevent radiation from escaping, making ... 1answer 124 views ### Intensity of Hawking radiation for different observers relative to a black hole Consider three observers in different states of motion relative to a black hole: Observer A is far away from the black hole and stationary relative to it; Observer B is suspended some distance ... 3answers 781 views +50 ### From where (in space-time) does Hawking radiation originate? According to my understanding of black hole thermodynamics, if I observe a black hole from a safe distance I should observe black body radiation emanating from it, with a temperature determined by its ... 1answer 108 views ### Hawking radiation: direct matter -> energy conversion? When a black hole evaporates, does it turn all the matter that has fallen in directly to energy, or will it somehow throw back out the same kind of matter (normal or anti) that went in? 2answers 460 views ### Are information conservation and energy conservation related? as evident from the title, are both, conservation of energy and conservation of information two sides of the same coin?? Is there something more to the hypothesis of hawking's radiation other than ... 2answers 575 views ### Analog Hawking radiation I am confused by most discussions of analog Hawking radiation in fluids (see, for example, the recent experimental result of Weinfurtner et al. Phys. Rev. Lett. 106, 021302 (2011), ... 2answers 286 views ### Theoretical basis for black hole evaporation What is the basis for black hole evaporation? I understand that Hawking-radiation is emitted at the event horizon, a theoretical result originating in General Relativity and Quantum Field Theory, but ... 1answer 339 views ### How do we know that black holes evaporate? This has been bugging me for some time. As I understand it, Hawking radiation is the result of the mismatch between the vacuum state of a quantum field as seen by a free falling observer (falling ... 1answer 370 views ### Why isn't black hole information loss this easy (am I missing something basic)? Ok, so on Science channel was a special about Hawking/Susskind debating black holes, which can somehow remove information from the universe. A) In stars, fusion converts 4 hydrogen into 1 helium, ... 2answers 413 views ### Do apparent event horizons have Hawking radiation? As I understand it, black holes have an absolute event horizon and an apparent horizon specific an observer. In addition to black holes, an apparent horizon can come from any sustained acceleration. ... 2answers 246 views ### How can one reconcile the temperature of a black hole with asymptotic flatness? A stationary observer very close to the horizon of a black hole is immersed in a thermal bath of temperature that diverges as the horizon is approached. $$T^{-1} = 4\pi \sqrt{2M(r-2M)}$$ The ... 1answer 494 views ### On black holes, Hawking radiation and gravitational atoms Over the past hour or so I've been following one of my standard physics-based, wanders-through-the-internet. Specifically, I began by reviewing some details of dark energy theory but soon found myself ... 3answers 359 views ### Why is there a flux of radiation in the Hawking effect but not in the Unruh effect? 
(and other questions) This question is slightly related to this one Do all massive bodies emit Hawking radiation?, which I think was poorly posed and so didn't get very useful answers. There are several questions in this ... 4answers 450 views ### de Sitter cosmological limit It has been said that our universe is going to eventually become a de Sitter universe. Expansion will accelerate until their relative speed becomes higher than the speed of light. So I want to ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9130157232284546, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/115898?sort=newest
## Does a uniform space have a closed embedding in a product of metric spaces? I am assuming that uniform spaces are Hausdorff (although it probably doesn't matter for this question). It is more-or-less obvious that a uniform space can be embedded in a product of metric spaces (if d is a semi-metric on the space, form the quotient gotten by identifying pairs of points at d-distance 0 and then map into the product of these quotients), but I would be interested either to know that every uniform space can be embedded as a closed subspace of such a product or to see an example of one that cannot. - Having some trouble parsing the question. Likely because of a () mismatch. And do you mean semi-metric or pseudo-metric? As defined here, for instance: en.wikipedia.org/wiki/Uniformity_%28topology%29 – Igor Khavkine Dec 9 at 17:39 1 Sorry, I got mixed up between semi- and pseudo-metric. Having just googled them, I see I should have said pseudo-metric. You are right, the question was senseless. But I still would like to know the answer. Maybe I should mention the context. I can classify the limit closure, in the category of uniform spaces, of the metric spaces as those uniform spaces that are closed subspaces of a product of metric spaces and want to know if that is all uniform spaces. If only John Isbell were still around to ask. – Michael Barr Dec 9 at 18:06 Is the notion of "realcompact" what you are looking for? – Todd Eisworth Dec 9 at 18:14 The condition of the question is equivalent to the uniform space being complete. The other concepts mentioned here are topological and so not appropriate in this context. One could, of course, pose the same question for completely regular spaces with, e.g., the finest uniformity compatible with the topology and so obtain results of this type. – jbc Dec 9 at 18:59 1 No, the answer is not "complete". It is clear that complete uniform spaces are the limit closure of the complete metric spaces, a question I had already answered. As for "realcompact" the uniform spaces whose associated topology is realcompact appear to be the limit closure of the full subcategory whose only object is the reals. I have not yet written up the details of that, so I am not quite certain. – Michael Barr Dec 9 at 20:51 ## 1 Answer You are looking for the notion of Dieudonné complete spaces (which turn out to be exactly the closed subspaces of products of metric spaces). As Todd mentioned in his comment, this notion is closely related to the notion of realcompactness (the two notions coincide if there are no measurable cardinals). A way to find examples is to look for pseudo-compact non-compact spaces (for instance $\omega_1$ with (a uniformity compatible with) the order topology). - Thank you. This is the full answer I was looking for. – Michael Barr Dec 9 at 20:54
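Spelling out the embedding sketched in the question (my formulation of the standard construction, added for reference): for each uniformly continuous pseudo-metric $d$ on $X$, let $X_d$ be the metric quotient obtained by identifying points at $d$-distance $0$, with quotient map $\pi_d$. Then $$\iota : X \longrightarrow \prod_{d} X_d, \qquad \iota(x) = \big(\pi_d(x)\big)_d$$ is a uniform embedding when $d$ ranges over a family of pseudo-metrics generating the uniformity of $X$; the question is whether the image can always be arranged to be closed.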
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480145573616028, "perplexity_flag": "head"}
http://www.msri.org/seminars/20097
Mathematical Sciences Research Institute Seminar: Constructing modules with prescribed cohomology (COMMA) March 26, 2013 (10:00am PDT - 12:00pm PDT) Location: MSRI: Baker Board Room Abstract: Reverse homological algebra deals with questions like the following ones, concerning a ring $R$ and a (left) $R$-module $k$: What $\mathrm{Ext}_R(k,k)$-modules have the form $\mathrm{Ext}_R(M,k)$ for some $R$-module $M$? What are the essential images of the functor $\mathrm{RHom}_R(?,k)$ from various subcategories of the derived category of $R$-modules to the derived category of DG modules over $\mathrm{RHom}_R(k,k)$? Some answers to the second question will be presented when $R$ is commutative, noetherian and local and $k$ is its residue field. Under an additional hypothesis on $R$, which holds for complete intersections and for Golod rings, the first question will be answered "up to truncations." A crucial step of the proof involves a contravariant Koszul duality for (not necessarily commutative) connected DG algebras. Part of the talk is based on joint work with David Jorgensen.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8671550154685974, "perplexity_flag": "middle"}
http://cms.math.ca/Events/summer13/abs/cp
2013 CMS Summer Meeting Dalhousie University, June 4 - 7, 2013 Contributed Papers Org: Robert Dawson (Saint Mary's) and Toby Kenney (Dalhousie University) [PDF] SOPHIE BURRILL, Simon Fraser University On the use of generating trees in a variety of combinatorial classes  [PDF] We present a variety of combinatorial classes which can all be represented using arc diagrams: matchings, set partitions, permutations, RNA pseudostructures and Skolem sequences. Then by describing a more general object, namely open arc diagrams, we are able to employ generating trees to exhaustively generate and enumerate each of these classes according to various parameters, including nestings, crossings, and arc lengths. Presenting a unified method for the generation of such parameterized combinatorial classes is our central task. TOKTAM DINEVARI, University of Montreal Fixed point results for multivalued G-contractions  [PDF] We consider multivalued maps defined on a complete metric space endowed with a directed graph. We present fixed point results for maps, called weak G-contractions, which send connected points into connected points and only contract the length of paths. We compare the fixed point sets obtained by Picard iterations from different starting points. The homotopical invariance property of having a fixed point for a family of weak G-contractions will be also presented. DARYL FUNK, Simon Fraser University The 2-separated excluded minors for the class of bias matroids  [PDF] A matroid is a \emph{minor} of another if it can be obtained from the second by a sequence of operations analogous to edge deletion and contraction in graphs. An excluded minor theorem describes the structure of a family of graphs, or matroids, having no minor isomorphic to some prescribed set of graphs, or matroids. For example, Kuratowski famously characterised planar graphs as precisely those with no $K_5$ or $K_{3,3}$ minor. Robertson and Seymour's Graph Minor Theorem states that, as for planar graphs, every family of graphs closed under minors may be characterised by exhibiting a finite set of excluded minors. Much recent work in matroid theory has focused on extending the theory of the graph minors project to certain classes of matroids. Bias (also called frame) matroids generalise graphic matroids. Bias matroids include the class of Dowling geometries, and are important in matroid structure theory. We present a first step toward showing that there are only finitely many excluded minors for the class of bias matroids. We describe those excluded minors that may be constructed by identifying an element in each of two smaller matroids (\textit{i.e.} obtained by a 2-sum). This is joint work with Matt DeVos, Luis Goddyn, and Irene Pivotto. SANJIV KUMAR GUPTA, Sultan Qaboos University Transference of Multipliers on Lie Groups  [PDF] De Leeuw's multiplier theorem relates the multiplier on the circle group ${\bf T}$ and the real line ${\bf R}$ in a spectacular way. This result has been generalised in many ways in the context of non-commutative harmonic analysis, most notably by Coifman and Weiss. Let $G$ be a real rank one semi-simple Lie group and $G=KAN$ be its Iwasawa decomposition and $M$ be the centraliser of $A$ in $K$. An analogue of De Leeuw's theorem was proved by Rice, Dooley and Gaudry for the pair $(K/M,N)$ for $G=SO(p,1)$. But the transference of multipliers from $N$ to $K/M$ was not the exact converse of the transference from $K/M$ to $N$.
In De Leeuw's original theorem, transference from ${\bf R}$ to ${\bf T}$ and from ${\bf T}$ to ${\bf R}$ are exact converses of each other. Ricci and Rubin proved the transference from $K/M$ to $N$ for $G=SU(2,1)$ but the $N$ to $K/M$ case remained open. In this talk, I will present an exact analogue of De Leeuw's theorem for $G=SU(p,1)$. Our work resolves a conjecture of C. Herz. This is joint work with A. Dooley and F. Ricci. MARYAM LOTFIPOUR, University of Isfahan Nonempty intersection theorems via KKM theory  [PDF] There exist a lot of problems in different sciences which can be formulated and solved by finding an intersection point of a family of sets. KKM theory is an important tool for showing the existence of such a point. In this work, we present some KKM-type results to find an intersection point for set-valued mappings. Furthermore, some applications of these results are obtained. The conditions of the presented theorems improve most of the known results in the literature. KERRY OJAKIAN, Bronx Community College (CUNY) Cops and Robber on the Hypercube  [PDF] The game of "cops and robber" is a two player game, played on a graph, between some number of cops and a single robber. On the robber's turn, he may move to an adjacent vertex. On the cop's turn (under the standard rules), any number of them may move to adjacent vertices while the rest remain where they are. The cops win if they ever occupy the same vertex as the robber, while the robber wins if he can evade the cops indefinitely. The cop number of a graph is the fewest number of cops needed to guarantee a win for the cops. We determine the cop number of the hypercube for various versions of the game. This is joint work with David Offner.
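The cops-and-robber abstract above invites a small illustration. A classical characterization not stated in the abstract (due to Nowakowski and Winkler, and independently Quilliot) says one cop suffices exactly when the graph is dismantlable, i.e. one can repeatedly delete a vertex whose closed neighbourhood inside the remaining graph is contained in another vertex's. Below is a minimal Python sketch of that test; the function name and the example graphs are my own illustrative choices, not from the abstract. Note that the 4-cycle, which is the 2-dimensional hypercube, already requires two cops.

```python
def is_cop_win(adj):
    """Dismantlability test, equivalent to cop number 1 (Nowakowski-Winkler, Quilliot).
    adj maps each vertex to the set of its neighbours (undirected, no loops)."""
    closed = {v: adj[v] | {v} for v in adj}  # closed neighbourhoods N[v]
    verts = set(adj)
    while len(verts) > 1:
        dominated = next(
            (u for u in verts for v in verts
             if u != v and (closed[u] & verts) <= (closed[v] & verts)),
            None,
        )
        if dominated is None:
            return False  # no dominated vertex left: not dismantlable
        verts.remove(dominated)
    return True

# The 4-cycle = hypercube Q_2: the robber can always stay opposite a single cop.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_cop_win(c4))  # False

# A path is cop-win.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_cop_win(p3))  # True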
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293317198753357, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/136794-confusing-volume-problem.html
# Thread: 1. ## Confusing Volume Problem I'm completely stumped as far as this problem goes. I've got to find the volume of this solid and I can't figure out the area of the cross section. It starts out like this: The solid lies between planes perpendicular to the x-axis at x=0 and x=4. The cross sections perpendicular to the axis on the x interval [0,4] are squares whose diagonals run from y=-√x to y=√x. After talking to my instructor, I realized that the cross section I can see on the graph is half of the square cross section. Basically I figure that I find the volume of what I can see from the graph and double it. So I find that the base of the triangle is 2√x. Unfortunately I can't seem to go any farther, because I have neither a height nor any of the two equal sides. I'm wondering if there's a way I haven't thought of to find the area of this triangle. I realize, by the way, that this is more of a geometry problem than a calculus one, but I posted it here anyway in case there was something in the initial concept I missed. 2. Originally Posted by SethP I'm completely stumped as far as this problem goes. I've got to find the volume of this solid and I can't figure out the area of the cross section. It starts out like this: After talking to my instructor, I realized that the cross section I can see on the graph is half of the square cross section. Basically I figure that I find the volume of what I can see from the graph and double it. So I find that the base of the triangle is 2√x. Unfortunately I can't seem to go any farther, because I have neither a height nor any of the two equal sides. I'm wondering if there's a way I haven't thought of to find the area of this triangle. I realize, by the way, that this is more of a geometry problem than a calculus one, but I posted it here anyway in case there was something in the initial concept I missed. The triangles are isosceles right triangles so the height is $\sqrt{x}$ 3. Dude, thanks! I totally forgot geometry class.
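For completeness, here is the computation the thread is driving at, written out (my addition, not from the original posters): a square with diagonal $d$ has area $d^2/2$, and the cross-section at $x$ has diagonal $2\sqrt{x}$, so $$A(x)=\frac{1}{2}\left(2\sqrt{x}\right)^2=2x, \qquad V=\int_0^4 2x\,dx=\Big[x^2\Big]_0^4=16.$$ Equivalently, the visible half is an isosceles right triangle with base $2\sqrt{x}$ and height $\sqrt{x}$, of area $x$, which doubles to $2x$.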
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9638531804084778, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/121486-isomorphism-ring-p.html
# Thread: 1. ## Isomorphism ring :P Suppose $r$ and $s$ are positive integers with $gcd(r,s)=1$; show that the mapping $\varphi : Z_{rs}\rightarrow Z_r \times Z_s$ with $\varphi (n) = n (1,1)$ is a ring isomorphism! 2. What you mean is that it is a ring isomorphism. I'll let you show that it's a ring homomorphism. To see that it's an isomorphism, note that $\ker \varphi = \{n \in \mathbb{Z}_{rs} : n(1,1) = (0,0)\} = \{n \in \mathbb{Z}_{rs} : n \equiv 0 \mod r \mbox{ and } n \equiv 0 \mod s\}$ $= \{n \in \mathbb{Z}_{rs} : rs \mid n\} = \{0\}$, since $rs = \mathrm{lcm}(r,s)$ divides $n$ exactly when both $r$ and $s$ do. (This is where $gcd(r,s)=1$, i.e. the Chinese remainder theorem, comes in!) So the map is injective. Since we have $|\mathbb{Z}_{rs}| = |\mathbb{Z}_{r} \times \mathbb{Z}_{s}| = rs$ we are done.
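A quick brute-force sanity check of the bijectivity claim for small coprime moduli (an illustrative snippet of mine, not part of the proof):

```python
from math import gcd

def phi_is_bijective(r, s):
    # phi(n) = n(1,1) = (n mod r, n mod s); count distinct images over Z_rs
    images = {(n % r, n % s) for n in range(r * s)}
    return len(images) == r * s  # injective, hence bijective by counting

for r, s in [(3, 5), (4, 9), (8, 15)]:
    assert gcd(r, s) == 1 and phi_is_bijective(r, s)

# When gcd(r, s) > 1 the map is not injective, so coprimality is essential.
assert not phi_is_bijective(4, 6)
print("checked")
```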
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.96024090051651, "perplexity_flag": "head"}
http://mathoverflow.net/questions/91442/background-for-classic-forcing/91455
## Background for classic forcing When learning forcing theory, I was surprised by the variety of forcing definitions. I want to know the original purpose (why the partial orders are defined the way they are) and some background material which can help me gain more intuition. 1. Cohen forcing. It was made by Cohen when he proved CON(ZFC+nonCH). It adds many new reals, which is natural to see if we consider the partial order form, but I heard Cohen used the Boolean-valued model; if one considers the Boolean-valued model, how can I understand the intuition? 2. Random forcing. I just know it was made by Solovay, but I don't know the background. 3. Laver forcing. I just know Laver forcing was used by Laver to prove CON(ZFC+BC). How about Sacks forcing, Hechler forcing, Mathias forcing, Miller forcing? - 1 For this question I would suggest instead for the title something like 'Background for classical forcing notions' – Justin Palumbo Mar 18 2012 at 0:08 ## 3 Answers The specific notions of forcing you mention are all part of the (now) basic toolbox for getting independence results in set theory at the level of the reals. The standard reference for set theory of this flavor is Bartoszynski and Judah's text "Set theory: on the structure of the real line". If you want to go further in this area (and based on your MO questions so far, perhaps you do) after Kunen's "Set theory: an introduction to independence proofs" this is the book you want to read. It has a wealth of information on the posets you've mentioned. Each of these forcings is used to add a real of a certain kind to the universe, and in a 'definable way'. There is a unifying view one can take of these forcings: for each there is some ideal $\mathcal{I}$ on the reals so that the forcing is equivalent to forcing with all Borel sets not in $\mathcal{I}$ (and the ideal $\mathcal{I}$ has a basis consisting of Borel sets). This perspective on forcing to add reals has been extensively studied with great success by Zapletal, and many elegant results characterizing properties of the forcing in terms of the relevant ideal have been discovered. See Zapletal's monographs 'Descriptive set theory and definable forcing' or 'Forcing idealized'. Let me briefly go over the forcings you've mentioned. I'll give the ideal and the original context for the forcing, although each now has many applications for a wide variety of independence results. Cohen forcing $\mathbb{C}$ was invented by Cohen to produce a model of $\neg\mathrm{CH}$. Of course, it has found many other uses since then. It is the unique separative countable forcing. The relevant ideal is the collection of meager subsets of $\mathbb{R}$. Random forcing $\mathbb{B}$ was invented by Solovay. He originally used it to analyze his model where all sets of reals are Lebesgue measurable. The paper "A model of set-theory in which every set of reals is Lebesgue measurable" is still quite readable, though you can also find the proof in Jech or in BJ. The relevant ideal is the collection of null subsets of $\mathbb{R}$. Sacks forcing $\mathbb{S}$. This was invented by Gerald Sacks in order to produce a minimal real. That is, if $g$ is a Sacks real over the constructible universe $L$, and $x$ is any real in $L[g]$ then either $x\in L$ or $g\in L[x]$. This was done in his paper "Forcing with perfect closed sets". The relevant ideal is the collection of countable subsets of $\mathbb{R}$.
Hechler forcing $\mathbb{D}$ was invented by Stephen Hechler. It is the most basic way of adding a dominating real to the universe, and often it is referred to instead as 'dominating forcing' (including in BJ). Hechler used it to produce (consistently) a wide variety of possible cofinal behaviors of the structure $(\omega^\omega,\leq^*)$; this was done in his paper 'On the existence of certain cofinal subsets of $\omega^\omega$'. The relevant ideal was isolated in the paper 'Hechler reals' by Labedzki and Repicky, and is also in BJ. Laver forcing $\mathbb{L}$ was invented by Richard Laver to produce a model of the Borel conjecture (in his paper "On the consistency of Borel's conjecture", and also in BJ). The ideal is described in Zapletal's monographs. Mathias forcing $\mathbb{M}$ is due to Adrian Mathias (see: 'Happy families'). He used it to prove (among other things) that the infinite exponent relation $\omega\rightarrow(\omega)^\omega$ holds in the Solovay model mentioned above. For the relevant ideal see either of Zapletal's monographs. Miller forcing $\mathbb{Q}$ was originally called 'Rational perfect set forcing' by Arnold Miller, and was introduced in his paper with the same name. The paper is on his webpage http://www.math.wisc.edu/~miller/res/rat.pdf where you can read all about it. The ideal is the ideal of $\sigma$-compact subsets of $\omega^\omega$. - If you want to learn how Cohen invented/discovered forcing, read his 2002 article "The discovery of forcing" (Rocky Mountain journal). Available at Project Euclid. - There is an article by Timothy Chow that gives background motivation and eases you into forcing called "A beginner's guide to forcing"; it is of an expository nature, but there is still technical detail. Another article I would recommend would be "A cheerful introduction to forcing and the Continuum Hypothesis" by Kenny Easwaran; it is more technical than the first paper I linked but still very accessible to anyone with basic experience with Set Theory and Model Theory. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9588689208030701, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/18030/how-to-select-kernel-for-svm/18032
# How to select kernel for SVM? When using SVM, we need to select a kernel. I wonder how to select a kernel. Any criteria on kernel selection? - what is the size of the problem? (#variables, observations)? – user603 Nov 7 '11 at 11:26 I am just asking for a generalized solution, no particular problem specified – xiaohan2012 Nov 7 '11 at 11:37 ## 4 Answers The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea. In the absence of expert knowledge, the Radial Basis Function kernel makes a good default kernel (once you have established it is a problem requiring a non-linear model). The choice of the kernel and kernel/regularisation parameters can be automated by optimising a cross-validation based model selection criterion (or use the radius-margin or span bounds). The simplest thing to do is to minimise a continuous model selection criterion using the Nelder-Mead simplex method, which doesn't require gradient calculation and works well for sensible numbers of hyper-parameters. If you have more than a few hyper-parameters to tune, automated model selection is likely to result in severe over-fitting, due to the variance of the model selection criterion. (It is possible to use gradient based optimisation, but the performance gain is not usually worth the effort of coding it up.) Automated choice of kernels and kernel/regularisation parameters is a tricky issue, as it is very easy to overfit the model selection criterion (typically cross-validation based), and you can end up with a worse model than you started with. Automated model selection also can bias performance evaluation, so make sure your performance evaluation evaluates the whole process of fitting the model (training and model selection); for details, see G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. (pdf) and G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. (pdf) - If you are not sure what would be best you can use automatic techniques of selection (e.g. cross validation, ... ). In this case you can even use a combination of classifiers (if your problem is classification) obtained with different kernels. However, the "advantage" of working with a kernel is that you change the usual "Euclidean" geometry so that it fits your own problem. Also, you should really try to understand what is the interest of a kernel for your problem, what is particular to the geometry of your problem. This can include: • Invariance: if there is a family of transformations that do not change your problem fundamentally, the kernel should reflect that. Invariance by rotation is contained in the Gaussian kernel, but you can think of a lot of other things: translation, homothety, any group representation, .... • What is a good separator? If you have an idea of what a good separator is (i.e. a good classification rule) in your classification problem, this should be included in the choice of kernel. Remember that SVM will give you classifiers of the form $$\hat{f}(x)=\sum_{i=1}^n \lambda_i K(x,x_i)$$ If you know that a linear separator would be a good one, then you can use a kernel that gives affine functions (i.e. $K(x,x_i)=\langle x,A x_i\rangle+c$).
If you think smooth boundaries much in the spirit of smooth KNN would be better, then you can take a Gaussian kernel... - In your answer, you mentioned that "The "advantage" of working with a kernel is that you change the usual "Euclidean" geometry so that it fits your own problem. Also, you should really try to understand what is the interest of a kernel for your problem, what is particular to the geometry of your problem." Can you give a few references to start with? Thanks. – Raihana May 12 '12 at 8:57 I always have the feeling that any hyper parameter selection for SVMs is done via cross validation in combination with grid search. - 1 I have the same feeling – xiaohan2012 Nov 7 '11 at 13:09 1 grid search is a bad idea, you spend a lot of time searching in areas where performance is bad. Use gradient free optimisation algorithms, such as the Nelder-Mead simplex method, which is far more efficient in practice (e.g. fminsearch() in MATLAB). – Dikran Marsupial Nov 7 '11 at 13:14 No, use graphical models or Gaussian processes for global optimization in combination with expected information. (See 'Algorithms for hyper parameter optimization', Bergstra et al, forthcoming NIPS) – bayerj Nov 7 '11 at 15:11 In general, the RBF kernel is a reasonable first choice. Furthermore, the linear kernel is a special case of RBF. In particular, when the number of features is very large, one may just use the linear kernel. -
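Since several answers above recommend cross-validation for kernel and hyper-parameter selection, here is a minimal scikit-learn sketch of that workflow (the synthetic dataset, the parameter grid, and the 5-fold split are illustrative choices of mine, not recommendations from the answers):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic data standing in for a real problem.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Candidate kernels and hyper-parameters; the grid is deliberately small.
param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    {"kernel": ["poly"], "C": [0.1, 1, 10], "degree": [2, 3]},
]

search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Per the Cawley and Talbot caveat in the first answer, `best_score_` is an optimistically biased estimate; an outer cross-validation loop wrapped around the whole `GridSearchCV` is needed for an honest performance figure.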
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8974105715751648, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=45445
Physics Forums ## Anti-matter question Hello, I'm new here and I have a question about anti-matter. Does annihilation occur when a particle and its anti-particle touch, or does any combination of particles and anti-particles annihilate? Can someone help explain this to me? Welcome Slacker! You may like to view antimatter as Feynman understood it: antimatter is ordinary matter traveling backwards in time. Hard to swallow. Well, it is very efficient to picture it this way, and besides, nobody could tell you "you are wrong" with this conception. Imagine a one + one dimensional world: there is one time dimension, and only one space dimension. It is very convenient, because spacetime is reduced to a plane, and we can actually visualize it. Now, there is only one particle around. What do we see: a single curve. The particle moving in spacetime draws a curve. (This is the same in actual 3+1 dimensional spacetime, we just cannot visualize spacetime; that would require going into a 5 dimensional space and contemplating the 4-dim hypersurface with a curve in it. See, let us stick to the 1+1 world for now.) OK. In order to make the discussion easier, the time axis is vertically oriented, future goes up. The unique space dimension is horizontal. Now our fuzzy particle decides to change direction in time. Let us say, it should emit a photon in order to conserve energy and momentum, but let us not care about this outgoing photon for now. It goes backwards in time for a while (down), then meets another photon, and reverses once again to go back forwards in time (up). Let us also not care for now about the ingoing photon required to make the second flow reversal. We can consider 3 regions: in the far past, there is only our single and sad particle going towards the future. Likewise, in the far future, there is the same sad single curve. But in between somewhere, we see two curves! As it appears to us, they look like different particles. Besides, the curve corresponding to our buddy travelling backwards in time has all the properties of an antiparticle: if for instance our particle carries electrical charge, this charge will seem to us to be opposite when the flow is towards the past! Now let us go back to our photons: there are two photons in the plane. As it seems to us, one photon in the lower part is creating a particle-antiparticle pair, the particle escapes to infinity, and the antiparticle eventually meets an identical particle and this meeting produces a decay into the outgoing photon. The photon carries no quantum number such as electrical charge, or other kinds of charges (except for angular momentum, but this is also linked to spatial movement, so it does not need to be conserved at the level of internal charges). You see that you need exactly the opposite quantum numbers to produce annihilation. And from the Feynman-backwards-in-time-flow viewpoint, this is obvious. OK, I think I understand. So, let's say there was an isotope of hydrogen with one neutron. If an atom composed of one anti-proton, one neutron, and one positron were to touch it, they would annihilate. Does the number of neutrons affect whether or not they annihilate? ## Anti-matter question Quote by Slacker OK, I think I understand. So, let's say there was an isotope of hydrogen with one neutron.
If an atom composed of one anti-proton, one neutron, and one positron were to touch it, they would annihilate. Does the number of neutrons affect whether or not they annihilate? I could not tell for sure, but in my opinion if such a reaction was performed, the first meeting would be the electron/positron annihilation, and this would already produce so much energy (twice 511 keV) that both nuclei could blow up before they even meet. So I cannot even tell if the proton/antiproton annihilation would actually occur. Besides, is there really a bound state antiproton/neutron? The annihilation would have already occurred at the quark level before one can even produce the second "atom". Quote by humanino Welcome Slacker! You may like to view antimatter as Feynman understood it: antimatter is ordinary matter traveling backwards in time. [...] The photon carries no quantum number such as electrical charge, or other kinds of charges (except for angular momentum, but this is also linked to spatial movement, so it does not need to be conserved at the level of internal charges). You see that you need exactly the opposite quantum numbers to produce annihilation. And from the Feynman-backwards-in-time-flow viewpoint, this is obvious. Thanks humanino, I didn't know that, but when the anti-particle and particle annihilate, is it as if they were never there? Forward in time and backward in time cancel each other out? If so, where does the information go? According to quantum theory information can't be destroyed, but then again it isn't, 'cause it was never there... but it was there before it was never there... my head hurts. When they annihilate, their energy, momentum and information content is carried by the decay products: usually photons for charged particles, but other end products are also possible. Personally, I find that thinking of antiparticles as particles going back in time is just a useful mathematical trick - while some physics about the particle/antiparticle symmetry emerges, we always observe the antiparticle as moving forward through time and we can't observe stuff moving backwards through time anyway. So I try not to read too much into this - someone here can correct me if I'm missing something. Thanks, zefram_c. Quote by humanino So I cannot even tell if the proton/antiproton annihilation would actually occur. Besides, is there really a bound state antiproton/neutron? On page 1122 of my 5th Ed. of Halliday-R-W's Fundamentals of Physics extended, there is a photograph of the annihilation of an anti-proton that was propelled into a real proton-loaded target that produced 4 pi+ mesons and 4 pi- mesons. 7 of these were in flight at the time of the photograph, but the first +/- pair produced were directed in the direction opposed to that of the incident anti-proton, which is hard to understand for momentum conservation purposes. There was a fork in the image of the first Pi+ (perhaps the magnetic field of the bubble chamber was responsible for the absence of the anticipated fork in the track of the Pi- meson) which fork represented the decay to the mu+ meson that decays, 100%, to a positron that annihilates with any nearby electron. The upshot is that the anti-proton annihilates in pieces with each annihilation producing 1.022 MeV of total photon energy (with no photon more energetic than 0.511 MeV). When it is remembered that the spin axis of the anti-neutron is merely the opposite end of the spin axis of the neutron, it merely flips 180-degrees.
The neutron binds equally well to either real- or anti- protons. Cheers, Jim I am not saying $$p^+/p^-$$ annihilation cannot occur. I was questioning whether in the case of hydrogen/antihydrogen the energy released by $$e^+/e^-$$ annihilation (which occurs first) could prevent the protons from meeting at all. It might depend on the experimental conditions really. Hi Humanino, In actual experiments, anti protons are fired into a hydride target with the result that 4 plus pions and 4 minus pions were pictured in a bubble chamber. If the exposure had been longer all mesons would have decayed to like charged muons. One positive pion that moved more slowly than the others did show a kink in its trajectory when the muon appeared. Cheers, Jim PS. Lest you forget; opposite charges attract each other so that nothing can happen before annihilation. Quote by NEOclassic In actual experiments, anti protons are fired into a hydride target with the result that 4 plus pions and 4 minus pions were pictured in a bubble chamber. In actual experiments 20 years ago? Nobody uses bubble chambers anymore! If the exposure had been longer all mesons would have decayed to like charged muons. One positive pion that moved more slowly than the others did show a kink in its trajectory when the muon appeared. Right. What is the point? Lest you forget; opposite charges attract each other so that nothing can happen before annihilation. I do not hear this argument. Let us look at energies: the binding energy of two unit charges at a distance of the order of the atomic scale is around 13.6 eV, whereas the annihilation of the electron/positron pair yields twice 511 keV. So once the $$e^+/e^-$$ pair has annihilated, what is happening to the $$p^+$$ remaining? Eventually, it will annihilate for sure, ok. The question was whether this would occur with the nucleus of the original hydrogen nuclei, or with any other random proton somewhere else. Quote by humanino Besides, is there really a bound state antiproton/neutron? The annihilation would have already occurred at the quark level before one can even produce the second "atom". Yup, indeed, big problem: p~ = (u~u~d~), n = (udd) [~ stands for anti], so we are looking at 2x2 quark annihilations (u~u)+(d~d) (therefore quite unstable) if you ask me.
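To put numbers on humanino's energy comparison above (a back-of-the-envelope figure, my addition): $$\frac{2\times 511\ \text{keV}}{13.6\ \text{eV}}=\frac{1.022\times 10^{6}\ \text{eV}}{13.6\ \text{eV}}\approx 7.5\times 10^{4},$$ so the annihilation energy exceeds the atomic binding scale by almost five orders of magnitude, which is why the fate of the remaining nuclei is not obvious.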
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425455927848816, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/3227/solving-two-coupled-non-linear-second-order-differential-equations-numerically
# Solving two coupled non-linear second order differential equations numerically I have encountered the following system of differential equations in Lagrangian mechanics. Can you suggest a numerical method, with relevant links and references on how I can solve it, and the implementation in C (if possible)? Also, is there a shorter implementation in Matlab or Mathematica? \begin{align*} mx \dot y^2 + mg\cos(y) - Mg - (m+M)\,\ddot x &= 0 \\ g\sin(y) + 2\dot x\dot y + x \,\ddot y &= 0 \end{align*} where $\dot x$ or $\dot y$ are time derivatives, and the double dots indicate a 2nd derivative wrt time. - @ramanujan_dirac: Check my edit. Is this the set of equations you meant to type? – Paul♦ Sep 6 '12 at 18:10 @Paul: Sorry, it was actually M + m, where m, M are distinct constants in general. I have edited to reflect the same. – ramanujan_dirac Sep 6 '12 at 18:35 ## 3 Answers Why implement it by hand? Matlab, Maple and Mathematica all have tools built in to solve differential equations numerically, and they use far better methods than you could implement yourself in finite time. In Matlab, you want to look at ode45. In Maple it's called dsolve (with the 'numeric' option set), in Mathematica it is NDSolve. - You could use the Runge-Kutta method to solve this system numerically; first rewrite your second order equations as a first order system by the following substitution trick: $$\left\{ \begin{aligned} x_1' &= x_2 \\ y_1' & = y_2 \\ x_2' &= \frac{m x_1 y_2^2 + mg\cos y_1 - Mg}{m+M} \\ y_2' &= -g(\sin y_1) /x_1 - 2x_2 y_2/x_1 \end{aligned} \right.$$ Now you could use `MATLAB`'s `ode45` or `ode23` to solve it; if you want to implement the method in C, I believe there are many packages available on the internet, like this. - Runge-Kutta methods such as (4,5) are also available in the GNU Scientific Library (which is written in C). They also include adaptive time-stepping. -
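A minimal Python sketch of the Runge-Kutta approach described above, using SciPy's `solve_ivp` (the constants and initial conditions are arbitrary placeholders of mine, since the question does not specify any):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, M, g = 1.0, 2.0, 9.81  # illustrative constants; the question leaves them unspecified

def rhs(t, u):
    # u = [x1, y1, x2, y2] = [x, y, x', y'], the first-order system above
    x1, y1, x2, y2 = u
    dx2 = (m * x1 * y2**2 + m * g * np.cos(y1) - M * g) / (m + M)
    dy2 = -(g * np.sin(y1) + 2.0 * x2 * y2) / x1  # assumes x stays away from 0
    return [x2, y2, dx2, dy2]

# Arbitrary initial state: x(0)=1, y(0)=0.1, x'(0)=0, y'(0)=0
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.1, 0.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10, dense_output=True)
print(sol.y[:, -1])  # state at t = 10
```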
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227153062820435, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/130594-equation-doesnt-have-limit-does-print.html
This equation doesn't have a limit, does it? • February 24th 2010, 12:45 PM satx This equation doesn't have a limit, does it? I don't know how to format, sorry... "Find the limit, if it exists" Limit (as x approaches 2): $(x^2 - x +6)/(x-2)$ I used the quadratic formula to try to find values for the numerator, but one ends up with 1 plus or minus the square root of -23 divided by two. Obviously, negative numbers don't have real square roots, so is it safe to say there is no limit to this equation? Thanks... • February 24th 2010, 02:09 PM Archie Meade 1 Attachment(s) Quote: Originally Posted by satx I don't know how to format, sorry... "Find the limit, if it exists" Limit (as x approaches 2): (x^2 - x +6)/(x-2) I used the quadratic formula to try to find values for the numerator, but one ends up with 1 plus or minus the square root of -23 divided by two. Obviously, negative numbers don't have real square roots, so is it safe to say there is no limit to this equation? Thanks... Yes satx, here is a sketch of it • February 24th 2010, 02:59 PM Plato Quote: Originally Posted by satx I don't know how to format, sorry... Why not learn to post in symbols? You can use LaTeX tags. [tex]\lim _{x \to 2} \frac{{x^2 - x + 6}}{{x - 2}}[/tex] gives $\lim _{x \to 2} \frac{{x^2 - x + 6}}{{x - 2}}$. • February 26th 2010, 05:13 AM satx Quote: Originally Posted by Archie Meade Yes satx, here is a sketch of it So because 2+ and 2- are converging on opposite infinities, there's no limit? If they were to converge on the same infinity, there would be a limit, right? • February 26th 2010, 06:14 AM Archie Meade Not quite, satx, if they were converging on the same infinity, then both branches would be shooting off in the same direction, once again never meeting. In a situation like this, we check to see if the denominator is a factor of the numerator. If it was, we could say $\frac{x-2}{x-2}=1,$ for all x except 2. Then that situation is described as having a "hole" in the graph at x=2. If the numerator is a quadratic and the denominator linear, the graph would be indistinguishable from a straight line. Written as a fraction of course, we'd have to exclude 2 from the domain if the denominator contains (x-2). In that case the limit may be evaluated as the graph approaches f(2) from both sides, since if x is not 2, then $\frac{x-2}{x-2}=1$ hence $\frac{x^2-5x+6}{x-2}=\frac{(x-2)(x-3)}{x-2}$ has a "hole" at x=2 though the graph is indistinguishable from x-3. Hence f(2) would be -1 if the graph was x-3, therefore -1 is the limit as x approaches 2 for $\frac{x^2-5x+6}{x-2}$ We cannot "see" the hole, of course. For $\frac{x^2-x+6}{x-2}$ the denominator is not a factor of the numerator. Another way to express the function is $f(x)=(x+1)+\frac{8}{x-2}$ If x is 2, then $\frac{8}{x-2}$ again cannot be evaluated since $\frac{8}{x-2}$ approaches infinity as x approaches 2.
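A small supplement to Archie Meade's last point (my addition, not from the thread): polynomial division confirms the rewriting, $$x^2 - x + 6 = (x+1)(x-2) + 8 \quad\Longrightarrow\quad \frac{x^2-x+6}{x-2} = x+1+\frac{8}{x-2},$$ and the remainder term makes the one-sided behaviour explicit: as $x \to 2^{+}$, $\frac{8}{x-2} \to +\infty$, while as $x \to 2^{-}$, $\frac{8}{x-2} \to -\infty$. The two one-sided limits disagree (and neither is finite), so the limit does not exist.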
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407907128334045, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/98639?sort=votes
## The number of group elements whose squares lie in a given subgroup

This number is divisible by the order of the subgroup http://arxiv.org/abs/1205.2824. The proof is short but non-trivial. Is this fact new, or has it been known for a long time? - 6 Welcome to Mathoverflow! – HW Jun 2 at 10:12

## 1 Answer

Here is an easy character-theoretic proof of the fact that given a subgroup $H$ of a finite group $G$ and a positive integer $k$, the number of elements $y \in G$ such that $y^k \in H$ is divisible by $|H|$. Let $\theta_k$ be the class function on $G$ defined by $\theta_k(x)$ = |{ $y \in G \mid y^k = x$ }|. It is well known that this class function is a generalized character. (In other words, it is a $\Bbb Z$-linear combination of irreducible characters.) The number of interest here is $\sum_{x \in H} \theta_k(x)$, which is equal to $|H|[(\theta_k)_H,1_H]$. This is clearly divisible by $|H|$ since the second factor is an integer because $\theta_k$ is a generalized character. In fact, the coefficient of an irreducible character $\chi$ in $\theta_k$ is the integer I called $\nu_k(\chi)$ in my character theory book. For $k = 2$, this is the famous Frobenius-Schur indicator, whose value lies in the set {0,-1,1}. For other integers $k$, it is true that $\nu_k(\chi)$ is an integer, but there is no upper bound on its absolute value. - Thank you, Marty! Your proof is shorter than ours but less elementary. However, I would be happy if someone provides a reference proving that the fact is known. Actually, the paper cited in the question contains a more general fact (Corollary 5). *Suppose that $H$ is a subgroup of a group $G$ and $W$ is a subgroup (or a subset) of a free group $F$. Then the number of homomorphisms $f\colon F\to G$ such that $f(W)\subseteq H$ is divisible by $|H|$.* (Taking $F=\Bbb Z$, we obtain the statement from the question.) Does there exist a short character-theoretic proof for this too? – Anton Klyachko Jun 4 at 23:41 - 1 It seems to be easy to prove via character theory that if |W| = 1, then the number of homomorphisms f such that f(W) <= H is a multiple of |H|. I don't see a proof along these lines if W has cardinality exceeding 1. – Marty Isaacs Jun 5 at 22:12 - Is this because the number of tuples $(g_1,\dots,g_n)$ such that $w(g_1,\dots,g_n)=x$ is a generalised character $\theta(x)$ for ANY word $w\in F$, or is this not so easy? – Anton Klyachko Jun 6 at 17:59
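The divisibility statement for $k = 2$ is easy to test by brute force. Here is a small Python sketch (an illustration only, not from the page; it checks just the cyclic subgroups of S4, and the composition convention is an arbitrary choice):

```python
from itertools import permutations

# Elements of S4 as tuples p with p[i] = p(i); composition (p*q)(i) = p[q[i]].
G = list(permutations(range(4)))
e = tuple(range(4))
compose = lambda p, q: tuple(p[q[i]] for i in range(4))

def cyclic(h):
    """The cyclic subgroup <h>."""
    H, g = {e}, h
    while g != e:
        H.add(g)
        g = compose(g, h)
    return H

for h in G:
    H = cyclic(h)
    count = sum(1 for y in G if compose(y, y) in H)
    assert count % len(H) == 0, (h, count, len(H))
print("|{y in S4 : y^2 in H}| is divisible by |H| for every cyclic H")
```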
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369155764579773, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3746254
## The difference between the weak and mass eigenstates in the PMNS matrix

Hi, I am hoping someone could clear up a few things about neutrino oscillations for me. For the sake of this discussion let's set up the neutrino mixing equations in such a way that the flavor eigenstates are a superposition of mass eigenstates. So now for example we have $|\nu_e\rangle = \sum_j U_{1j}\,|\nu_j\rangle$ Now here is where I am getting confused: the line above reads that the electron neutrino is a superposition of mass eigenstates, so is it theoretically possible to measure the mass of one electron neutrino and find it is, say, $m_j$, and then measure the mass of a different electron neutrino and find its mass to be $m_k$, where the two measured masses are not equal? I.e., find a pair of electron neutrinos that have different masses?

Leaving aside the practical consideration that no-one has yet been able to specifically measure the mass of any kind of neutrino, I think the mass and flavour of neutrinos are complementary measurements under the uncertainty principle - you cannot measure both simultaneously. Think of the neutral kaons as an analogy - after a certain interval of time, a $K^0_L$ that started life as a $d\bar{s}$ has a non-zero probability of being a $\bar{d}s$!

But surely if that was the case then it would be nonsensical for experiments such as SuperNEMO to attempt measurements of mass, as it would not be possible to assign the mass to a particle.

## The difference between the weak and mass eigenstates in the PMNS matrix

Sorry, my previous post was a bit opaque (and also slightly confused). What I was really trying to say was that neutrinos propagate as mass eigenstates that don't have a definite 'flavour' in the sense of being a specific $\nu_e$, $\nu_\mu$ or $\nu_\tau$. Suppose in an experiment we detect neutrinos by getting them to interact with some heavy particles (e.g. atomic nuclei) in a detector, and observing the Cerenkov radiation produced by the e-/e+ particles into which they are converted. For simplicity we will ignore reactions that produce μs or τs. Let's also assume we can somehow very accurately measure the energies and momenta ($p^\mu$) of both the electron/positron and the recoiling nucleus. From these we could then calculate the energy and momentum of each incoming neutrino and hence its rest mass ($m_0^2 = p_\mu p^\mu$). What we should then find is that the incoming neutrinos may be in different mass eigenstates ($\nu_1/\nu_2/\nu_3$). A neutrino in any of these states can be "wearing the clothes of" an electron neutrino as it arrives into our detector, so may thus get caught and detected. So if we plotted the measured neutrino masses on the X axis of a graph and the number of neutrinos detected on the Y axis, we should expect to see a separate peak around the mass of each mass eigenstate. Incidentally, SuperNEMO is an experiment designed to detect neutrinoless double beta decay, which is a somewhat different (though related) thing. It will be a very significant result if they find this, but that's a story for another day.

Mentor
If you took a bucket of stopped pions, and could very, very accurately measure the electron energy in the decay π→e+v, you would see three peaks, one for each of the mass eigenstates. Essentially, there are three different decays: π→e+v1, π→e+v2, π→e+v3. Why is this a problem?
This isn't a problem; I was confirming whether it is possible to find two electron (or mu or tau) neutrinos that have different masses. I figured that is what the maths was saying; however, I had spoken to someone who insisted this was not the case, which confused me when the maths is basically staring right at you. EDIT: Yes, that is the experimental aim of SuperNEMO, but from that you can set the absolute mass scale (not the mass squared difference). Yes, it also would be significant as it would confirm the Majorana-ness of neutrinos in general. FURTHER EDIT: what made my confusion worse was reading some papers in which they gave upper bounds for the masses of an electron, muon and tau neutrino, which seemed nonsensical. My question is basically trivial in regards to the maths; however, a lot of papers seem to imply opposite answers, spurring confusion. I'm not actually a moron guys; funnily enough I have just joined the T2K experiment and I realised there was an obvious hole in my knowledge of neutrino theory.

Mentor
This is ordinary QM. If something is in a flavor eigenstate, it's not in a mass eigenstate. "I was confirming whether it is possible to find two electron (or mu or tau) neutrinos that have different masses" is like saying "is it possible to find two photons that are polarized along x that have different polarizations along z?"

Well, in this case the flavor eigenstates are superpositions of mass eigenstates, and vice versa: each mass eigenstate is a superposition of flavors, so mathematically it is. You take two neutrinos produced in two separate mass eigenstates (call them 1 and 2) and allow them to propagate somewhere. I then attempt to 'measure' their flavor; as both mass eigenstates have a non-zero probability of being in the electron eigenstate, it is possible to find two electron neutrinos with differing masses.

Quote by Vanadium 50 [...] Isn't electronness conserved? So only the decay π→e+v1 exists. It could also decay π→μ+v2. Anyways, neutrinos are created in a flavor eigenstate but propagate in a mass eigenstate.

Mentor
Quote by thedemon13666 "then it is possible to find two electron neutrinos with differing masses" An electron neutrino does not have a definite mass. It is in a mix of three mass eigenstates. Likewise, a neutrino of definite mass is not in a flavor eigenstate. So "the mass of an electron neutrino" is not a well-defined quantity.

Exactly, but if you made a mass measurement the wavefunction would collapse and it would have one of three masses. (I'm talking from a theoretical point of view with the mass measurement.)

Mentor
Quote by Vanadium 50 [...] Quote by robert2734 "Isn't electronness conserved?" No. We know this because neutrino (flavor) oscillations exist. An antineutrino that was created together with an electron, e.g. in nuclear beta decay, can be absorbed in a reaction that produces a muon.
My interpretation of the mathematics is that if you could select a neutrino with a particular mass by measuring the electron energy precisely in the decay that V50 describes, then that neutrino would be a mixture of e, μ and τ neutrino states, at least from the moment that you measure the electron energy. Furthermore, it would be a non-oscillating mixture because it would be a pure mass state. The probabilities of getting an e, μ or τ when the neutrino interacts would be constant, and not vary with position or time as with neutrino oscillations.

Mentor
Quote by jtbell [...] Because energy is conserved, this happens immediately, and it happens whether or not you actually measure the energy. You could have, and that's enough to collapse the wavefunction. So, why don't neutrino beams exist in a non-oscillating set of pure mass eigenstates? Because the pion is not a free particle when it decays. It's constrained to be within the volume of the decay pipe, and by the Heisenberg Uncertainty Principle, that localization causes an uncertainty in the momentum too large to determine the neutrino mass from the electron energy. Essentially, it's quantum mechanics on a scale of tens or hundreds of meters doing this.

Ok, I am a little confused. Mathematically the flavor oscillates because of the way you form the PMNS matrix, i.e. mass = PMNS × flavor. But you could just form it the other way round, so: flavor = PMNS × mass. So why can you not think of the mass as oscillating? Also, could you elaborate on why the pion is constrained to the volume of the pipe?

Mentor
Well, you can't just move matrices around like that. You would need to make it the inverse. And yes, you could write it like that, but why? It's a little like F = am. The pion is restricted to the decay pipe just like the air is restricted to a room. It's in a box - or a pipe, in this case.

That wasn't my point; someone further up stated that the neutrino in the flavor eigenstate wouldn't oscillate between masses, it would be a superposition but fixed in time... By moving them around I didn't simply mean swapping them; to one part in 10^42 the PMNS matrix is unitary, so where I have written PMNS, I mean PMNS-dagger.

Oh I see, I thought you were describing something deeply mathematical... not that it is physically trapped in there, derp...

A quick calculation shows that the $10^{-3}\,\mathrm{eV}^2$ mass-squared difference between neutrino mass eigenstates can even be "swallowed" into the pion width, its mass uncertainty due to its finite lifetime. The uncertainty on the pion mass squared is of order $\mathrm{eV}^2$, which is large compared to the neutrino mass squared contribution to the invariant mass.
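To make the flavor-versus-mass point concrete, here is a small numpy sketch of the standard two-flavor formulas (an illustration only, not from the thread; the angle, splitting and energy below are toy values, not T2K parameters):

```python
# A state produced as nu_e is a superposition of mass eigenstates, so its
# flavor content oscillates with distance; the mass content does not.
import numpy as np

theta = 0.58                 # mixing angle [rad] (toy value)
dm2 = 2.5e-3                 # mass-squared splitting [eV^2] (toy value)
E = 0.6                      # neutrino energy [GeV] (toy value)
L = np.linspace(0, 1200, 5)  # baseline [km]

# Standard two-flavor survival probability P(nu_e -> nu_e)
P = 1 - np.sin(2*theta)**2 * np.sin(1.267 * dm2 * L / E)**2
for Li, Pi in zip(L, P):
    print(f"L = {Li:6.0f} km   P(nu_e -> nu_e) = {Pi:.3f}")

# A mass measurement instead asks: which m_j? For a flavor state nu_e the
# outcome probabilities are |U_ej|^2, independent of L: mass eigenstates
# do not oscillate into one another.
w = np.array([np.cos(theta)**2, np.sin(theta)**2])
print("P(m_1), P(m_2) for a nu_e:", w)
```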
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519426226615906, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/107721/sobolev-space-reflexivity?answertab=active
# sobolev space reflexivity

I am having problems with the following

1) Are $H^{1}$ and $H^{1}_{0}$ reflexive spaces?

2) If $u_{n} \rightarrow u$ weakly in $H^{1}_{0}$, can I say that this is the same as $(\nabla u_{n} , \nabla w) \rightarrow (\nabla u, \nabla w)$ for any $w \in H^{1}_{0}$?

Thanks a lot - ad 1) Hilbert spaces are reflexive. – martini Feb 10 '12 at 8:18

## 1 Answer

1) Assume you're in an open set $U\subset\mathbb{R}^n$; then the mapping $u\mapsto (u,\nabla u)$ from either $H^1$ or $H_0^1$ to $L^2\times(L^2)^n$, equipped with the appropriate product norm, gives an isometry. Since $H^1$ and $H_0^1$ are Banach spaces, they're also closed subspaces of a reflexive space and so are reflexive themselves.

2) By the Poincaré inequality (the one that says $\| u\|_{H_0^1}\leq c\| \nabla u \|_{L^2}$ for every $u\in C_c^\infty$) the inner product $(\nabla u , \nabla v)_{L^2}$ is equivalent to the usual inner product in $H_0^1$, so the answer is yes. - What is meant by equivalent inner product? I know equivalent norms – user16847 Feb 10 '12 at 13:11 It means there exists $c_1,c_2>0$ such that $c_1(u,v)_{H_0^1} \leq (\nabla u, \nabla v)_{L^2}\leq c_2 (u,v)_{H_0^1}$ for all $u,v\in H_0^1$. By the polarization identity this is the same as asking the norms to be equivalent. – Jose27 Feb 10 '12 at 14:51
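A crude numerical illustration of the Poincaré inequality behind part 2), using finite differences on (0,1) (my own sketch, not from the page; the smoothing kernel and sample count are arbitrary choices):

```python
# For u vanishing at the endpoints of (0,1), ||u||_2 <= (1/pi) ||u'||_2,
# which is why (grad u, grad v) generates an equivalent inner product on H^1_0.
import numpy as np

n = 400
x = np.linspace(0, 1, n + 2)
h = x[1] - x[0]
rng = np.random.default_rng(0)

ratios = []
for _ in range(200):
    u = np.zeros(n + 2)
    u[1:-1] = rng.standard_normal(n)           # random interior values
    u = np.convolve(u, np.ones(5) / 5, 'same')  # mild smoothing
    u[0] = u[-1] = 0.0                          # enforce u(0) = u(1) = 0
    du = np.diff(u) / h
    ratios.append(np.sqrt(h * np.sum(u**2)) / np.sqrt(h * np.sum(du**2)))

print("max ||u||_2 / ||u'||_2 over samples:", max(ratios))
print("sharp Poincare constant 1/pi      :", 1 / np.pi)
```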
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365647435188293, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=538977
Fixed Point Problems

1. The problem statement, all variables and given/known data
Find all real values x that are fixed by the function y=4-x^2
f(x)=4-x^2

2. Relevant equations
x=y

3. The attempt at a solution
x=4-x^2
0=-x^2-x+4
0=-(x^2+x+(1/4))+(17/4)
This is where I get stuck. I also have two other problems which I do not understand how to work with.
f(x)=7+sqrt(x-1)
f(x)=sqrt(10+3x) -4
The main problem keeping me from doing the two above is not knowing what to do with the square root of the expression underneath. Thanks in advance.

Remember that you want to isolate x. Focus on doing that. If you are completing the square, be sure you know that method.

Recognitions: Gold Member Science Advisor Staff Emeritus
A "fixed point" for a function f is a value of x such that f(x)= x.
1) $4- x^2= x$ gives $x^2+ x- 4= 0$. Solve that by completing the square or using the quadratic formula.
2) $7+ \sqrt{x- 1}= x$ is the same as $\sqrt{x- 1}= x- 7$. Square both sides to get $x- 1= (x- 7)^2= x^2- 14x+ 49$. That is also a quadratic equation - but this time it is easily factorable. Be sure to check your answers in the original equation. "Squaring both sides" of an equation can introduce spurious solutions.
3) $\sqrt{10+ 3x}- 4= x$ is the same as $\sqrt{10+ 3x}= x+ 4$. Again, square both sides to get a quadratic equation. Be sure to check your answers in the original equation.

Fixed Point Problems
Quote by HallsofIvy [...]
So for the 1st problem would the answer be $(\sqrt{17}/2)-(1/2)$ or $(-\sqrt{17}/2)-(1/2)$? Thanks for all of your help, by the way.
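A short Python sketch tying the thread together (illustrative code, not from the thread): solve each quadratic obtained by the steps above, then keep only the candidates that really satisfy f(x) = x, which filters out the spurious roots introduced by squaring.

```python
import math

def fixed_points(f, candidates, tol=1e-9):
    """Keep only the candidates that actually satisfy f(x) = x."""
    return [x for x in candidates if abs(f(x) - x) < tol]

# f(x) = 4 - x^2  ->  x^2 + x - 4 = 0; both roots are genuine fixed points
r = math.sqrt(17)
print(fixed_points(lambda x: 4 - x*x, [(-1 + r)/2, (-1 - r)/2]))

# f(x) = 7 + sqrt(x - 1)  ->  x^2 - 15x + 50 = 0, roots 5 and 10; 5 is spurious
print(fixed_points(lambda x: 7 + math.sqrt(x - 1), [5.0, 10.0]))

# f(x) = sqrt(10 + 3x) - 4  ->  x^2 + 5x + 6 = 0, roots -2 and -3; both survive
print(fixed_points(lambda x: math.sqrt(10 + 3*x) - 4, [-2.0, -3.0]))
```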
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046311974525452, "perplexity_flag": "middle"}
http://www.chemeurope.com/en/encyclopedia/Nordstr%C3%B6m's_theory_of_gravitation.html
# Nordström's theory of gravitation

In theoretical physics, Nordström's theory of gravitation was a predecessor of general relativity. Strictly speaking, there were actually two distinct theories proposed by the Finnish theoretical physicist Gunnar Nordström, in 1912 and 1913 respectively. The first was quickly shot down, but the second turned out to be the first known example of a metric theory of gravitation, in which the effects of gravitation are treated entirely in terms of the geometry of a curved spacetime.

Neither of Nordström's theories is in agreement with observation and experiment. Nonetheless, the first remains of interest insofar as it led to the second. The second remains of interest both as an important milestone on the road to our current gold standard theory of gravitation, general relativity, and as a simple example of a self-consistent relativistic theory of gravitation. As an example, this theory is particularly useful in the context of pedagogical discussions of how to derive and test the predictions of a metric theory of gravitation.

## Development of the theories

Nordström's theories arose at a time when several leading physicists, including Nordström in Helsinki, Max Abraham in Milan, Gustav Mie in Greifswald, Germany, and Albert Einstein in Prague, were all trying to create competing relativistic theories of gravitation. All of these researchers began by trying to suitably modify the existing theory, the field theory version of Newton's theory of gravitation. In this theory, the field equation is the Poisson equation Δφ = 4πρ, where φ is the gravitational potential and ρ is the density of matter, augmented by an equation of motion for a test particle in an ambient gravitational field, which we can derive from Newton's force law and which states that the acceleration of the test particle is given by the gradient of the potential $\frac{d \vec{u}}{dt} = -\nabla \phi$ This theory is not relativistic because the equation of motion refers to coordinate time rather than proper time, and because, should the matter in some isolated object suddenly be redistributed by an explosion, the field equation requires that the potential everywhere in "space" must be "updated" instantaneously, which violates the principle that any "news" which has a physical effect (in this case, an effect on test particle motion far from the source of the field) cannot be transmitted faster than the speed of light.

Einstein's former calculus professor, Hermann Minkowski, had sketched a vector theory of gravitation as early as 1908, but in 1912, Abraham pointed out that no such theory would admit stable planetary orbits. This was one reason why Nordström turned to scalar theories of gravitation (while Einstein explored tensor theories). Nordström's first attempt to propose a suitable relativistic scalar field equation of gravitation was the simplest and most natural choice imaginable: simply replace the Laplacian in the Newtonian field equation with the D'Alembertian or wave operator, which gives $\Box \phi = 4 \pi \, \rho$.
This has the result of changing the vacuum field equation from the Laplace equation to the wave equation, which means that any "news" concerning redistribution of matter in one location is transmitted at the speed of light to other locations. Correspondingly, the simplest guess for a suitable equation of motion for test particles might seem to be $\dot{u}_a = -\phi_{,a}$ where the dot signifies differentiation with respect to proper time and where $u_a$ is the velocity four-vector of the test particle. This force law had earlier been proposed by Abraham, and Nordström knew that it wouldn't work. Instead he proposed $\dot{u}_a = -\phi_{,a} - \dot{\phi} \, u_a$.

However, this theory is unacceptable for a variety of reasons. Two objections are theoretical. First, this theory is not derivable from a Lagrangian, unlike the Newtonian field theory (or most metric theories of gravitation). Second, the proposed field equation is linear. But by analogy with electromagnetism, we should expect the gravitational field to carry energy, and on the basis of Einstein's work on relativity theory, we should expect this energy to be equivalent to mass and therefore, to gravitate. This implies that the field equation should be nonlinear. Another objection is more practical: this theory disagrees violently with observation.

Einstein and von Laue proposed that the problem might lie with the field equation, which, they suggested, should have the linear form $F \, T_{\rm matter} = \rho$, where $F$ is some yet unknown function of φ, and where $T_{\rm matter}$ is the trace of the stress-energy tensor describing the density, momentum, and stress of any matter present.

In response to these criticisms, Nordström proposed his second theory in 1913. From the proportionality of inertial and gravitational mass, he deduced that the field equation should be $\phi \, \Box \phi = -4 \pi \, T_{\rm matter}$, which is nonlinear. Nordström now took the equation of motion to be $\frac{d \left( \phi \, u_a \right)}{ds} = -\phi_{,a}$ or $\phi \, \dot{u}_a = -\phi_{,a} - \dot{\phi} \, u_a$.

Einstein took the first opportunity to proclaim his approval of the new theory. In a keynote address to the annual meeting of the Society of German Scientists and Physicians, given in Vienna on September 23, 1913, Einstein surveyed the state of the art, declaring that only his own work with Marcel Grossmann and the second theory of Nordström were worthy of consideration. (Mie, who was in the audience, rose to protest, but Einstein explained his criteria and Mie was forced to admit that his own theory did not meet them.) Einstein considered the special case when the only matter present is a cloud of dust (that is, a perfect fluid in which the pressure is assumed to be negligible). He argued that the contribution of this matter to the stress-energy tensor should be: $\left( T_{\rm matter} \right)_{ab} = \phi \, \rho \, u_a \, u_b$ He then derived an expression for the stress-energy tensor of the gravitational field in Nordström's second theory, $4 \pi \, \left( T_{\rm grav} \right)_{ab} = \phi_{,a} \, \phi_{,b} - 1/2 \, \eta_{ab} \, \phi_{,m} \, \phi^{,m}$ which he proposed should hold in general, and showed that the sum of the contributions to the stress-energy tensor from the gravitational field energy and from matter would be conserved, as should be the case.
Furthermore, he showed, the field equation of Nordström's second theory follows from the Lagrangian $L = \frac{1}{8 \pi} \, \eta^{ab} \, \phi_{,a} \, \phi_{,b} - \rho \, \phi$ Since Nordström's equation of motion for test particles in an ambient gravitational field also follows from a Lagrangian, this shows that Nordström's second theory can be derived from an action principle and also shows that it obeys other properties we must demand from a self-consistent field theory.

Meanwhile, a gifted Dutch student, Adriaan Fokker, had written a Ph.D. thesis under Hendrik Lorentz in which he derived what is now called the Fokker-Planck equation. Lorentz, delighted by his former student's success, arranged for Fokker to pursue post-doctoral study with Einstein in Prague. The result was a historic paper which appeared in 1914, in which Einstein and Fokker observed that the Lagrangian for Nordström's equation of motion for test particles, $L = \phi^2 \, \eta_{ab} \, \dot{u}^a \, \dot{u}^b$, is the geodesic Lagrangian for a curved Lorentzian manifold with metric tensor $g_{ab} = \phi^2 \, \eta_{ab}$. If we adopt Cartesian coordinates with line element $d\sigma^2 = \eta_{ab} \, dx^a \, dx^b$ with corresponding wave operator $\Box$ on the flat background, or Minkowski spacetime, so that the line element of the curved spacetime is $ds^2 = \phi^2 \, \eta_{ab} \, dx^a \, dx^b$, then the Ricci scalar of this curved spacetime is just $R = -\frac{6 \, \Box \phi}{\phi^3}$ Therefore Nordström's field equation becomes simply $R = 24 \pi \, T$ where on the right hand side, we have taken the trace of the stress-energy tensor (with contributions from matter plus any non-gravitational fields) using the metric tensor $g_{ab}$. This is a historic result, because here for the first time we have a field equation in which on the left hand side stands a purely geometrical quantity (the Ricci scalar is the trace of the Ricci tensor, which is itself a kind of trace of the fourth rank Riemann curvature tensor), and on the right hand stands a purely physical quantity, the trace of the stress-energy tensor. Einstein gleefully pointed out that this equation now takes the form which he had earlier proposed with von Laue, and gives a concrete example of a class of theories which he had studied with Grossmann.

Some time later, Hermann Weyl introduced the Weyl curvature tensor $C_{abcd}$, which measures the deviation of a Lorentzian manifold from being conformally flat, i.e. with metric tensor having the form of the product of some scalar function with the metric tensor of flat spacetime. This is exactly the special form of the metric proposed in Nordström's second theory, so the entire content of this theory can be even more elegantly summarized in the following two equations: $R = 24 \pi \, T, \; \; \; C_{abcd} = 0$

## Features of Nordström's theory

Einstein's enthusiasm for Nordström's second theory is well-grounded in some remarkably attractive features.
Not only are the field equations strikingly simple and elegant, the vacuum field equations in Nordström's theory are simply $R = 0, \; \; \; C_{abcd} = 0$ We can immediately write down the general vacuum solution in Nordström's theory: $ds^2 = \phi^2 \, \eta_{ab} \, dx^a \, dx^b, \; \; \; \Box \phi = 0$ where $\phi = \exp(\psi)$ and $d\sigma^2 = \eta_{ab} \, dx^a \, dx^b$ is the line element for flat spacetime in any convenient coordinate chart (such as cylindrical, polar spherical, or double null coordinates), and where $\Box$ is the ordinary wave operator on flat spacetime (expressed in cylindrical, polar spherical, or double null coordinates, respectively). But the general solution of the ordinary three dimensional wave equation is well known, and can be given rather explicit form. Specifically, for certain charts such as cylindrical or polar spherical charts on flat spacetime (which induce corresponding charts on our curved Lorentzian manifold), we can write the general solution in terms of a power series, and we can write the general solution of certain Cauchy problems in the manner familiar from the Liénard-Wiechert potentials in electromagnetism.

In any solution to Nordström's field equations (vacuum or otherwise), if we consider ψ as controlling a conformal perturbation from flat spacetime, then to first order in ψ we have $ds^2 = \exp(2 \, \psi) \, \eta_{ab} \, dx^a \, dx^b \approx (1 + 2 \psi) \, \eta_{ab} \, dx^a \, dx^b$ Thus, in the weak field approximation, we can identify ψ with the Newtonian gravitational potential, and we can regard it as controlling a small conformal perturbation from a flat spacetime background.

In any metric theory of gravitation, all gravitational effects arise from the curvature of the metric. In a spacetime model in Nordström's theory (but not in general relativity), this depends only on the trace of the stress-energy tensor. But the field energy of an electromagnetic field contributes a term to the stress-energy tensor which is traceless, so in Nordström's theory, electromagnetic field energy does not gravitate! Indeed, since every solution to the field equations of this theory is a spacetime which is among other things conformally equivalent to flat spacetime, null geodesics must agree with the null geodesics of the flat background, so this theory can exhibit no light bending.

Incidentally, the fact that the trace of the stress-energy tensor for an electrovacuum solution (a solution in which there is no matter present, nor any non-gravitational fields except for an electromagnetic field) vanishes shows that in the general electrovacuum solution in Nordström's theory, the metric tensor has the same form as in a vacuum solution, so we need only write down and solve the curved spacetime Maxwell field equations. But these are conformally invariant, so we can also write down the general electrovacuum solution, say in terms of a power series.

In any Lorentzian manifold (with appropriate tensor fields describing any matter and physical fields) which stands as a solution to Nordström's field equations, the conformal part of the Riemann tensor (i.e. the Weyl tensor) always vanishes. The Ricci scalar also vanishes identically in any vacuum region (or even, any region free of matter but containing an electromagnetic field). Are there any further restrictions on the Riemann tensor in Nordström's theory?
To find out, note that an important identity from the theory of manifolds, the Ricci decomposition, splits the Riemann tensor into three pieces, which are each fourth-rank tensors, built out of, respectively, the Ricci scalar, the trace-free Ricci tensor $S_{ab} = R_{ab} - \frac{1}{4} \, R \, g_{ab}$ and the Weyl tensor. It immediately follows that Nordström's theory leaves the trace-free Ricci tensor entirely unconstrained by algebraic relations (other than the symmetric property, which this second rank tensor always enjoys). But taking account of the twice-contracted and detraced Bianchi identity, a differential identity which holds for the Riemann tensor in any (semi)-Riemannian manifold, we see that in Nordström's theory, as a consequence of the field equations, we have the first-order covariant differential equation ${{S_a}^b}_{;b} = 6 \, \pi \, T_{;a}$ which constrains the semi-traceless part of the Riemann tensor (the one built out of the trace-free Ricci tensor).

Thus, according to Nordström's theory, in a vacuum region only the semi-traceless part of the Riemann tensor can be nonvanishing. Then our covariant differential constraint on $S_{ab}$ shows how variations in the trace of the stress-energy tensor in our spacetime model can generate a nonzero trace-free Ricci tensor, and thus nonzero semi-traceless curvature, which can propagate into a vacuum region. This is critically important, because otherwise gravitation would not, according to this theory, be a long-range force capable of propagating through a vacuum.

In general relativity, something somewhat analogous happens, but there it is the Ricci tensor which vanishes in any vacuum region (but not in a region which is matter-free but contains an electromagnetic field), and it is the Weyl curvature which is generated (via another first order covariant differential equation) by variations in the stress-energy tensor and which then propagates into vacuum regions, rendering gravitation a long-range force capable of propagating through a vacuum.

We can tabulate the most basic differences between Nordström's theory and general relativity, as follows:

Comparison of Nordström's theory with General Relativity

| Type of curvature | Nordström | Einstein |
|---|---|---|
| $R$ (scalar) | vanishes in electrovacuum | vanishes in electrovacuum |
| $S_{ab}$ (once traceless) | nonzero for gravitational radiation | vanishes in vacuum |
| $C_{abcd}$ (completely traceless) | vanishes always | nonzero for gravitational radiation |

Another very striking feature of Nordström's theory is that while it can be written as the theory of a certain scalar field in Minkowski spacetime, and in this form enjoys the expected conservation law for nongravitational mass-energy together with gravitational field energy, it suffers from a not very memorable force law, whereas in the curved spacetime formulation the motion of test particles is very elegantly described (the world line of a free test particle is a timelike geodesic, and by an obvious limit, the world line of a laser pulse is a null geodesic), but we lose the conservation law. So which interpretation is correct? In other words, which metric is the one which according to Nordström can be measured locally by physical experiments? The answer is: the curved spacetime is the physically observable one in this theory (as in all metric theories of gravitation); the flat background is a mere mathematical fiction which is however of inestimable value for such purposes as writing down the general vacuum solution, or studying the weak field limit.
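As an illustrative cross-check of the conformal-factor formula $R = -6\,\Box\phi/\phi^3$ used above (my own sympy sketch, not part of the original article), one can compute the Ricci scalar of $g_{ab} = \phi^2\,\eta_{ab}$ directly, specialised to $\phi = \phi(t)$ to keep the computation small:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
phi = sp.Function('phi')(t)              # conformal factor, function of t only

g = phi**2 * sp.diag(-1, 1, 1, 1)        # g_ab = phi^2 eta_ab, signature (-+++)
ginv = g.inv()

def Gamma(a, b, c):                      # Christoffel symbols of g
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                      - sp.diff(g[b, c], X[d])) for d in range(4))

G = [[[sp.simplify(Gamma(a, b, c)) for c in range(4)]
      for b in range(4)] for a in range(4)]

def Ricci(b, c):                         # R_bc from the standard formula
    val = 0
    for a in range(4):
        val += sp.diff(G[a][b][c], X[a]) - sp.diff(G[a][b][a], X[c])
        for d in range(4):
            val += G[a][a][d]*G[d][b][c] - G[a][c][d]*G[d][b][a]
    return val

R = sp.simplify(sum(ginv[b, c] * Ricci(b, c) for b in range(4) for c in range(4)))
box_phi = -sp.diff(phi, t, 2)            # flat wave operator on phi(t)
print(sp.simplify(R + 6*box_phi/phi**3))  # prints 0
```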
At this point, we could show that in the limit of slowly moving test particles and slowly evolving weak gravitational fields, Nordström's theory of gravitation reduces to the Newtonian theory of gravitation. Rather than showing this in detail, we will proceed to a detailed study of the two most important solutions in this theory:

• the spherically symmetric static asymptotically flat vacuum solutions
• the general vacuum gravitational plane wave solution in this theory.

We will use the first to obtain the predictions of Nordström's theory for the four classic solar system tests of relativistic gravitation theories (in the ambient field of an isolated spherically symmetric object), and we will use the second to compare gravitational radiation in Nordström's theory and in Einstein's general theory of relativity.

## The static spherically symmetric asymptotically flat vacuum solution

The static vacuum solutions in Nordström's theory are the Lorentzian manifolds with metrics of the form $ds^2 = \phi^2 \, \eta_{ab} \, dx^a \, dx^b, \; \; \Delta \phi = 0$ where $\phi = \exp(\psi)$ and we can take the flat spacetime Laplace operator on the right. To first order in ψ, the metric becomes $ds^2 = (1 + 2 \, \psi) \, \eta_{ab} \, dx^a \, dx^b$ where $\eta_{ab} \, dx^a \, dx^b$ is the metric of Minkowski spacetime (the flat background).

### The metric

Adopting polar spherical coordinates, and using the known spherically symmetric asymptotically vanishing solutions of the Laplace equation, we can write the desired exact solution as $ds^2 = (1-m/\rho)^2 \, \left( -dt^2 + d\rho^2 + \rho^2 \, ( d\theta^2 + \sin(\theta)^2 \, d\phi^2 ) \right)$ where we justify our choice of integration constants by the fact that this is the unique choice giving the correct Newtonian limit. This gives the solution in terms of coordinates which directly exhibit the fact that this spacetime is conformally equivalent to Minkowski spacetime, but the radial coordinate in this chart does not readily admit a direct geometric interpretation. Therefore, we adopt instead Schwarzschild coordinates, using the transformation $r = \rho \, (1 - m/\rho)$, which brings the metric into the form $ds^2 = (1+m/r)^{-2} \, (-dt^2 + dr^2) + r^2 \, (d\theta^2 + \sin(\theta)^2 \, d\phi^2 )$ $-\infty < t < \infty, \; 0 < r < \infty, \; 0 < \theta < \pi, \; -\pi < \phi < \pi$ Here, r now has the simple geometric interpretation that the surface area of the coordinate sphere r = r0 is just $4 \pi \, r_0^2$.

Just as happens in the corresponding static spherically symmetric asymptotically flat solution of general relativity, this solution admits a four dimensional Lie group of isometries, or equivalently, a four dimensional (real) Lie algebra of Killing vector fields. These are readily determined to be

$\partial_t$ (translation in time)
$\partial_\phi$ (rotation about an axis through the origin)
$-\cos(\phi) \, \partial_\theta + \cot(\theta) \, \sin(\phi) \, \partial_\phi$
$\sin(\phi) \, \partial_\theta + \cot(\theta) \, \cos(\phi) \, \partial_\phi$

These are exactly the same vector fields which arise in the Schwarzschild coordinate chart for the Schwarzschild vacuum solution of general relativity, and they simply express the fact that this spacetime is static and spherically symmetric.

### Geodesics

The geodesic equations are readily obtained from the geodesic Lagrangian. As always, these are second order nonlinear ordinary differential equations.
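The first integrals used in the next step can be read off from the cyclic coordinates of the geodesic Lagrangian. A small sympy sketch (an illustration of the standard procedure, not from the article):

```python
# First integrals of equatorial geodesics in the static solution, from the
# cyclic coordinates t and phi of the geodesic Lagrangian L = g_ab x'^a x'^b.
import sympy as sp

s = sp.symbols('s')
m = sp.symbols('m', positive=True)
t, r, ph = (sp.Function(n)(s) for n in ('t', 'r', 'phi'))

# theta = pi/2 in the Schwarzschild-type chart above
lag = (sp.diff(r, s)**2 - sp.diff(t, s)**2) / (1 + m/r)**2 + r**2 * sp.diff(ph, s)**2

# t and phi appear only through their derivatives, so these momenta are constant:
p_t  = lag.diff(sp.diff(t, s))    # -2 t'/(1+m/r)^2 = const  =>  t'   = E (1+m/r)^2
p_ph = lag.diff(sp.diff(ph, s))   #  2 r^2 phi'     = const  =>  phi' = L / r^2
print(sp.simplify(p_t), sp.simplify(p_ph))
```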
If we set θ = π / 2 we find that test particle motion confined to the equatorial plane is possible, and in this case first integrals (first order ordinary differential equations) are readily obtained.

First, we have $\dot{t} = E \, \left( 1 + m/r \right)^2 \approx E \, \left( 1 + 2 m/r \right)$ where to first order in m we have the same result as for the Schwarzschild vacuum. This also shows that Nordström's theory agrees with the result of the Pound-Rebka experiment.

Second, we have $\dot{\phi} = L/r^2$ which is the same result as for the Schwarzschild vacuum. This expresses conservation of orbital angular momentum of test particles moving in the equatorial plane, and shows that the period of a nearly circular orbit (as observed by a distant observer) will be the same as for the Schwarzschild vacuum.

Third, with ε = − 1,0,1 for timelike, null, spacelike geodesics, we find $\frac{\dot{r}^2}{ \left( 1+m/r \right)^4} = E^2 - V$ where $V = \frac{L^2/r^2 - \epsilon}{ \left( 1 + m/r \right)^2}$ is a kind of effective potential. In the timelike case, we see from this that there exist stable circular orbits at $r_c = L^2/m$, which agrees perfectly with Newtonian theory (if we ignore the fact that now the angular but not the radial distance interpretation of r agrees with flat space notions). In contrast, in the Schwarzschild vacuum we have to first order in m the expression $r_c \approx L^2/m - 3 m$. In a sense, the extra term here results from the nonlinearity of the vacuum Einstein field equation.

### Static observers

It makes sense to ask how much force is required to hold a test particle with a given mass over the massive object which we assume is the source of this static spherically symmetric gravitational field. To find out, we need only adopt the simple frame field

$\vec{e}_0 = \left( 1 + m/r \right) \, \partial_t$
$\vec{e}_1 = \left( 1 + m/r \right) \, \partial_r$
$\vec{e}_2 = \frac{1}{r} \, \partial_\theta$
$\vec{e}_3 = \frac{1}{r \, \sin(\theta)} \, \partial_\phi$

Then, the acceleration of the world line of our test particle is simply $\nabla_{\vec{e}_0} \vec{e}_0 = \frac{m}{r^2} \, \vec{e}_1$ Thus, the particle must accelerate radially outward to maintain its position, with a magnitude given by the familiar Newtonian expression (but again we must bear in mind that the radial coordinate here cannot quite be identified with a flat space radial coordinate). Put in other words, this is the "gravitational acceleration" measured by a static observer who uses a rocket engine to maintain his position. In contrast, to second order in m, in the Schwarzschild vacuum the magnitude of the radially outward acceleration of a static observer is $m/r^2 + m^2/r^3$; here too, the second term expresses the fact that Einstein gravity is slightly stronger "at corresponding points" than Nordström gravity.

The tidal tensor measured by a static observer is $E[\vec{X}]_{ab} = \frac{m}{r^3} \, {\rm diag}(-2,1,1) + \frac{m^2}{r^4} \, {\rm diag}(-1,1,1)$ where we take $\vec{X}=\vec{e}_0$. The first term agrees with the corresponding solution in the Newtonian theory of gravitation and the one in general relativity. The second term shows that the tidal forces are a bit stronger in Nordström gravity than in Einstein gravity.

### Extra-Newtonian precession of periastria

In our discussion of the geodesic equations, we showed that in the equatorial coordinate plane θ = π / 2 we have $\dot{r}^2 = (E^2 - V) \; ( 1 + m/r )^4$ where $V = (1 + L^2/r^2)/(1 + m/r)^2$ for a timelike geodesic.
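Before differentiating, the circular-orbit radius quoted above is easy to confirm symbolically; a small sympy sketch (an illustration, not from the article):

```python
# Minimize the effective potential for timelike equatorial orbits,
# V = (1 + L^2/r^2)/(1 + m/r)^2, and evaluate it at the minimum.
import sympy as sp

r, m, L = sp.symbols('r m L', positive=True)
V = (1 + L**2/r**2) / (1 + m/r)**2

print(sp.solve(sp.diff(V, r), r))      # -> [L**2/m], the circular-orbit radius
print(sp.simplify(V.subs(r, L**2/m)))  # -> L**2/(L**2 + m**2), i.e. E_c**2
```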
Differentiating with respect to proper time s, we obtain $2 \dot{r} \ddot{r} = \frac{d}{dr} \left( (E^2-V) \, (1+m/r)^4 \right) \; \dot{r}$ Dividing both sides by $2\dot{r}$ gives $\ddot{r} = \frac{1}{2} \, \frac{d}{dr} \left( (E^2-V) \, (1+m/r)^4 \right)$ We found earlier that the minimum of V occurs at $r_c = L^2/m$ where $E_c^2 = L^2/(L^2 + m^2)$. Evaluating the derivative, using our earlier results, and setting $\varepsilon = r-L^2/m$, we find $\ddot{\varepsilon} = -\frac{m^4}{L^8} \, (m^2+L^2) \, \varepsilon + O(\varepsilon^2)$ which is (to first order) the equation of simple harmonic motion. In other words, nearly circular orbits will exhibit a radial oscillation. However, unlike what happens in Newtonian gravitation, the period of this oscillation will not quite match the orbital period. This will result in slow precession of the periastria (points of closest approach) of our nearly circular orbit, or more vividly, in a slow rotation of the long axis of a quasi-Keplerian nearly elliptical orbit.

Specifically, $\omega_{\rm shm} \approx \frac{m^2}{L^4} \, \sqrt{m^2+L^2} = \frac{1}{r^2} \, \sqrt{m^2+m r}$ (where we used $L = \sqrt{m r}$ and removed the subscript from $r_c$), whereas $\omega_{\rm orb} = \frac{L}{r^2} = \sqrt{m/r^3}$ The discrepancy is $\Delta \omega = \omega_{\rm orb} - \omega_{\rm shm} = \sqrt{\frac{m}{r^3}} - \sqrt{\frac{m^2}{r^4} + \frac{m}{r^3}} \approx -\frac{1}{2} \sqrt{ \frac{m^3}{r^5}}$ so the periastron lag per orbit is $\Delta \phi = 2 \pi \, \Delta \omega \approx -\pi \, \sqrt{\frac{m^3}{r^5}}$ and to first order in m, the long axis of the nearly elliptical orbit rotates with the rate $\frac{ \Delta \phi}{\omega_{\rm orb}} \approx -\frac{\pi m}{r}$ This can be compared with the corresponding expression for the Schwarzschild vacuum solution in general relativity, which is (to first order in m) $\frac{ \Delta \phi}{\omega_{\rm orb}} \approx \frac{6 \pi m}{r}$ Thus, in Nordström's theory, if the nearly elliptical orbit is traversed counterclockwise, the long axis slowly rotates clockwise, whereas in general relativity, it rotates counterclockwise six times faster. In the first case we may speak of a periastron lag and in the second case, a periastron advance. In either theory, with more work, we can derive more general expressions, but we shall be satisfied here with treating the special case of nearly circular orbits. For example, according to Nordström's theory, the perihelia of Mercury should lag at a rate of about 7 seconds of arc per century, whereas according to general relativity, the perihelia should advance at a rate of about 43 seconds of arc per century.

### Light delay

Null geodesics in the equatorial plane of our solution satisfy $0 = \frac{-dt^2 + dr^2}{(1 + m/r)^2} + r^2 \, d\phi^2$ Consider two events on a null geodesic, before and after its point of closest approach to the origin. Let the radial coordinates of these two events be $R_1, \, R_2$, and let $R$ be the radial coordinate of the point of closest approach, with $R_1, \, R_2 \gg R$.
We wish to eliminate φ, so put $R = r \, \cos \phi$ (the equation of a straight line in polar coordinates) and differentiate to obtain $0 = -r \sin \phi \, d\phi + \cos \phi \, dr$ Thus $r^2 \, d\phi^2 = \cot(\phi)^2 \, dr^2 = \frac{R^2}{r^2-R^2} \, dr^2$ Plugging this into the line element and solving for dt, we obtain $dt \approx \frac{1}{\sqrt{r^2-R^2}} \; \left( r + m \, \frac{R^2}{r^2} \right) \; dr$ Thus the coordinate time from the first event to the event of closest approach is $(\Delta t)_1 = \int_R^{R_1} dt \approx \frac{m+R_1}{R_1} \, \sqrt{R_1^2-R^2} = \sqrt{R_1^2-R^2} + m \, \sqrt{1-(R/R_1)^2}$ and likewise $(\Delta t)_2 = \int_R^{R_2} dt \approx \frac{m+R_2}{R_2} \, \sqrt{R_2^2-R^2} = \sqrt{R_2^2-R^2} + m \, \sqrt{1-(R/R_2)^2}$ Here the elapsed coordinate time expected from Newtonian theory is of course $\sqrt{R_1^2-R^2} + \sqrt{R_2^2-R^2}$ so the relativistic time delay, according to Nordström's theory, is $\Delta t = m \, \left( \sqrt{1-(R/R_1)^2} + \sqrt{1-(R/R_2)^2} \right)$ To first order in the small ratios $R/R_1, \; R/R_2$ this is just Δt = 2m.

The corresponding result in general relativity is $\Delta t = 2 m + 2 m \, \log \left( \frac{4 \, R_1 \, R_2}{R^2} \right)$ which depends logarithmically on the small ratios $R/R_1, \; R/R_2$. For example, in the classic experiment in which, at a time when, as viewed from Earth, Venus is just about to pass behind the Sun, a radar signal emitted from Earth which grazes the limb of the Sun, bounces off Venus, and returns to Earth (once again grazing the limb of the Sun), the relativistic time delay is about 20 microseconds according to Nordström's theory and about 240 microseconds according to general relativity.

### Summary of results

We can summarize the results we found above in the following table, in which the given expressions represent appropriate approximations:

Comparison of Predictions in Three Theories of Gravitation

| | Newton | Nordström | Einstein |
|---|---|---|---|
| Acceleration of static test particle | $m/r^2$ | $m/r^2$ | $m/r^2 + m^2/r^3$ |
| Extra-Coulomb tidal force | $0$ | $(m^2/r^4) \, {\rm diag}(-1,1,1)$ | $0$ |
| Radius of circular orbit | $R = L^2/m$ | $R = L^2/m$ | $R = L^2/m - 3m$ |
| Gravitational red shift factor | $1$ | $1 + m/r$ | $1 + m/r$ |
| Angle of light bending | $0$ | $0$ | $\delta \phi = \frac{4 \, m}{R}$ |
| Rate of precession of periastria | $0$ | $\frac{\Delta \phi}{\omega_{\rm orb}} = -\frac{\pi \, m}{R}$ | $\frac{\Delta \phi}{\omega_{\rm orb}} = \frac{6 \, \pi \, m}{R}$ |
| Time delay | $0$ | $2 \, m$ | $2 \, m + 2 \, m \; \log \left( \frac{4 \, R_1 \, R_2}{R^2} \right)$ |

The last four lines in this table list the so-called four classic solar system tests of relativistic theories of gravitation. Of the three theories appearing in the table, only general relativity is in agreement with the results of experiments and observations in the solar system. Nordström's theory gives the correct result only for the Pound-Rebka experiment; not surprisingly, Newton's theory flunks all four relativistic tests.

## Vacuum gravitational plane wave

In the double null chart for Minkowski spacetime, $ds^2 = -2 \, du \, dv + dx^2 + dy^2, \; \; \; -\infty < u, \, v, \, x, \, y < \infty$ a simple solution of the wave equation $-2 \, \psi_{uv} + \psi_{xx} + \psi_{yy} = 0$ is ψ = f(u), where f is an arbitrary smooth function. This represents a plane wave traveling in the z direction.
Therefore, Nordström's theory admits the exact vacuum solution $ds^2 = \exp(2 f(u)) \; \left( -2 \, du \, dv + dx^2 + dy^2 \right), \; \; \; -\infty < u, \, v, \, x, \, y < \infty$ which we can interpret in terms of the propagation of a gravitational plane wave.

This Lorentzian manifold admits a six dimensional Lie group of isometries, or equivalently, a six dimensional Lie algebra of Killing vector fields:

$\partial_v$ (a null translation, "opposing" the wave vector field $\partial_u$)
$\partial_x, \; \; \partial_y$ (spatial translation orthogonal to the wavefronts)
$-y \, \partial_x + x \, \partial_y$ (rotation about axis parallel to direction of propagation)
$x \, \partial_v + u \, \partial_x, \; \; y \, \partial_v + u \, \partial_y$

For example, the Killing vector field $x \, \partial_v + u \, \partial_x$ integrates to give the one parameter family of isometries $(u,v,x,y) \longrightarrow (u, \; v+ x \, \lambda + \frac{u}{2} \, \lambda^2, \; x + u \, \lambda, \; y)$ Just as in special relativity (and general relativity), it is always possible to change coordinates, without disturbing the form of the solution, so that the wave propagates in any desired direction. Note that our isometry group is transitive on the hypersurfaces u = u0. In contrast, the generic gravitational plane wave in general relativity has only a five dimensional Lie group of isometries. (In both theories, special plane waves may have extra symmetries.) We'll say a bit more about why this is so in a moment.

Adopting the frame field

$\vec{e}_0 = \frac{1}{\sqrt{2}} \, \left( \partial_v + \exp(-2f) \, \partial_u \right)$
$\vec{e}_1 = \frac{1}{\sqrt{2}} \, \left( \partial_v - \exp(-2f) \, \partial_u \right)$
$\vec{e}_2 = \partial_x$
$\vec{e}_3 = \partial_y$

we find that the corresponding family of test particles are inertial (freely falling), since the acceleration vector vanishes $\nabla_{\vec{e}_0} \vec{e}_0 = 0$ Notice that if f vanishes, this family becomes a family of mutually stationary test particles in flat (Minkowski) spacetime. With respect to the timelike geodesic congruence of world lines obtained by integrating the timelike unit vector field $\vec{X} = \vec{e}_0$, the expansion tensor $\theta[\vec{X}]_{\hat{p} \hat{q}} = \frac{1}{\sqrt{2}} \, f'(u) \, \exp (-2 \, f(u)) \, {\rm diag} (0,1,1)$ shows that our test particles are expanding or contracting isotropically and transversely to the direction of propagation. This is exactly what we would expect for a transverse spin-0 wave; the behavior of analogous families of test particles which encounter a gravitational plane wave in general relativity is quite different, because these are spin-2 waves. This is due to the fact that Nordström's theory of gravitation is a scalar theory, whereas Einstein's theory of gravitation (general relativity) is a tensor theory. On the other hand, gravitational waves in both theories are transverse waves. Electromagnetic plane waves are of course also transverse.

The tidal tensor $E[\vec{X}]_{\hat{p}\hat{q}} = \frac{1}{2} \, \exp (-4 \, f(u)) \; \left ( f'(u) ^2 - f''(u) \right) \, {\rm diag} (0,1,1)$ further exhibits the spin-0 character of the gravitational plane wave in Nordström's theory. (The tidal tensor and expansion tensor are three-dimensional tensors which "live" in the hyperplane elements orthogonal to $\vec{e}_0$, which in this case happens to be irrotational, so we can regard these tensors as defined on orthogonal hyperslices.)
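The one-parameter family above can be checked to be an isometry by pulling the flat double-null metric back through the transformation; the conformal factor involves only u, which the flow leaves fixed. A small sympy sketch (illustrative, not from the article):

```python
# Verify that the flow of x d_v + u d_x preserves -2 du dv + dx^2 + dy^2;
# since exp(2f(u)) depends only on u, the map is then an isometry of the
# full plane-wave metric as well.
import sympy as sp

u, v, x, y, lam = sp.symbols('u v x y lam')
old = sp.Matrix([u, v, x, y])
new = sp.Matrix([u, v + lam*x + lam**2*u/2, x + lam*u, y])

g = sp.Matrix([[0, -1, 0, 0],
               [-1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])          # g_uv = g_vu = -1, i.e. the -2 du dv part

J = new.jacobian(old)
print(sp.simplify(J.T * g * J) == g)   # True: the pullback equals the metric
```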
The exact solution we are discussing here, which we interpret as a propagating gravitational plane wave, gives some basic insight into the propagation of gravitational radiation in Nordström's theory, but it does not yield any insight into the generation of gravitational radiation in this theory. At this point, it would be natural to discuss the analog for Nordström's theory of gravitation of the standard linearized gravitational wave theory in general relativity, but we shall not pursue this.

## See also

• Classical theories of gravitation
• Congruence (general relativity)
• Gunnar Nordström
• Obsolete physical theories
• General Theory of Relativity

## References

• Ravndal, Finn (2004). "Scalar Gravitation and Extra Dimensions". arXiv:gr-qc/0405030, May 6, 2004.
• Pais, Abraham (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford: Oxford University Press. ISBN 0-19-280672-6. See Chapter 13.
• Lightman, Alan P.; Press, William H.; Price, Richard H.; and Teukolsky, Saul A. (1975). Problem Book in Relativity and Gravitation. Princeton: Princeton University Press. ISBN 0-691-08162-X. See problem 13.2.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 101, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9102411270141602, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/113712/need-construction-for-coequalizer-in-mathbfposet?answertab=oldest
# Need construction for coequalizer in $\mathbf{Poset}$ My question can be stated quickly: I would like to see a construction of the coequalizer of two arbitrary Poset morphisms (along with a proof of its correctness, of course). Thanks! (The stuff beyond this point is of dubious value; I provide it mostly to show why I've come to regard the question above as a sufficiently non-trivial one, certainly not a trivial extension of the analogous construction for Set. If you don't need convincing of any of this, you won't miss anything if you skip it.) In general, the coequalizer object for the corresponding set functions cannot necessarily be ordered so as to ensure that the coequalizing map is order-preserving. (In case my reasoning is wrong, I give an example of what I mean at the end of this post). I've come up with "improved" constructions that attempt to remedy the shortcomings of the Set construction when applied to Poset, but proving their correctness involves exposure to lethal doses of tedium, which I'd prefer to avoid. Googling for the problem of constructing coequalizers in Poset has turned up surprisingly little (which, of course, I attribute to the lethality of tedium). If anyone can point me to one such construction, and more importantly, to a proof of that the resulting coequalizing map is indeed order-preserving, I'd appreciate it. BTW, in ch. 6 of Arbib & Manes, the authors give a theorem that ensures the admissibility of a map if there's an "optimal lift" for (in this case) its domain. (I post this in case their terminology is sufficiently standard to be informative to some of the readers of this post, since A&M's definition of this concept depends on a fair amount of preliminary groundwork.) If I understand their argument at all, the problem of finding an "optimal lift", in this case at least, doesn't look any easier than the problem of constructing the coequalizer object in the first place. (Perhaps what they have in mind is that one may be able to prove the existence of such an optimal lift without actually having to construct it, but this I would find very unsatisfying.) For a simple example of a Set coequalizer that cannot be made order-preserving, consider the automorphisms $i \mapsto i$ and $i \mapsto i + 2$ on $\mathbb{Z}$, equipped with its standard order (hence, these are Poset morphisms). Their standard Set coequalizer is the quotient of $\mathbb{Z}$ by the equivalence closure of $\{\;(i, i + 2) \;|\; i \in \mathbb{Z}\;\}$. If $q$ is the canonical projection of $\mathbb{Z}$ onto this quotient, then we have $q(i+1) \neq q(i) = q(i+2), \forall i \in \mathbb{Z}$, which rules out the existence of any order for the quotient that would render $q$ order-preserving. - 1 I could be missing something simple, since I don’t do category theory, but can’t you just define the same equivalence relation as in Set, use the given partial order to define the obvious preorder (quasiorder) on its equivalence classes, and collapse that to the natural quotient partial order? In your example, for instance, you’d get initially two equivalence classes, $E$ and $O$, on which the induced preorder is $E\precsim O\precsim E$, so you’d end up with the trivial poset. – Brian M. Scott Feb 27 '12 at 11:23 @BrianM.Scott:since I posted my question I thought of the same strategy. I suspected it would be something like this, but was hoping that answers to the question would point me to the standard nomenclature/algorithm/construction, etc. 
I did find that the construction you describe is sometimes called the "antisymmetric quotient (of a preorder/quasiorder)". Also, as I mentioned in the post, I have a reasonably simple algorithm that in effect combines both quotient operations into one. I'm trying to find a proof that it in fact yields a valid poset. This may be $\cdots$ [continued] – kjo Feb 27 '12 at 13:14 [continued] $\cdots$ unreasonably difficult and not sufficiently light-shedding to be worth the effort. Of course, I must also prove that the resulting poset is in fact the desired coequalizer. Thanks for your comment! – kjo Feb 27 '12 at 13:16

## 1 Answer

Brian M. Scott's comment is more or less a complete answer to the question (any cocone admits a map from the set-theoretic coequalizer with the obvious preorder and also necessarily collapses equivalence classes). I'd just like to point out that this is an argument for working in the category of preorders instead of the category of posets, since in the former category you no longer need to perform the quotient. Note that the forgetful functor from posets to sets has a left adjoint (take the antichain on a set) but does not have a right adjoint; consequently it preserves limits but can't be expected to preserve colimits. But the forgetful functor from preorders to sets has the same left adjoint as above but also has a right adjoint (make every element of a set less than or equal to every other element), hence it preserves both limits and colimits. -
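To make the two-step construction from Brian Scott's comment concrete, here is a small sketch in Python (the function name and the truncation of the $\mathbb{Z}$ example to a finite window are my own illustrative choices, not part of the thread): take the set coequalizer, induce the preorder on classes, then collapse preorder cycles (the antisymmetric quotient).

```python
from itertools import product

def coequalizer_poset(X, Y, leq_Y, f, g):
    # Step 1: set coequalizer -- smallest equivalence on Y with f(x) ~ g(x),
    # via a simple union-find.
    parent = {y: y for y in Y}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for x in X:
        union(f(x), g(x))
    cls = {y: find(y) for y in Y}

    # Step 2: preorder on classes generated by the order on Y, closed
    # transitively (brute force; fine for small finite examples).
    pre = {(cls[y1], cls[y2]) for y1 in Y for y2 in Y if leq_Y(y1, y2)}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(pre), list(pre)):
            if b == c and (a, d) not in pre:
                pre.add((a, d))
                changed = True

    # Step 3: antisymmetric quotient -- merge a and b whenever a <= b <= a.
    for a, b in pre:
        if (b, a) in pre:
            union(a, b)
    final = {y: find(y) for y in Y}
    order = {(find(a), find(b)) for (a, b) in pre}
    return final, order

# The question's Z example, truncated to a finite window for illustration:
Y = list(range(-3, 4))
g = lambda i: i + 2 if i + 2 in Y else i   # i -> i+2, clipped at the edge
final, order = coequalizer_poset(Y, Y, lambda a, b: a <= b, lambda i: i, g)
print(set(final.values()))  # a single class: the trivial poset
```

Run on the truncated version of the question's example ($i \mapsto i$ and $i \mapsto i+2$), everything collapses to one class, matching the observation that the result is the trivial poset.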
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496668577194214, "perplexity_flag": "head"}
http://mathoverflow.net/questions/7258/asymptotics-of-q-catalan-numbers/7280
## Asymptotics of q-Catalan numbers

q-Catalan numbers are defined recurrently as $C_0=1$, $C_{N+1}=\sum_{k=0}^N q^k C_k C_{N-k}$. What can be said about the asymptotics of $C_n$ when $0<q<1$?

P.S. In the case $q>1$ it is known that as $n$ goes to infinity, $q^{-{n\choose 2}}C_n(q)$ tends to the partition function $\prod_{i=1}^\infty\frac1{1-q^{-i}}$. However, this doesn't help in the case $0<q<1$. -

## 4 Answers

Re Leonid's comment on a previous answer. If the ratios $C_{n+1}/C_n$ converge, their limit $c(q)$ is such that $C(q,q/c(q))=c(q)$. Equivalently, $1/c(q)$ is the radius of convergence of the series $z\mapsto C(q,z)$. Or, writing $C(q,\cdot)$ as the ratio of two $q$-hypergeometric functions, one can show that $F(q,1/c(q))=0$, where $$F(q,z)=\sum_{n\ge0}(-1)^nq^{n^2-n}z^n/(q)_n.$$ This implies that $c(q)$ is the sum of a series in $q$ with integer coefficients, whose signs seem to be alternating starting with the coefficient of $q$. The first terms are $$c(q)=1+q+q^3-q^4+2q^5-3q^6+6q^7-12q^8+25q^9-52q^{10}+111q^{11}+\ldots$$ The function $q\mapsto c(q)$ is nondecreasing on $q\ge0$, obvious values are $c(0)=1$ and $c(1)=4$, and as a holomorphic function, $c(\cdot)$ might have a pole inside the unit disk at about $q\approx-.4$. But apart from that... - Well, this really counts as a nice answer. Numerically I perfectly see the above series for $c(q)$, and also the pole. The coefficients of the Taylor expansion of $c(q)$ at zero stabilize. – Leonid Petrov Oct 31 2010 at 5:41

It's not hard to compute numerical values. If you do this, in the regime $0 < q < 1$ it looks like $C_n$ grows exponentially, i.e. $C_n \sim \alpha_q \beta_q^n$ for some constants $\alpha_q$ and $\beta_q$ which depend on $q$. Unfortunately, I don't know what $\alpha_q$ and $\beta_q$ are. For example, when $q = 1/2$ the ratio $C_n/C_{n-1}$ approaches a constant which is approximately 1.6022827223; I claim this is $\beta_{1/2}$. Then $C_{50}/\beta_{1/2}^{50} = 0.5757566503$, which I claim is $\alpha_{1/2}$. Neither of these constants appears in the inverse symbolic calculator. The generating function $C(q,z) = C_0 + C_1 z + C_2 z^2 + \ldots$, where the $C_n$ are $q$-Catalan numbers, ought to satisfy some functional equation, and then one could use techniques from singularity analysis (see, for example, Analytic Combinatorics by Flajolet and Sedgewick). But I am having trouble finding that functional equation. - 3 Looks to me like the functional equation is $C(q,z) = 1 + zC(q,qz)C(q,z)$. I don't know anything about how to extract information about asymptotics from this, though. – Hugh Thomas Nov 30 2009 at 23:08 Slightly off topic: may I advertise the guessing package included in FriCAS again? guessADE(q)([c n for n in 0..10], debug==true) finds the functional equation given by Hugh... – Martin Rubey Oct 28 2010 at 17:05

Frohman and Bartoszynska did a lot of work on the asymptotics of the quantum $6j$-symbols over the last 5 to 7 years. I think their papers on these matters are found on the arxiv. This is where one should look first. - 1 But does that include $q$-Catalan numbers? – Greg Kuperberg Nov 30 2009 at 15:19 Good question, and I don't know for sure. As I recall they did an extensive bit of analysis in that work.
– Scott Carter Nov 30 2009 at 16:01

Indeed, $C_n^{1/n}$ converges. Call the limit $\beta_q$ like Michael Lugo did. One can show that $\beta_q\ge 1+q$ for every positive $q$, that $\beta_q\le 2(1+q)$ and $\beta_q\le 1/(1-q)$ for every $q$ in $(0,1)$, that $\beta_q$ is related to the smallest positive zero of a given $q$-hypergeometric function, and various other estimates. The $q$-Catalan numbers are related to some properties of products of correlated Wigner matrices just like the ordinary Catalan numbers describe the (statistical properties of the) spectrum of (large random) Wigner matrices. This is explained in this paper (caveat: I am one of the authors). - Thank you for the answer, I will try to read the paper. However, the fact that $C_n^{1/n}$ converges also follows from random trees (the theory of Aldous' CRT) and I already knew it. It does not help, however. But nevertheless, thanks for the interest in this old question. – Leonid Petrov Oct 29 2010 at 4:19 OK. Sooo... you might care to state more precisely the kind of property of the $C_n$s you are interested in. :-) – Didier Piau Oct 29 2010 at 15:30 Actually, at the time I asked the question I wanted to know the limit $C_{n+1}/C_n$ for a fixed $q$. Now you say that this lies in $q$-hypergeometric matters, and I know very little about these. So I think I need to investigate in that direction. – Leonid Petrov Oct 30 2010 at 16:37
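For anyone who wants to reproduce Michael Lugo's numbers, a minimal script (my own, not from the thread) that iterates the recurrence directly:

```python
# Compute q-Catalan numbers from C_0 = 1, C_{N+1} = sum_{k<=N} q^k C_k C_{N-k}
# and estimate beta_q and alpha_q numerically (floats suffice at this scale).
def q_catalan(q, nmax):
    C = [1.0]
    for N in range(nmax):
        C.append(sum(q**k * C[k] * C[N - k] for k in range(N + 1)))
    return C

C = q_catalan(0.5, 60)
beta = C[50] / C[49]
print(beta)              # ~ 1.6022827..., Lugo's beta_{1/2}
print(C[50] / beta**50)  # ~ 0.5757..., a rough alpha_{1/2}
```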
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404078125953674, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/97288/list
## Return to Question

4 added 110 characters in body

This question is also motivated by the development around my old MO question about Mobius randomness. It is also motivated by Joe O'Rourke's question on finding primes in sparse sets.

Let $A$ be the set of all natural numbers with more ones than zeroes in their binary expansion. Are there infinitely many primes in $A$?

More generally, for a function $f(n)$ defined on the natural numbers let $A[f]$ denote the set of integers with $n$ digits and at least $n/2+f(n)$ ones, for $n=1,2,\ldots$. Does $A[f]$ contain infinitely many primes?

Bourgain proved the Mobius randomness of $A$ and this seems closely related to this question. But I am not sure about the exact connection. (In fact Bourgain proved Mobius randomness for every $A$ described by a balanced monotone Boolean function of the binary digits.)

Showing infinitely many primes for sparse $A[f]$ would be interesting. Proving this for $f(n)=\alpha n$ where $\alpha>0$ is small would be terrific. Of course, if $f(n)=n/2$ we are talking about Mersenne primes so I would not expect an answer here. (Showing infinitely many primes for $A$ with smaller size than $\sqrt n$ will cross some notable barrier.)

A similar question can be asked about balanced (and unbalanced) sets described by $AC^0$-formulas. This corresponds to Ben Green's $AC^0$ prime number theorem, but also here I am not sure what it will take to move from Mobius randomness to infinitude of primes.

Another related question: http://mathoverflow.net/questions/22629/are-there-primes-of-every-hamming-weight

3 I linked to what I believe are the two earlier MO questions Gil meant to reference.

2 edited body

1 # Primes with more ones than zeroes in their Binary expansion
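As a tiny empirical aside (my own script, not part of the question or its revisions), one can at least check that small primes land in $A$ in abundance:

```python
# Count primes below 2^16 whose binary expansion has strictly more ones
# than zeroes (a self-contained sieve; no external libraries needed).
def primes_below(n):
    flags = bytearray([1]) * n
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            flags[p * p::p] = bytearray(len(flags[p * p::p]))
    return [i for i in range(n) if flags[i]]

def in_A(n):
    b = bin(n)[2:]
    return b.count("1") > b.count("0")

ps = primes_below(1 << 16)
print(sum(in_A(p) for p in ps), "of", len(ps), "primes below 2^16 lie in A")
```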
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505231380462646, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/19655/finding-the-square-root-of-a-random-number-with-newtons-method-using-while-do/19660
# Finding the square root of a random number with Newton's method, using While/Do/For loops?

I am trying to construct a program that will find the square root of a number, using Newton's method, which is $$x_{n+1} = x_n - f(x_n) / f'(x_n)$$ The number will be a random number, generated by: `RandomInteger[{1000000, 10000000}]` I am setting the first Newton estimate to be 1, so I can iterate my loop until the difference between successive Newton estimates is less than 0.001. Since I am trying to construct this fully, I am not using any `Sqrt[x]` function or $n^{0.5}$ relationship either.

My current thoughts: So I have set: ````f[x]:=x^2 + k ```` where ````k = RandomInteger[{1000000, 10000000}] ```` Since I want to know what number I am taking the SQRT of, I am Printing that information out with: ````Print["The Square Root of ", k, " is ", ---] ```` where --- will be my program. Since I need to take an unknown number of iterations, I am thinking of using a `For` loop, as that checks the loop invariant condition until it is `False` then stops. This is the part I am stuck on -- what I can't grasp: how do I make the loop check for a condition that is outside of the loop? Any help or hints would be greatly appreciated.

- It's wholly unclear what you are asking -- what is it "outside the loop" that you need to check? Also, what's the role of your function `f`: what does this have to do with your question? In order to find `x` for which `x^2 == k`, you want equivalently `x^2 - k == 0`, so the function to iterate is `f[x_] := x^2 - k`. (And you have the syntax for defining `f` wrong: you missed the pattern character `_` in the left-hand side.) – murray Feb 15 at 19:50 Why do you want to use a `For` loop? You can just use `Nest` or `NestWhile`, or if you want to see all the iterates, `NestList` or `NestWhileList`. Or is this a homework exercise where somebody is forcing you to use explicitly a `For` loop? If so, you cannot expect us to do your homework for you; at the very least you need to show us the code you already have for the iteration with `For`. – murray Feb 15 at 19:52 @murray my initial understanding of Newton's Method was poor, I am correcting my attempts with the suggestions you made, thank you. (The syntax was wrong due to a missed typing error when I was trying to format correctly, I apologize) Also, I don't "need" to use all of those loops, those are just the ones at my disposal at this point, so I was wondering any combination/use of any of them. – julesverne Feb 15 at 19:54 The way the question stands, you are asking for us to create a Newton's Method algorithm using "Do/While/For" loops. However, it is much more functional and cogent to utilize the recursive elements of the function with `NestWhile`. Unless your aim really is to use only those three looping functions, could you please edit your question to be less specific about which functions to use? – VF1 Feb 15 at 19:58 One of the "Applications" given in the help for `Nest` shows how to perform a fixed number of Newton-Raphson iterations to find $\sqrt{2}$. Use `NestWhile`, as suggested by @Murray, to make this more flexible. `NestWhileList` will return the intermediate results.
– whuber Feb 15 at 20:35

## 3 Answers

If you want a more "traditional" solution, you can try a `While` with a `Break[]` (Your Fortran friends will understand this better ;) ````z = 81.; (*number to take its square root*) f[x_] := x^2 - z; fd[x_] := 2 x; x0 = 1; (*initial guess *) While[True, x1 = x0 - f[x0]/fd[x0]; If[Abs[x1 - x0] < 0.001, Break[]]; x0 = x1 ] ```` check ````x1 (* 9.000000000007093`*) ```` Or to make it a little more robust, you can always add a guard against runaway cases and use a flag ````z = 81.; (*number to take its square root*) f[x_] := x^2 - z; fd[x_] := 2 x; x0 = 1; (*initial guess *) maxIterations = 20; keepSearching = True; iter = 0; rootWasFound = False; While[keepSearching, x1 = x0 - f[x0]/fd[x0]; If[Abs[x1 - x0] < 0.001 || iter > maxIterations, If[Abs[x1 - x0] < 0.001, rootWasFound = True]; keepSearching = False , x0 = x1; iter++ ] ]; ```` now ````If[rootWasFound, Print["root ", x1, " was found in ", iter, " iterations"], Print["No root was found, try increasing max iterations "] ] ```` gives ```` root 9. was found in 6 iterations ```` - 1 oh better use a goto for the Fortran folks.... While[Abs[f[x0]] > .0001, x0 -= f[x0]/fd[x0];] – george2079 Feb 15 at 20:16 superb, thank you. I was wondering at first about the break as well, and ended up finishing it within the If – julesverne Feb 15 at 22:17

As you have defined in your question, Newton's Method gives us the next value in the iteration by following the tangent of the curve you are approximating. Thus, we can create a function (using your `f[x_, sq_] = x^2 - sq`) that gives us the next `x` value when looking for the square root of `sq`. ````getNext[x_, sq_] = x - f[x, sq]/D[f[x, sq], x]; ```` (Notice I do not use delayed set so that the derivative is evaluated only once) Now, instead of a `For`-loop, which usually calls for a definite number of iterations, or even a `While`-loop, which uses just a test as an ending condition, I recommend using `NestWhile`, which, appropriately, nests a function on an expression until the given test fails. The testing function and recursive function to be nested are passed as pure functions. ````sqrt[sq_, start_: 1.] := NestWhile[getNext[#, sq]&, start, Abs[f[#, sq]] > .001 &] ```` -

```` nwt[k_, tol_: (10^-4)] := Row[{"The Square Root of ", k, " is ", N@FixedPoint[(# + k/#)/2 &, 1, SameTest -> (Abs[#1 - #2] < tol &)]}] nwt /@ RandomInteger[{10, 100}, {10}] // Column ```` -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9245907068252563, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/285642/is-the-dirac-delta-function-really-a-function/289749
# Is the Dirac Delta “Function” really a function?

I am given to understand that the Dirac delta function is strictly not a function in the conventional sense and that it is a "functional" or a "distribution". The part which I cannot understand is why the Delta "function" makes sense only when it acts on another function, and that too only inside an integral, and how a "functional" or "distribution" is different from a function. -

## 4 Answers

There are several ways to look at this. Personally, I think it is more clear to say that the Dirac $\delta$ function is actually a function -- that's what the name says. Unfortunately, the Dirac $\delta$ function does not exist. There are closely related objects, however, which do exist, and which let us do most of the things that we wish we could do with the $\delta$ function if it actually existed. Understanding both of the previous two sentences is key to understanding what is going on.

Because the $\delta$ function does not exist, many people redefine the term "$\delta$ function" to mean something that does exist, so that they can still talk "as if" the $\delta$ function existed, even though it does not. I find this somewhat revisionist, but it is the most common approach I have seen among people who use the $\delta$ function in their day to day work, so you have to expect it when you look at the literature. The idea is that any statement involving the $\delta$ function is actually an abbreviation of a different statement, or family of statements, each of which only involves objects that actually exist. For example, the equation $$\int f \delta\,dx = f(0)$$ can be viewed as an abbreviation for $$\lim_{n \to \infty} \int f\phi_n\,dx = f(0)$$ where $(\phi_n)$ is a particular sequence of actually-existing functions. Thus the $\delta$ function is replaced by the $\delta$ distribution. The answer by Chris White has more details.

On the other hand, $$\int f \delta\,dx = f(0)$$ can also be viewed as an abbreviation for $$\int f\,d\delta = f(0)$$ where the "$\delta$" on the right-hand side is not the $\delta$ function, it's the Dirac measure. This is explained in more detail by Isaac Solomon. Again, "$\int f\delta\,dx$" is a purely formal abbreviation because (in the jargon of measure theory) the $\delta$ measure is not absolutely continuous with respect to Lebesgue measure and so it has no Radon-Nikodym derivative; if this derivative existed, it would be the $\delta$ function.

One reason to continue writing $$\int f \delta\,dx = f(0)$$ is that it does not commit us to either of these two interpretations; we can switch back and forth between them whenever it is convenient. It is also convenient for setting up computational problems, in the same way that some people set up integrals by drawing diagrams labeled with infinitesimals while at the same time accepting that infinitesimals don't exist. Another example of a non-existent but still useful mathematical object is the field with one element. There is no field with one element - so this object does not, strictly speaking, exist. But it has nevertheless been useful as a way of thinking about results that involve objects that do exist. -

First you should confront the question: why should I think of the $\delta$-function as a function at all? If you are trying to imagine it as a real-valued function of real inputs, which just happens to be $0$ just about everywhere, then you are off to a bad (but very common) start.
You can define $\delta$ as a symbol with certain properties relating to combining it with an actual function and some other symbols (e.g. $\int$), and this really suffices for most purposes, so why insist on trying to cram such an interesting object into a limited definition of "function"? So instead, let's take a different approach.

Let $f : \mathbb{R} \to \mathbb{C}$ be a generic function from the reals to the complexes. Consider the set of all$^1$ such functions, and call it $L$ for lack of a better letter. $L$ is a set just like $\mathbb{R}$, and so we can define maps (read: functions) from it to $\mathbb{C}$ as well. The $\delta$-function is one such beast, defined by \begin{align} \delta : L & \to \mathbb{C} \\ f & \mapsto f(0). \end{align} Thus it is a function, but not of real numbers. It is a function of functions of reals, which is sometimes called a functional.

So what about the integrals? Well you can also approach this in a limiting fashion. One way is to note that $$\lim_{\sigma\to0} \int\limits_\mathbb{R} f(x) \frac{1}{\sqrt{2\pi\sigma^2}} \mathrm{e}^{-x^2/2\sigma^2} \mathrm{d}x = f(0).$$ Exchange the limit and the integral$^2$, and you see that there is a "function" - or rather a limit of a sequence of functions from $L$ that is itself not a member of $L$ - whose values seem to be given by $$\delta(x) = \lim_{\sigma\to0} \frac{1}{\sqrt{2\pi\sigma^2}} \mathrm{e}^{-x^2/2\sigma^2}.$$ This is what a distribution is, with terminology suggestive of the probability distributions one so often integrates against (though I could be mistaken on the etymology). Note though that we really weren't allowed to switch that limit and integral while we still called that Gaussian-looking thing a member of $L$. After all, taking the pointwise limit first produces something that vanishes everywhere but a point, and such an object will cause the Lebesgue integral we were using to vanish as well. In any event, the integral was there from the very beginning. You can think of this as overbearing notation for what we really wanted to say: "Give the value that results when $\delta$ acts on $f$."

The integral notation has another advantage, though, and that is in connection with inner product spaces. Secretly, we constructed $L$ to be a vector space over $\mathbb{R}$. Then the set of linear maps from $L$ to $\mathbb{C}$ forms its dual space $L^*$. For every $g \in L$ there is a corresponding $g^* \in L^*$, which can conveniently be represented in this integral notation as the complex conjugate of $g$.$^3$ The inner product of $f$ and $g$ is $$\langle f | \underbrace{g}_{g\in L} \rangle = \int\limits_\mathbb{R} f(x) \underbrace{g^*}_{g,g^*\in L}(x) \mathrm{d}x,$$ and so you can identify \begin{align} \underbrace{g^*}_{g^*\in L^*} : L & \to \mathbb{C} \\ f & \mapsto \int\limits_\mathbb{R} f\underbrace{g^*}_{g,g^*\in L}. \end{align} Now for every $g \in L$ there is a corresponding dual member that you can write as the complex conjugate of $g$ for the purposes of such integration, but the converse is not true.$^4$ $\delta$ is an example of a member of $L^*$ that has no actual function in $L$ we can complex conjugate and integrate against to replicate its behavior.

$^1$ In practice this is often too much. It's better to restrict attention to, e.g., all square-integrable functions from $\mathbb{R}$ to $\mathbb{C}$.

$^2$ Beware! A very dangerous thing to do!

$^3$ Yes, we are about to thoroughly abuse the two meanings of $*$ - be on the lookout.
$^4$ It won't be in general unless $L$ is finite-dimensional, but in that case you have Kronecker deltas and finite sums rather than Dirac deltas and integrals. - I don't believe the term distribution as used for $\delta$ has anything etymologically to do with the term distribution used in probability. – KCd Jan 29 at 13:02

The reason that we try to make sense of the $\delta$-function inside of an integral is because its defining characteristic is given in terms of an integral. That is, the $\delta$-function is zero everywhere other than the origin, and $$\int_{\mathbb{R}} \delta(x) dx = 1$$ Heuristically, the $\delta$-function concentrates all its mass at the origin. Of course, no actual function $f(x)$ enjoys this property, since if $f(x)$ were zero everywhere other than the origin, it would have integral $0$, even if $f(0) = + \infty$. That being said, there are situations, such as in the study of electromagnetism, that we would like to talk of positive mass existing at a point. The language of integral calculus is indispensable, but it does not classically allow for such constructions. Thus, the $\delta$-function emerges as a way of allowing our theory of integration to make sense of these point masses. One way to remedy the fact that the $\delta$-function is not a function is to reinterpret it as a distribution, as Chris explained above. Another option is to think of it as a measure. If you haven't studied measure theory, I'll avoid the technical details, mentioning only that a measure is a way of assigning a size to a set. The integral above ends with $dx$, which corresponds to the measure that assigns to every set its "obvious" size. The size of $[0,4]$ is $4$, the size of a point set is $0$, and this can be extended to most "messy" sets in a sensible way. When we used the measure $dx$, it was impossible for our integral to detect point masses, since a point was assigned zero size, and hence was inconsequential with respect to integration. However, we can define a measure $\delta_0$ that assigns a set size $1$ if it contains $0$, and assigns it a size of $0$ otherwise. If we integrate using this measure, all mass is concentrated at the origin, and indeed we have $$\int_{\mathbb{R}} f(x) d\delta_0= f(0) = `` \int_{\mathbb{R}} f(x)\delta(x) dx"$$ -

Not really. $δ$ is not a pointwise defined object. It's a distribution and defined in terms of how it acts on test functions. -
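As a quick numerical companion to the answers above (a sketch of my own; the Gaussian family, grid, and cutoffs are arbitrary choices, not from the thread), one can watch $\int f\phi_\sigma\,dx$ approach $f(0)$ as the nascent delta narrows:

```python
import numpy as np

def smeared(f, sigma, half_width=50.0, n=2_000_001):
    # Integrate f against a normalized Gaussian of width sigma (trapezoid rule).
    x = np.linspace(-half_width, half_width, n)
    phi = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return np.trapz(f(x) * phi, x)

for sigma in (1.0, 0.1, 0.01):
    print(sigma, smeared(np.cos, sigma))  # tends to cos(0) = 1 as sigma -> 0
```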
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552142024040222, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/241433/find-expotential-function-from-two-points
# Find exponential function from two points

Clever people of this place, I'm having trouble with this, and I'm not able to see why what I'm doing is wrong... Here are two points: $(3,1)$, $(-1,16)$ And these are my calculations: First, I'll find $a$: $a = (x_2-x_1)\sqrt{\frac{y_2}{y_1}} = (-1 + (-3))\sqrt{\frac{16}{1}} \Leftrightarrow a = -4\sqrt{\frac{16}{1}} \Rightarrow a = -16$ Then, I can find $b$: $b = \frac{y_1}{a^{x_1}} = \frac{1}{(-16)^3} \Rightarrow b = -0.000244$ This is wrong, my book says that the answer is $f(x) = 8(2^{-x})$ What's wrong, and what should be changed here? I'm not the biggest math professor, but I hope you can help me as well. What I need to know is what the final function should look like, as it says in the book - in this case, it's $f(x) = a (b^x)$

- 1 You need to be a bit clearer about what you are trying to do. Are you trying to find $a$ and $b$ such that the function $f(x)=a\times b^x$ passes through your two points? – Matt Pressland Nov 20 '12 at 17:06 yes, exactly .. – Frederik Witte Nov 20 '12 at 17:07 OK, great. Ideally you should edit your question to include this information. – Matt Pressland Nov 20 '12 at 17:08 I will, thanks for telling – Frederik Witte Nov 20 '12 at 17:08 I assume the function you're trying to fit is $y=ba^x$. Where did you get $a=(x_2-x_1)/\sqrt{y_2/y_1}$ from? – Rahul Narain Nov 20 '12 at 17:10

## 1 Answer

You know that $f(3)=1$ and $f(-1)=16$, so as $f(x)=a\times b^x$, you have: \begin{align*} ab^3&=1\\ ab^{-1}&=16 \end{align*} Now we can cancel the $a$s by dividing: $$b^4=\frac{ab^3}{ab^{-1}}=\frac{1}{16}$$ So one choice of $b$ is $b=\frac{1}{2}$, and then you can check that to satisfy the two equations you must take $a=8$. However, taking $b=-\frac{1}{2}$ and $a=-8$ also works, and there are two more choices where $a$ and $b$ are complex numbers. - you sir, you are a genius - let your math soul bring you success in the future! – Frederik Witte Nov 20 '12 at 17:19 1 – peterm Nov 20 '12 at 17:24
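A one-off sympy check of the accepted answer (my own snippet, not part of the thread):

```python
import sympy as sp

a, b = sp.symbols('a b')
# f(3) = 1 and f(-1) = 16 for f(x) = a*b**x:
sols = sp.solve([a*b**3 - 1, a/b - 16], [a, b])
print(sols)  # includes (8, 1/2), i.e. f(x) = 8*(1/2)**x = 8*2**(-x)
```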
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948355495929718, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/96781/list
## Return to Answer

2 added 4 characters in body

I don't think that there are any really easy examples. In the famous paper of Beauville, Colliot-Thélène, Sansuc and Swinnerton-Dyer "Variétés stablement rationnelles non rationnelles" they construct surfaces $S$ over $\mathbb Q$ that are not rational, but such that the products $S \times \mathbb P^3$ are rational. You get an example by taking $K$ to be a purely transcendental extension of the function field of $S$ of transcendence degree $d$, and a purely transcendental extension of $\mathbb Q$ of transcendence degree $d+2$, for some $d$ between $0$ and $3$ (I don't know the correct value of $d$).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8991070985794067, "perplexity_flag": "head"}
http://nrich.maths.org/773
# Parabolic Patterns

##### Stage: 4 and 5 Challenge Level:

The illustration shows the graphs of fifteen functions. Two of them have equations

$y = x^2$
$y = - (x - 4)^2$

Use a graphic calculator or a graph drawing computer program to sketch these two graphs and then locate them in this illustration. Use the clues given in this information to help you to find the equations of all the other graphs and to draw the pattern of the 15 graphs for yourself. For your solution send in the equations you have found with an explanation of how you did it. What about the equations of these parabolas? You may like to use your creative talents to devise your own pattern of graphs and send them to us so that we can base another challenge like this one on the website using your pattern.

NOTES AND BACKGROUND

This sort of challenge is sometimes called an inverse problem because the question is posed the opposite way round to what might have been expected. This is almost like saying: 'here is the answer, what was the question?' Instead of giving the equations of some functions and asking you to sketch the graphs, this challenge gives the graphs and asks you to find their equations. You are being asked to sketch a family of graphs. What makes this a family? All the graphs are obtained by transformations such as reflections and translations of other graphs in the family. The key is to find the simplest function and then to find transformations of the graph of that function which give the other graphs in the family. If you have access to a graphic calculator, or to graph drawing software, it will not give you the answers. You will have to think for yourself what the equations should be and then the software will enable you to test your own theories and see if you were right.
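If you want a head start with free software instead of a graphic calculator, a few lines of Python will sketch the two given graphs (my own snippet; matplotlib is just one of many suitable tools):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 12, 400)
plt.plot(x, x**2, label="y = x^2")
plt.plot(x, -(x - 4)**2, label="y = -(x - 4)^2")
plt.axhline(0, color="gray"); plt.axvline(0, color="gray")
plt.ylim(-40, 40); plt.legend(); plt.show()
```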
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462133049964905, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/73069/show-u-n-orthonormal-a-compact-implies-au-n-to-0
# Show $\{u_n\}$ orthonormal, A compact implies $\|Au_n\| \to 0$

I'm having a bit of trouble with this homework exercise. Let $\mathcal{H}$ be a Hilbert space and $\{u_n\}_{n=1}^\infty$ an orthonormal sequence in $\mathcal{H}$. Let $A$ be a compact operator on $\mathcal{H}$. Show that $\|Au_n \| \to 0$ as $n\to \infty$. My book defines a compact operator as an operator $A$ such that whenever $f_n$ is bounded, then $Af_n$ has a convergent subsequence (equivalently, the image of any bounded set under $A$ is relatively compact). It seems I must somehow combine the fact that $Au_n$ has a convergent subsequence with the fact that $\{u_n\}$ is orthonormal. This is where I get stuck. Maybe I can somehow use the fact that $\|u_n-u_m \| = \sqrt{2}\,$ for $m \neq n$. - If you know $u_n$ converges weakly, and find what its weak limit is, that may help you. – GEdgar Oct 16 '11 at 17:01

## 1 Answer

I suppose it is clear to you that a compact operator is bounded, hence continuous. Suppose that $(Au_n)$ has a subsequence converging towards some $v$ which is not zero. For the sake of simplicity, let us again denote this subsequence by $(Au_n)$; $(u_n)$ is then also an orthonormal sequence in $\mathcal{H}$. Set $v_n=\frac{1}{n}\sum_{k=n}^{2n}u_k$. The sequence $(v_n)$ converges towards $0$, since by orthonormality $\|v_n\| = \sqrt{n+1}/n \to 0$. But $(Av_n)$, being an average of terms converging to $v$, converges towards $v$, which is not zero. QEA. - Thanks for the answer. However, doesn't this just show that the convergent subsequence of $Au_n$ converges to zero? It doesn't show that $Au_n$ itself converges. (I guess it now suffices to show, for example, that $Au_n$ is Cauchy) – Fredrik Meyer Oct 16 '11 at 17:13 @FredrikMeyer it shows actually that ANY convergent subsequence of $(Au_n)$ converges towards $0$. Therefore $(Au_n)$ converges as a whole towards $0$ (if not, being relatively compact as a set, you can easily extract a subsequence not converging towards $0$ ...) – brunoh Oct 16 '11 at 17:27 Of course! Thanks! – Fredrik Meyer Oct 16 '11 at 17:28
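A concrete toy illustration (my own, not from the thread): truncate the diagonal compact operator $A e_n = e_n/n$ to a finite matrix and watch $\|Au_n\|$ shrink along the orthonormal sequence $u_n = e_n$:

```python
import numpy as np

N = 200
A = np.diag(1.0 / np.arange(1, N + 1))  # compact: the diagonal entries tend to 0
for n in (1, 10, 100):
    u = np.zeros(N); u[n - 1] = 1.0     # u_n = e_n, an orthonormal sequence
    print(n, np.linalg.norm(A @ u))     # equals 1/n, heading to 0
```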
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9535212516784668, "perplexity_flag": "head"}
http://physics.aps.org/synopsis-for/10.1103/PhysRevA.79.012332
Synopsis: Relaxing the requirements for scalable quantum computing

Fibonacci scheme for fault-tolerant quantum computation
Panos Aliferis and John Preskill
Published January 30, 2009

To carry out a long calculation on a quantum computer, some form of error correction is necessary. If the error probability of each logical operation is below what is called the "fault-tolerance" threshold, an error correction procedure will actually remove more errors than it introduces, and the overall failure rate can then be made arbitrarily small. The fault-tolerant threshold is typically quoted as $10^{-4}$ or $10^{-5}$. This is an extremely stringent tolerance, since it says failure must occur in less than $0.01\%$ of the operations.

A few years ago, Emanuel Knill at NIST in Boulder, Colorado, introduced a different approach to error correction that relied primarily on preparing and verifying a (possibly very large) number of auxiliary qubits, called ancillas, in special states that could be used to diagnose the errors in the computer's qubits, and replace them if necessary. The most attractive feature of these codes was their large error tolerance, which, based on numerical simulations, Knill estimated to be of the order of $1\%$.

In a paper appearing in Physical Review A, Panos Aliferis, who is at the IBM Watson Research Center, and John Preskill of the California Institute of Technology, rigorously establish a lower bound for the fault-tolerance threshold for one of Knill's constructions that has relatively small overhead requirements. Their results indicate that fault-tolerant computation should definitely be possible with this scheme, if the error probability per logical operation does not exceed $0.1\%$. While lower than Knill's original numerical estimate, this analytical bound is still at least one order of magnitude larger than was thought possible with other codes and it makes the prospect of scalable quantum computing appear that much more feasible. – Julio Gea-Banacloche
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8805702924728394, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/31261/3-manifold-with-torus-boundary-with-trivial-peripheral-ideal
## 3-manifold with torus boundary with trivial "peripheral ideal"?

Given a 3-manifold $M$, one can define the Kauffman bracket skein module $K_t(M)$ as the $\mathbb{C}$-vector space with basis "links (including the empty link) in $M$ up to ambient isotopy," modulo the skein relations, which can be found in the second paragraph of section two of http://arxiv.org/abs/math/0402102. (Side question - how can I draw these relations in LaTeX?) If $S$ is a surface, then $K_t(S\times [0,1])$ has an algebra structure given by stacking one link on top of another. If $S$ is a boundary component of $M$, then $K_t(M)$ is a (left) $K_t(S\times [0,1])$ module, where the left module structure is given by gluing $S\times \{1\}$ to the copy of $S$ in the boundary of $M$. In this situation, we can define a left module map $K_t(S\times [0,1]) \to K_t(M)$ which is uniquely defined by "(empty link in $S\times [0,1]$) maps to (empty link in $M$)." The "peripheral ideal" is the kernel of this module map, and is a left ideal of $K_t(S\times [0,1])$. The motivation for these definitions comes from knot theory - if $K$ is a knot in $S^3$, then the complement of a small tubular neighborhood of $K$ is a manifold with a torus boundary, and the algebra $K_t(T^2\times [0,1])$ and module $K_t(S^3 \setminus K)$ give information about the knot $K$. Now I can ask my question: Is there a manifold $M$ with a torus boundary such that the peripheral ideal is trivial? I've just recently started learning about knot theory, and I'm having a hard time trying to figure this out. One thing that I do know is that $M$ cannot be of the form $S^3 \setminus K$, because of propositions 7 and 8 in http://arxiv.org/abs/math/9812048. I also suspect that $M$ will actually have two boundary components which are tori, but I don't really have a good reason for this. Also, I suspect this might be a hard question, so any hints about how one might approach it would be helpful. - 1 Knot complements always have non-abelian $SU(2)$ representations, as proven by Kronheimer-Mrowka. msp.warwick.ac.uk/gt/2004/08/p007.xhtml ams.org/mathscinet-getitem?mr=2106239 This implies that the $SL_2(C)$ character variety is non-trivial, and in fact the peripheral ideal is non-trivial. ams.org/journals/proc/2005-133-09/… msp.warwick.ac.uk/agt/2004/04/p050.xhtml I think this implies that the peripheral ideal is non-trivial, by specialization to $t=-1$. – Agol Jun 25 2011 at 2:46 1 In general, though, it's not known that for a manifold with torus boundary, there is a non-abelian $SL_2(C)$ representation. This holds for geometric manifolds, and their connect sums. However, it hasn't been shown in general for manifolds which have a non-trivial JSJ decomposition. In many cases, though, you can prove that there is a non-abelian $SL_2(C)$ rep. even in the presence of incompressible tori. – Agol Jun 25 2011 at 2:50

## 2 Answers

There is a gap in the proof that the peripheral ideal is nontrivial in that paper. Thang Le and Stavros came up with a more algebraic way of defining a closely related ideal that they could prove was nontrivial. I think it's a great problem. A good starting point might be to prove it for torus knots. There is a recent paper of Julien Marche that computes the Kauffman bracket skein module of all torus knots, but stops short of understanding the module structure over the skein module of the torus. You might start there.
I am willing to conjecture that the peripheral ideal is always nontrivial for any link. In fact, Thang has recently proved a weak form of this. We defined the peripheral ideal to be the extension to the noncommutative torus of the kernel of the inclusion map of the skein algebra of the torus into the skein module of the complement of the knot. Via an identification of the skein algebra of the torus with the symmetric part of the noncommutative torus, the ideal corresponds to the ideal of the image of the $SL_2\mathbb{C}$-characters of the knot group in the characters of $\mathbb{Z}\times \mathbb{Z}$. We found a way of seeing the colored Jones polynomial of the knot as lying in the dual to the $SL_2\mathbb{C}$-characters of the knot group, and we found that the colored Jones polynomial is in the annihilator of the peripheral ideal. Thang and Stavros stepped back from the picture, and found a formal connection between the Jones polynomial and the noncommutative torus, and then just defined their ideal to be the annihilator of the Jones polynomial. Using formal properties of the $R$-matrix they were able to give an axiomatic proof that their ideal was nontrivial. The conjecture is about the relation between the formal definition of quantum invariants and their concrete realization. The Kauffman bracket skein module of a knot complement is a deformation quantization of the unreduced scheme of the $SL_2\mathbb{C}$-characters of its fundamental group. The conjecture that the peripheral ideal is nontrivial is motivated by this idea, and the fact that the $SL_2\mathbb{C}$-character variety of a nontrivial knot is nontrivial, meaning the $A$-ideal is nontrivial. This should mean that the peripheral ideal is nontrivial. The orthogonality between the peripheral ideal and the colored Jones polynomial should lead to data about the $SL_2\mathbb{C}$-character variety of the knot being expressed in the aggregate behavior of the colored Jones polynomial of the knot. - By peripheral ideal of a link, do you mean this: fix a torus component of the boundary of $S^3 \setminus L$, which gives $K_t(S^3 \setminus L)$ a module structure, and take the kernel of the map determined by "empty link in the torus goes to empty link in $S^3 \setminus L$"? Which paper of Le and Stavros did you mean? They have several together. Also, the paper by Marche definitely looks interesting. – Peter Samuelson Jul 10 2010 at 3:18 The natural thing to do from the viewpoint of representations is to look at the skein module of the manifold with boundary a disjoint union of tori as a module over the tensor product of the skein algebras of the tori. Poincare duality will guarantee that if the classical object is nonempty then the ideal will be nontrivial. There is an earlier paper of Razvan and me where we identify the skein algebra of the torus with the symmetric part of the noncommutative torus. Stavros and Thang work in a variation of the noncommutative torus that is a PID. I don't have access to the citation now. – Charlie Frohman Jul 11 2010 at 12:21 I found the paper by Stavros and Thang, and it looks interesting. The two ideals definitely look very related. Thanks for the suggestions, and I'll comment again if we make any progress. – Peter Samuelson Jul 15 2010 at 23:16
This is not an answer; more like one comment and one suggestion for an approach to this problem.

The comment is that this looks like a 4-dimensional TQFT. You have an algebra associated to a surface, a module associated to a 3-manifold, and a vector associated to a 4-manifold. The reason it is not usually presented this way is that the dependence on the 4-manifold is not interesting; it only depends on the signature (i.e. the cobordism class).

The suggestion for an approach is to look at $q=1$, the classical limit. Here the algebra associated to a surface is the coordinate ring of the character variety. The key observation is that a skein determines a function on the space of flat connections by taking traces of holonomy, and the skein relation corresponds to the trace identity $\operatorname{tr}(AB)+\operatorname{tr}(A^{-1}B)=\operatorname{tr}(A)\operatorname{tr}(B)$. This was written up by Doug Bullock. My suggestion then is for you to look at your question in this context. I don't know if this will help, but it seems more likely to be a question that an expert can answer.

- This is a good suggestion and a nice paper - thanks. Also, I noticed you were mentioned in the introduction to the paper :-) I didn't quite follow the second paragraph; in this setting, what's the vector associated to a 4-manifold? – Peter Samuelson Jul 15 2010 at 23:08
- Thanks. I explained the idea to Doug Bullock at the Banach Centre, Warsaw and he wrote it up. I should probably fess up and admit I am not clear about the 4-manifold part of this story. – Bruce Westbury Jul 16 2010 at 2:33
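The trace identity above is easy to sanity-check numerically. A throwaway sketch (assuming Python with numpy; the random-matrix construction is just one convenient way to sample $SL_2(\mathbb{C})$):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    # A random complex 2x2 matrix, rescaled to determinant 1.
    # For 2x2 matrices det(cM) = c^2 det(M), so dividing by
    # sqrt(det(M)) lands the result in SL_2(C).
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return m / np.sqrt(np.linalg.det(m))

for _ in range(5):
    a, b = random_sl2(), random_sl2()
    lhs = np.trace(a @ b) + np.trace(np.linalg.inv(a) @ b)
    rhs = np.trace(a) * np.trace(b)
    assert np.isclose(lhs, rhs)  # tr(AB) + tr(A^{-1}B) = tr(A) tr(B)
```

The identity is Cayley-Hamilton in disguise: for $A\in SL_2$, $A + A^{-1} = \operatorname{tr}(A)\,I$, and multiplying by $B$ and taking traces gives the relation.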
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936613142490387, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=939768
## Modelling the known universe

Hello, all, I hope I am not out of line here; this is the only cosmology forum I could find to ask a few questions...

I am considering developing a piece of software that can produce a 3D model of cataloged objects, by querying something like NED or SIMBAD, and generating this as a real-time model. In other words, representing all objects where they actually are in relation to one another at a given moment, not where we are able to observe them. Maybe even try to overlay their observed positions for effect. Maybe try to play with gravity, I dunno.

However, I am having trouble locating how published distances are arrived at, and am making certain lay assumptions that may or may not be accurate. For one thing, I am assuming that the published distances of objects are calculated using the speed of light, and do not take into account the expansion of the universe... If I input the distance of an object, and take into account the redshift that is observed, can I say anything with certainty about that object's current position relative to other objects (e.g. the sun)?

For example (and grins), if the (albeit short-lived) published/observed distance of an object from Earth is 1 light second (300,000 km), and the redshift of that object tells me that it is moving away from me at the rate of 300,000 km/s, then if I calculate the "actual" distance of the object for rendering purposes, would 600,000 km be accurate (assuming I calculated it within a second, duh)?

Assuming that the explanation for redshift is not EM wave propagation (and it does not play any role in the redshift), but that it is actually a function of the object's motion, this gets confusing over greater distances when the rate of expansion of the universe is not a constant, but is either accelerating or decelerating.

My quandary: If one observes an object at a distance of $d$, observes the redshift, and applies the formula to calculate how far the object has moved away from me in the time between now and however many years, months, days, hours, seconds, etc. have passed since the light that I am observing left the object, I would be able to calculate the actual distance of the object that I am observing, right?

Well, not really. If the object has sped up over time, I have to use the redshift of progressively nearer objects (this of course assumes that the rate of expansion at any given moment in time is uniform throughout the cosmos) to calculate how much the acceleration of the object has increased over time (and the distance it is from the point of observation now); i.e. if the universe is accelerating, there should be a more pronounced observable effect on redshift for nearer objects than for more distant ones, right? The faster an object is moving away from you, the more the spectra will shift toward the red end, right?

Well, but then aren't I riding a proverbial snowball down a hill? Each progressively nearer object should have a less pronounced shift, but the shift in the spectra as I am observing it is as it was x millennia ago, and so on and so forth... Color me confused! How can the actual real-time distance of any object be calculated when everything we can observe about it actually happened aeons ago? Including the speed at which we observe it to be moving away from us?
It doesn't really seem as though an accurate expansion rate of the universe weighted over time could be calculated or extrapolated, given that we have no way to observe how fast it is expanding right now, but only the rate at which it was expanding when the observable radiation emitted from the nearest object to the point of observation left that object! (Say that three times fast.) Somebody must have puzzled these calculations out somewhere and have a formula for it...

Thanks

You seem to have a few misconceptions about the so-called "Hubble flow." To a first approximation, the velocity of an object with respect to us is given by Hubble's law:

$$v = H_0 d$$

This is linear. Hubble's constant $H_0$ defines the rate of the expansion of the universe. Its current value is around 70 km/s per megaparsec of distance. If the universe's expansion is accelerating, then Hubble's "constant" is actually growing with time. The redshift is simply another way to express velocity, and thus, by Hubble's law, distance:

$$z = \frac{\Delta \lambda}{\lambda} = \sqrt{\frac{1 + v/c}{1 - v/c}} - 1$$

If you're trying to imagine what the objects are doing now, rather than as we see them now, you're headed down a slippery slope, indeed. There is no universal concept of now. In other words, every observation must be made by some realistic observer, and no realistic observer can see every object in the universe at the same time. One of the conclusions of the special theory of relativity, in fact, is that every observer has his own personal notion of now, and his notion of now is not necessarily related to anyone else's. You're going to have to settle for plotting your objects at the positions they appear for some realistic observer.

I'm also going to note that your example,

Quote by hsbrown: For example (and grins), if the (albeit short-lived) published/observed distance of an object from Earth is 1 light second (300,000 km), and the redshift of that object tells me that it is moving away from me at the rate of 300,000 km/s, then if I calculate the "actual" distance of the object for rendering purposes, would 600,000 km be accurate (assuming I calculated it within a second, duh)?

is unrealistic, because a) it implies an enormous Hubble constant, and b) it implies that the object is receding from the observer at the speed of light, and is thus actually on the boundary of being invisible to the observer. If the universe's expansion is accelerating, it will momentarily become invisible to the observer.

Keep in mind that, with realistic values of Hubble's constant, the distances to very distant objects are changing relatively little. If an object is 10 billion light years away, for example, what does another half a light-year per year matter?

- Warren
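For the program, the two formulas above translate directly into code. A minimal sketch (relativistic Doppler form; as noted above, treating redshift as a simple velocity is only a first approximation at cosmological distances):

```python
C = 299_792.458   # speed of light, km/s
H0 = 70.0         # Hubble constant, km/s per Mpc

def velocity_from_redshift(z):
    # Invert z = sqrt((1 + v/c) / (1 - v/c)) - 1 for v.
    r = (1.0 + z) ** 2
    return C * (r - 1.0) / (r + 1.0)

def distance_from_velocity_mpc(v):
    # Hubble's law: v = H0 * d, so d = v / H0.
    return v / H0

z = 0.1
v = velocity_from_redshift(z)         # ~28,500 km/s
print(distance_from_velocity_mpc(v))  # ~407 Mpc
```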
Quote by hsbrown: For example (and grins), if the (albeit short-lived) published/observed distance of an object from Earth is 1 light second (300,000 km), and the redshift of that object tells me that it is moving away from me at the rate of 300,000 km/s, then if I calculate the "actual" distance of the object for rendering purposes, would 600,000 km be accurate (assuming I calculated it within a second, duh)?

As long as this object is a photon moving at 300,000 km/s. But I don't know of any photons giving off mini photons without an atomic nucleus or some other object/field acting like a braking mechanism, so this would not be possible.

Quote by hsbrown: Assuming that the explanation for redshift is not EM wave propagation (and it does not play any role in the redshift), but that it is actually a function of the object's motion, this gets confusing over greater distances when the rate of expansion of the universe is not a constant, but is either accelerating or decelerating.

Doppler redshift depends on the instantaneous velocity. By plotting the change in Doppler shift over time, you can pick out the change in velocity. The same goes for any "acoustic" Doppler as well.

Quote by hsbrown: My quandary: If one observes an object at a distance of $d$, observes the redshift, and applies the formula to calculate how far the object has moved away from me in the time between now and however many years, months, days, hours, seconds, etc. have passed since the light that I am observing left the object, I would be able to calculate the actual distance of the object that I am observing, right?

No. The speed of sound and the speed of light are both finite. In the same way as with the speed of sound, where there is delay, the same is true for light. The redshifted light that has not reached you yet will give you information that would otherwise not be known. All of a sudden, it could go the opposite direction, and you wouldn't know it, because the signs of that change in velocity haven't reached you yet.

Quote by hsbrown: Well, not really. If the object has sped up over time, I have to use the redshift of progressively nearer objects (this of course assumes that the rate of expansion at any given moment in time is uniform throughout the cosmos) to calculate how much the acceleration of the object has increased over time (and the distance it is from the point of observation now); i.e. if the universe is accelerating, there should be a more pronounced observable effect on redshift for nearer objects than for more distant ones, right?

The "more pronounced" effect is due to the greatest redshift, which is the case for farther objects in an expanding, accelerating universe.

Quote by hsbrown: The faster an object is moving away from you, the more the spectra will shift toward the red end, right?

Yes.

Quote by hsbrown: Well, but then aren't I riding a proverbial snowball down a hill? Each progressively nearer object should have a less pronounced shift, but the shift in the spectra as I am observing it is as it was x millennia ago, and so on and so forth...

Yes. But the validity of the notion of "x millennia ago" depends on the optics of the environment outside the solar system.

Quote by hsbrown: Color me confused! How can the actual real-time distance of any object be calculated when everything we can observe about it actually happened aeons ago?

Ya can't do it. You can guess, but the greater the delay, the less you can rely on it.

Quote by hsbrown: Including the speed at which we observe it to be moving away from us?
Yes.

Quote by hsbrown: It doesn't really seem as though an accurate expansion rate of the universe weighted over time could be calculated or extrapolated, given that we have no way to observe how fast it is expanding right now, but only the rate at which it was expanding when the observable radiation emitted from the nearest object to the point of observation left that object! (Say that three times fast.)

We obviously don't see the acceleration of Andromeda as it was 10 billion years ago, and we certainly don't see the acceleration of the furthest galaxies as they are right now.

Quote by hsbrown: Somebody must have puzzled these calculations out somewhere and have a formula for it...

Yes they have. Calculations, and reality, if that.

Quote by hsbrown: Thanks

No problemo.

Quote by kmarinas86: As long as this object is a photon moving at 300,000 km/s. But I don't know of any photons giving off mini photons without an atomic nucleus or some other object/field acting like a braking mechanism, so this would not be possible.

It is perfectly consistent for an object to be receding from us at a velocity equal to or greater than the speed of light, if space itself is expanding between the object and the observer; this is not a violation of special relativity. I ask that you sit out discussions for which you are unprepared to contribute.

- Warren

Quote by chroot: It is perfectly consistent for an object to be receding from us at a velocity equal to or greater than the speed of light, if space itself is expanding between the object and the observer; this is not a violation of special relativity.

Of course not. It's a matter of language, really. It is said that a mass cannot travel at the speed of light, and of course, if this is with respect to the space itself, then the limit is imposed. But if it is with respect to the previous state of that space, then that's where your statement comes in.

Quote by chroot: I ask that you sit out discussions for which you are unprepared to contribute.

Then I will prepare to contribute.

Quote by kmarinas86: Of course not. It's a matter of language, really.

No, it really isn't.

- Warren

Hi, all, sorry, just a little clarification... Okay, so no subjective observer can see the universe as a whole at one moment in time, or, that is to say, from the perspective of another "subject" millions or so light years away. Which in and of itself would seem to make this a fool's game, eh? I can make predictions (read: guesses) regarding the relative positions of objects based only on subjective observation from my observer's point of view (and could never account for anything that altered the object's path in the "past", the time between when the light left and "now"). Any quasi-real-time model would always be from the standpoint of the subject, and would be unable to account for an infinite number of variables, but might make a unique conversation piece.

Although, eventually one would think that complex enough mathematics could produce a theory that, while only explaining what we observe from our collective/subjective point of view (like dark matter: we can infer its existence based on what we observe, but no one has ever actually seen it.
Or stars that appear to wobble imply the existence of a mass orbiting them.), if the same laws apply (singular events excepted) throughout this particular universe, one could use our point of view to apply those theories to another point of view. It would just depend on the accuracy of the information that is applied to that point of view... I would think, but I digress.

In essence, what you are saying is that I should be able to take the Hubble constant, or my subjective view of it, and given its observed value over time (by making observations of objects near and distant), I could extrapolate its apparent value over time? I.e. (and this is rudimentary, and only an example), if I have information regarding the apparent value of the Hubble constant based on observation of Andromeda (2.2 million light years) and Abell 1835 IR1916 (13.23 billion), whose distances I would assume have been calculated by standard candle or something similar, one could approximate their actual distance from the subject, which should be significantly greater than the distances referenced, as well as infer the value of the Hubble constant in any direction before or after (assuming that no unpredictable changes occurred to it, that acceleration is a gradient over time, not sudden and jerky)?

But hmmm... This would seem to have an impact on the approximate age of the universe. If something is 13.23 billion light years away, then it would seem to indicate that the two objects could not have been at the same place (the "big bang") any less than n years ago (sort of the inverse of Hubble's law: rewind the distance and apply the velocity backwards, accounting for the acceleration), which would seemingly be significantly greater than the currently accepted age of the universe... You couldn't really pinpoint anything (like exactly when they were at the same "place", or where that place might be), but one would think you could say with some certainty that the universe had to be at least so many years old. It would also at some point place the Hubble constant at 0, which may or may not correlate with that age. Ah, the joys of a lack of understanding; I r a programmer.

One thing confuses me though. Bear with me! If more distant objects have the greatest redshift, and the speed of light is finite, then that would seem to mean that more distant objects appear to be moving away from us at a faster rate than nearer objects. Which in turn would seem to indicate to me that objects closer to us, and therefore observed as they were at a moment in time significantly closer to "now" (ouch, not sure how to express that, other than to put it in quotes), are moving away at a slower rate; then wouldn't that indicate that the expansion of the universe is actually slowing down?

Well, I guess more than one thing confuses me... If redshift is more of an optical illusion (so to speak) and is not related to the "point in time" that the light left the object, but is related to the point in time the light arrived at its "destination" (current accepted value of 70 for the Hubble constant), then wouldn't it appear to be the same for all objects we could observe?
The long and short of it is, lacking understanding, can I apply Hubble's law to any observed object in the universe (given the ongoing refinements of the constant, but applying the currently accepted value), and thus calculate a "best guess" regarding an object's subjective "actual" distance from the subject, apply whatever other physical laws there may be accepted programmatically, and have a somewhat workable "thing"? If it were somewhat workable, although nowhere near all of the objects in the universe are known, and very few (relatively) are cataloged, it would still be a nice toy to play around with...

Quote by hsbrown: Okay, so no subjective observer can see the universe as a whole at one moment in time

Correct. In fact, there's no such thing as a universal "moment" that observers across the universe could even agree to.

Quote by hsbrown: Although, eventually one would think that complex enough mathematics could produce a theory that, while only explaining what we observe from our collective/subjective point of view... to apply those theories to another point of view.

Well, it's fundamentally impossible for us to know what's happening in Andromeda until 2.2 million years have elapsed and its light has reached us. We can't say anything at all about it "now," and it's not because we lack mathematical sophistication. If you make the tacit assumption that the objects themselves do not change with time, but only move, then yes, you should be able to "translate" the view at one location to a view at any other location by simply applying the Lorentz transform. You have the coordinates in one frame of reference, so you can translate those coordinates into any other frame of reference. This is a good question, though, and one that I would need to ponder a bit. The most important conclusion of your program should be that, no matter where you place your hypothetical observer, the universe appears much the same -- everything appears to be moving away from every observer, with the same Hubble relationship.

Quote by hsbrown: I have information regarding the apparent value of the Hubble constant based on observation of Andromeda (2.2 million light years) and Abell 1835 IR1916 (13.23 billion), whose distances I would assume have been calculated by standard candle or something similar

Hubble's constant was originally derived by statistics. Obviously, you can't determine the expansion of the universe by looking at only one object, because each individual object has its own "proper motion," a deviation from the average Hubble flow, caused by gravitational interactions and so on. In fact, distances are usually not determined by "standard candle," but are instead calculated directly from redshift. The redshift-to-distance conversion requires Hubble's constant, so you can see that it's a circular sort of argument.

Let me explain a bit about the so-called "distance ladder" employed by astrophysicists to measure distances. The closest objects, like the Moon and nearby planets, can actually be ranged with radar. You bounce pulses of EM radiation off of them, and time how long the echoes take to return.

For nearby stars, you can use parallax. When you look at a nearby star's apparent position relative to much more distant background stars, you'll notice it varies over a 12-month period as the Earth moves around the Sun. You can use the magnitude of the variation to measure distance. The more a star appears to move, the closer it is.
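In code, the parallax rung of the ladder is a one-liner, since the parsec is defined as the distance at which the annual parallax is one arcsecond (a toy sketch; the Proxima Centauri figure is approximate):

```python
def parallax_distance_pc(parallax_arcsec):
    # Distance in parsecs is the reciprocal of the parallax angle
    # in arcseconds, by the definition of the parsec.
    return 1.0 / parallax_arcsec

# Proxima Centauri's parallax is roughly 0.768 arcseconds:
print(parallax_distance_pc(0.768))  # ~1.30 pc, about 4.2 light years
```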
For nearby galaxies, you can use Cepheid variable stars, which change in brightness over time. Cepheids have a distinct and well-studied relationship between luminosity (power output) and period of variation. Since the period is not affected by distance, you can measure the period and calculate the luminosity. Next, you can compare the luminosity to the power you receive from the star with your telescope, and determine the distance.

At this point, you begin calibrating the redshift. You can use the galaxies with known distances (due to Cepheids) to determine Hubble's constant, which you can then use to measure distances to much, much more distant things. It's a "ladder" in the sense that each step depends upon the last. In fact, if we were to discover that Cepheids don't work exactly as we thought they did, we'd have to update every catalog yet made; Hubble's constant would be shown to not be what we thought it was.

There are also more "high-tech" ways to measure Hubble's constant, and satellite experiments like WMAP have succeeded in measuring many of our universe's parameters to great precision by studying the cosmic microwave background radiation. The bottom line, though, is this: the distances quoted for very distant objects like quasars are calculated by redshift alone, with the assumption that Hubble's constant is already known accurately.

Quote by hsbrown: But hmmm... This would seem to have an impact on the approximate age of the universe. If something is 13.23 billion light years away, then it would seem to indicate that the two objects could not have been at the same place (the "big bang") any less than n years ago (sort of the inverse of Hubble's law: rewind the distance and apply the velocity backwards, accounting for the acceleration), which would seemingly be significantly greater than the currently accepted age of the universe...

Some of the early calculations of the age of the universe were, in fact, made by the method you propose -- by rewinding the Hubble flow and determining when everything was in the same spot. I'll note that the currently accepted age of the universe is 13.7 billion years (per WMAP). I'll also note that we can see objects much further than 13.7 billion light years away, because the Universe was much smaller in the past, and it didn't take as long for their early light to reach us. In fact, if you do the calculus, you'll find that the so-called "particle horizon," the furthest distance we can physically see, is about 46 billion light years.

Quote by hsbrown: One thing confuses me though. Bear with me! If more distant objects have the greatest redshift, and the speed of light is finite, then that would seem to mean that more distant objects appear to be moving away from us at a faster rate than nearer objects.

Correct so far.

Quote by hsbrown: Which in turn would seem to indicate to me that objects closer to us, and therefore observed as they were at a moment in time significantly closer to "now" (ouch, not sure how to express that, other than to put it in quotes), are moving away at a slower rate; then wouldn't that indicate that the expansion of the universe is actually slowing down?

I don't see how your conclusion follows from your premise. The velocities of objects depend linearly on their distances. If the same parameter applies equally to both nearby and more distant objects, it would imply that the expansion has been constant.
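The "rewinding" estimate mentioned above is worth making concrete. If the expansion rate had always been exactly $H_0$, everything was in the same spot a time $1/H_0$ ago; the only real work is unit conversion (a rough sketch, with rounded constants):

```python
H0 = 70.0                   # Hubble constant, km/s per Mpc
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

# Hubble time: 1/H0, converted from (Mpc*s/km) to seconds.
hubble_time_s = KM_PER_MPC / H0
print(hubble_time_s / SECONDS_PER_GYR)  # ~14.0 Gyr
```

That naive figure lands close to the 13.7 billion years quoted above, but the agreement is something of a coincidence: the real calculation has to integrate over an expansion rate that changes with time.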
Quote by hsbrown: If redshift is more of an optical illusion (so to speak) and is not related to the "point in time" that the light left the object, but is related to the point in time the light arrived at its "destination" (current accepted value of 70 for the Hubble constant), then wouldn't it appear to be the same for all objects we could observe?

It isn't a function of the observer. It's a function of the expansion of space between the emitter and the observer. As space expands between the emitter and the observer, the photons lose energy and appear (to any observer who wishes to observe them) as redshifted.

Quote by hsbrown: The long and short of it is, lacking understanding, can I apply Hubble's law to any observed object in the universe (given the ongoing refinements of the constant, but applying the currently accepted value), and thus calculate a "best guess" regarding an object's subjective "actual" distance from the subject, apply whatever other physical laws there may be accepted programmatically, and have a somewhat workable "thing"?

I believe so, yes.

Quote by hsbrown: If it were somewhat workable, although nowhere near all of the objects in the universe are known, and very few (relatively) are cataloged, it would still be a nice toy to play around with...

It sounds like a neat project. You'd probably be able to visually demonstrate a number of the conclusions of modern cosmology. The only problem I see is that we can only see objects in a 46 billion light-year radius, and we've only catalogued a tiny fraction of the closest such objects. The most interesting "translation" you could make would be one that places the observer close to the edge of our known universe -- but it would also be the most boring, because we know nothing at all about most of that observer's sky.

- Warren

Just a quick one re: conclusion follows premise... My thought was that if we are looking at an object 13 billion years back in time, and it appears to be receding away from us (based on redshift) faster than an object 2 million years back in time, then I took that to mean that 13 billion years ago the universe was expanding faster than it was 2 million years ago, hence that it is slowing. I have now sort of transcended beyond the project and been bitten by an inexplicable desire to understand... Cosmosis? Thanks, chroot!

hsbrown, I think your last conclusion is based on the (very common) misconception that space is expanding from some specific point, i.e. that the universe has a 'center' from which everything is expanding. This is not true, of course; space is expanding everywhere, at the same rate. Every observer thus sees every other object as moving away, and with a (now famous) linear dependence of velocity on distance.

The most popular analogy is that of a rising loaf of raisin bread. If each raisin is a galaxy, an observer on any raisin will always see all the other raisins moving away. You can calculate (relatively simply) the velocity at which each raisin will see the other raisins moving; you'll find it's linear with distance (see the sketch below). And isn't cosmosis what you get when you drink the water in Mexico?
- Warren
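The raisin-bread calculation is short enough to check numerically. A toy sketch (arbitrary units; the scale factor and its growth rate are made up), verifying that every raisin measures the same recession-speed-to-distance ratio for every other raisin:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=(20, 3))  # fixed "comoving" raisin positions
a, da_dt = 2.0, 0.1                       # scale factor and its growth rate (made up)

# Physical positions are a*x, so physical velocities are (da/dt)*x.
# Relative to raisin i, raisin j sits at distance a*|x_j - x_i| and
# recedes at speed (da/dt)*|x_j - x_i| -- linear in distance.
i = 0
d = np.linalg.norm(a * (x - x[i]), axis=1)
v = np.linalg.norm(da_dt * (x - x[i]), axis=1)
assert np.allclose(v[1:] / d[1:], da_dt / a)  # same "Hubble constant" for every pair
```

Changing `i` changes nothing: every raisin sees the same linear law, which is exactly the point about there being no center.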
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560217261314392, "perplexity_flag": "middle"}