http://mathoverflow.net/questions/88171/efficiently-computing-a-few-localized-eigenvectors
## Efficiently computing a few localized eigenvectors Let $H = \triangle + V(x) : \mathbb{R}^2 \rightarrow \mathbb{R}^2$. I am interested in domain decomposition for an eigenproblem involving $H$. The lowest 1000 eigenfunctions of $H$, $\psi_i$, can be partitioned using a region, $\Omega \subset \mathbb{R}^2$, such that each $\psi_i$ localizes either inside of $\Omega$ or outside of $\Omega$. $\Omega$ is not a subspace of $\mathbb{R}^2$ as it may be an oddly shaped region. Label the inner eigenfunctions $\psi_i^{in}$ and the outer ones $\psi_i^{out}$. There are only about 10 $\psi_i^{in}$s. Given $\Omega$, my goal is to efficiently compute the $\psi_i^{in}$. One way to find the $\psi_i^{in}$ would be to discretize, compute all 1000 $\psi_i$s, and then partition. This is what I do now (5-point stencil for $\triangle$ on a $10^3 \times 10^3$ grid). The problem is that this requires diagonalizing over a 1000-dimensional space in order to get 10 eigenvectors. It seems like there should be a cheaper way to compute the $\psi_i^{in}$. Edit: I reposted to http://scicomp.stackexchange.com/questions/1396/efficiently-computing-a-few-localized-eigenvectors#comment2200_1396 and hopefully clarified the problem statement. Edit: I think I can solve this if I can at least figure out a way to solve \begin{equation} \max \psi^T H \psi \text{ subject to } P\psi = \psi \text{ and } \psi^T \psi = 1 \end{equation} where $P$ is projection onto the space of functions localized over $\Omega$. My guess is that this will end up looking like power iterations with a projection step built in between matrix applies. If this is doable then something like inverse iteration should be doable, which will give me what I want. - 1 From what I know, if you are willing to compute 10% of the spectrum, you might as well go all the way.
Besides, 1000 x 1000 is quite small by today's standards---or am I missing something obvious? – S. Sra Feb 11 2012 at 4:48 2 If there's no help here, maybe ask at scicomp.stackexchange.com – Alexander Chervov Feb 11 2012 at 12:34 Suvrit, I've never heard of that 10% rule before. A full decomposition of a 1000x1000 matrix is probably possible on cellphones, but I need to compute these eigenvectors thousands of times as part of a larger computation so any speedup is useful. – rcompton Feb 13 2012 at 17:42 Also, I was using "1000","100", and "10" simply because I thought it made the exposition clearer than "n","m","k" etc. The actual dimensions change depending on the simulation. – rcompton Feb 13 2012 at 17:49 @rcompton: that 10% is a ballpark number---dense matrix computations build on BLAS3 kernels, which can give them an advantage. I'd say try out both ARPACK and LAPACK for the kinds of matrices you have (including embedding the eigendecomp into an inner loop), and then decide. – S. Sra Feb 13 2012 at 19:41 ## 2 Answers Just a random idea: The standard method for getting a small part of the spectrum in large and sparse symmetric problems is the restarted Lanczos method. Essentially, you run some iterations of the Lanczos method, then you check the eigenpairs that have been computed, keep some of them and throw away the rest. Typically the pairs to drop and keep are selected based on the eigenvalues (you want the ones with smallest or largest modulus, for instance), but in this case you could try to modify the method and keep the ones that are "localized" in your region of interest. Problems: 1. I cannot tell you for sure that this would work --- as far as I know there is no easy way to tell to which eigenvalues Lanczos will converge, and it is known that it has a tendency to converge to those at the border of the spectrum, so your efforts to steer it away from selected eigenpairs could be useless. 2.
As far as I know, hooks for the selection strategy are not present in the usual Lanczos implementations, so you may have to code it yourself. In any case I agree with Suvrit's comment --- your dimensions are kind of borderline for Lanczos to be more effective than full diagonalization. - 1 Actually, a friend of mine who does eigenvectors for a living insisted that if we are willing to settle for 10% of the spectrum, we might as well compute the whole spectrum for $n$ even as large as 50,000. So I guess, unless absolutely short on storage, for a tiny $1000 \times 1000$ matrix, I would go ahead and compute the entire spectrum. It takes only 3 seconds on my 5 year old laptop to compute the full spectrum of a 1000 x 1000 matrix! – S. Sra Feb 11 2012 at 19:36 As for the size, I'd like to work with larger matrices in the future. The Lanczos idea is interesting. I don't have a good intuition for how the eigenvectors converge so I'm hesitant to throw out candidates that are not localizing early on. – rcompton Feb 13 2012 at 18:02 To build on Federico's answer, why not run a restarted Lanczos iteration but compute harmonic Ritz vectors to get approximations to the interior eigenpairs? For something so small, though, why not just diagonalize, as has already been stated. - Yes, Ritz vectors should improve over the Lanczos method. I suppose if the Lanczos idea can work then this will work better. – rcompton Feb 13 2012 at 18:05 1 @rcompton, just to be clear, I am not talking about regular Ritz vectors. I am talking about harmonic Ritz vectors, which yield approximations to eigenvectors associated to eigenvalues near the origin (the so-called interior eigenvalues). – Kirk S. Feb 16 2012 at 6:48 @Kirk I've never worked with those before. Is there a standard reference?
– rcompton Feb 16 2012 at 23:36 1 @rcompton Absolutely! Here is a reference by Ronald Morgan that may be a good starting point: Computing interior eigenvalues of large matrices. Linear Algebra Appl. 154/156 (1991), 289–309. – Kirk S. Feb 17 2012 at 3:08
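The projected power iteration the question guesses at can be sketched directly: iterate $\psi \leftarrow PHP\psi$ with normalization, which converges to the dominant eigenvector of $PHP$ (for the lowest localized modes one would instead use shifted inverse iteration, replacing the multiply by a solve but keeping the same projection step). Below is a minimal 1-D numpy sketch; the grid size, the potential well, and the mask defining $\Omega$ are all made-up stand-ins for the poster's actual setup:

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
# 1-D 3-point Laplacian (stand-in for the question's 5-point stencil)
H = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
# hypothetical potential with a deep well over a subregion Omega
V = np.zeros(n)
V[60:90] = -500.0
H += np.diag(V)

# P projects onto grid points in and around Omega (a crude localization projector)
mask = np.zeros(n)
mask[55:95] = 1.0
P = np.diag(mask)

# power iteration with a projection step between matrix applies
rng = np.random.default_rng(0)
psi = P @ rng.standard_normal(n)
for _ in range(2000):
    psi = P @ (H @ psi)      # psi already satisfies P psi = psi
    psi /= np.linalg.norm(psi)

# compare with the dominant eigenvector of P H P computed directly
w, U = np.linalg.eigh(P @ H @ P)
print(abs(U[:, -1] @ psi))   # should be close to 1
```

In a real 2-D setting $H$ would be sparse and the dense `eigh` check would be replaced by a sparse eigensolver; the point here is only that the projection step commutes cleanly with the iteration.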
http://physics.stackexchange.com/questions/2072/on-this-infinite-grid-of-resistors-whats-the-equivalent-resistance?answertab=oldest
On this infinite grid of resistors, what's the equivalent resistance? I searched and couldn't find it on the site, so here it is (quoted to the letter): On this infinite grid of ideal one-ohm resistors, what's the equivalent resistance between the two marked nodes? With a link to the source. I'm not really sure if there is an answer for this question. However, given my lack of expertise with basic electronics, it could even be an easy one. - 1 Answer: number of all spanning trees containing an edge between the marked points on the grid graph with said edge attached divided by the number of all spanning trees on the original grid graph (as a limit of grid size going to infinity). I am going to sleep and hope I'll find out in the morning that someone will have counted those trees :-) One way to do so is as determinant of a certain matrix associated with the graph or else as a certain correlator of the $q \to 0$ limit of an associated Potts model on the given graph (with and without edge between the marked points). – Marek Dec 20 '10 at 0:39 2 @Mark C This question could hardly have been answered accurately in a high school text. Perhaps it asked about two adjoining nodes? – Mark Eichenlaub Dec 20 '10 at 3:08 2 I instantly recognized the title from XKCD [Nerd Sniping is one of my favorites]. – muntoo Dec 20 '10 at 6:36 2 – Marek Dec 20 '10 at 8:35 2 – user172 Dec 20 '10 at 10:49 2 Answers Nerd Sniping! The answer is $\frac{4}{\pi} - \frac{1}{2}$. Simple explanation: http://www.mbeckler.org/resistor_grid/ Mathematical derivation: http://www.mathpages.com/home/kmath668/kmath668.htm - 13 +1, but it would be even better to outline the solution in the post so that people don't have to click a link to see how it's done. – David Zaslavsky♦ Dec 20 '10 at 2:12 4 The stuff on that math link is pretty complicated... Too much for mere inhuman lifeforms such as me. – muntoo Dec 20 '10 at 6:39 1 Yeah, it took me a few readings to figure out how it was done.
(That's what makes it "fun" :-P) – David Zaslavsky♦ Dec 20 '10 at 7:22 2 @Sklivvz: regardless, we should have an explanation and not just a link in the answer. (For your answer as-is I think I may have been too quick to click the upvote button) – David Zaslavsky♦ Dec 20 '10 at 9:15 3 @kalle: Kirchhoff's law is what I mentioned in my comment above. You'll obtain an infinite-dimensional matrix and you'll have to compute its determinant. Or you can use various dualities that connect resistor networks with models in statistical physics. Nevertheless, I very much doubt any possible method is in any way easy. You'll definitely need to do a Fourier transform or non-trivial integrals (as in Sklivvz's link) at some point to obtain the results. So you are saying that you obtained something simple that can beat these established methods? I can't say I don't doubt you ;-) – Marek Dec 20 '10 at 18:05 This is a classical problem. The trick is to use symmetry whenever it's possible and to connect the symmetric points since they have the same potential. First diagonals, etc... - 1 No, this is why it snipes, it isn't reducible by symmetry like the adjacent vertices resistance. – Ron Maimon Jul 23 '12 at 14:41 2 Don't bother trying, the value is transcendental, it can't reduce by symmetry to a finite grid, this would necessarily give a rational answer. The answer is the inverse of the lattice Laplacian on non-adjacent points, which can be done in k-space, you get a sinusoidal factor in the numerator and denominator, and only for the resistance between adjacent vertices do the two factors cancel after symmetrization. – Ron Maimon Jul 23 '12 at 16:13 @Shaktyai If you wish to delete this or any other of your own answers, use the "delete" text that sits under the left hand side of the answer text. – dmckee♦ Jul 24 '12 at 18:01
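The lattice-Laplacian route Ron Maimon describes in the comments can be checked numerically. The standard double-integral expression for the resistance between $(0,0)$ and $(m,n)$ on the infinite square lattice of 1-ohm resistors is $R(m,n) = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} \frac{1-\cos(mx+ny)}{2-\cos x-\cos y}\,dx\,dy$. A quick sketch of mine evaluating it with scipy (the tiny regularizer in the denominator is my own guard for the removable $0/0$ point at the origin):

```python
import math
from scipy.integrate import dblquad

def resistance(m, n):
    """Resistance between (0,0) and (m,n) on the infinite square
    lattice of 1-ohm resistors, via the lattice Green's function."""
    f = lambda y, x: ((1.0 - math.cos(m * x + n * y))
                      / (2.0 - math.cos(x) - math.cos(y) + 1e-12))
    val, _ = dblquad(f, -math.pi, math.pi,
                     lambda x: -math.pi, lambda x: math.pi)
    return val / (4.0 * math.pi ** 2)

print(resistance(1, 0))        # adjacent nodes: 1/2
print(resistance(2, 1))        # the marked "knight's move" nodes
print(4 / math.pi - 0.5)       # the claimed closed form
```

The $(2,1)$ value matches $4/\pi - 1/2$ to the quadrature tolerance, and the $(1,0)$ case recovers the familiar $1/2$ that symmetry arguments do reach.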
http://physics.stackexchange.com/questions/19449/simple-but-wrong-argument-for-the-generality-of-positive-beta-functions
# Simple (but wrong) argument for the generality of positive beta-functions In the introduction (page 5) of Supersymmetry and String Theory: Beyond the Standard Model by Michael Dine (Amazon, Google), he says (Traditionally it was known that) the interactions of particles typically became stronger as the energies and momentum transfers grew. This is the case, for example, in quantum electrodynamics, and a simple quantum mechanical argument, based on unitarity and relativity, would seem to suggest it is general. Of course, he then goes on to talk about Yang-Mills theory and the discovery of negative beta-functions and asymptotic freedom. But it is the mention of the simple but wrong argument that caught my attention. So, does anyone know what this simple argument is? And how is it wrong? - 3 I think this is very appropriate here, and in fact this is exactly the kind of question I'd like to see more of (advanced but not research; should have a well-defined answer). But... I don't know the answer. – wsc Jan 13 '12 at 14:42 1 @Ron: Thanks for the comments Ron. I think I might put down a bounty on this question to try to get a more explicit answer. – Simon Feb 19 '12 at 9:44 3 @Simon: I'll do it. – Ron Maimon Feb 20 '12 at 13:21 3 I wrote to Michael Dine. It's an argument about the spectral representation but not what anyone has said so far. He says he'll forward the details a few days from now. – Mitchell Porter Feb 21 '12 at 5:40 1 Ron: Thanks for putting your rep points on the line (I was going to wait until the weekend). I hope that @Mitchell gets a reply soon. – Simon Feb 25 '12 at 5:11 ## 3 Answers Michael Dine's response, quoted with permission: I now have to think back, but the argument in QED is based on the spectral representation ("Kallen-Lehman representation").
The argument purports to show that the wave function renormalization for the photon is less than one (this you can find, for example, in the old textbook of Bjorken and Drell, second volume; it also can be inferred from the discussion of the spectral function in Peskin and Schroder). This is enough, in gauge theories, to show that the coupling gets stronger at short distances. The problem is that the spectral function argument assumes unitarity, which is not manifest in a covariant treatment of the gauge theory (and not meaningful for off-shell quantities). In non-covariant gauges, unitarity is manifest, but not Lorentz invariance, so the photon (gluon) renormalization is more complicated. In particular, the Coulomb part of the gluon ($A^0$) is not a normal propagating field. - Thanks Mitchell, I forgot about this until Manishearth reminded me in the comments above. It would be nice if the answer was more detailed, but I'm happy with Michael Dine's response. (Your bounty will arrive in ~24 hours) – Simon Apr 11 '12 at 7:04 This is a temporary answer in order to store the generous bounty that Ron offered. When a proper answer to this question is given, I will transfer the 500 rep points (assign an equal bounty) to that answer. Going by the totalitarian principle of quantum mechanics / quantum field theory, since this move is not explicitly forbidden, it must be compulsory. - 3 What an answer! Full bounty! – Ron Maimon Feb 27 '12 at 14:47 I may be wrong, but the following remark at http://alumnus.caltech.edu/~callaway/trivpurs.pdf (PHYSICS REPORTS 167, No. 5 (1988) 241—320) may be relevant: "Then [2.1] it follows that if $[x, y] = \triangle_+(y-x)$ possesses properties implied by the Garding—Wightman axioms, then... the associated field theory is a generalized free field, i.e., it is a trivial theory." Reference [2.1] there is B. Simon, The $P(\varphi)^2$ Euclidean (Quantum) Field Theory (Princeton, 1974). By the way, you can just ask the author, M. Dine. 
Sometimes asking the author is the only way to sort out what (s)he wrote:-). I remember I found a book containing a result I had recently obtained myself, but without any proof. I e-mailed one of the two authors of the book and asked for the relevant reference. It took me a couple of months, but eventually he advised me that they meant something different from the result that I obtained, so my result was new:-) - This is not an accurate statement of the remark. The theorem is that if the commutator is equal to the free commutator then the theory is free. This is both a trivial result (made to sound nontrivial) and unrelated to the beta function properties. The rigorous literature is superficially impressive in this field, but all of it is essentially less than worthless. But no downvote--- I appreciate the sincere effort. – Ron Maimon Feb 21 '12 at 1:05 @Ron Maimon: Could you please explain that? Do you think the statement in the Callaway's article is wrong? Because I don't see how the assumption in the statement is "if the commutator is equal to the free commutator". As for relevance to the beta function properties, I guess if the beta function is positive, the charge is screened, and the theory is non-interacting. – akhmeteli Feb 21 '12 at 3:01 Apologies, I wrote the first comment without reading the paper you linked. The paper is not rigorous nonsense, it's a good review of triviality in scalar field theory, with a rigorous subsection. But the result you quote is just saying that if a theory obeys Wick's theorem then it's a free field theory. The result I thought you were talking about is somewhat deeper: if the two point function is exactly free, then the theory is free. Neither has anything obvious to do with the beta-function, but they are positive spectral weight results, and so they are Kallan-isms. – Ron Maimon Feb 21 '12 at 15:04 just because the beta-function is positive doesn't mean that the theory is trivial.
The beta-function could be positive with a strong coupling fixed point, or even a fixed point at relatively weak coupling. Not every growing coupling has to blow up to infinity. It's a different question. But at least now I know what you were thinking, it makes sense why you would give this answer. – Ron Maimon Feb 21 '12 at 15:06
http://mathoverflow.net/questions/96185?sort=newest
## Which formulae of Euler is Fröhlich referring to? In A. Fröhlich's article Local Fields in Algebraic Number Theory, the following claim is made: if $R$ is a Dedekind domain with field of fractions $K$, $L$ is a finite separable extension of $K$ and $S$ is the integral closure of $R$ in $L$, and $x$ is an element of $S$ with minimal polynomial $g$, then, "by Euler's formulae", $$\text{tr}_{L/K}(x^i/g'(x)) \in R$$ for each $0 \leq i \leq n-1$, where $n=\text{deg } g$. Which formulae of Euler are being referred to? The claim can be proven by the Lagrange interpolation formula; in fact the given quantity is $1$ if $i=n-1$, and $0$ for $0 \leq i < n-1$. However, I have no idea what proof Fröhlich has in mind. I also cannot resist pointing out the humor in appealing to Euler's "formulae" without further precision. Perhaps the formulae in question are well-known, and I am the only one who has not been invited to the party? In any case, more details would be greatly appreciated! Thank you. - 3 See Lemma 2 (Euler) on p.56 of Serre's Local Fields. – Chandan Singh Dalawat May 7 2012 at 5:56 7 Serre makes a similar comment in "Writing mathematics badly" (it's a lecture, google it to find a video), that if you want to make sure your article or lecture's title contains no information at all then call it "On a theorem of Euler". – Gunnar Magnusson May 7 2012 at 8:31 1 This is also in the proof of Prop. 2 on p. 58 of Lang's Algebraic Number Theory. – KConrad May 7 2012 at 14:06 ## 1 Answer I believe the reference is to this formula of Euler (see here): If $P(x)/Q(x)$ is a rational function and $ax+b$ is a simple factor of $Q(x)$, then the coefficient of $1/(ax+b)$ in the partial fraction decomposition of $P/Q$ is given by $$\lim_{x\to \frac{-b}{a}} \frac{a P(x)}{Q'(x)}.$$ To see how this applies here, proceed as Serre does in Local Fields.
That is, write (in a suitable extension of $L$) $$\frac{1}{g(X)} = \sum_{k=1}^n \frac{a_k}{X-x_k} \qquad (*)$$ where the $x_k$ are the conjugates of $x$. Then a formal application of Euler's formula gives $$a_k = \lim_{X \to x_k} \frac{1}{g'(X)} = \frac{1}{g'(x_k)}.$$ Now expand both sides of (*) as power series in $1/X$ and compare coefficients. - Wonderful! Thanks for writing that out, Faisal. – Bruno May 7 2012 at 18:09 No problem! – Faisal May 8 2012 at 16:08
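Faisal's power-series comparison amounts to the identity $\sum_k x_k^i/g'(x_k) = 0$ for $0 \le i < n-1$ and $= 1$ for $i = n-1$, which is exactly the claim in the question. It is easy to check numerically; here is a quick numpy check on an arbitrary example polynomial of my own choosing:

```python
import numpy as np

# an arbitrary monic (separable) example: g(X) = X^4 - 3X^3 + 2X - 7
g = np.array([1.0, -3.0, 0.0, 2.0, -7.0])
dg = np.polyder(g)
roots = np.roots(g)            # the conjugates x_k
n = len(g) - 1

for i in range(n):
    s = sum(r**i / np.polyval(dg, r) for r in roots)
    print(i, np.round(s, 10))  # 0 for i < n-1, and 1 for i = n-1
```

The sums run over complex roots but come out real (up to rounding), as the identity predicts.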
http://mathhelpforum.com/advanced-statistics/142203-var-covar-linear-lease-square-estimates.html
## var/covar of Linear Least Squares estimates Goal: Find $Var(\hat B_0)$ and $Covar(\hat B_1, \hat B_0)$. (Note: $y_i = B_0 + x_i B_1 + e_i$, with $e_i \sim N(0,\sigma^2)$, so everything is fixed but $e_i$ and $y_i$. I have found unbiased estimates $\hat B_0$ and $\hat B_1$, and also the variance of $\hat B_1$.) $\hat B_0 = \bar y - {\bar x} \hat B_1$ and $\hat B_1 = \frac{\sum x_i y_i -{\bar y} \sum x_i}{\sum x_i ^2 - {\bar x} \sum x_i}$ Work: $E[\hat B_0^2] = E[(\bar y - {\bar x}\hat B_1)^2] = E[\bar y^2] - 2{\bar x}E[{\bar y}\hat B_1] + \bar x^2 E[\hat B_1^2]$ Each part is easy enough; I just don't know how to start $E[{\bar y}\hat B_1]$. Lastly, how to approach $E[\hat B_0\hat B_1]$?
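For reference, the standard closed forms this computation leads to are $Var(\hat B_1) = \sigma^2/S_{xx}$, $Var(\hat B_0) = \sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{S_{xx}}\right)$ and $Cov(\hat B_0, \hat B_1) = -\sigma^2 \bar x/S_{xx}$, where $S_{xx} = \sum (x_i - \bar x)^2$. A quick Monte Carlo sanity check (my own sketch, with arbitrary made-up design points and parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(1.0, 11.0)                 # fixed design points
n, sigma, B0, B1 = len(x), 2.0, 1.0, 0.5
xbar, Sxx = x.mean(), ((x - x.mean()) ** 2).sum()

reps = 200_000
e = rng.normal(0.0, sigma, size=(reps, n))
y = B0 + B1 * x + e                      # broadcasts over replications
# the centered form below is algebraically the same as the B1-hat above
B1_hat = ((x - xbar) * (y - y.mean(axis=1, keepdims=True))).sum(axis=1) / Sxx
B0_hat = y.mean(axis=1) - xbar * B1_hat

print(B0_hat.var(), sigma**2 * (1 / n + xbar**2 / Sxx))      # Var(B0-hat)
print(np.cov(B0_hat, B1_hat)[0, 1], -sigma**2 * xbar / Sxx)  # Cov(B0-hat, B1-hat)
```

The empirical moments should land within Monte Carlo error of the closed forms.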
http://crypto.stackexchange.com/questions/377/which-algorithms-are-used-to-factorize-large-integers/383
Which algorithms are used to factorize large integers? Even though RSA decided to cancel the Factoring Challenge, it seems that some teams keep working on it. According to Wikipedia, RSA-768 was factored in late 2009. What are the current large integer factorization algorithms and what are the mathematical principles behind them? What are the ways for improvement (faster algorithms, better implementations...)? - 1 – ir01 Aug 11 '11 at 21:14 If you want to know more about the factorization algorithm aspect of your problem, asking on Math.SE would be perfectly acceptable – ir01 Aug 11 '11 at 21:28 3 Answers The three main general-purpose algorithms for factorization are the quadratic sieve (QS), the elliptic curve method (ECM) and the number field sieve (NFS). On Complexity The running time of these algorithms is expressed with the L-notation: $L_n[a,c]$ means that the asymptotic complexity of factoring a number $n$ is $e^{(c+o(1))(\log n)^a(\log \log n)^{1-a}}$. We recognize "$\log n$" as "the size of the number $n$", so the main parameter to look at is the "$a$", but, for a given "$a$", "$c$" must not be neglected. Also, this is an asymptotic complexity, valid when $n$ is "big enough", and there is no telling whether the everyday "$n$" values (e.g. RSA keys) are "big enough" for the expression to be a precise estimate of the actual factorization effort. Last but not least, running time is only about CPU consumption; it does not cover memory usage, and memory is the bottleneck for 1024-bit integers. Quadratic Sieve A description of QS can be found in chapter 3 of the Handbook of Applied Cryptography. It is more thoroughly detailed in "A Course in Number Theory and Cryptography", a highly recommendable book for whoever is interested in such subjects (contrary to the Handbook, this one is not free, but it is well worth its price). The main idea of QS is to try to find two integers which are square roots of the same value modulo $n$.
If we consider $n = pq$ (with $p$ and $q$ big primes), then most integers which have a square root modulo $n$ actually have four square roots; this can be seen with the Chinese Remainder Theorem (CRT), which roughly states that when you compute modulo $n$, you are actually computing modulo $p$ and modulo $q$ at the same time. If $z$ is a quadratic residue modulo $n$ (it is the square of some integer modulo $n$), then $z$ is also a quadratic residue modulo $p$ and modulo $q$. Modulo $p$, $z$ has two square roots (if $u$ is a square root of $z$, then so is $-u$), and $z$ also has two square roots modulo $q$, yielding four possible combinations through the CRT. The four square roots of $z$ are $u$, $-u$, $v$ and $-v$ for two values $u$ and $v$. So if we find $x$ and $y$ such that $x^2 = y^2 \mod n$, then $x$ and $y$ are two square roots of the same value modulo $n$. The bad case is then $x = ±y \mod n$, because this yields no information: this is when $x$ and $y$ are $u$ and $-u$, or $v$ and $-v$. However, there is a 1/2 probability that $x$ and $y$ are $±u$ and $±v$, respectively. At that point, a simple GCD between $n$ and $x+y$ will yield $p$ or $q$. In QS, we set two bounds $A$ and $B$, and we look for $B$-numbers in a set $S$ of integers. A $B$-number is a number which is $B$-smooth, i.e. such that all its prime divisors are smaller than $B$. The set $S$ consists in the integers $t^2-n$, for values of $t$ which are between $\sqrt{n}$ and $\sqrt{n}+A$. So the values in $S$ are "small" integers which are each equal to the square modulo $n$ of a "bigger" integer. Suppose that we found two $B$-smooth integers $s_1$ and $s_2$, such that $s_1 = gh^5$ for some small primes $g$ and $h$ (smaller than $B$), and $s_2 = g^3h^3$. Neither is an "obvious square" (i.e. a square in the plain integers, not computing modulo $n$); but $s_1s_2 = g^4h^6$, which is the square of $g^2h^3$ (square in the plain integers, but this then also holds true modulo $n$). 
However, $s_1$ and $s_2$ are also squares modulo $n$ of some $t_1$ and $t_2$, respectively, so $s_1s_2$ is a square of $t_1t_2$ modulo $n$. This yields two values $x = t_1t_2$ and $y = g^2h^3$ such that $x^2 = y^2 \mod n$, exactly what we were looking for. The gist of QS is the generalization of this process. We accumulate many $B$-smooth values from $S$, in the hope that we will find a product of such $B$-smooth values where each prime divisor will have an even exponent, because this then yields an "obvious square" which we can equate to a "non-obvious square" obtained from the way the set $S$ was defined. Finding $B$-smooth integers in the set $S$ involves "sieving", which in this case can be thought of as an offspring of Eratosthenes' Sieve. We "lay out" a sequence of values from $S$, then we "mark" all multiples of $r$ for every prime $r$ smaller than $B$; if one of the values has accumulated enough marks, then it is a product of "many" small primes, and thus a good candidate for being $B$-smooth. The two parts of the algorithm are thus sieving, which can be distributed over many machines, each looking for $B$-smooth integers in a relatively small range (for big integers, we are still talking about using a few gigabytes of RAM in each machine). Then, all $B$-smooth integers are accumulated in a big matrix: one row per $B$-smooth value $s$, one column per prime $r$ smaller than $B$. The slot at row $s$ and column $r$ is a 1 if the factorization of $s$ includes $r$ with an odd exponent; otherwise, this is a 0. The matrix reduction step tries to find sums of rows (in $\mathbb{Z}_2$) which yield an all-zero row (summing two rows is equivalent to multiplying the corresponding $s$ values, and an all-zero row means that the product will have even exponents for all small primes, hence an obvious square). The sieving will work in a reasonable time if $B$ is sufficiently big (it is easier to find $B$-numbers if you allow bigger "small primes"). 
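The congruence-of-squares mechanism is easiest to see in its ancestor, Fermat's method, where we insist that $t^2 - n$ be an obvious square on its own instead of combining smooth values. A toy sketch of mine (purely illustrative, nothing like a real QS implementation), followed by Dixon's classic worked combination on the same modulus:

```python
from math import gcd, isqrt

def fermat(n):
    """Naive congruence of squares: search for t such that t^2 - n is
    itself a perfect square. QS replaces this exhaustive search by
    combining B-smooth values of t^2 - n into an all-even-exponent
    product via the sieving and matrix steps described above."""
    t = isqrt(n) + 1
    while True:
        s = t * t - n
        r = isqrt(s)
        if r * r == s:              # t^2 - n is an obvious square
            return gcd(t - r, n)    # from t^2 = r^2 mod n
        t += 1

print(fermat(84923))                # 84923 = 163 * 521

# Dixon's classic combination on the same n: 513^2 = 2^4*3*5^2*7 and
# 537^2 = 2^6*3*5^2*7 (mod 84923); the product has all-even exponents,
# so it is an obvious square, namely (2^5*3*5^2*7)^2 = 16800^2.
x, y = (513 * 537) % 84923, 16800
print(gcd(x - y, 84923))            # a non-trivial factor again
```

Fermat's search only terminates quickly when the two factors are close together; the smooth-relation bookkeeping is what makes the idea scale.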
On the other hand, the matrix reduction step will have good probability of finding full relations (products of $s$ values which are obvious squares) only if we have found more than $B$ such $B$-numbers; and the final matrix will have size $B^2$, which can become unbearably huge. So there is a trade-off. It has been proven (given some "reasonable" assumptions) that the optimal choice leads to a running complexity of $L_n[1/2,1]$. A variant is known as the Multiple Polynomial Quadratic Sieve, in which $S$ may accept other kinds of integer, beyond the "$t^2-n$" format. See the main paper from Silverman. Factoring with Elliptic Curves The ECM factorization algorithm relies on the following idea: we compute points on an elliptic curve, using values modulo $n$ as if $n$ was prime (which it is not), fervently hoping that things will go sour at some point: we want to hit a value which is not invertible modulo $n$, because then a simple GCD will yield a non-trivial factor of $n$. An Elliptic Curve is the set of points $(X,Y)$, where $X$ and $Y$ are from a field, such that a given cubic equation holds, usually $Y^2 = X^3 + aX + b$ for some constants $a$ and $b$. On such a set of points, we can define a group law; this requires a "neutral point", an extra conventional point called "point at infinity" which does not have $X$ and $Y$ coordinates in the field. The EC group law is easily computable, with formulas which imply a division (pay attention, this is the interesting point for ECM): to add point $(X_1,Y_1)$ to point $(X_2,Y_2)$, we must divide some value with $X_2 - X_1$. When we add two points with the same $X$ coordinate but not the same $Y$, we get the point at infinity (which somehow explains its name: we are "dividing by zero"). Since we can add points together, we can define the multiplication of a point by an integer: to multiply $P$ by $f$, we repeatedly add $P$ to itself ($f-1$ times). This can be done efficiently with a double-and-add algorithm. 
If we work with a finite field, then the curve is a finite group, and this implies that when adding $P$ to itself we necessarily end up with the point at infinity. Actually, any point $P$ has an order, which is an integer $d$ such that $fP$ is the point at infinity whenever $f$ is a multiple of $d$. Integers modulo $n$ are not a field. However, if $n = pq$, then, when we compute with curve points with coordinates modulo $n$, we are actually computing points over two curves simultaneously: the curve in the field of integers modulo $p$, and the curve in the field of integers modulo $q$. This is what the Chinese Remainder Theorem is all about. So the ECM works like this:
• We choose random constants $a$ and $b$ modulo $n$, and a random point on the curve.
• We repeatedly multiply that point with integers $r^x$ for small primes $r$, and exponents $x$, such that $r^x$ is no bigger than a given bound $B$.
• We hope that at some point we will try to do a division modulo $n$ by a value $X_2-X_1$ which will turn out not to be invertible, at which point we will have won (a GCD of $n$ and that value will yield a non-trivial factor of $n$).
The ECM relies on the hope that the point on the curve modulo $p$ (but not modulo $q$) will have a $B$-smooth order; thus, multiplying by all the $r^x$ will reach modulo $p$ (but not modulo $q$) the point at infinity, i.e. a "division by zero". Since we work modulo $n$, that division by zero will show up as a case of "inversion modulo $n$ does not work". The boundary $B$ is configured so that we do not spend too much time on a given curve. If we exhaust all small primes lower than $B$, then we try again with another curve (other random $a$ and $b$ constants). There are variants which can give a boost to the performance of ECM; see this report for details. The running complexity on a "balanced" integer (an RSA modulus $n = pq$ where $p$ and $q$ have the same size) is similar to that of the quadratic sieve.
However, the complexity of ECM primarily depends on the size of the smallest factor of $n$, not the size of $n$ itself, so this is the algorithm of choice for attacking big "unbalanced" integers (e.g. integers which have not been specially generated as an RSA modulus).

Number Field Sieve

The Number Field Sieve is a very complex algorithm which I will not attempt to detail here, if only because I am quite sure that I would not get it right. As an extreme over-simplification, NFS is like QS with polynomials instead of integers everywhere. It relies on quite a lot of number theory. However, it still has the two basic steps:

• a sieving step, which can easily be distributed over many machines (each having a hefty but not implausible amount of RAM);
• a matrix reduction step, which cannot easily be distributed over many machines, and which involves applying a theoretically simple operation (Gauss-Jordan reduction) to a matrix of humongous size.

An extra initialization step is needed to choose the parameters; it is often known as "polynomial selection" and requires quite a lot of thinking and non-negligible CPU work.

The running time of NFS on random integers (known as the General Number Field Sieve) is $L_n[1/3,c]$ for a constant $c = (64/9)^{1/3}$. It is faster than QS and ECM for integers larger than about 350 bits; all factorization records beginning with RSA-130 (430 bits) up to and including the current RSA-768 (768 bits) used GNFS. There is a variant called SNFS (Special Number Field Sieve) which is applicable only to integers of a special form (and thus not applicable to RSA key cracking), but which has a better complexity ($L_n[1/3,c]$ with $c = (32/9)^{1/3}$, which means that SNFS can potentially factor integers about twice as large as GNFS can). SNFS was used to factor a 1024-bit integer which was a divisor of $2^{1039}-1$.
For big integers, the bottleneck in factorization is the matrix reduction step, which requires terabytes of very fast RAM and cannot easily be distributed. Rumor has it that the polynomial selection for a 1024-bit "general" integer factorization has begun, but while the sieving step appears doable (though it will take several years), nobody knows yet how the matrix reduction step will be performed (even accounting for what five or ten years of technological advances will bring us).

Quantum Computers

Shor's algorithm easily factorizes very big integers. Its only drawback is that it works only on a quantum computer, which does not exist (yet).

On Scientific Progress

We may note that QS, ECM and NFS are all algorithms from the 1980s. No new, efficient algorithm has been discovered in the last 20 years. However, many optimizations have been discovered in the meantime, leading to algorithm variants (e.g. MPQS, ECM with "stage 2"...) which are widely faster than their original descriptions, so there has been substantial progress -- but describing this progress requires going into the technical details of each algorithm.

-

For numbers over about 115 (decimal) digits, the best algorithm currently known is the General Number Field Sieve (GNFS -- sometimes just called the Number Field Sieve, though there's also a Special Number Field Sieve for factoring numbers of a special form). The GNFS, unfortunately, is an exceedingly complex algorithm, and I don't know of any online tutorial that gives enough detail to even begin to implement it (most give only a vague summary of a sentence or two). While it doesn't give (even close to) enough detail to implement the algorithm yourself, the Wikipedia entry for the GNFS has links to a few working implementations. Be aware, however, that factoring a 115+ digit number with the GNFS requires a lot of RAM -- more than most people have available -- and it accesses the data randomly enough that virtual memory is pretty useless for it.
For somewhat smaller numbers, the next choice is the Multiple Polynomial Quadratic Sieve (MPQS). This isn't exactly a trivial algorithm either, but the Wikipedia entry (for one example) looks like it's probably sufficient to implement it (and it also has links to some implementations).

-

You may already be aware of this, but have a look at the following article on different integer factorization algorithms. It is a bit outdated; however, it should give you an idea of the different algorithms available and the theory behind them.

Integer Factorization Algorithms by C. Barnes, 2004

Note that if you do a Google search, you should be able to find the PDF of this article. You may also want to have a look at the following link. I think it more directly answers your question about the fastest integer factorization algorithm, as well as providing references to other integer factorization algorithms.

http://mathworld.wolfram.com/PrimeFactorizationAlgorithms.html

-
http://mathoverflow.net/questions/65483/classical-invariant-theory-absolute-rational-invariants-and-gl2-orbits
## Classical invariant theory: absolute rational invariants and $GL(2)$-orbits

I have a question concerning classical invariant theory. Consider binary $n$-forms (i.e. all homogeneous polynomials of degree $n$ in two variables) over the field of complex numbers. Clearly, the group $GL(2,C)$ acts on the space of all such forms by changes of the variables. A classical relative invariant is a polynomial function $I$ in the coefficients of the form such that under the $GL(2,C)$-action the value of $I$ changes only by multiplication by $\det C$ to some power $k$ ($k$ is called the weight of $I$). One can now form rational absolute invariants by taking ratios of relative invariants of equal weights. My question is: $GL(2,C)$-orbits of which forms can be distinguished by such rational absolute invariants? How about forms with non-zero discriminant, for example? I have found some classical results by Clebsch from the 19th century and a result by Olver from 1990, but they do not quite give the result that I want. Also, Geometric Invariant Theory seems to deal only with $SL(2,C)$-actions. For $SL(2,C)$-actions the orbits can be distinguished just by polynomial invariants, but this is a completely different situation. In some cases (e.g. for quintics) I can prove what I need, but I am wondering if there is perhaps a general result. -

## 1 Answer

The space of non-degenerate binary forms is an affine variety, since it is the complement of a hypersurface in an affine space. The reductive group $GL_2(\mathbb{C})$ acts on it with finite stabilizers, so the quotient is again affine, and its elements are distinguished by regular functions, which lift to functions of the form $\frac{f}{\triangle^k}$ on the space of binary forms. Here $f$ is a polynomial, $\triangle$ is the discriminant and $k$ is a non-negative integer.
If, on the other hand, we allow two roots to coincide, the quotient will be projective, and there will be no non-constant regular functions on it at all. - Thank you for that, but does $GL_2({\Bbb C})$ really act with finite stabilizers? For example, for the quadratic form $xy$ all the maps $x\mapsto c x$, $y\mapsto 1/c y$ are in the stabilizer. – Alexander Isaev May 30 2011 at 6:17 When the degree is greater than 2, it does. – algori May 30 2011 at 6:36 The easiest way to see this is this: if we projectivize the space of non-degenerate forms, we get an unordered configuration space of $\mathbb{P}^1$. Now, a subgroup of $PGL_2(\mathbb{C})$ of positive dimension cannot preserve a subset with more than 2 elements. – algori May 30 2011 at 6:44 I have thought about what you said, and I now agree. I am wondering if you let me mention this fact in one of my papers. Can I refer to "personal communications" with you? If you do not mind my doing that, what is your name? Thank you again for your answer. – Alexander Isaev May 30 2011 at 23:00 Dear Alexander -- of course I don't mind if you mention this conversation in a paper. Re my name: I will shortly send you an email. – algori May 30 2011 at 23:41
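To make the weight bookkeeping in the question concrete: for a binary quadratic $ax^2+bxy+cy^2$, the discriminant $b^2-4ac$ is a relative invariant of weight 2. A quick numeric check in plain Python (the substitution formulas below are just the standard change of variables $x \mapsto \alpha x + \beta y$, $y \mapsto \gamma x + \delta y$, expanded by hand):

```python
def transform(a, b, c, al, be, ga, de):
    """Coefficients of a*x^2 + b*x*y + c*y^2 after the substitution
    x -> al*x + be*y, y -> ga*x + de*y."""
    a2 = a*al*al + b*al*ga + c*ga*ga
    b2 = 2*a*al*be + b*(al*de + be*ga) + 2*c*ga*de
    c2 = a*be*be + b*be*de + c*de*de
    return a2, b2, c2

def disc(a, b, c):
    return b*b - 4*a*c

# The discriminant picks up exactly (det C)^2, i.e. it has weight 2:
al, be, ga, de = 5, 1, 2, 3          # det = 5*3 - 1*2 = 13
assert disc(*transform(1, 2, 3, al, be, ga, de)) == 13**2 * disc(1, 2, 3)
```

Ratios of two such weight-$k$ invariants are then honest $GL(2)$-invariant rational functions, which is the "absolute invariant" construction the question asks about.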
http://mathhelpforum.com/number-theory/73269-prime-number-problem.html
# Thread:

1. ## Prime number problem

Let a be a non-zero integer and p prime. Show p| $a^{2}$ => p|a. This is deeply perplexing me, as all I can work out is gcd(p, $a^2$) = p. Because p is prime, p has only 1 and p as its divisors. p divides a^2, so p is a divisor of a^2. p>1, so gcd(p,a^2) is p. If it were equal to 1 this would make my job easier, but I can't work this out. Any help would be appreciated.

2. Originally Posted by slevvio

Let a be a non-zero integer and p prime. Show p| $a^{2}$ => p|a. This is deeply perplexing me, as all I can work out is gcd(p, $a^2$) = p. Because p is prime, p has only 1 and p as its divisors. p divides a^2, so p is a divisor of a^2. p>1, so gcd(p,a^2) is p. If it were equal to 1 this would make my job easier, but I can't work this out. Any help would be appreciated.

Suppose $p|a^2$ and $p$ is prime. If $p\not|a$ then, since $p$ is prime, $\gcd(a,p)=1$. By Gauss's theorem (Euclid's lemma), from $p|a\cdot a$ and $\gcd(a,p)=1$ we deduce that $p|a$, which contradicts the assumption $p\not|a$.
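A brute-force sanity check of both the statement and the key fact used in the proof (plain Python exercising small cases only; of course this illustrates rather than proves):

```python
from math import gcd

def is_prime(p):
    """Naive trial-division primality test, fine for tiny p."""
    return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

for p in (x for x in range(2, 60) if is_prime(x)):
    for a in range(1, 500):
        # The statement: p | a^2 forces p | a.
        if (a * a) % p == 0:
            assert a % p == 0
        # The lemma used above: p prime and p not dividing a gives gcd(p, a) = 1.
        if a % p != 0:
            assert gcd(p, a) == 1
```

Note the second check is exactly where primality matters: for a composite modulus like 4 we have 4 | 2^2 but 4 does not divide 2.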
http://unapologetic.wordpress.com/2012/02/10/polarization-of-electromagnetic-waves/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician

## Polarization of Electromagnetic Waves

Let’s look at another property of our plane wave solutions of Maxwell’s equations. Specifically, we’ll assume that the electric and magnetic fields are each plane waves in the directions $k_E$ and $k_B$, respectively: $\displaystyle\begin{aligned}E(r,t)&=\hat{E}(k_E\cdot r-ct)\\B(r,t)&=\hat{B}(k_B\cdot r-ct)\end{aligned}$ We can take these and plug them into the vacuum version of Maxwell’s equations, and evaluate them at $(r,t)=(0,0)$: $\displaystyle\begin{aligned}k_E\cdot\hat{E}'(0)&=0\\k_E\times\hat{E}'(0)&=c\hat{B}'(0)\\k_B\cdot\hat{B}'(0)&=0\\k_B\times\hat{B}'(0)&=-\frac{1}{c}\hat{E}'(0)\end{aligned}$ The first equation says that $\hat{E}'(0)$ is perpendicular to $k_E$, but the second equation implies, in part, that $\hat{B}'(0)$ is also perpendicular to $k_E$. Similarly, the third and fourth equations say that both $\hat{E}'(0)$ and $\hat{B}'(0)$ are perpendicular to $k_B$, meaning that $k_E$ and $k_B$ either point in the same direction or in opposite directions. We can always pick our coordinates so that $k_E$ points in the direction of the $z$-axis and $\hat{E}'(0)$ points in the direction of the $x$-axis; then $\hat{B}'(0)$ points in the direction of the $y$-axis. It’s then straightforward to check that $k_B=k_E$ rather than $k_B=-k_E$. Of course, it’s possible that $\hat{E}'(0)$ — and thus $\hat{B}'(0)$ also — is zero; in this case we can just pick some different time at which to evaluate the equations. There must be some time for which these values are nonzero, or else $\hat{E}$ and $\hat{B}$ are simply constants, which is a pretty vacuous solution that we’ll just subtract off and ignore. The upshot of this is that $E$ and $B$ must be plane waves traveling in the same direction.
We put this back into our assumption: $\displaystyle\begin{aligned}E(r,t)&=\hat{E}(k\cdot r-ct)\\B(r,t)&=\hat{B}(k\cdot r-ct)\end{aligned}$ and then Maxwell’s equations imply $\displaystyle\begin{aligned}k\cdot\hat{E}'&=0\\k\times\hat{E}'&=c\hat{B}'\\k\cdot\hat{B}'&=0\\k\times\hat{B}'&=-\frac{1}{c}\hat{E}'\end{aligned}$ where these are now full functions and not just evaluations at some conveniently-chosen point. And, incidentally, the second and fourth equations are completely equivalent. Now we can see that $\hat{E}'$ and $\hat{B}'$ are perpendicular at every point. Further, whatever component either vector has in the $k$ direction is constant, and again we will just subtract it off and ignore it. As the wave propagates in the direction of $k$, the electric and magnetic fields move around in the plane perpendicular to $k$. If we pick our $z$-axis in the direction of $k$, we can write $\hat{E}=\hat{E}_x\hat{i}+\hat{E}_y\hat{j}$ and $\hat{B}=\hat{B}_x\hat{i}+\hat{B}_y\hat{j}$. Then the second (and fourth) equation tells us $\displaystyle\hat{E}_x'\hat{j}-\hat{E}_y'\hat{i}=c\hat{B}_x'\hat{i}+c\hat{B}_y'\hat{j}$ That is, we get two decoupled equations: $\displaystyle\begin{aligned}\hat{E}_x'&=c\hat{B}_y'\\\hat{E}_y'&=-c\hat{B}_x'\end{aligned}$ This tells us that we can break up our plane wave solution into two different plane wave solutions. In one, the electric field “waves” in the $x$ direction while the magnetic field waves in the $y$ direction; in the other, the electric field waves in the $y$ direction and the magnetic field waves in the $-x$ direction. This decomposition is the basis of polarized light. 
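The decoupled component relations are easy to check numerically. In the sketch below (plain Python; the sample derivative components are arbitrary, and $c$ is the speed of light in m/s), we take $k$ along the $z$-axis, pick transverse components for $\hat{E}'$, build $\hat{B}'$ from $k\times\hat{E}'=c\hat{B}'$, and verify both scalar equations plus perpendicularity:

```python
c = 299_792_458.0                      # speed of light, m/s

def cross(u, v):
    """Standard 3D cross product."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

k = (0.0, 0.0, 1.0)                    # propagation along the z-axis
E_prime = (2.5, -1.3, 0.0)             # arbitrary transverse E-hat' components

# From k x E' = c B', so B' = (1/c) * (k x E')
B_prime = tuple(comp / c for comp in cross(k, E_prime))

# The decoupled component equations:
assert abs(E_prime[0] - c * B_prime[1]) < 1e-9          # E_x' = c B_y'
assert abs(E_prime[1] + c * B_prime[0]) < 1e-9          # E_y' = -c B_x'
# And E' is perpendicular to B' at every point:
assert abs(sum(a * b for a, b in zip(E_prime, B_prime))) < 1e-12
```

Swapping the roles of the two transverse components reproduces the second polarization state described above.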
We can create filters that only allow waves with the electric field oriented in one direction to pass; generic waves can be decomposed into a component waving in the chosen direction and a component waving in the perpendicular direction, and the latter component gets destroyed as the wave passes through the Polaroid filter — yes, that’s where the company got its name — leaving only the light oriented in the “right” way. As a quick, familiar application, we can make glasses with a film over the left eye that polarizes light vertically, and one over the right eye that polarizes light horizontally. Then if we show a quickly-alternating series of images, each polarized with the opposite axis, then they will be presented to each eye separately. This is the basis of the earliest modern stereoscopic — or “3-D” — glasses, which had the problem that if you tilted your head the effect was first lost, and then reversed as your neck’s angle increased. If you’ve been paying attention, you should be able to see why. ## 4 Comments » 1. [...] other important thing to notice is what this tells us about our plane wave solutions. If we take such an electromagnetic wave propagating in the direction and with the electric field [...] Pingback by | February 17, 2012 | Reply 2. It’s was great fun (for me, anyway) to take two polarized sun glasses and lay the lens of one over the lens of the other and rotate until they turn dark, although I haven’t kept up with the latest in sun glass technology. I’m not sure if any of them only use polarization. It would seem like a bad way to block UV A or B. Comment by Hunt | February 18, 2012 | Reply 3. That’s not just polarization. A quick search turns up the ANSI standards being about the amount of transmittance within certain frequency bands. Polarized sunglasses are mainly to reduce glare and offer little UV protection. Still, the rotation experiment is a neat one for teaching kids. Unfortunately it won’t work with most current 3D glasses. 
Comment by | February 18, 2012 | Reply 4. [...] charge, the magnetic field has units of force per unit charge per unit velocity. Further, from our polarized plane-wave solutions to Maxwell’s equations, we see that for these waves the magnitude of the electric field is [...] Pingback by | February 24, 2012 | Reply
http://math.stackexchange.com/questions/tagged/roots+complex-numbers
# Tagged Questions

- **Root of a quadratic equation that has modulus $1$:** Let us suppose $\alpha \in \mathbb C$, $|\alpha|=1$, and $\alpha$ satisfies a monic quadratic equation. Then prove that $\alpha^{12} =1$. Show me the right way to solve this. Thanks in advance.
- **Solve $\sin(z) = z$ in complex numbers:** Show that $\sin(z) = z$ has infinitely many solutions in complex numbers. Little Picard theorem should help, but using big Picard theorem is undesirable. Thanks a lot!
- **Complex numbers and absolute values:** If I have the equation $P = \left|\psi\right|^2$, where $P$ is a probability and we know there is no negative probability, this means $P$ must belong to $\mathbb{R}$. If I want ...
- **Roots of cubic polynomial lying inside the circle:** Show that all roots of $a+bz+cz^2+z^3=0$ lie inside the circle $|z|=\max\{1,|a|+|b|+|c|\}$. This problem is given in Beardon's Algebra and Geometry, third chapter on complex numbers. What might ...
- **Complex solutions to $a = (z+b)^n$:** I have tried the whole afternoon to figure out how to approach an equation of the form $a = (z+b)^n$, more specifically the equation $1 = (z+1)^4$. Is there a general approach to equations of ...
- **Comparing square roots of negative numbers:** If we have for instance $\sqrt{-25}$, that is, a square root of $-25$, I know the answer can be $5i$ (is $-5i$ also correct?). My main question here is how to ...
- **Is the Fujiwara bound the most precise bound on maximum absolute value of complex roots of real polynomials?** Or does there exist some improved version for this special case of real polynomials?
- **Geometric interpretation of quadratic equation with complex coefficients:** When an equation has real coefficients and non-negative discriminant, the geometric meaning of its roots is the intersection of the function with the x-axis. I know how to get roots of quadratic ...
- **Find all roots of $(x + 1)(x + 2)(x + 3)^2(x + 4)(x + 5) = 360$:** The question is to find all complex roots of $$(x + 1)(x + 2)(x + 3)^2(x + 4)(x + 5) = 360$$ and it is meant to be solved by hand. Is there any quick way to solve this using some trick that I'm not ...
- **Can a cubic that crosses the x-axis at three points have imaginary roots?** I have a cubic polynomial, $x^3-12x+2$, and when I try to find its roots by hand, I get two complex roots and one real one. Same if I use Mathematica. But when I plot the graph, it crosses the ...
- **For $\sqrt[3]{-1+i}$, is $r$ (when put in polar form) $\sqrt[6]{2}$?** And when you put that into the $n$th root form, it becomes $2^{1/18}\cos\theta + 2^{1/18}\sin\theta$? The $n$th root form given is: $\sqrt[n]r\cdot\cos(\theta+2\pi k)n$
- **Multiple root in a polynomial:** I'm doing some old multiple-choice tests. It seems I'm pretty stuck on the topic of complex numbers; could someone elaborate how to show that 1 is a root of multiplicity 2 of $p(x)=x^3-x^2-x+1$?
- **How to find the roots of $x^3-2$:** I'm trying to find the roots of $x^3 -2$. I know that one of the roots is $\sqrt[3] 2$ and another is $\sqrt[3] {2}e^{\frac{2\pi}{3}i}$, but I don't know why. The first one is easy to find, but the other two roots? ...
- **Complex n-th root question:** Let $m$ and $n\neq0$ be any two integers. Show that $z^{m/n}=\left(z^{1/n}\right)^m$ has $n/(n,m)$ distinct values, where $(n,m)$ is the greatest common divisor of $n$ and $m$. Prove that the sets of ...
- **Understanding a theorem of Marden's on the moduli of zeros of polynomials:** My question concerns Theorem 3.2 in this paper of Marden's. The gist of the theorem: every polynomial of the form $f(z) = \sum_{j=0}^{n} (b_j - \ldots$
- **Number of Complex Roots of a Complex Polynomial:** This is related to the question I asked regarding finding the complex roots of $z^3+\bar{z}=0$. It turned out that there were 5 complex roots, but because the equation was of degree 3 I was only ...
- **Complex roots of $z^3 + \bar{z} = 0$:** I'm trying to find the complex roots of $z^3 + \bar{z} = 0$ using De Moivre. Some suggested multiplying both sides by $z$ first, but that seems wrong to me as it would add a root (and I wouldn't know ...
- **Show that $z^6 + 5z^4 - z^3 + 3z$ has at least two real roots, given that all roots are distinct:** Also, show that $|3z - z^3 + 5z^4| < |z^6|$ when $|z| > 3$. I can see that 0 is a real ...
- **Where to find information on shadow functions?** I happen to give some private lessons to an IB (International Baccalaureate) student. He asked me for help with writing some kind of a project on a set topic, given some materials (containing the ...
- **Complex Polynomial transformation:** I'm studying for an exam, and the professor gave us the task of creating a little program that automatically does a transformation for a polynomial with complex coefficients. I don't have many problems doing the ...
- **How can I find the roots of a quadratic function?** Basically we are trying to find the roots of a quadratic equation, and 'apparently' there is a theorem for this, but every one that I have found so far mentions that the degree of the polynomial is ...
- **Bound the complex roots of a polynomial above:** We consider $P(z)=a_{0}+a_{1}z+\cdots+a_{n-1}z^{n-1}+a_{n}z^n$, with $a_{0},\ldots,a_{n-1},a_{n} \in \mathbb{C}$ and $a_{n}\neq0$. Let $R=\max_{0\leq k\leq n-1}\left | \frac{a_k}{a_n} \right |$ and ...
- **Why are primitive roots of unity the only solution to these equations?** I was led by this question to the following problem: find $n$ complex numbers $\lambda_1,\dots,\lambda_n\in\mathbb{C}$ that satisfy $\sum_i\lambda_i = 0$, $\sum_i\lambda_i^2 = \ldots$
- **Using the fifth roots of unity to find the roots of $(z+1)^5=(z-1)^5$:** The question I am working on starts off with: find the five fifth roots of unity and hence solve the following problems. I have done that and solved several questions using this; however ...
- **Complex Logs and Roots of Unity:** I need to find all the solutions to $(e^z-1)^3=1$, where $z$ is a complex number, using logarithms. I am told that using roots of unity I can break this equation down, but I must be missing ...
- **Visualization of complex roots for quadratics:** I read that if a parabola has no real roots, then its complex roots can be visualized by graphing the same parabola ($ax^2 + bx + c$) with $-a$ and then finding the roots of that, then using those ...
- **Fastest way to find (natural) roots of a value on the unit circle:** (Edit: I'd like to do this to $d$ digits of precision.) I wonder what the fastest way to get roots of a value on the unit circle is. More specifically, if I have a fraction of naturals, $p/q$, and ...
http://www.physicsforums.com/showthread.php?s=7e97be60796f377da06595941388d013&p=4183078
Physics Forums ## Finding muzzle velocity knowing the angle, height shot from & horizontal displacement I need help with my hw, bad. I've been screwing off in my physics class for awhile and I can't get afterschool help because I have sports. I need to pick my grade up in this class to keep playing...so here I am. I need a good walkthrough badly, i don't know a whole lot with physics, but a decent amount and I'm begginggg someone to help guide me through this at least a little bit... I need to figure out the initial velocity (muzzle velocity) of a ball after it got shot knowing the angle, height, and distance traveled. He gave me these two formulas: "Δx=Vxt" and "Δy=Vyit+1/2ayt^2". The projectile went: 0.98m Launched horizontally: (0=0°). Why "0=" and not just 0°? Guess the 0 refers to the x axis? Launched from the height: 0.79m PLEASE HELP getting me started with this. It'll be muchhh appreciated...I have quite a bit to catch up on and this is the assignment I'm starting with. Recognitions: Science Advisor Could it have been "θ=0°"? That would make more sense. θ is a common symbol for angle measure. Not much to do with the problem, though. So anyways... Lets see if these hints help you. 1) Δx and Δy in these formulas refer to distances traveled between time 0 and t. So what are the Δx and Δy for this problem? 2) The projectile is launched horizontally. That tells you something about initial velocity. What variable can you replace or get rid of in your formulas based on this information? Once you have these things, just plug everything you know into formulas and see if any of them can be solved for any of the unknowns. By the way, normally, on this forum, you are expected to show as much of the work as you've managed on the problem before asking for help.
Since you are saying that you are completely stuck, the above will hopefully act as a guided way to do the same. yeah it was theta not 0, I looked at it wrong my bad. But yeah you did help a bit. 1) Δx would be 0.98m & Δy would be 0.79m? 2) Being launched horizontally, the initial velocity would be 0 since it's at rest before being shot. I'm really confused though so I could be wrong. Now how do I figure out the acceleration (ay)...? Would you mind doing a little math for me on here so I can see what you end up doing and figure it out that way...? Recognitions: Science Advisor ## Finding muzzle velocity knowing the angle, height shot from & horizontal displacement If there are no other forces acting on the projectile, vertical acceleration is just acceleration due to gravity, which is about -9.8m/s². So long as the problem is set up on Earth, this will always be the same. You should try to keep your signs consistent. Since the projectile started at a height of 0.79m and ended up at a height of 0m, Δy = -0.79m. (Final minus initial.) I understand that much, now how do i go about solving this for t? Would you mind doing it for me and showing the work on here...? I'd loveee you. I'm not just trying to just get it over with, im gunna review your work and figure it out. That's if you do it, no pressuree. So, here's what i need worked out: "-0.79m=(0)t+1/2(-9.8m/s^2)t^2". If you do it, thanks a ton man... Bump... Recognitions: Science Advisor Quote by Jvells So, here's what i need worked out: "-0.79m=(0)t+1/2(-9.8m/s^2)t^2". If you do it, thanks a ton man... You can perform the same operation on both sides of an equal sign, and the equality remains valid. First of all, you can just get rid of 0*t, because that's just 0, and 0 + something is just that something. Next, you can multiply by (-1) on both sides, which will get rid of the minus sign. Finally, you want to take a square root of both sides. Keep in mind that $\sqrt{Number*t^2} = \sqrt{Number}*t$.
That should leave you with a simple linear equation which, I hope, you know how to solve for t. By the way, for future, you might want to learn how to solve quadratic equations. Only so much of physics is physics. The rest is algebra, and quadratic equations will come back. So will simple systems of linear equations. T=0.40 seconds Rounded to two sig digs. Solved the other equation for a final muzzle velocity of 2.5 m/s. Looks right to me, what do you say mann? Absolutely love you for helping, i get it. Bump...sorry Recognitions: Science Advisor Yeah, these look good.
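The arithmetic worked out in the thread can be reproduced in a few lines (plain Python; $g = 9.8\ \mathrm{m/s^2}$, drop height $0.79$ m, and range $0.98$ m as given in the problem). Note the thread's 2.5 m/s comes from rounding $t$ to 0.40 s first; keeping full precision gives about 2.44 m/s:

```python
import math

g = 9.8      # m/s^2, magnitude of gravitational acceleration
dy = 0.79    # m, drop height (launched horizontally, so initial v_y = 0)
dx = 0.98    # m, horizontal distance traveled

# dy = (1/2) g t^2  =>  t = sqrt(2*dy / g)
t = math.sqrt(2 * dy / g)

# dx = v_x * t  =>  v_x = dx / t  (this is the muzzle velocity, since v_y,i = 0)
v = dx / t

print(f"t = {t:.2f} s, v = {v:.2f} m/s")   # about 0.40 s and 2.4 m/s
```

The same two-step pattern (fall time from the vertical equation, then speed from the horizontal one) works for any horizontally launched projectile.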
http://gambasdoc.org/help/comp/gb.opengl.glu/glu/lookat?it&v3
Glu.LookAt (gb.opengl.glu)

`Static Sub LookAt ( EyeX As Float, EyeY As Float, EyeZ As Float, CenterX As Float, CenterY As Float, CenterZ As Float, UpX As Float, UpY As Float, UpZ As Float )`

Define a viewing transformation.

### Parameters

eyeX, eyeY, eyeZ — Specifies the position of the eye point.

centerX, centerY, centerZ — Specifies the position of the reference point.

upX, upY, upZ — Specifies the direction of the up vector.

### Description

Glu.LookAt creates a viewing matrix derived from an eye point, a reference point indicating the center of the scene, and an UP vector. The matrix maps the reference point to the negative z axis and the eye point to the origin. When a typical projection matrix is used, the center of the scene therefore maps to the center of the viewport. Similarly, the direction described by the UP vector projected onto the viewing plane is mapped to the positive y axis so that it points upward in the viewport. The UP vector must not be parallel to the line of sight from the eye point to the reference point.

Let

$$F = \begin{pmatrix} \mathit{centerX}-\mathit{eyeX} \\ \mathit{centerY}-\mathit{eyeY} \\ \mathit{centerZ}-\mathit{eyeZ} \end{pmatrix}$$

and let $UP$ be the vector $(\mathit{upX}, \mathit{upY}, \mathit{upZ})$. Normalize both:

$$f = \frac{F}{\|F\|}, \qquad UP' = \frac{UP}{\|UP\|}$$

Finally, let $s = f \times UP'$ and $u = s \times f$.
M is then constructed as follows:

$$M = \begin{pmatrix} s[0] & s[1] & s[2] & 0 \\ u[0] & u[1] & u[2] & 0 \\ -f[0] & -f[1] & -f[2] & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

and Glu.LookAt is equivalent to

```
glMultMatrixf(M);
glTranslated(-eyex, -eyey, -eyez);
```

### See also

Glu.Perspective, Gl.Frustum

See original documentation on the OpenGL website.
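The construction above is easy to mirror in code. Here is a Python sketch (not Gambas, and not the GLU source; the helper names are mine) of the rotation block of the LookAt matrix:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at_rotation(eye, center, up):
    # f: unit vector from the eye toward the reference point.
    f = normalize([c - e for c, e in zip(center, eye)])
    # s = f x UP', u = s x f. The formula above leaves s un-normalized;
    # normalizing it here guards against an UP vector not perpendicular to f.
    s = normalize(cross(f, normalize(up)))
    u = cross(s, f)
    # Rows of the 3x3 rotation block of M: (s, u, -f).
    return [s, u, [-c for c in f]]

# Eye on the +z axis looking at the origin, with y up: the rotation
# block should come out as the identity.
M = look_at_rotation(eye=[0.0, 0.0, 5.0], center=[0.0, 0.0, 0.0], up=[0.0, 1.0, 0.0])
```

The full 4×4 M pads this block with a fourth row and column (0, 0, 0, 1), and Glu.LookAt then applies the −eye translation, matching the glTranslated call above.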
http://mathhelpforum.com/calculus/102475-help-finding-volumes-rotated-curves.html
# Thread:

1. ## Help with finding volumes of rotated curves

Hey, I was recently introduced to the disc, cylindrical shell, and washer methods, and I don't understand them fully, mostly because it's difficult for me to imagine the shapes created when rotating curves and I don't know which method to use. Here is the problem:

Consider the solid obtained by rotating the region bounded by the given curves about the x-axis. Find the volume V of this solid.

I decided to use the disc method and set up my integral like so: $\pi\int(16-4x^2)^2dx$ with -2 and 2 for the limits of integration... solving this yields 64$\pi$ but I can't get the correct volume.

2. 1) I always try to do it both ways, just to emphasize the concepts in my mind.
2) That is not $64\pi$. Please demonstrate your work.
3) You have it set up exactly correctly. Try the integral again.
4) And the other way: $2\pi \int_{0}^{16}y \left[ 2\sqrt{\frac{16-y}{4}}\right] \;dy$. See if you get the same answer.
5) Please learn to observe and exploit symmetries. The limits of your integral could have been [0,2] with the result multiplied by 2.

3. Thank you for the helpful reply. I see what I did, or rather, didn't do.
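For anyone who wants to check both setups numerically: reading the integrals back, the region appears to be bounded by y = 16 − 4x² and y = 0. A quick sketch with a generic composite Simpson's rule (my helper, not from the thread) shows the two methods agree at 8192π/15 ≈ 1715.7, not 64π:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Disc method: V = pi * integral_{-2}^{2} (16 - 4x^2)^2 dx
disc = math.pi * simpson(lambda x: (16 - 4 * x**2) ** 2, -2, 2)

# Shell method: V = 2*pi * integral_0^{16} y * 2*sqrt((16 - y)/4) dy
shell = 2 * math.pi * simpson(lambda y: y * 2 * math.sqrt((16 - y) / 4), 0, 16)

exact = 8192 * math.pi / 15  # about 1715.73
```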
http://mathoverflow.net/questions/21683/number-of-integral-solutions-to-multi-variable-polynomials
## Number of integral solutions to multi-variable polynomials

This question follows the article discussed here

## Problem

Suppose we're trying to bound the number of integral solutions to a system of multi-variable polynomials, say $$\sum_{i=1}^n x_i^t = \sum_{i=1}^n y_i^t,$$ where each $x_i,y_i \in \mathbb N$ and for each $t < c$ for some constant $c$. If we do not put any constraints on the solutions, there are infinitely many possible solutions even when $n=c=1$. So if we put some constraints on {$x_i,y_i$}, like $x_i,y_i \in$ {$0,1,\ldots,n$}, then how many possible solutions can we get? Naively there are $O(n^n)$ choices, but it seems highly unlikely that there are many solutions to the system of equations. Is there any existing bound on the number of solutions, say $O(n^k)$ for fixed $k$, or even better bounds? Are there some well-known approaches to bounding the number of solutions of an equation?

## Motivation

This question arose when I was trying to come up with some reasonable constraints for the equation in the Prouhet-Tarry-Escott Problem. It seems that if we restrict the maximum value of the variables, there aren't many solutions to the equation. I tried to add more constraints to get rid of the already few solutions, but it seems that there is no direct way to make the solution set empty, that is, to leave no possible solutions under such constraints. So I turned to look for existing bounds for the equation, but sadly found nothing. Can it still be hard to find such results, or are there theorems like the Fundamental Theorem of Algebra concerning the number of solutions to a multi-variable equation? Any information is useful. Thank you all!

## Edited

According to Felipe Voloch (Thanks!), the general approach to the question is the Hardy-Littlewood method, which considers the number of solutions to an equal-power Diophantine equation.
But it seems that the method gives a lower bound on the number of solutions (is this correct?), rather than an upper bound. Or are there ways to get upper bounds by the same method?

One more question: How about further restricting the solutions to be prime numbers? Does this make any difference?

- In its most general form, this problem is undecidable; en.wikipedia.org/wiki/… . Any substantive answer to a question of this type therefore has to be very sensitive to the type of equations considered. – Qiaochu Yuan Apr 17 2010 at 18:58
- @Qiaochu: He has a very specific equation in mind, sums of equal powers. I don't think undecidability is relevant here. – Felipe Voloch Apr 17 2010 at 21:26
- The question is stated in quite some generality. I think information about what level of generality is appropriate is relevant. – Qiaochu Yuan Apr 17 2010 at 21:44
- Thank you for all your comments! If I cannot provide any bounds on n (since the number of terms is generated by a combinatorial problem, which cannot guarantee the size of n), but I can restrict the solutions to lie in the primes < n log n (since there are about n prime numbers in this range), and the solutions must satisfy the equation above SIMULTANEOUSLY for every t<c for a constant c, say, c=100, can we do better than O(n^n)? – Hsien-Chih Chang Apr 18 2010 at 8:52

## 1 Answer

People studying Waring's problem via the Hardy-Littlewood method often consider this kind of problem. You could start by looking at Vaughan's book, "The Hardy-Littlewood method".

- The original problem itself is a part of Waring's problem: to estimate the number of solutions to $\sum_{i=1}^nx_i^t=N$ as $N\to\infty$. So, any monograph with emphasis on Waring's problem could be a good source. – Wadim Zudilin Apr 17 2010 at 23:48
- Thank you for this wonderful method!! I'm trying to read it, but since I'm not familiar with this topic it may take me a while to understand the context. But still, thanks very much for the reference!
@Wadim: Since I have to estimate the number of solutions when N is small (say O(n^t) for some constant t), does this method still work? – Hsien-Chih Chang Apr 18 2010 at 8:56

- I found out that Chapter 6 of the book you provided solves the problem for a single equation with fixed exponent t, and the work by Davenport can be extended to a system of equations. Thank you very much!!! – Hsien-Chih Chang Apr 20 2010 at 15:51
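For small parameters, the sparsity of simultaneous solutions is easy to see by brute force. A sketch (all names are mine) that buckets size-n multisets from {0,…,m} by their first c−1 power sums; any bucket with two or more members gives a nontrivial solution of the system above:

```python
from itertools import combinations_with_replacement
from collections import defaultdict

def power_signature(xs, c):
    # (sum x_i^1, sum x_i^2, ..., sum x_i^(c-1))
    return tuple(sum(x ** t for x in xs) for t in range(1, c))

def pte_classes(n, max_val, c):
    # Group all size-n multisets from {0, ..., max_val} by power-sum signature.
    buckets = defaultdict(list)
    for xs in combinations_with_replacement(range(max_val + 1), n):
        buckets[power_signature(xs, c)].append(xs)
    # Signatures hit by more than one multiset give nontrivial solutions.
    return {sig: group for sig, group in buckets.items() if len(group) > 1}

sols = pte_classes(n=3, max_val=8, c=3)
```

For n = 3, values up to 8, and t ∈ {1, 2}, the classic Prouhet-Tarry-Escott pair {1, 5, 6} and {2, 3, 7} shows up: both have sum 12 and sum of squares 62.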
http://mathhelpforum.com/math-topics/108754-resolving-vector-components-2.html
Thread:

1. Thanks, that works fine when the questions are simple, but when they have three or more forces acting upon an object I get confused as to how to draw the components and which one is horizontal and which one is vertical. For example, how would you solve the following question?

Q) The diagram shows a horizontal force of magnitude 30 N acting on a block of mass 2 kg, which is at rest on a plane inclined at 50 degrees to the horizontal. Find the magnitude and direction of the frictional force on the block.

Attached Thumbnails

2. Originally Posted by unstopabl3
Thanks, that works fine when the questions are simple, but when they have three or more forces acting upon an object I get confused as to how to draw the components and which one is horizontal and which one is vertical. For example, how would you solve the following question?

Q) The diagram shows a horizontal force of magnitude 30 N acting on a block of mass 2 kg, which is at rest on a plane inclined at 50 degrees to the horizontal. Find the magnitude and direction of the frictional force on the block.

It's the same. There are a few components in this question: the weight component, mg sin theta, acting down the slope; the applied force component acting up the slope; and the frictional force acting down the slope.

$F \cos \theta = F_r + mg\sin \theta$

$30\cos 50 = F_r+(2)(9.81)\sin 50$

$F_r=4.254$ N

Attached Thumbnails

3. Could you please explain step by step how you solved each component? I still don't get it :/ Also, why is the friction force acting downwards? Shouldn't it be upwards, considering it's a slope and the block has a tendency to fall downwards, and thus the friction acts in the opposite direction? Please bear with me, this component thing along with friction and weight components is just not getting through to me. Please explain to me with diagrams how you find each component for such questions. Much appreciated!

4. Originally Posted by unstopabl3
Could you please explain step by step how you solved each component?
I still don't get it :/ Also, why is the friction force acting downwards? Shouldn't it be upwards, considering it's a slope and the block has a tendency to fall downwards, and thus the friction acts in the opposite direction? Please bear with me, this component thing along with friction and weight components is just not getting through to me. Please explain to me with diagrams how you find each component for such questions. Much appreciated!

Once again, take a look at the diagram I attached. How would you relate Fx to F? The force exerted on the mass is acting up the slope, so the frictional force acts down the slope. A simple example would be using sandpaper: when you rub it forward, the resistance is backwards, and vice versa. For the weight component, you will need to resolve it as well. W always acts downwards (exactly downwards) regardless of how the mass is placed.

Attached Thumbnails

5. I'll give it a try and let you know.
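The arithmetic in post 2 can be checked directly (a sketch, using g = 9.81 m/s² as in the thread):

```python
import math

# Forces resolved along the incline (positive = up the slope):
#   applied force component up the slope:  F * cos(theta)
#   weight component down the slope:       m * g * sin(theta)
#   friction F_r balances the difference (the block is at rest)
F, m, g, theta = 30.0, 2.0, 9.81, math.radians(50)

F_r = F * math.cos(theta) - m * g * math.sin(theta)
# F_r comes out positive (about 4.25 N), meaning friction acts down the
# slope, opposing the net up-slope push — as in the thread.
```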
http://mathoverflow.net/questions/90130?sort=oldest
## Groups quasi-isometric to reducible nonuniform lattices

It is known that a finitely generated group $G$ is quasi-isometric to a nonuniform irreducible lattice $\Lambda$ in a semisimple Lie group if and only if $G$ and $\Lambda$ are commensurable (see references in this survey of Farb).

Question. What is known about groups quasi-isometric to reducible nonuniform lattices in semisimple Lie groups?

As usual in this business "semisimple" means "noncompact, connected, semisimple, with finite center".

- If you replace "semisimple Lie group" by "automorphism group of a product of trees", very exotic phenomena may occur; e.g. $F_2\times F_2$ is a cocompact lattice, and so are the Burger-Mozes groups (which are simple). Clearly they are not commensurable. (BTW, this example also shows that linearity is not a q.-i. invariant). – Alain Valette Mar 3 2012 at 20:11
- Alain makes a very good point: You cannot allow more than one factor in your Lie group which is locally isomorphic to $SL(2, {\mathbb R})$. Otherwise, your lattice $\Gamma$ will have two factors commensurable to $F_2$ and one can hardly make any conclusions. However, once you exclude (more than two) $SL(2, {\mathbb R})$ factors and higher rank factors, the Burger-Mozes phenomenon does not occur and you get QI rigidity for $\Gamma$. – Misha Mar 3 2012 at 21:26
- Thank you, Alain and Misha. From what you say it seems even in the presence of two $SL(2,\mathbb R)$ factors one can still draw some conclusions; after all, not all groups are lattices in a product of trees. – Igor Belegradek Mar 4 2012 at 5:31

## 2 Answers

Here is a partial answer: Suppose $\Gamma = \Gamma_1 \times \dots \times \Gamma_n$ and all the $\Gamma_i$ are irreducible lattices in $G_i$, where each $G_i$ has real rank at least two.
It has been a long time, and I do not remember all the details, but I think it may be true that any quasi-isometry from a product of such lattices $\Gamma_1 \times \dots \times \Gamma_n$ to itself preserves the factors (up to permutation). I am looking at Lemma 10.3 of my paper in JAMS from 1998 http://www.math.uchicago.edu/~eskin/sl3z.ps. It is stated for irreducible lattices, but that does not seem to be used in the proof. Of course I could be missing something. If self quasi-isometries are indeed factor preserving, then one has the same classification as for irreducible lattices.

One more comment: the reason my proof fails when you have a factor $\Gamma_i$ in a real rank one group $G_i$ is that I quote Lubotzky-Mozes-Raghunathan, which does not work in that case.

- I do not see how Lemma 10.3 implies that the factors are preserved. Proposition 10.1 does imply that, but it uses irreducibility. – Igor Belegradek Mar 4 2012 at 17:21
- The way I read Lemma 10.3 is as follows: Suppose $G = G_1 \times G_2$. If two points $x$ and $y$ in the thick part of $G/\Gamma$ have the same projection to $G_1$, then (up to a bounded error) their images have the same projection to $G_1$. Is this enough for factor preserving? – Alex Eskin Mar 5 2012 at 0:01
- I looked over most of the paper. It seems that Lemma 10.3 is not needed; in fact the proof of the main theorem 0.2 carries over to the case where all G_i have higher rank with virtually no modifications. (The stuff in section 10 is not used in the proof of theorem 0.2). I am not sure what happens if you have a product with both rank one and higher rank factors. That case seems open, but I think it should be doable. (You should also ask Kevin Wortman). – Alex Eskin Mar 5 2012 at 13:55
- Many thanks! I just wanted to know the answer, as the issue came up in the paper I am writing, but I am not planning to work on this further. Sounds like a good project for a student.
– Igor Belegradek Mar 5 2012 at 16:22

Igor, I think it is still (mostly) unknown. Suppose that $\Gamma$ is a product of non-uniform irreducible lattices $\Gamma_i$. If all factors $\Gamma_i$ are lattices in rank 1 Lie groups then quasi-isometries preserve the product structure according to our paper

[1] Kapovich, Kleiner, Leeb, Quasi-isometries and the de Rham decomposition, Topology 37 (1998), no. 6, 1193–1211.

The reason is that in this case each $\Gamma_i$ contains quasi-geodesics with exponential divergence, so it is of Type I in the sense of [1]. Once you know this, you are in business because the factors $\Gamma_i$ are QI rigid. However, if you allow factors which are non-uniform lattices of rank $\ge 2$, then, conjecturally, they have linear divergence. Special cases of this conjecture are proven in

[2] Drutu, Mozes, Sapir, Divergence in lattices in semisimple Lie groups and graphs of groups. Trans. Amer. Math. Soc. 362 (2010), no. 5, 2451–2505.

Thus, such non-uniform lattices (at least conjecturally) are of neither type I nor II (in the sense of [1]), so [1] does not apply and, at this point (I think) no other technique is available to handle quasi-isometries of products. However, you should check with Kevin Wortman, since in his work on S-arithmetic lattices and lattices in algebraic groups over functional fields he had to handle similar issues. Thus, there is a chance that QI rigidity for reducible lattices is implicit in his work. Another possible approach would be to generalize [1] using the fact that "higher-dimensional" exponential divergence is now known for non-uniform lattices of higher rank.

- Thank you, this is very helpful. – Igor Belegradek Mar 4 2012 at 5:24
http://math.stackexchange.com/questions/61527/is-it-true-that-forall-g-leq-aut-mathbbp1-pgl-2-mathbbc-the-map-ma?answertab=votes
# Is it true that $\forall G\leq Aut(\mathbb{P}^1)=PGL_2(\mathbb{C})$ the map $\mathbb{P}^1\rightarrow \mathbb{P}^1/G$ is defined over $\mathbb{Q}$? Is it true that for every finite $G\leq Aut(\mathbb{P}^1_{\mathbb{C}})=PGL_2(\mathbb{C})$ the morphism $\mathbb{P}^1_{\mathbb{C}}\rightarrow \mathbb{P}^1_{\mathbb{C}}/G$ descends as a morphism (not nec. with the group actions) to $\mathbb{Q}$? My intuition is going haywire here. For a while I think it's true, and then I think it's not. Do you have a decisive answer? - Doesn't this follow from the explicit form of the invariant subalgebras of those groups (as given, among many other places, in Klein's book on the icosahedron or Dolgushev's notes on the McKay correspondence)? – Mariano Suárez-Alvarez♦ Sep 3 '11 at 3:41 I'm not aware of this literature, and I only vaguely remember hearing about the McKay correspondence. Are you saying it is true? – Nicole Sep 3 '11 at 3:43
http://sumidiot.wordpress.com/2009/11/06/linear-fractional-transformations/
∑idiot's Blog — The math fork of sumidiot.blogspot.com

Linear Fractional Transformations

Linear fractional transformations, a.k.a. Möbius transformations, are a type of function. I'll talk about them as functions from the complex plane to itself. Such functions are given by a formula $\dfrac{az+b}{cz+d}$ where $a,b,c,d$ are complex values. If $c$ and $d$ are both 0, this isn't much of a function, so we'll assume at least one isn't 0. I'd like to talk about what these functions do, how to have some hope of picturing them as transformations $\mathbb{C}\to\mathbb{C}$. To do this, let's consider some easy cases first.

If $c=0$ (and so by assumption $d\neq 0$), then we may write the function $\frac{a}{d}z+\frac{b}{d}$, or simply $a'z+b'$ for some complex values $a',b'$. This is now a linear (some might say affine) transformation of the complex plane. Think about it as the composite $z\mapsto a'z\mapsto a'z+b'$, where the first map multiplies by $a'$, and the second adds $b'$. Multiplying by a complex value $a'$ is the same as scaling by the real value $|a'|$ (the "norm" of $a'$, distance from $a'$ to the origin) and then rotating by the "argument" of $a'$. If you think about $a'$ as a point $(r,\theta)$ in polar coordinates, then the argument of $a'$ is $\theta$, and so multiplication by $a'$ is multiplication by the real value $r$ (which is just a stretching (or shrinking) of the complex plane away from (toward) the origin if $r>1$ (resp. $0\leq r<1$)) and then rotation by the angle $\theta$. The second transformation in the composite, "add $b'$", just shifts the entire plane (as a "rigid transformation") in the direction of $b'$. So the case when $c=0$ is just a linear transformation, which isn't too difficult to picture.

Another important case is $1/z$, so the coefficients are $a=0,b=1,c=1,d=0$. To talk about what this does, let's first talk about "inversion" with respect to a fixed circle. Let $C$ be a circle with radius $r$, in the plane, and $z$ any point in the plane.
Let $O$ denote the center of the circle and $d$ the distance from $O$ to $z$. The inversion of $z$, with respect to $C$, is the point on the line through $O$ and $z$ (in the direction of $z$ from $O$) whose distance from $O$ is $d'=r^2/d$. This means that points near the center of $C$ are sent far away, and vice versa. Points on $C$ are unchanged. Technically I guess we should say that this function isn't defined at $O$, but people like to say it is taken to "the point at infinity" and, conversely, that inversion takes $\infty$ to $O$. These things can be made precise. You might notice that doing inversion twice in a row gets you right back where you started. It also turns out that if $C'$ is another circle in the plane, not passing through $O$, then the inversion of all of its points is another circle. If $C'$ passes through $O$, then the inversion works out to be a line. Since doing inversion twice is the identity, inversion takes lines to circles through $O$. If you're thinking about the comments about $\infty$ above, this makes sense because every line "goes to $\infty$", and so the inversion of a line will go through the inversion of $\infty$, which I said should be $O$.

All of this talk about inversion was to describe the function $1/z$. This function is the composite of inversion with respect to the unit circle centered at the origin followed by a reflection across the horizontal axis (real line). Don't believe me? The equation $d'=r^2/d$ defining the relationship between distances when doing the inversion can be re-written as $dd'=r^2$. If we're doing inversion with respect to a unit circle, then $dd'=1$. This means that when we multiply $z$ with the result of this inversion-plus-reflection, call it $z'$, the result will be a point with norm 1 (i.e., a point on the unit circle), since norms multiply and the reflection doesn't change the norm.
Next up, multiplying $z$ by $z'$ produces a point whose angle from the positive real axis (which I called the argument before, the $\theta$ from polar coordinates) is the sum of the angles for $z$ and $z'$. Since we did the reflection across the horizontal axis, the argument for $z'$ is precisely the negative of the argument for $z$, meaning their sum (the argument of their product) is 0. So $zz'$ is a point on the unit circle making an angle of 0 with the positive real line, i.e., $zz'=1$. That makes $z'=1/z$, as promised.

Let's get back to the general setup, with the function $\dfrac{az+b}{cz+d}$, and let's assume $c\neq 0$ (since we already handled the case $c=0$; it's just a linear transformation). For some notational convenience, let me let $\alpha=-(ad-bc)/c^2$. Consider the following composite:

$\begin{array}{rcl} z & \xrightarrow{w\mapsto w+\frac{d}{c}} & z+\dfrac{d}{c} \\ {} & \xrightarrow{w\mapsto \frac{1}{w}} & \dfrac{c}{cz+d} \\ {} & \xrightarrow{w\mapsto \alpha w+\frac{a}{c}} & \dfrac{\alpha c}{cz+d}+\dfrac{a}{c} \end{array}$

If you check all of these steps, and then play around simplifying the final expression, then you obtain the original formula above. So we can think of any linear fractional transformation as a composite of some linear functions and an inversion, and we know how to picture all of those steps.

That's maybe enough for today. It's certainly enough for me for today. Before I go, I'll leave you with a video that might be helpful, and is pretty either way.

Tags: linear fractional, mablowrimo, mobius transformation

This entry was posted on November 6, 2009 at 8:14 pm and is filed under Play.

One Response to "Linear Fractional Transformations"

1. LFTs and Ford Circles « ∑idiot's Blog Says: November 7, 2009 at 8:16 pm [...]
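The three-step decomposition is easy to verify numerically. A quick Python sketch (function names are mine) comparing the direct formula with the composite, for $c \neq 0$:

```python
import random

def lft(a, b, c, d, z):
    # Direct evaluation of (az + b) / (cz + d).
    return (a * z + b) / (c * z + d)

def lft_composite(a, b, c, d, z):
    # The three steps from the post, with alpha = -(ad - bc) / c^2.
    alpha = -(a * d - b * c) / c**2
    w = z + d / c             # translate by d/c
    w = 1 / w                 # inversion in the unit circle + reflection
    return alpha * w + a / c  # scale/rotate by alpha, then translate by a/c

random.seed(1)
for _ in range(100):
    a, b, c, d, z = (complex(random.uniform(-5, 5), random.uniform(-5, 5))
                     for _ in range(5))
    assert abs(lft(a, b, c, d, z) - lft_composite(a, b, c, d, z)) < 1e-6
```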
http://agtb.wordpress.com/2012/02/17/john-nashs-letter-to-the-nsa/
# Turing's Invisible Hand

## John Nash's Letter to the NSA

February 17, 2012 by Noam Nisan

The National Security Agency (NSA) has recently declassified an amazing letter that John Nash sent to it in 1955. It seems that around the year 1950 Nash tried to interest some US security organs (the NSA itself was only formally formed in 1952) in an encryption machine of his design, but they did not seem to be interested. It is not clear whether some of his material was lost, whether they ignored him as a theoretical professor, or — who knows — used some of his stuff but did not tell him. In this hand-written letter sent by John Nash to the NSA in 1955, he tries to give a higher-level point of view supporting his design:

In this letter I make some remarks on a general principle relevant to enciphering in general and to my machine in particular.

He tries to make sure that he will be taken seriously:

I hope my handwriting, etc. do not give the impression I am just a crank or circle-squarer. My position here is Assist. Prof. of Math. My best known work is in game theory (reprint sent separately).

He then goes on to put forward an amazingly prescient analysis anticipating computational complexity theory as well as modern cryptography. In the letter, Nash takes a step beyond Shannon's information-theoretic formalization of cryptography (without mentioning it) and proposes that security of encryption be based on computational hardness — this is exactly the transformation to modern cryptography made two decades later by the rest of the world (at least publicly…).
He then goes on to explicitly focus on the distinction between polynomial time and exponential time computation, a crucial distinction which is the basis of computational complexity theory, but made only about a decade later by the rest of the world:

> So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably at most a relatively small power of r, $ar^2$ or $ar^3$, as in substitution ciphers.

He conjectures the security of a family of encryption schemes.  While not totally specific here, in today’s words he is probably conjecturing that almost all cipher functions (from some — not totally clear — class) are one-way:

> Now my general conjecture is as follows: for almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering, the mean key computation length increases exponentially with the length of the key, or in other words, the information content of the key.

He is very well aware of the importance of this “conjecture” and that it implies an end to the game played between code-designers and code-breakers throughout history.  Indeed, this is exactly the point of view of modern cryptography.

> The significance of this general conjecture, assuming its truth, is easy to see.  It means that it is quite feasible to design ciphers that are effectively unbreakable.  As ciphers become more sophisticated the game of cipher breaking by skilled teams, etc., should become a thing of the past.

He is very well aware that this is a conjecture and that he cannot prove it.  Surprisingly, for a mathematician, he does not even expect it to be solved.
Even more surprisingly, he seems quite comfortable designing his encryption system based on this unproven conjecture.  This is quite eerily what modern cryptography does to this day: conjecture that some problem is computationally hard; not expect anyone to prove it; and yet base their cryptography on this unproven assumption.

> The nature of this conjecture is such that I cannot prove it, even for a special type of ciphers.  Nor do I expect it to be proven.

All in all, the letter anticipates computational complexity theory by a decade and modern cryptography by two decades.  Not bad for someone whose “best known work is in game theory”.  It is hard not to compare this letter to Goedel’s famous 1956 letter to von Neumann also anticipating complexity theory (but not cryptography).  That both Nash and Goedel passed through Princeton may imply that these ideas were somehow “in the air” there.

ht: this declassified letter seems to have been picked up by Ron Rivest who posted it on his course’s web-site, and was then blogged about (and G+ed) by Aaron Roth.

Edit: Ron Rivest has implemented Nash’s cryptosystem in Python.  I wonder whether modern cryptanalysis would be able to break it.

### 41 Responses

1. on February 18, 2012 at 2:35 am | Reply Amit C: That is awesome.
2. Reblogged this on My Blog.
3. on February 18, 2012 at 7:20 am | Reply Anonymous: unbelievable. comparable to von neumann
4. just amazing. a mixture of godel + von neumann
5. “A beautiful mind” indeed. Peace.
6. Reblogged this on Kalpesh Padia’s Blog and commented: Clearly John Nash was way ahead of his time… Schizophrenic, but super smart.. Respect!
7. on February 18, 2012 at 12:39 pm | Reply David Morris: Does that mean that NSA had this before the english guy Clifford Cocks invented it at GCHQ in 1972: At GCHQ, Cocks was told about James H.
Ellis’ “non-secret encryption” and further that since it had been suggested in the late 1960s, no one had been able to find a way to actually implement the concept. Cocks was intrigued, and invented, in 1973, what has become known as the RSA encryption algorithm, realising Ellis’ idea. GCHQ appears not to have been able to find a way to use the idea, and in any case, treated it as classified information, so that when it was reinvented and published by Rivest, Shamir, and Adleman in 1977, Cocks’ prior achievement remained unknown until 1997. (From the Clifford Cocks article on Wikipedia.) Or were NSA still stuck in the “no one had been able to find a way to actually implement the concept” stage? “That both Nash and Goedel passed through Princeton may imply that these ideas were somehow ‘in the air’ there.” I love the thought that an idea can live “in the air”, be dropped, almost forgotten, hinted at, rediscovered and finally resolved in an institution like Princeton.
8. on February 18, 2012 at 6:51 pm | Reply Anonymous: Re: David Morris. You’re referring to the invention of asymmetric (public key) cryptography, right? Does Nash’s letter have anything at all to do with public key cryptography?
9. on February 18, 2012 at 7:44 pm | Reply Dr. Kenneth Noisewater: So does this invalidate any patents due to prior art?
   • This. All hell will break loose in 3…2..1..
   • on February 19, 2012 at 4:36 am | Reply Anonymous: Not unless it was made available to the public.
10. Some caution is needed here. As always when interpreting historic writings, one is naturally tempted to use a modern perspective, based on knowing the current state of affairs. In addition, it is tempting to attribute such phenomenal foresightedness to a well-established genius. After reading the letter, it seems clear to me that Nash *did* foresee important ideas of modern cryptography. This is great and deserves recognition.
However, it seems also very clear that he did not foresee (in fact: could not even imagine) modern complexity theory. Why else would he say that he does not think that the exponential hardness of the problem could ever be proven? It is true that the computational hardness of key tasks in modern cryptography is an unsolved problem, but we have powerful tools to prove hardness results in many other cases. So if Nash had anticipated complexity theory as such, then his remark would mean that he would also have foreseen these difficulties. To foresee this, however, he would not only have to understand the development of complexity theory in great detail, but also to anticipate the principles of today’s encryption mechanisms. It seems reasonable to assume that he would have shared the latter in his letter if he had really had this insight. Overall, it seems clear that a prediction of complexity theory or its current incapacity with respect to cryptographic problems cannot be found in this text, which does by no means diminish the originality of the remarks on cryptography. One could also grant him a certain mathematical intuition that some computational problems could be inherently hard to solve, although I don’t see any hint that he believes that such hardness could ever become a precise mathematical property. What he suggests is really close to the pragmatic approach of modern cryptography, but not to modern complexity theory. • on February 20, 2012 at 1:46 am | Reply Greg “t is true that the computational hardness of key tasks in modern cryptography is an unsolved problem, but we have powerful tools to prove hardness results in many other cases.” Not really— we’ve only proven things to be hard if and only if P!=NP. This is useful, but we’ve still not actually proven anything to be hard at all— only proven that there are a set of things which if any of them turn out to be easy a whole bunch of other things must be ‘easy’ too. 
• “we’ve only proven things to be hard if and only if P!=NP.” Regarding the class NP you are of course right. But our modern tools do not end at NP. For example, we know that ExpTime is strictly harder than P, and we have shown many problems to be hard for ExpTime. At Nash’s time, ExpTime and NP would largely be synonyms (I have not traced back the exact history of these notions, but at least the letter seems to mix both concepts). • on February 20, 2012 at 8:58 pm Alex Ogier Just to clarify, Markus is separating Complexity Theory, where we have plenty of solid hardness results, from Cryptography, where basically every assumption of hardness boils down to an unproven conjecture. The point being that this letter provides no evidence that Nash foresaw any of the structure that would allow proofs of hardness for any problem, and since this structure is foundational in complexity theory, it is reasonable to conclude that he didn’t foresee complexity theory in any meaningful way. In other words, while Nash saw the negative side of complexity theory — that mathematics would find it difficult to reason about the reversibility of one-way functions — he didn’t have any particular insight into the positive side of complexity theory, that there would exist structure to classify the computational difficulty of many other problems 11. It’s interesting that, to conceal a message, you make it look like noise, and to get a message through a noisy channel, you do the same thing. • Like when you hear the sounds (or see the motions of) words and decode them. (I know too little to understand whether that is accurate or dumb.) 12. on February 19, 2012 at 5:47 am | Reply Philonus Atio Regarding ” I wonder whether modern cryptanalysis would be able to break it.” I broke it in about 1 hour and I’m no expert in cryptanalysis. It is weak. • on February 22, 2012 at 10:40 pm | Reply John Smith Hey, I would be very interested in knowing how you did it. 
• on February 24, 2012 at 2:24 am Phionus Atio Sure. Here is a brief outline of strategy for how I broke it. Nash’s machine is essentially a linear feedback shift register (LFSR). http://en.wikipedia.org/wiki/Linear_feedback_shift_register The LFSR operates as a weak pseudo-random number generator which is exclusive-ORed (XOR) with the plaintext to produce ciphertext. All you need to do is predict the output of the generator. It cycles in less than 256 bits (i.e. the period). I used ‘known plaintext attacks’ to analyze the behavior of the LFSR. I created an equivalent LFSR using a variation on the Berlekamp-Massey algorithm. Nash’s machine is weak and easy to predict because it is linear. The rest is left as an exercise for the reader (there are a few minor wrinkles). Happy cracking. 13. Reblogged this on Vcjha's Blog. 14. “It seems reasonable to assume that he would have shared the latter in his letter if he had really had this insight.” No, it’s fundamentally irrational to assume that. You’re correct that the letter needs to be evaluated within the context on his time; it also needs to be evaluated with author’s purpose in mind. There is nothing in that letter that even hints that his purpose was to “foresee” anything or to give a complete theoretical overview of the subject he was discussing. The author’s purpose is to send a practical note to a government agency in order to generate interest in his ideas because he thinks he can help the country with them. His concern is that no one at the NSA will take him seriously, a concern that in hindsight seems well-founded. It’s grossly unfair in that context that you then come along and criticize him decades later for not being complete enough. It’s a handwritten letter for crying out loud, not a phd thesis. 
• “it’s fundamentally irrational” … “It’s grossly unfair in that context that you then come along and criticize him” I think this discussion should not be that emotional I am far from criticizing Nash for writing a letter decades ago. All I am saying is that the letter does not seem to provide evidence for him anticipating computational complexity theory as suggested in the original post. You could be right that his letter does not provide the best basis for judging this (since there can be many reasons for him to not write all that he knew in all detail). Then all we can do is to wait for more conclusive historic material to appear. 15. on February 20, 2012 at 3:41 am | Reply Geoffrey Watson This is very interesting as a historical document, but not sure that it supports the interpretations being put on it. It is not noteworthy that people working in the field anticipate ideas that only later become standard theories (look at the history of any advance in maths or science). From Goedel’s 1956 letter and this Nash letter (with its reference to Prof. Huffman working towards similar objectives) a reasonable historical conclusion is that these ideas were just going around in the usual way. The “conjecture” is a pretty muddled bit of thinking. It presumably means that Nash thinks that there are such exponentially hard cyphers, but to convert “almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering” into a prescient anticipation of complexity theory is a big ask. 16. Markus Kroetzsch and Geoffrey Watson make important points in their earlier posts, and I’d encourage people to read them. 
It’s hard to know exactly what Nash was thinking from just these letters, but it impressed me as similar to some of my own early thoughts on cryptography — but before I had really invested significant time and come up with good results. In addition to Kroetzsch and Watson’s caveats, I’ll note what appears to be another: breaking a simple substitution cipher takes a constant amount of time — not dependent on the key size (if one even can think of it as having a variable key size). If anyone thinks I’ve missed something, please chime in. I read the letters fairly quickly and might have missed a hidden gem. Martin Hellman http://www-ee.stanford.edu/~hellman/
18. on February 20, 2012 at 9:21 pm | Reply unruh: P, NP are really completely irrelevant for crypto. All crypto systems have a small finite key, and the breaking of them is constant (as Hellman points out for substitution cyphers). A problem could go as constant for key lengths less than 10^(10^10) and exponential thereafter. It would be “exponential” as far as P, NP, … were concerned, but for crypto it would be useless and would go as a constant since we are never going to use keys that long. Or the key could go as r^(10^10) and the problem would be considered polynomial, but it would be far far stronger than almost any exponential problem as far as crypto is concerned. That the NP-hard problems we have looked at easily happen to also be something like exponential for small key lengths is more the “drunk and the lamppost” than anything having to do with the inherent features of the problem. Thus, even if P=NP, it would make no difference to crypto, unless that proof also showed how all P problems could be reduced to, say, linear P problems with small coefficients.
19.
Noam, thank you very much for sharing this information in an interesting, annotated form. Ronald Rivest’s implementation of Nash’s Cryptosystem is indeed quite intricate and very well coded and commented, clear to understand.
20. I’ve made a full HTML transcription of the PDF: http://www.gwern.net/docs/1955-nash
21. It’s comforting to know that in today’s day and age, people of this caliber of genius can just start a blog / Facebook page / YouTube channel about their amazing mathematical ideas and how Ed Harris won’t stop following them around.
22. Reblogged this on "Random" thoughts and commented: Nash and cryptography…
23. Amazing, Reblogged on my blog
24. on March 13, 2012 at 1:23 am | Reply Vinay: This is amazing.. John Nash’s work has always been an inspiration.
25. Reblogged this on Luay Baltaji's blog and commented: A fascinating letter from John Nash to the NSA in 1955: can he prove that “conjecture” security is computationally unbreakable?
26. Reblogged this on Code through the Looking Glass.
27. on April 29, 2012 at 3:09 am | Reply Anonymous: A nice text, except for this sentence: “Not bad for someone whose ‘best known work is in game theory’”.
28. Awesome! Did he send it when he was having delusional problems?
30. on February 1, 2013 at 8:39 pm | Reply Thomas Rivera: Great article, great blog!
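The attack outlined in the comments above exploits the linearity of an LFSR-based stream cipher: the register output is XORed with the plaintext, so a known plaintext immediately reveals the keystream. Here is a toy sketch of that kind of linear scheme — not Nash's actual machine; the register width, taps, and key below are made-up illustrative values:

```python
def lfsr_stream(state, taps, nbits, n):
    """Generate n keystream bits from an nbits-wide Fibonacci LFSR."""
    bits = []
    for _ in range(n):
        bits.append(state & 1)            # output the low bit
        fb = 0
        for t in taps:                    # feedback = XOR of the tap bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return bits

def xor_cipher(data, key_state):
    """Encrypt/decrypt by XORing the data with the LFSR keystream."""
    ks = lfsr_stream(key_state, taps=(0, 2, 3, 5), nbits=16, n=8 * len(data))
    out = bytearray(data)
    for i, b in enumerate(ks):
        out[i // 8] ^= b << (i % 8)
    return bytes(out)

key = 0xACE1                              # hypothetical secret key (initial state)
pt = b"attack at dawn"
ct = xor_cipher(pt, key)
assert xor_cipher(ct, key) == pt          # decryption is the same XOR operation

# Known-plaintext attack: pt XOR ct reveals the keystream, which then
# decrypts any other message enciphered under the same key state.
ks = bytes(a ^ b for a, b in zip(pt, ct))
ct2 = xor_cipher(b"retreat", key)
assert bytes(a ^ b for a, b in zip(ct2, ks)) == b"retreat"
```

With the keystream in hand, the Berlekamp–Massey algorithm recovers an equivalent LFSR from roughly twice as many bits as the register length — which is essentially the break described in the thread.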
http://mathoverflow.net/questions/57670/geometric-brownian-motion-conditional-expected-value
## Geometric Brownian motion conditional expected value

My question is related to the conditional expected value of a lognormal variable (more precisely, the conditional expected value of a geometric Brownian motion). If the initial value is $S_0$, after time $\tau$ the distribution of $S_\tau$ will be the following: $$\ln(S_\tau)\sim N\!\left(\ln(S_0)+(\mu-\tfrac{1}{2}\sigma^2)\tau,\ \sigma^2\tau\right)$$ I am interested in the conditional expected value of $S_\tau$, given that $S_\tau$ is above $X$: $$E(S_\tau\mid S_\tau>X)=\,?$$ A possible solution for this (where $\Phi$ is the standard normal cdf): $$E(S_\tau\mid S_\tau>X)=S_0 \cdot e^{\mu\tau}\cdot \frac{\Phi(d_1)}{\Phi(d_2)}$$ where $$d_2=\frac{-\ln(X/S_0)+(\mu-\frac{1}{2}\sigma^2)\tau}{\sigma \sqrt{\tau}},\qquad d_1=d_2+\sigma \sqrt{\tau}$$ Is there a “nicer”, more compact closed-form solution for the conditional expected value? For example, one with only a single $\Phi(\cdot)$ function?

- You already have an answer, and I very much doubt that there is a “nicer” form for the ratio of two standard normal cdfs at different points. Also, this does not look like a research level question to me, so I voted to close. – George Lowther Mar 7 2011 at 20:54
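The closed form can be sanity-checked by Monte Carlo simulation. The sketch below uses the drift $\mu$ throughout and the sign convention $d_2=(\ln(S_0/X)+(\mu-\frac{1}{2}\sigma^2)\tau)/(\sigma\sqrt{\tau})$, which matches the lognormal mean $E(S_\tau)=S_0e^{\mu\tau}$; the parameter values are illustrative assumptions, not from the question:

```python
import math
import numpy as np

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cond_mean(S0, mu, sigma, tau, X):
    # E[S_tau | S_tau > X] = S0 * exp(mu*tau) * Phi(d1) / Phi(d2)
    d2 = (math.log(S0 / X) + (mu - 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d1 = d2 + sigma * math.sqrt(tau)
    return S0 * math.exp(mu * tau) * Phi(d1) / Phi(d2)

# Illustrative parameters.
S0, mu, sigma, tau, X = 100.0, 0.05, 0.2, 1.0, 110.0

rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
S = S0 * np.exp((mu - 0.5 * sigma**2) * tau + sigma * math.sqrt(tau) * Z)

mc = S[S > X].mean()               # empirical conditional mean
cf = cond_mean(S0, mu, sigma, tau, X)
assert abs(mc - cf) / cf < 0.01    # agreement to better than 1%
```

The simulated conditional mean and the closed form agree to within Monte Carlo error, which also confirms the sign convention above.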
http://unapologetic.wordpress.com/2010/11/12/group-invariants/?like=1&source=post_flair&_wpnonce=671450efa2
# The Unapologetic Mathematician

## Group Invariants

Again, my apologies. What with yesterday’s cooking, I forgot to post this yesterday. I’ll have another in the evening.

Let $V$ be a representation of a finite group $G$, with finite dimension $d_V$. We can decompose $V$ into blocks — one for each irreducible representation of $G$:

$\displaystyle V\cong\bigoplus\limits_{i=1}^kV^{(i)}\otimes\hom_G(V^{(i)},V)$

We’re particularly concerned with one of these blocks, which we can construct for any group $G$. Every group has a trivial representation $V^\mathrm{triv}$, and so we can always come up with the space of “invariants” of $G$:

$\displaystyle V^G=V^\mathrm{triv}\otimes\hom_G(V^\mathrm{triv},V)$

We call these invariants, because these are the $v\in V$ so that $gv=v$ for all $g\in G$. Technically, this is a $G$-module — actually a $G$-submodule of $V$ — but the action of $G$ is trivial, so it feels slightly pointless to consider it as a module at all.

On the other hand, any “plain” vector space can be considered as if it were carrying the trivial action of $G$. Indeed, if $W$ has dimension $d_W$, then we can say it’s the direct sum of $d_W$ copies of the trivial representation. Since the trivial character takes the constant value $1$, the character of this representation takes the constant value $d_W$. And so it really does make sense to consider it as the “number” $d_W$, just like we’ve been doing.

We’ve actually already seen this sort of subspace before. Given two left $G$-modules ${}_GV$ and ${}_GW$, we can set up the space of linear maps $\hom(V,W)$ between the underlying vector spaces. In this setup, the two group actions are extraneous, and so we find that they give residual actions on the space of linear maps. That is, we have two actions by $G$ on $\hom(V,W)$, one from the left and one from the right. Now just like we found with inner tensor products, we can combine these two actions of $G$ into one.
Now we have one left action of $G$ on linear maps by conjugation: $(g,f)\mapsto g\cdot f$, defined by

$\displaystyle[g\cdot f](v)=gf(g^{-1}v)$

Just in case, we check that

$\displaystyle\begin{aligned}\left[g\cdot(h\cdot f)\right](v)&=g\left[h\cdot f\right](g^{-1}v)\\&=g(hf(h^{-1}(g^{-1}v)))\\&=(gh)f((h^{-1}g^{-1})v)\\&=(gh)f((gh)^{-1}v)\\&=\left[(gh)\cdot f\right](v)\end{aligned}$

so this is, indeed, a representation.

And what are the invariants of this representation? They’re exactly those linear maps $f:V\to W$ such that

$\displaystyle gf(g^{-1}v)=f(v)$

for all $v\in V$ and $g\in G$. Equivalently, the condition is that

$\displaystyle gf(v)=f(gv)$

and so $f$ must be an intertwinor. And so we conclude that

$\displaystyle\hom(V,W)^G=\hom_G(V,W)$

That is: the space of linear maps from $V$ to $W$ that are invariant under the conjugation action of $G$ is exactly the space of $G$-morphisms between the two $G$-modules.

## 3 Comments »

1. I didn’t know about that other blog! Thanks for the link. Comment by xammer | November 12, 2010 | Reply
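A standard way to compute the invariant subspace $V^G$ concretely — not spelled out in the post itself — is to average the group action: the operator $\frac{1}{|G|}\sum_{g\in G}g$ is a projector onto $V^G$. A small numpy sketch for the permutation representation of $S_3$ on $\mathbb{R}^3$, whose invariants are exactly the constant vectors:

```python
import numpy as np
from itertools import permutations

# Permutation representation of S_3 on R^3: g sends e_i to e_{p(i)}.
n = 3
mats = []
for p in permutations(range(n)):
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    mats.append(M)

# Group-averaging projector onto the invariant subspace V^G.
P = sum(mats) / len(mats)

# Every entry of P is 1/3, so its image is the line of constant vectors.
assert np.allclose(P, np.full((n, n), 1.0 / 3.0))
# P is idempotent (a projector) and commutes with every g in G.
assert np.allclose(P @ P, P)
for M in mats:
    assert np.allclose(M @ P, P @ M)
# dim V^G equals the trace of the projector, which is 1 here.
assert round(np.trace(P)) == 1
```

The trace of the averaging projector also recovers the familiar character-theoretic count of trivial summands.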
http://physics.stackexchange.com/questions/38400/rigid-body-moment-of-inertia-problem
# Rigid body/moment of inertia problem

I have a homework assignment about rigid body dynamics. Take a disc of radius $r=2m$ with uniform mass density $\rho=1$ $kg/m^2$ in the x-y plane, resting in an inertial frame. At some instant, a force of $F = (0,0,1)kN$ is applied at the point $A=(0,r,0)$. What's the acceleration of the point $A$ at that instant?

This should be straightforward computation, but this is new to me and I think I'm making a fundamental mistake somewhere. Could you help me find it? In particular I'm not sure I'm treating the angular velocity $\omega$ correctly. I can assume the equations of motion for a rigid body, as well as the equation for the acceleration of an arbitrary point of the body given the acceleration of the center of mass and the angular acceleration.

The disc is just a circle of homogeneous mass density $\rho = 1$, so $$M = \rho \int_{R=0}^r\int_{\theta=0}^{2\pi} R \,d\theta\, dR = \left.\rho\pi R^2\right|_{R=0}^{r} = \rho\pi r^2 = 4\pi~\mbox{kg}$$ where, though the problem gives $r=2$, I'm just going to leave the radius as a symbol. Without integration, we might have noticed directly that $M = \rho \pi r^2$.

Now we compute the tensor of inertia, $$\begin{pmatrix} \int_V \rho (x_2^2 +x_3^2) dv & -\int_V \rho x_1 x_2 dv& -\int_V \rho x_1x_3dv \\ -\int_V \rho x_1 x_2dv & \int_V \rho (x_1^2 +x_3^2)dv & -\int_V \rho x_2 x_3dv \\ -\int_V \rho x_1 x_3 dv& -\int_V \rho x_2 x_3dv & \int_V \rho (x_1^2 +x_2^2)dv \end{pmatrix}$$ Since our object is symmetric about the x-y axes, we can eliminate the cross-moments.
Since our object has no mass in the z direction, we can eliminate $x_3$ in the tensor, so \begin{align*} I &= \begin{pmatrix} \int_V \rho x_2^2 dv & 0& 0 \\ 0 & \int_V \rho x_1^2 dv & 0 \\ 0 & 0 & \int_V \rho (x_1^2 +x_2^2)dv \end{pmatrix} \\ &= \begin{pmatrix} \rho \int r^3 \sin^2\theta \,dr\,d\theta & 0& 0 \\ 0 & \rho \int r^3 \cos^2\theta \,dr\,d\theta & 0 \\ 0 & 0 & \rho \int r^3 \,dr\, d\theta \end{pmatrix} \\ &= \begin{pmatrix} \frac{\rho}{2} \int r^3 (1-\cos(2\theta)) \,dr\,d\theta & 0& 0 \\ 0 & \frac{\rho}{2} \int r^3 (1+\cos(2\theta)) \,dr\,d\theta & 0 \\ 0 & 0 & \frac{\pi \rho r^4}{2} \end{pmatrix} \\ &= \begin{pmatrix} \frac{\rho \pi r^4}{4} & 0& 0 \\ 0 & \frac{\rho \pi r^4}{4} & 0 \\ 0 & 0 & \frac{\pi \rho r^4}{2} \end{pmatrix} \end{align*} since the $\cos(2\theta)$ terms integrate to zero over a full period.

The torque about the center of mass is the cross product of the vector from the center of mass to $A$ with the force, $\tilde r \times F = r e_y \times |F| e_z = r|F|\, e_x$, so the angular acceleration vector points along the x-axis. Now we plug this into the equations of motion to solve for the linear and angular accelerations, $a_c$ and $\dot\omega$. They are, \begin{align*} (Mv_c)' &= S \implies & a_c = \frac{1}{M}\begin{pmatrix} 0,& 0,& |F| \end{pmatrix}^T \\ (I\omega)' &= M_c = \tilde r \times F \implies & \dot \omega = \frac{r}{I_{11}} e_y\times |F|e_z = \frac{r|F|}{I_{11}} e_x \end{align*}

We should now be able to plug this into the equation for the acceleration of a point of the body (where the angular velocity at the instant the force is applied is $0$, so we ignore the centripetal term), \begin{align*} a_A &= a_c + \dot\omega \times(r e_y) \\ &= \frac{|F|}{M}e_z + \frac{r|F|}{I_{11}} e_x \times(r e_y) \\ &= |F|\left(\frac{1}{M} + \frac{r^2}{I_{11}}\right)e_z \\ &= |F|\left(\frac{1}{\rho \pi r^2} + \frac{4}{\rho \pi r^2}\right)e_z \end{align*} Which I guess seems natural enough.
But by Newton's formula we would then have \begin{equation*} M a_A = 5 |F| \end{equation*} But does it make sense that the force should be $5F$? Not really...

## 1 Answer

I'm afraid your result is just right, although you are making things overly complicated by considering the full three-dimensional rigid body dynamics... The center of mass will start accelerating in the direction of the force with $a=F/M$, plus the whole disk will start turning around a diameter, with an angular acceleration of $\alpha = F r/I$. You have already calculated $I= \rho \pi r^4/4 = M r^2/4$. So the acceleration of your point will be $a+r\alpha = \frac{F}{M}(1+4)=5\frac{F}{M}$.

It may help to understand what's going on if you think of what's happening to the point at the other side of the disc, $(0,-r,0)$ in your case: you'll notice that it is accelerating in the opposite direction, $a-r\alpha = \frac{F}{M}(1-4)=-3\frac{F}{M}$. While the disc will also start moving in the direction of the force as a whole, it is also rotating around its center of mass, and the acceleration due to rotation is four times larger than that due to translation. So it is the rotation that is making the acceleration of your point five times larger than you expected.

- 1 Your accel formulas include a 1/r that should have cancelled out. – Art Brown Sep 26 '12 at 17:54
- Yep, corrected that, thanks! – Jaime Sep 27 '12 at 5:36
- Ha- there couldn't possibly have been a better resolution to my question! It's definitely more fun to learn something about physics than just compute the wrong number. Thanks! – Albert Sep 27 '12 at 5:38
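As a quick numeric sanity check of the answer above, using the problem's stated values ($\rho = 1\,kg/m^2$, $r = 2\,m$, $|F| = 1\,kN$), a short sketch in SI units:

```python
import math

# Given: uniform disc, rho = 1 kg/m^2, r = 2 m, |F| = 1 kN along z at A = (0, r, 0).
rho, r, F = 1.0, 2.0, 1000.0

M = rho * math.pi * r**2        # mass of the disc, kg
I = rho * math.pi * r**4 / 4.0  # moment of inertia about a diameter (= M r^2 / 4)

a_c = F / M                     # linear acceleration of the center of mass, m/s^2
alpha = F * r / I               # angular acceleration about the x-axis, rad/s^2
a_A = a_c + r * alpha           # acceleration of A at the instant the force is applied

assert math.isclose(I, M * r**2 / 4.0)
assert math.isclose(a_A, 5.0 * F / M)               # five times the naive F/M
assert math.isclose(a_c - r * alpha, -3.0 * F / M)  # opposite rim point goes backwards
```

With these numbers $a_A = 1250/\pi \approx 398\ m/s^2$, five times the center-of-mass acceleration, while the opposite rim point initially accelerates the other way.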
http://physics.stackexchange.com/questions/33181/relation-among-anomaly-unitarity-bound-and-renormalizability
# Relation among anomaly, unitarity bound and renormalizability

There is something I'm not sure about that has come up in a comment to another question, "Why do we not have spin greater than 2?":

> It's a good question--- the violation of renormalizability is linked directly to violation of unitarity, which was exploited by Weinberg (surprise, surprise) to give an upper bound of something like 800 GeV on the fundamental Higgs mass from the W's and Z's unitarization. The breakdown of renormalizability is a wrong one-loop propagator correction to the gauge boson, and it leads to a violation of the ward-identity which keeps the non-falling-off part of the propagator from contributing. It's in diagrammar (I think), it's covered in some books, you can ask it as a question too.

I know which unitarity bound the user is talking about, but I don't know what the violation of the Ward identity that he mentions is. I guess that it involves the global $SU_L(2)$ symmetry, but I have never seen anything relating the unitarity bound and this anomaly.

The general issue is the following: Assume a Yang-Mills term and the coupling of the charged vector field to a fermionic current conserved under a global symmetry. Then one adds an explicit mass term to the vector field, so that one breaks the gauge symmetry by hand, but not the global part that gives the conserved current (the gauge transformations that go to the identity at the boundary entail constraints). Then, according to the user (at least as I understood it), when one takes loop effects into account the global part is also broken. Therefore, the mass term breaks the redundancy part of the symmetry by hand (at the classical level), and it also breaks the global part at the quantum level.

I would be grateful if somebody is able to clarify this to me. References are also welcome.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9521707892417908, "perplexity_flag": "head"}
http://rjlipton.wordpress.com/2009/08/27/stockmeyers-approximate-counting-method/
## a personal view of the theory of computation

*On the complexity of counting exactly and approximately*

Larry Stockmeyer was one of the great theorists, who worked on areas as diverse as lower bounds on logical theories, computational complexity, distributed computing, algorithms, and many others. He passed away in 2004, and is terribly missed by all who knew him. See Lance Fortnow's wonderful tribute for many details on Larry's contributions. Today I want to talk about one of Larry's great results: an approximate counting method. I have always loved this result.

Larry was quiet, but beneath his quiet exterior he had a tremendous sense of humor. He was also one of the most powerful problem solvers that I had ever had the pleasure to work with. Larry was also extremely accurate—if Larry said that X was true, you could bet the house that X was true. He was amazing at finding loose ends in an argument, and usually then finding a way to tie them up.

I once proved a result on the complexity of univariate polynomials, and I showed a draft of the paper to Larry. A few hours later he came into my office—I was visiting IBM Yorktown at the time—with my paper all marked up. Larry then started to give me very detailed comments about the substance of the paper. After a while I started to get nervous; I was starting to wonder whether or not the main result of my paper was correct. I finally asked Larry directly: "is the theorem correct?" He said, "oh yes, but here you need to argue that ${\dots}$" I was quite relieved. Larry and I went on to work together on other results on the complexity of polynomials.

A year later, again at IBM Yorktown, we had two visitors, George Sacerdote and Richard Tenney, who came to IBM and gave a very technical talk on their solution to the Vector Addition System (VAS) reachability problem. It was a long, almost two-hour talk, attended by most of the IBM theory group.
The reachability problem had been claimed before, and I had worked hard on the problem in the past—with no success. So I was very interested to see if they had really solved the problem. When the talk was over, we all started to leave the conference room and head to the IBM cafeteria for lunch. I stood right at the door and did an "exit poll"—as each theorist left the room I asked them what they thought about the proof. Did they believe it? Did it look like it might work? Most said that it sounded plausible. A few were even more positive: one member said that he was convinced that the proof would work. Personally, I thought that I had not heard anything new, but nor had I heard anything wrong. I was on the fence.

I then asked the last to exit the room, Larry, what he thought about the proof. Larry said, "I did not understand a word that they said." Larry was right on the money. The proof was wrong, but it took a lot of work, by many people, to eventually find the holes in their attempted proof. See my post for more about the VAS problem. Let's turn to Larry's result on approximate counting.

Approximate Counting

Suppose that ${C(x)}$ is a circuit with inputs ${x=x_{1},\dots,x_{n}}$ of size polynomial in ${n}$. Then, a natural question is: how many ${x}$ satisfy ${C(x)=1}$? This is well known to be a ${\#}$P-complete problem, and computing the exact answer is certainly at least as hard as NP. What Larry looked at is how hard it is to approximately determine the number of ${x}$ such that ${C(x)=1}$.

Let ${S = \{x \mid C(x)=1\}}$. A key observation is that there is an amplifier lurking here: if we can in general determine ${|S|}$ to within a factor of ${2}$ in polynomial time, then we can determine it to within a factor of ${1+\frac{1}{n^{c}}}$ for any constant ${c}$, also in polynomial time. This can be proved by a simple amplification argument. The idea is this: create a new circuit ${D(x,y) = C(x) \wedge C(y)}$ where ${x}$ and ${y}$ are disjoint inputs.
Then, ${D(x,y)}$ accepts ${|S|^{2}}$ inputs. If you know ${|S|^{2}}$ to a factor of ${2}$, then you know ${|S|}$ to a factor of ${\sqrt 2}$. An ${m}$-fold version of this will yield an approximation factor of

$\displaystyle 2^{\frac{1}{m}} \approx 1 + O(\frac{1}{m}).$

I have just discussed amplifiers, and this is a perfect example of the power of even a simple repetition amplifier. Thus, the key problem is to determine ${|S|}$ to within a factor of ${2}$. Larry proves:

Theorem: There is a randomized algorithm that determines ${|S|}$ to within a factor of ${2}$ in polynomial time, provided it has access to an NP-oracle.

Note that there is no way to remove the need for the oracle without a breakthrough, since it is easy to construct an NP-hard problem that either has no solutions or many solutions. Thus, determining the answer to within a factor of ${2}$ without the oracle would imply that P=NP.

Sketch of Larry's proof

Larry's proof uses some ideas that had been used earlier by Mike Sipser, but he added several additional insights. See his paper for the details, but I will give a quick overview of how the counting method works. Let ${C(x)}$ be the circuit and let ${S}$ be the set of inputs ${x}$ so that ${C(x)=1}$. Suppose that our goal is really modest: we want to know whether ${|S|}$ is really large or really small. Take a random hash function ${h:\{0,1\}^{n} \rightarrow \{0,1\}^{m}}$ where ${m}$ is much smaller than ${n}$. Then, check to see if there are two distinct ${x}$ and ${y}$, both in ${S}$, that form a "collision",

$\displaystyle h(x) = h(y). \ \ \ \ \ (1)$

If ${|S|}$ is small, then it is likely that (1) will have no solutions, but if ${|S|}$ is large, then (1) is likely to have solutions. The key to making this work is a careful analysis of the probabilities of collisions for the given hash functions. This can be worked out to prove the theorem.
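The amplification step above is easy to sanity-check numerically. In this small sketch the exponent m stands in for the m-fold product circuit, and the factor-2 estimate is taken to be off by the worst possible amount:

```python
true_size = 1000            # |S|, unknown to the algorithm
m = 10                      # the m-fold product circuit accepts |S|**m inputs

# worst case: the factor-2 procedure is off by exactly a factor of 2
estimate_of_power = 2 * true_size**m

# taking the m-th root shrinks the error to a factor of 2**(1/m)
estimate = estimate_of_power ** (1 / m)
print(estimate / true_size)   # about 1.0718, i.e. 2**(1/10)
```
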
Note that the detection of a collision requires an oracle call: "are there two distinct ${x}$ and ${y}$ such that ${h(x)=h(y)}$ and both are in ${S}$?"

Counting and Finite Automata

I love finite state automata (FSA)—as you probably know. The following theorem is not as well known as it should be, in my opinion at least:

Theorem: Suppose that ${M}$ is an ${m}$-state deterministic finite state automaton. Then, the number of length-${n}$ inputs that are accepted by ${M}$ can be computed exactly in polynomial time in ${n}$ and ${m}$.

Thus, for FSA, the problem of counting the number of accepting computations is easy. A proof sketch is the following—unfortunately I cannot find a paper to point to here; any help would be appreciated. Perhaps it's a folklore theorem, since I have known it forever.

Let ${M'}$ be a new automaton that simulates ${M}$ on all inputs of length ${n}$, and has the property that the state diagram is acyclic. Essentially, ${M'}$ replaces each state ${s}$ of ${M}$ by the states ${(s,i)}$ where ${i=0,\dots,n}$. For example, if ${a \rightarrow b}$ was a transition on input ${0}$, then

$\displaystyle (a,i) \rightarrow (b,i+1)$

is a transition on input ${0}$ for all ${i}$. We have simply unrolled the automaton ${M}$ to avoid any loops: this cannot be done in general, but is fine if we are only concerned with inputs of a fixed length ${n}$. The algorithm then inductively labels each state ${(a,i)}$ with the number of length-${i}$ inputs that reach this state. To label a state ${(b,i+1)}$, we take each state ${(a,i)}$ with arcs to ${(b,i+1)}$, and add the count for ${(a,i)}$ times the number of arcs from ${a}$ to ${b}$. The number of accepting computations of length ${n}$ is then the sum of the counts for the states ${(b,n)}$ with ${b}$ an accepting state of the original ${M}$.

Open Problems

Can we improve the result of Stockmeyer on approximate counting? For specific problems there are better results known, of course, but I wonder: can we improve his counting argument?
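Returning to the FSA theorem for a moment, the unrolling algorithm is short enough to sketch in code. The DFA here, recognizing binary strings with an even number of 1s, is just an illustrative example:

```python
from collections import defaultdict

def count_accepted(delta, start, accepting, n):
    """Number of length-n strings accepted by a DFA, via the unrolled DAG:
    counts[s] = number of length-i inputs that reach state s after i steps."""
    counts = {start: 1}
    for _ in range(n):
        nxt = defaultdict(int)
        for state, ways in counts.items():
            for symbol, target in delta[state].items():
                nxt[target] += ways
        counts = dict(nxt)
    return sum(ways for state, ways in counts.items() if state in accepting)

# example DFA: binary strings containing an even number of 1s
delta = {"even": {"0": "even", "1": "odd"},
         "odd":  {"0": "odd",  "1": "even"}}
print(count_accepted(delta, "even", {"even"}, 4))  # 8 of the 16 length-4 strings
```
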
Of course if P=NP, then approximate counting would be in polynomial time, but can it be in NP? Ken Regan, who helped with this post, points out an interesting connection between Larry's theorem and quantum computation—he promises to post a comment on it.

Another natural question that I think might be worth working on is this: pick a complexity class that could be weaker than polynomial time and see what the cost of approximate counting is for that class. I have given a simple example, above, where exact counting is easy. There is some quite nice work on approximate counting for certain simple classes of circuits—I will leave that for another day.

from → History, P=NP, People, Proofs

14 Comments

1. Vince, August 27, 2009, 7:45 am: For the FSA counting, why not just compute the generating function (in constant time) and then use Taylor series or run the recursion?

2. August 27, 2009, 1:21 pm: An interesting related question is to count the number of inputs of length n that an NFA with m states accepts. This is a #P-hard problem. However, one can obtain a $(1+\epsilon)$-approximation in $n^{O(\log n/\epsilon)}$ time. This is in a paper of Gore et al., which also shows how to count the number of strings of length n accepted by a context-free grammar: ftp://ftp.cis.upenn.edu/pub/kannan/newj.ps.gz I think it is still an open problem to improve the quasi-polynomial time to a polynomial time.

   • rjlipton, August 27, 2009, 1:25 pm: Thanks. I did not know this result. Thanks again for the pointer. Would be cool to improve the bound.

3. August 27, 2009, 6:44 pm: Here's the connection in brief. BQP is the feasible quantum computing class analogous to BPP, but unlike BPP it contains (the decision version of) factoring and other interesting stuff. Also unlike BPP, it is not known to be contained in the polynomial hierarchy. You can reduce a BPP question to approximating a single #P function, but with BQP what you get is the difference of two #P functions, call them f and g.
Approximating f and g separately does not help you approximate f – g when f – g is near zero! BQP does give you a little help: When the answer to the BQP question on an input x is yes, you get that f(x) – g(x) is close to the square root of 2^m, where the counting predicates defining f and g have m binary variables after you substitute for x. (There are no absolute-value bars; "magically" you always get f(x) > g(x). Under common representations of quantum circuits for BQP, m becomes the number of Hadamard gates.) When the answer is no, the difference is close to 0. Doing Stockmeyer approximation on f and g separately still fails, basically because 2^{m/2} is not a polynomial fraction of 2^m. The open question is what other counting and approximation tricks might come into play…

4. August 29, 2009, 11:49 am: On "counting accepted n-length strings in an m-state DFA is in P": Yes, it is folklore. Even better, the problem is in NC. In the class #L, of functions that report the number of accepting paths of a logspace-bounded machine, and hence expressible as the determinant of a related integer matrix. Why? Unroll the DFA into an acyclic graph on n-length inputs. Each node has out-degree 2. Now let an NLOG machine trace out a path in this graph, choosing an out-edge nondeterministically, and accepting if the final state reached is an accepting state. I'm not sure what would be a good reference: I believe (courtesy Eric Allender) that the unrolling idea goes back to Neil Jones in the 1970s. Certainly, the idea has been used implicitly or explicitly in almost all papers dealing with #L or GapL.

Related result: instead of counting inputs, what if we just want to test if a particular input is accepted? I.e., membership testing. If the NFA is also part of the input, this is NLOG-complete. (As opposed to when the NFA is fixed; this would be NC^1-complete. Also, as opposed to when the NFA is in the input but is in fact a DFA; this would be DLOG-complete.)
And counting accepting paths on one particular input: if the NFA is part of the input, it is #L-complete; if the NFA is fixed, this is computable in #NC^1 and hence in deterministic logspace.

   • rjlipton, August 29, 2009, 12:06 pm: Thanks for the post(s). I have known the result for a long time, and cannot remember how I first heard it. So thanks for the comments, again.

5. August 29, 2009, 11:51 am: mistyped URL

6. August 29, 2009, 12:53 pm: About 10 years ago, I found a false proof that P does not equal NP, and it took me a few days to find my bug (I was pretty excited for a while!). Even though I had already found my bug, I showed the false proof, just for fun, to various famous members of our theory community, to see if they could spot the bug (I did tell them that there existed a bug, and that their challenge was to find it). Only two of them were able to find the bug, and they both found it pretty quickly – Larry Stockmeyer and Moshe Vardi. Amusingly enough, the bug was, of all things, a misuse of Fagin's Theorem! Maybe the other people I showed the proof to just trusted that I surely wouldn't misuse my own theorem.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 81, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.96707683801651, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/213354-can-radius-found.html
# Thread:

1. ## Can the radius be found?

I hope the image is clear enough. There are 3 segmented circles; the centres are vertically above each other. The vertical distance between each is 'h'. The top circle has radius R, the centre circle radius (R+d), and the bottom circle radius (R+2d). All coloured triangles are right triangles. The white and purple triangles are the same. Given that a, b, c and h are known, is it possible to obtain equations for 'R' and 'd' in terms of these known quantities? I feel sure that it should be possible, i.e. I know the unmarked side of the red triangle is $\sqrt{R^2-a^2}$ etc., but am struggling to get anywhere with it. Any help would be greatly appreciated. Please say if you require any further info. Attached Thumbnails

2. ## Re: Can the radius be found?

Hi tmoria! You do have a constraint. Since the segments a, b, and c are in the same plane and bounded by 2 lines, you get the constraint a-2b+c=0. So let's suppose that a and b are known; then we require that: $c=2b-a \qquad (1)$ Let's call the unnamed line segments A, B, and C. Then we can set up the system of equations: $A-2B+C=0$ $A^2+a^2=R^2$ $B^2+b^2=(R+d)^2$ $C^2+c^2=(R+2d)^2$ We have 4 equations with 5 unknowns. So let's pick A ourselves; then we can solve the system, getting: $R=\sqrt{a^2 + A^2}$ $d=\frac{b-a}{a}R$ $B=\frac b a A$ $C=\frac c a A$ A solution like this is easiest to calculate with Wolfram|Alpha.

3. ## Re: Can the radius be found?

Originally Posted by ILikeSerena: ...Since the segments a, b, and c are in the same plane and bounded by 2 lines..., you get the constraint: a-2b+c=0.

Hi ILikeSerena, Many thanks for the effort you put in. Unfortunately I forgot to delete the two lines at the end of the segments, as they are not straight lines but slightly curved, which does not show in the image. The amended image is included below. I apologize for wasting your time. Lines MN and OP are straight.
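For what it's worth, the solution in the second post can be checked numerically. This assumes the original straight-line figure (which the amended image invalidates); the values of a, b, and A below are arbitrary choices:

```python
import math

# pick a and b freely; collinearity forces c = 2*b - a, and A is a free choice
a, b = 3.0, 4.0
c = 2 * b - a
A = 5.0

# ILikeSerena's closed-form solution
R = math.sqrt(a**2 + A**2)
d = (b - a) / a * R
B = b / a * A
C = c / a * A

# all four equations of the system hold (up to rounding)
print(A - 2 * B + C)                 # ~0
print(A**2 + a**2 - R**2)            # ~0
print(B**2 + b**2 - (R + d)**2)      # ~0
print(C**2 + c**2 - (R + 2 * d)**2)  # ~0
```
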
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401314854621887, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/76454/list
Revision 2 (added explanation of satisfiability relation; added 120 characters in body):

EDIT: Reading various comments to the original question and to other answers, I see that something more may need to be said about the satisfaction relation, even though it is standard textbook material. To say that a first order sentence $\phi$ is true, or that it belongs to $\mathrm{Th}(\mathbb N)$, means that it is satisfied by $\mathbb N$, where satisfiability is defined inductively. For example, $\exists x: \phi(x)$ is satisfied by $\mathbb N$ if there exists $x\in \mathbb N$ such that $\phi(x)$ is satisfied by $\mathbb N$. Further details may be found here.

Now, you might complain that in order to "make sense" of the satisfiability relation, you have to "make sense" of $\mathbb N$. However, you don't have to believe in $\mathbb N$ as some kind of platonically existing thing in order to correctly manipulate sentences about $\mathbb N$. Any sufficiently powerful set-theoretic meta-theory will suffice to carry out the definition of $\mathbb N$ and the satisfaction relation. ZFC is the standard choice but you could use something else if you prefer. A way to assert the existence of $\mathbb N$ in the first-order language of set-theory is as follows: $$\exists x:(\emptyset \in x \wedge \forall y\in x: (y\cup\lbrace y\rbrace\in x))$$Here I've used various abbreviations, e.g., $\emptyset\in x$ expands formally to $\exists z : (z\in x \wedge \neg \exists w: (w\in z))$. Similar but more complicated formalizations can be produced for "set of first-order sentences of arithmetic" and "$\mathbb N$ satisfies $\phi$."

As long as you know the axioms and rules of inference for ZFC, you can verify that the existence of $\mathrm{Th}(\mathbb N)$ is provable in ZFC. (Note: This is NOT the same as saying that every true sentence of arithmetic is provable in ZFC, which is absolutely false!)
And once you have $\mathrm{Th}(\mathbb N)$, you can simply interpret "x is true" as $x\in \mathrm{Th}(\mathbb N)$. In particular, there is nothing mysterious about truth; it is just a mathematical concept formalizable in ZFC like any other mathematical concept.

Revision 1:

There's a general "trick" for handling all issues of this sort. Take any mathematical theorem that a platonist regards as meaningful. Formalize it as a formal theorem T in ZFC. The formalist will now accept the sentence, "ZFC proves T." Here, the only potentially confusing concept is that of truth. But to say that some first-order sentence of arithmetic is true just means that it is satisfied by the structure $\mathbb N$. The satisfaction relation, like all ordinary mathematics, is readily defined set-theoretically, as you can see in any textbook on logic. So the nonexistence of the algorithm in question can be expressed as a first-order sentence of set theory, and the formalist will agree that this sentence is a theorem of ZFC.

For some kinds of finitistic statements, the formalist doesn't have to do this little dance of translating "true" into formal set-theoretic terms and replacing "T" with "ZFC proves T." For example, in the sentence, "It is true that ZFC proves T," the formalist can use his "native" understanding of the word "true" and doesn't have to convert "ZFC proves T" into an arithmetic statement S and use the set-theoretic definition of truth to get a set-theoretic assertion whose ZFC-theoremhood he can agree with. But the little dance is always available as an option.
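For reference, the inductive clauses of the satisfaction relation mentioned in the answer are the standard Tarski clauses; restricted to sentences (no free variables; any logic textbook gives the full definition with variable assignments) they read:

```latex
\begin{align*}
\mathbb{N} \models s = t \quad &\Longleftrightarrow \quad s^{\mathbb{N}} = t^{\mathbb{N}}\\
\mathbb{N} \models \neg\varphi \quad &\Longleftrightarrow \quad \mathbb{N} \not\models \varphi\\
\mathbb{N} \models \varphi \wedge \psi \quad &\Longleftrightarrow \quad \mathbb{N} \models \varphi \ \text{and}\ \mathbb{N} \models \psi\\
\mathbb{N} \models \exists x\, \varphi(x) \quad &\Longleftrightarrow \quad \mathbb{N} \models \varphi(n) \ \text{for some}\ n \in \mathbb{N}
\end{align*}
```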
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9598518013954163, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1844/how-to-attack-a-general-polyalphabetic-cipher?answertab=votes
# How to attack a general polyalphabetic cipher?

I am able to decrypt Vigenère ciphertext using the index of coincidence and the chi test. However, out of interest, how do you go about attacking ciphertext that was encrypted using a mixed alphabet shifted 26 times? Also, what about a ciphertext that has been encrypted with 26 random alphabets, so that each line in the tableau is random? Googling seems to bring up only the basic Vigenère; either an example or a link to a resource would be good. Thanks

## 2 Answers

A general polyalphabetic cipher is just a combination of several general monoalphabetic ciphers, each applied to every $n$-th letter of the message. So the first thing is to find out what $n$ is (i.e. the key length). For this we can use the index of coincidence, just like for Vigenère. Then we can split the message into $n$ parts (columns), and try to break each of them as an individual monoalphabetic cipher, starting with frequency analysis. (We'll have to correlate information from the individual columns to do bigram frequency analysis, too.)

- This is what I expected; however, if you have n splits of the ciphertext which were encrypted with a general monoalphabetic cipher, carrying out standard frequency analysis on each section seems like a right pain, as even when you got the correct key it would have to be tested with the original ciphertext. Is there no simpler solution? If you had a key length of 17, say, that would be 17 random pieces of ciphertext to decrypt; how would you ever know you got the right answer, unless you got a majority of them right? – Lunar Feb 13 '12 at 18:19
- @Lunar: If you've got enough ciphertext, you can mostly guess each column of the tableau just by looking at single letter frequencies. The rest will then just be fine tuning. – Ilmari Karonen Feb 17 '12 at 17:08

> Also what about a ciphertext that has been encrypted with 26 random alphabets? so each line in the tableau is random.
If you mean that you use a repeating sequence of 26 random unique characters as the key, then you break it in much the same way as any other Vigenère cipher – n, in this case, is 26.

If you mean a random key at least as long as the message, using 26 random characters, then you won't be able to differentiate successful and unsuccessful decryptions, as the message is information-theoretically secure.
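The key-length step described in the first answer can be sketched as follows. This toy example (the plaintext, seed, and key length are arbitrary choices) encrypts with three independent random alphabets and recovers the period from the per-column index of coincidence:

```python
import random
import string
from collections import Counter

def index_of_coincidence(s):
    """Probability that two random positions of s hold the same letter."""
    n = len(s)
    counts = Counter(s)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def avg_column_ic(ciphertext, period):
    """Average IC over the columns obtained by taking every period-th letter."""
    columns = [ciphertext[i::period] for i in range(period)]
    return sum(index_of_coincidence(col) for col in columns) / period

random.seed(42)

# toy English-like plaintext (letters only), repeated to get enough text
plaintext = ("itwasthebestoftimesitwastheworstoftimesitwastheageofwisdom"
             "itwastheageoffoolishnessitwastheepochofbeliefitwastheepoch"
             "ofincredulityitwastheseasonoflight") * 4

# a period-3 polyalphabetic cipher built from three independent random alphabets
key_len = 3
alphabets = ["".join(random.sample(string.ascii_lowercase, 26))
             for _ in range(key_len)]
ciphertext = "".join(alphabets[i % key_len][ord(ch) - ord("a")]
                     for i, ch in enumerate(plaintext))

# the per-column IC peaks at the true key length (and its multiples)
scores = {p: avg_column_ic(ciphertext, p) for p in range(1, 6)}
best = max(scores, key=scores.get)
print(best)
```

With a correct period, each column is a single monoalphabetic substitution and its IC matches the plaintext's; with a wrong period the alphabets mix and the IC drops toward the uniform value.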
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473095536231995, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/79612/an-interesting-simple-sequence-surprised-to-find-little-material
## An interesting, simple sequence - surprised to find little material. [closed]

I've been considering this sequence: $$1,2,3,6,12,24,48,96,192,...$$ I've generated the sequence from the rule $$V_n=\sum_{0\leq i \lt n} V_i$$ $$V_0=1; V_1=2V_0=V_0+V_0$$ What interests me most is that this sequence - with its rule requiring the sum of a finite, but unbounded, number of components - is remarkably similar to a sequence with a local generation rule requiring doubling of the preceding value. In fact, given a reversed finite leftmost subsequence, an arbitrarily long prefix could suggest that the sequence was the reverse of one generated by a local rule (i.e. $V_n=2V_{n-1}|n\ge1$), and the discrepancy would only become apparent at the penultimate value.

It strikes me that this observation should be relevant to all empirical study, as it demonstrates how two fundamentally different underlying models can generate identical values for an infinite number of tests, and that, unless the single critical comparison (between $V_1$ and $V_2$) is made, an inappropriate model can appear to be supported.

Obviously, there are variants on this theme with different values for $V_0$ and $V_1$ - and each stabilises by $V_4$ to match a local doubling rule. For $V_0=0$ the result is a constant sequence ($V_i=V_1|i\neq0$), and even when I choose $V_1\ge V_0$ I see a similar 'anomaly'. I'm interested to discover other sequences which, when reversed, can appear to have arisen from different recurrence relations for an arbitrarily large prefix. For example, are there neat sequences that have two equivalent recurrence relations only for elements after the fifth or later value?

## 2 Answers

Of course, yes. Take some fraction $\frac{f}{g}$ ($f$ and $g$ are polynomials) and build its recurrent sequence.
Further, take $\frac{f}{g}+h$ ($h$ is a polynomial of degree 4) and do the same.

- I think I follow what you're suggesting... though you're approaching the question from a different angle. The distinction I'd draw is that where your second generator polynomial is (f/g)+h, it is syntactically obvious that only the first Order(h) terms will differ from (f/g). What I found interesting about the sequence is that it could be generated from two syntactically distinct rules - one requiring finitely many operations, the other a number linear in i for each V_i. Maybe I should have asked: what (simple) recurrence relations have generating functions of the form (f/g) and (f/g)+h? – aSteve Oct 31 2011 at 12:54

Search in OEIS. Returns 3 results. Most likely, apart from the initial term, it is A042950. G.f.: (2-x)/(1-2*x); a(n)=2*a(n-1), n>1; a(0)=2, a(1)=3.

- I'd found the OEIS page - I should have included a link to it... However, from the comments about A042950, its most interesting property (in my view - i.e. that a fold operation over prefixes generates a sequence very similar, but not identical, to one with only a local rule) does not seem to be discussed. Significantly, I can't see a way to find other (simply generated) sequences in OEIS that have a similar property. – aSteve Oct 31 2011 at 13:00
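The anomaly described in the question is easy to exhibit numerically; a minimal sketch:

```python
v = [1, 2]                     # V_0 = 1, V_1 = 2
for _ in range(8):
    v.append(sum(v))           # V_n = sum of all earlier terms
print(v)                       # [1, 2, 3, 6, 12, 24, 48, 96, 192, 384]

# where does the local doubling rule V_n = 2*V_{n-1} fail?
fails = [n for n in range(1, len(v)) if v[n] != 2 * v[n - 1]]
print(fails)                   # [2] -- only between V_1 and V_2
```
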
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402374625205994, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/39944-confusing-homework-questions-help-print.html
# Confusing homework questions HELP!!

• May 28th 2008, 10:19 PM
Math Phobe

1) Farmer John has 1200 feet of fencing and wishes to use it to fence a rectangular plot divided into two subplots as in the gure below. What are the dimensions of the plot with maximum area?

2) Find the absolute maximum and absolute minimum values of $f(x) = 3x^4 + 4x^3 -12x^2 + 7$ on the interval $[-1, 2]$.

• May 28th 2008, 10:32 PM
angel.white

Quote: Originally Posted by Math Phobe
1) Farmer John has 1200 feet of fencing and wishes to use it to fence a rectangular plot divided into two subplots as in the gure below. What are the dimensions of the plot with maximum area?
2) Find the absolute maximum and absolute minimum values of $f(x) = 3x^4 + 4x^3 -12x^2 + 7$ on the interval $[-1, 2]$.

For one, I cannot see the images. For 2, you are looking for maximums and minimums. You know that wherever these occur, they must be either higher or lower than the areas around them; thus they must look like a hill or valley. Since the top of a hill and the bottom of a valley are flat, you are looking for places where your slope is equal to zero. So find the derivative (because this gives the formula to find the slope), and set it equal to zero (because this will tell you where the slope is zero, meaning where the slope is "flat", or where it does not change, or where it is horizontal). Find the values of x which return this, then test the areas around each to determine whether it is a "hill" or "valley" (meaning maximum or minimum), then test the two endpoints (because they could be higher or lower, even though they may not be a hill or valley), and the highest of these values will be the maximum, and the lowest will be the minimum.

• May 28th 2008, 10:44 PM
Math Phobe

Oh, I know what you're saying, but I don't know how to do the problem or even how to start it; that's why I'm asking it on here. I'm totally confused about where to start!
And what image are you talking about? The image for the 1st question? If so, that's a rectangle.

• May 28th 2008, 11:05 PM
angel.white

Quote: Originally Posted by Math Phobe
Oh, I know what you're saying, but I don't know how to do the problem or even how to start it; that's why I'm asking it on here. I'm totally confused about where to start! And what image are you talking about? The image for the 1st question? If so, that's a rectangle.

For the first question I am talking about where it says "gure below" (presumably "figure below"); if it is there, I cannot see it. Perhaps others can see it, and can aid you. For the second, the derivative is $f\prime (x) = 4*3x^{4-1} + 3*4x^{3-1} -2*12x^{2-1} + 0*7x^{0-1}$ $f\prime (x) = 12x^3 + 12x^2 -24x$ Now we set it equal to zero: $f \prime (x)=0=12x^3+12x^2-24x$ and factor $0 = 12x(x^2 + x -2)$ factor again $0 = 12x(x +2)(x-1)$ Now you can see that where x = 0, x = -2, x = 1, the derivative will be zero. Can you take it from here? (Don't forget to take the domain into account.)

• May 28th 2008, 11:28 PM
Math Phobe

Do you by any chance know which tab to use to draw the image? I need a thin line to create the box for the 1st question.
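Following the recipe in the replies, problem 2 can be finished off numerically (a quick check of the candidate points):

```python
def f(x):
    return 3 * x**4 + 4 * x**3 - 12 * x**2 + 7

# critical points come from f'(x) = 12x(x + 2)(x - 1) = 0; keep those in [-1, 2]
critical = [x for x in (-2, 0, 1) if -1 <= x <= 2]
candidates = critical + [-1, 2]            # ...plus the two endpoints
values = {x: f(x) for x in candidates}
print(max(values.values()), min(values.values()))  # 39 -6  (at x = 2 and x = -1)
```

So the absolute maximum on the interval is 39 (at the endpoint x = 2) and the absolute minimum is -6 (at the endpoint x = -1); the critical point x = -2 is excluded because it lies outside the domain.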
http://physics.stackexchange.com/questions/19831/how-can-there-be-a-path-to-ground-with-thick-shoes-and-a-carpet?answertab=active
# How can there be a path to ground with thick shoes and a carpet?

I'm connecting a test light to one pin of a halogen lamp. When I touch the metallic part on the back of the test light, the light glows, as it is supposed to. However, I have thick shoes and I am standing on a carpet. How could I possibly provide a path to ground?

I have tried:

• to stand on a plastic sheet -- the light still glows with the same intensity
• to touch a radiator with the other hand -- the light is much more intense
• to interpose my shoe between the metallic part of the test light and me -- the light does not glow
• to interpose a paper sheet -- the light still glows

Thank you for any response

- I'm not sure (hence a comment rather than an answer) but I think the path to ground is through the air. Your body provides a large surface area, making it easier for charge carriers (electrons? Ions? I don't know) to pass through the air to the conductive objects in the room. – Nathaniel Jan 22 '12 at 12:02
Perhaps you could test this hypothesis by attaching a large piece of tin foil to the contact instead of your body... – Nathaniel Jan 22 '12 at 12:03
My guess is that you form part of a capacitor that would saturate after a while. Have you tried long enough to see if the light diminishes after a while? – anna v Jan 22 '12 at 12:25
@Nathaniel I tried what you suggested and it worked, so thanks for your suggestion – Fiat Lux Jan 22 '12 at 12:52
@annav I think you would be right if my source were a DC supply, but it is alternating current, so I believe twistor59 is right – Fiat Lux Jan 22 '12 at 12:53

## 1 Answer

The pin you are touching has an alternating current power supply. One pin of the test light is connected to that A/C source. The other pin is connected to yourself. There will be a path to ground since you are capacitively coupled to the earth. An A/C current will flow through this capacitor. The size of your body helps in generating sufficient capacitance for this effect.
You are a conductor (like one plate of a capacitor). The air, or shoe leather, etc. is the dielectric, and the earth is like the other plate of the capacitor.

In the other scenario, putting the shoe between your hand and the test light contact insulates you from the test light, but does not produce sufficient capacitance to conduct the A/C, since the surface area of the test light contact (equivalent to one of the capacitor's plates) is far too small to produce a significant capacitance to yourself (equivalent to the other plate, via the dielectric of the shoe).

- I think you are right, but could you explain how it is that the surface area of the test light is too small to produce a significant capacitance to myself, while there is a significant capacitance between myself and the Earth? I think I am very small compared to the Earth – Fiat Lux Jan 22 '12 at 12:58
The amount of current a capacitor will pass is proportional to the capacitance. The capacitance depends on the geometry; in particular it goes up with increasing area of the plates. It's proportional to this area for a flat plate capacitor. When you do the "hold the shoe" thing there are two capacitors in series: testlightpin-shoe-you, and you-air-ground. – twistor59 Jan 22 '12 at 14:57
If the effective area for you-air-ground is 1 $m^2$ say, and the testlightpin-shoe-you is 1 $mm^2$ say, then (ignoring the dielectric properties), the ratio of the capacitances is 1:1000000. Now the series capacitance is totally dominated by the smaller one (1/C = 1/C1 + 1/C2), which is the testlightpin-shoe-you one, by a factor of a million. (Goodness knows how accurate the estimates are, but you get the general idea!) – twistor59 Jan 22 '12 at 15:02
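twistor59's back-of-the-envelope numbers can be sketched in a few lines of Python. The areas and gaps below are the rough guesses from the comments, not measured values, and the dielectric constants are ignored just as in the comment:

```python
# Parallel-plate estimate: C = eps0 * A / d (dielectric constants ignored).
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    return EPS0 * area_m2 / gap_m

def series(c1, c2):
    # 1/C = 1/C1 + 1/C2: the smaller capacitor dominates the series value.
    return 1.0 / (1.0 / c1 + 1.0 / c2)

c_body_ground = plate_capacitance(1.0, 0.01)    # ~1 m^2 of body, ~1 cm gap
c_pin_body    = plate_capacitance(1e-6, 0.01)   # ~1 mm^2 pin through the sole

print(c_body_ground / c_pin_body)  # ratio of 10^6, as in the comment
c_total = series(c_body_ground, c_pin_body)
print(c_total / c_pin_body)        # ~1: the series C is pinned to the smaller one
```

The point of the sketch is only the ratio: whatever the absolute numbers, the tiny pin-through-shoe capacitor sets the series capacitance, so almost no A/C current flows in the "hold the shoe" configuration.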
http://mathhelpforum.com/math-topics/19446-need-help-sequencing.html
# Thread:

1. ## need help in sequencing

1. What is the next term in the sequence 15, 6, 21, 27, 48 _____? How do you get to that answer? I'm confused.

2. What type of a sequence is number 7?

Thank you, mpr

2. Originally Posted by mpr
1. What is the next term in the sequence 15, 6, 21, 27, 48 _____? How do you get to that answer? I'm confused.

we can define this sequence recursively. $a_1 = 15$, $a_2 = 6$, $a_n = a_{n - 1} + a_{n - 2}$, for $n \ge 3$, $n \in \mathbb{N}$

so the next term is 48 + 27 = 75

2. What type of a sequence is number 7?

umm, you mind telling us what sequence was in number 7?

3. Surely all realize that this is a totally meaningless question. There are many, many answers that can be argued for.
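Jhevon's recursive rule is easy to check in code; a minimal Python sketch (the function name is mine, not from the thread):

```python
def next_terms(seq, n):
    """Extend a sequence in which each term is the sum of the previous two."""
    out = list(seq)
    for _ in range(n):
        out.append(out[-1] + out[-2])
    return out

print(next_terms([15, 6], 4))  # [15, 6, 21, 27, 48, 75]
```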
http://www.maths.usyd.edu.au/u/pubs/publist/preprints/2010/molev-27.html
## Combinatorial bases for covariant representations of the Lie superalgebra $\mathfrak{gl}_{m|n}$ ### A. I. Molev #### Abstract Covariant tensor representations of $\mathfrak{gl}_{m|n}$ occur as irreducible components of tensor powers of the natural $(m+n)$-dimensional representation. We construct a basis of each covariant representation and give explicit formulas for the action of the generators of $\mathfrak{gl}_{m|n}$ in this basis. The basis has the property that the natural Lie subalgebras $\mathfrak{gl}_m$ and $\mathfrak{gl}_n$ act by the classical Gelfand-Tsetlin formulas. The main role in the construction is played by the fact that the subspace of $\mathfrak{gl}_m$-highest vectors in any finite-dimensional irreducible representation of $\mathfrak{gl}_{m|n}$ carries a structure of an irreducible module over the Yangian $Y(\mathfrak{gl}_n)$. One consequence is a new proof of the character formula for the covariant representations first found by Berele and Regev and by Sergeev. This paper is available as a pdf (300kB) file. Friday, October 8, 2010
http://leepike.wordpress.com/2010/01/24/10-9/?like=1&source=post_flair&_wpnonce=27cac980df
# 10 to the -9

Posted: January 24, 2010 | Author: Lee Pike | Filed under: Fault Tolerance, Hardware

$10^{-9}$, or one-in-a-billion, is the famed number given for the maximum probability of a catastrophic failure, per hour of operation, in life-critical systems like commercial aircraft.  The number is part of the folklore of the safety-critical systems literature; where does it come from?

First, it's worth noting just how small that number is.  As pointed out by Driscoll et al. in the paper, Byzantine Fault Tolerance, from Theory to Reality, the probability of winning the U.K. lottery is 1 in 10s of millions, and the probability of being struck by lightning (in the U.S.) is $1.6 \times 10^{-6},$ more than 1,000 times more likely than $10^{-9}.$

So where did $10^{-9}$ come from?  A nice explanation comes from a recent paper by John Rushby:

If we consider the example of an airplane type with 100 members, each flying $3000$ hours per year over an operational life of 33 years, then we have a total exposure of about $10^7$ flight hours. If hazard analysis reveals ten potentially catastrophic failures in each of ten subsystems, then the "budget" for each, if none are expected to occur in the life of the fleet, is a failure probability of about $10^{-9}$ per hour [1, page 37]. This serves to explain the well-known $10^{-9}$ requirement, which is stated as follows: "when using quantitative analyses. . . numerical probabilities. . . on the order of $10^{-9}$ per flight-hour. . . based on a flight of mean duration for the airplane type may be used. . . as aids to engineering judgment. . . to. . . help determine compliance" (with the requirement for extremely improbable failure conditions) [2, paragraph 10.b].

[1] E. Lloyd and W. Tye, Systematic Safety: Safety Assessment of Aircraft Systems. London, England: Civil Aviation Authority, 1982, reprinted 1992.

[2] System Design and Analysis, Federal Aviation Administration, Jun. 21, 1988, advisory Circular 25.1309-1A.
(By the way, it's worth reading the rest of the paper—it's the first attempt I know of to formally connect the notions of (software) formal verification and reliability.)

So there is a probabilistic argument being made, but let's spell it out in a little more detail.  If there are 10 potential failures in 10 subsystems, then there are $10 \times 10 = 100$ potential failures.  Thus, there are $2^{100}$ possible configurations of failure/non-failure in the subsystems.  Only one of these configurations is acceptable—the one in which there are no faults.

If the probability of failure is $x,$ then the probability of non-failure is $1 - x.$  So if the probability of failure for each subsystem is $10^{-9},$ then the probability of being in the one non-failure configuration is

$\displaystyle(1 - 10^{-9})^{100}$

We want that probability of non-failure to be greater than the required probability of non-failure, given the total number of flight hours.  Thus,

$\displaystyle (1 - 10^{-9})^{100} > 1 - 10^{-7}$

which indeed holds:

$\displaystyle (1 - 10^{-9})^{100} - (1 - 10^{-7})$

is around $4.95 \times 10^{-15}.$

Can we generalize the inequality?  The hint for how to do so is that the number of subsystems ($100$) is no more than the overall failure rate divided by the subsystem rate:

$\displaystyle \frac{10^{-7}}{10^{-9}}$

This suggests the general form is something like

Subsystem reliability inequality: $\displaystyle (1 - C^{-n})^{C^{n-m}} \geq 1 - C^{-m}$

where $C,$ $n,$ and $m$ are real numbers, $C \geq 1,$ $n \geq 0,$ and $n \geq m.$

Let's prove the inequality holds.  Joe Hurd figured out the proof, sketched below (but I take responsibility for any mistakes in its presentation).  For convenience, we'll prove the inequality holds specifically when $C = e,$ but the proof can be generalized.

First, if $n = 0,$ the inequality holds immediately.
Next, we'll show that

$\displaystyle (1 - e^{-n})^{e^{n-m}}$

is monotonically non-decreasing with respect to $n$ by showing that the derivative of its logarithm is greater or equal to zero for all $n > 0.$  So the derivative of its logarithm is

$\displaystyle \frac{d}{dn} \; e^{n-m}\ln(1-e^{-n}) = e^{n-m}\ln(1-e^{-n})+\frac{e^{-m}}{1-e^{-n}}$

We show

$\displaystyle e^{n-m}\ln(1-e^{-n})+\frac{e^{-m}}{1-e^{-n}} \geq 0$

iff

$\displaystyle e^{-m}\left(e^{n}\ln(1-e^{-n}) + \frac{1}{1-e^{-n}}\right) \geq 0$

and since $e^{-m} \geq 0,$

$\displaystyle e^{n}\ln(1-e^{-n}) + \frac{1}{1-e^{-n}} \geq 0$

iff

$\displaystyle e^{n}\ln(1-e^{-n}) \geq - \frac{1}{1-e^{-n}}$

Let $x = e^{-n}$, so the range of $x$ is $0 < x < 1.$

$\displaystyle\ln(1-x) \geq - \frac{x}{1-x}$

Now we show that in the range of $x$, the left-hand side is bounded below by the right-hand side of the inequality.

$\displaystyle \lim_{x \to 0} \; \ln(1-x) = 0$

and

$\displaystyle - \frac{x}{1-x} = 0$

Now taking their derivatives

$\displaystyle \frac{d}{dx} \; \ln(1-x) = \frac{1}{x-1}$

and

$\displaystyle \frac{d}{dx} \; - \frac{x}{1-x} = - \frac{1}{(x-1)^2}$

Because $\frac{1}{x-1} \geq - \frac{1}{(x-1)^2}$ in the range of $x$ (multiplying both sides by $(x-1)^2 > 0$ reduces this to $x - 1 \geq -1$, i.e. $x \geq 0$), the left-hand side grows at least as fast as the right-hand side, and our proof holds.

The purpose of this post was to clarify the folklore of ultra-reliable systems.  The subsystem reliability inequality presented allows for easy generalization to other reliable systems.

Thanks again for the help, Joe!

### 13 Comments on "10 to the -9"

1. Colin Percival says: Much simpler way of doing the math: The probability that something has failed is less than or equal to the expected number of things which have failed. If you have 100 events which occur, each with probability 10^-9, the average number of them which are occurring at any point in time is 100 * 10^-9 = 10^-7; so you immediately have that the probability that one or more is occurring is less than or equal to 10^-7. No logarithms required.
(Also, using this argument you don't need to make the assumption that failures are independent of each other, which you implicitly do.)

• Lee Pike says: Thanks for the note. You note that

If you have 100 events which occur, each with probability 10^-9, the average number of them which are occurring at any point in time is 100 * 10^-9 = 10^-7; so you immediately have that the probability that one or more is occurring is less than or equal to 10^-7.

I believe you are computing the expected value here—for example, if I flip a fair coin three times, I expect to see heads 3 * 0.5 = 1.5 times. The probability of one or more heads is computed by $1 - 0.5^3$.

• Ganesh Sittampalam says: The expected number of events is sum(n * P(there are exactly n events)), i.e. since the number of events is discrete, it is 0 * P(exactly 0 events) + 1 * P(exactly 1 event) + 2 * P(exactly 2 events) + … This is greater than 1 * P(exactly 1 event) + 1 * P(exactly 2 events) + …, which is equal to P(>=1 event).

2. Ganesh Sittampalam says: Did you mean n=m for the base case, not n=0? The LHS goes to 0 if n=0, and anyway we're not interested in the case where each component is *more* unreliable than we want the system to be.

4. Neil Brown says: It's not clear to me that (1 – e^{-n})^{e^{n-m}} >= 1 – e^{-m} holds immediately if n = 0. Substituting in, e^{-n} becomes e^0 becomes 1, and the left-hand side of the inequality collapses to 0. This leaves 0 >= 1 – e^{-m}, which rearranges to e^{-m} >= 1, then -m >= 0, and finally m <= 0. Judging by the previous part of your post, m is typically positive, which suggests a problem. What have I missed?

• Lee Pike says: Neil, Ganesh—typo! Thanks for the catch. I changed the post so that $n > 0$.

6. Oscar Boykin says: You only proved that f(n) = (1 – e^{-n})^{e^{n-m}} is non-decreasing. As a consequence, f(n) >= f(m) = 1 – e^{-m} if n >= m. Otherwise, if n <= m, the proof breaks and you have the reverse: f(n) <= f(m). Why not just prove (1-x)^r >= 1 – rx for 0 <= x <= 1 and r >= 1? Which can be proven (similarly) by noting that f(x) = (1-x)^r – 1 + rx, then f'(x) >= 0 for all 0 <= x <= 1, and f(0) = 0 -> f(x) >= 0 (by mean value theorem). With the above, x = C^{-n}, r = C^{-m+n}, and you recover your result (only if C^n >= C^m, which is to say r >= 1).

• Lee Pike says: Thanks—another typo. I think I've fixed the assumptions now.

7. Alain Cremieux says: "The probability of winning the U.K. lottery is 1 in 10s of millions, and the probability of being struck by lightning (in the U.S.) is 1.6 \times 10^{-6}, about a 1,000 times more likely." 1 in 10s of millions is 10^(-7), so the probability of being struck by lightning should be 6 times more likely.

• Lee Pike says: Sorry for the ambiguity—I meant that $10^{-6}$ is 1,000 times more than $10^{-9}.$ I've updated the post to make it clear.

8. Oscar Boykin says: My original comment got munched up somehow. My point was that this whole thing is identical to: $(1-x)^r \geq 1 - rx$ for $0 \leq x \leq 1$, which you can prove slightly more directly (no logs) by the same method: show that $f(x) = (1-x)^r - 1 + rx$ is non-decreasing for $0 \leq x \leq 1$ and note $f(0) = 0.$
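The inequalities discussed in the post and comments can be checked exactly with Python's `fractions` module, which avoids floating-point noise at the $10^{-15}$ scale. This is a quick sketch of mine, not from the original post:

```python
from fractions import Fraction

# (1 - 10^-9)^100 > 1 - 10^-7, computed in exact rational arithmetic.
lhs = (1 - Fraction(1, 10**9)) ** 100
rhs = 1 - Fraction(1, 10**7)
assert lhs > rhs
print(float(lhs - rhs))  # ~4.95e-15, matching the post

# Spot-check the general subsystem reliability inequality
# (1 - C^-n)^(C^(n-m)) >= 1 - C^-m for a few integer C, n >= m >= 0.
for C in (2, 10):
    for n in range(1, 5):
        for m in range(0, n + 1):
            left = (1 - Fraction(1, C**n)) ** (C ** (n - m))
            right = 1 - Fraction(1, C**m)
            assert left >= right
```

The exact computation confirms both the headline comparison and (for these integer parameters) the general inequality; the analytic proof in the post covers the real-valued case.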
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Linear_map
# Linear transformation

In mathematics, a linear transformation (also called linear operator or linear map) is a function between two vector spaces that preserves the operations of vector addition and scalar multiplication. In other words, it "preserves linear combinations". In the language of abstract algebra, a linear transformation is a homomorphism of vector spaces.

## Definition and first consequences

Formally, if V and W are vector spaces over the same ground field K, we say that f : V → W is a linear transformation if for any two vectors x and y in V and any scalar a in K, we have

$f(x+y)=f(x)+f(y) \,$ (additivity)

$f(ax)=af(x) \,$               (homogeneity).

This is equivalent to saying that f   "preserves linear combinations", i.e., for any vectors x1, ..., xm and scalars a1, ..., am, we have

$f(a_1 x_1+\cdots+a_m x_m)=a_1 f(x_1)+\cdots+a_m f(x_m).$

Occasionally, V and W can be considered as vector spaces over different ground fields, and it is then important to specify which field was used for the definition of "linear". If V and W are considered as spaces over the field K as above, we talk about K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear.

## Examples

• If A is an m × n matrix, then A defines a linear transformation from Rn to Rm by sending the column vector x ∈ Rn to the column vector Ax ∈ Rm. Every linear transformation between finite-dimensional vector spaces arises in this fashion; see the following section.
• The integral yields a linear map from the space of all real-valued integrable functions on some interval to R.
• Differentiation is a linear transformation from the space of all differentiable functions to the space of all functions.
• If V and W are finite-dimensional vector spaces over the field F, then functions that map linear transformations f : V → W to dimF(W)-by-dimF(V) matrices in the way described in the sequel are themselves linear transformations.

## Matrices

If V and W are finite-dimensional and bases have been chosen, then every linear transformation from V to W can be represented as a matrix; this is useful because it allows concrete calculations. Conversely, matrices yield examples of linear transformations: if A is a real m-by-n matrix, then the rule f(x) = Ax describes a linear transformation Rn → Rm (see Euclidean space).

Let $\{v_1, \cdots, v_n\}$ be a basis for V. Then every vector v in V is uniquely determined by the coefficients $c_1, \cdots, c_n$ in

$c_1 v_1+\cdots+c_n v_n.$

If f : V → W is a linear transformation,

$f(c_1 v_1+\cdots+c_n v_n)=c_1 f(v_1)+\cdots+c_n f(v_n),$

which implies that the function f is entirely determined by the values of $f(v_1),\cdots,f(v_n).$

Now let $\{w_1, \cdots, w_m\}$ be a basis for W. Then we can represent the values of each f(vj) as

$f(v_j)=a_{1j} w_1 + \cdots + a_{mj} w_m.$

So the function f is entirely determined by the values of ai,j. If we put these values into an m-by-n matrix M, then we can conveniently use it to compute the value of f for any vector in V: if we place the values of $c_1, \cdots, c_n$ in an n-by-1 matrix C, then the product MC is the column of coordinates of f(v) with respect to the basis $\{w_1, \cdots, w_m\}.$

Note that there can be multiple matrices that represent a single linear transformation. This is because the values of the elements of the matrix depend on the bases that are chosen. Similarly, if we are given a matrix, we also need to know the bases that it uses in order to determine what linear transformation it represents.
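As a concrete instance of this construction, differentiation on polynomials of degree at most 2, with basis {1, x, x²} on both sides, is represented by a 3×3 matrix whose columns record the images of the basis vectors. A minimal pure-Python sketch (the helper names are mine):

```python
# d/dx maps 1 -> 0, x -> 1, x^2 -> 2x; writing each image in the basis
# {1, x, x^2} gives the columns of the matrix M.
M = [[0, 1, 0],
     [0, 0, 2],
     [0, 0, 0]]

def apply(matrix, vec):
    """Multiply a matrix (list of rows) by a coefficient column vector."""
    return [sum(a * b for a, b in zip(row, vec)) for row in matrix]

c = [5, 3, 2]          # coefficients of p(x) = 5 + 3x + 2x^2
print(apply(M, c))     # [3, 4, 0], i.e. p'(x) = 3 + 4x
```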
## Forming new linear transformations from given ones The composition of linear transformations is linear: if f : V → W and g : W → Z are linear, then so is g o f : V → Z. If f1 : V → W and f2 : V → W are linear, then so is their sum f1 + f2 (which is defined by (f1 + f2)(x) = f1(x) + f2(x)). If f : V → W is linear and a is an element of the ground field K, then the map af, defined by (af)(x) = a (f(x)), is also linear. In the finite dimensional case and if bases have been chosen, then the composition of linear maps corresponds to the multiplication of matrices, the addition of linear maps corresponds to the addition of matrices, and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars. ## Endomorphisms and automorphisms A linear transformation f : V → V is an endomorphism of V; the set of all such endomorphisms End(V) together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field K (and in particular a ring). The identity element of this algebra is the identity map id : V → V. A bijective endomorphism of V is called an automorphism of V. The composition of two automorphisms is again an automorphism, and the set of all automorphisms of V forms a group, the automorphism group of V which is denoted by Aut(V) or GL(V). If V has finite dimension n, then End(V) is isomorphic to the associative algebra of all n by n matrices with entries in K. The automorphism group of V is isomorphic to the general linear group GL(n, K) of all n by n invertible matrices with entries in K. ## Kernel and image If f : V → W is linear, we define the kernel and the image of f by $\ker(f)=\{\,x\in V:f(x)=0\,\}$ $\operatorname{im}(f)=\{\,f(x):x\in V\,\}$ ker(f) is a subspace of V and im(f) is a subspace of W. 
The following dimension formula is often useful (but note that it only applies if V is finite dimensional):

$\dim(\ker( f )) + \dim(\operatorname{im}( f )) = \dim( V ) \,$

The number dim(im(f)) is also called the rank of f and written as rk(f). If V and W are finite dimensional, bases have been chosen and f is represented by the matrix A, then the rank of f is equal to the rank of the matrix A. The dimension of the kernel is also known as the nullity of the matrix.

f(x)=a is called a homogeneous equation if a=0, otherwise inhomogeneous. If c is one solution (the so-called "particular solution") then the set of solutions is the set of x that can be written as c plus any solution of the corresponding homogeneous equation (i.e. any element of the kernel). This can be applied e.g. to systems of linear equations, linear recurrence relations, and (systems of) linear differential equations. See e.g. damped, driven harmonic oscillator.
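The dimension formula can be checked on a small example. The following Python sketch (a naive rational row reduction of mine, fine for tiny matrices) computes the rank of a matrix and the nullity it implies:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a small rational matrix and count the pivot rows."""
    A = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(A[0])):
        pivot = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                factor = A[i][col] / A[r][col]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# f : R^3 -> R^2 given by this matrix has rank 2, so dim ker = 3 - 2 = 1.
A = [[1, 2, 3],
     [4, 5, 6]]
n = len(A[0])
rk = rank(A)
print(rk, n - rk)  # rank 2, nullity 1: they sum to dim V = 3
```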
http://mathoverflow.net/questions/120681/elementary-question-distinct-elements-in-a-set/120690
Elementary question: distinct elements in a set [closed]

I'd like to know the syntax for describing a number of elements in a set, and that each of them is distinct, e.g. $x, y, z \in A$.

I would like to know how I can succinctly express the following, without having to write it out as such: $x \neq y \;\;\;\; x \neq z \;\;\;\; y \neq z$

- "Brevis esse laboro, obscurus fio." (Horace) – François G. Dorais♦ Feb 3 at 18:33
For three elements, $x\neq y\neq z\neq x$ expresses the inequalities in 7 symbols. For four elements, this linear-string approach takes 15. For five, it can be done in 21, and in general, for any odd number $n>1$, it can be done in $n(n-1)+1$ symbols (although I suppose you might run into trouble at $n=27$). Curiously, the OEIS does not (yet) have an entry extending $3,7,15,21$ with both $43$ and $73$ in the proper place. – Barry Cipra Feb 5 at 17:53

4 Answers

"Let $x,y,z$ be pairwise distinct" is perfectly fine. -

"Let $x, y, z$ be distinct" is enough. -

"Let $\lbrace x,y,z \rbrace$ be a set with exactly three elements." -

$\lbrace x,y,z \rbrace \cong 3$ or $|\lbrace x,y,z \rbrace| = 3$ -
http://mathoverflow.net/questions/8244/what-is-the-name-for-the-following-categorical-property/8250
## What is the name for the following categorical property?

Is there a name for those categories where objects possess a given structure and every bijective morphism determines an isomorphism between the corresponding objects?

Examples of categories of that type abound: Gr, Set, ...

A specific example of a category where the constraint doesn't hold is given by Top: a morphism there is a continuous function between topological spaces. Now, it is easy to give here a concrete example of a bijective morphism between [0,1) and $\mathbb{S}^{1}$ that fails to be an isomorphism of topological spaces. In fact, much more is known in this case, right?

- There's a subtlety in this question. A category doesn't have a notion of bijection unless it's concrete, i.e. has a distinguished forgetful functor, and the same category may have different concretizations. – Qiaochu Yuan Dec 8 2009 at 20:46
*forgetful functor to Set. – Qiaochu Yuan Dec 8 2009 at 20:48
... notion of bijection unless it's concrete. - Really? – J. H. S. Dec 8 2009 at 20:51
Yes, really. This is not an issue about sets versus classes, either. The point is that you can't define the property of a map being a bijection in terms purely of objects and morphisms. In some cases, "left and right cancellable" is a good generalization of "bijection". – David Speyer Dec 8 2009 at 20:56
For example, one definition of an abelian category is an additive category, with kernels and cokernels, such that any map which is left and right cancellable is invertible. – David Speyer Dec 8 2009 at 20:58

## 4 Answers

The comments on the question point out that it's not really well-posed: the property "bijective" isn't defined for morphisms of an arbitrary category. However, for maps between sets, "bijective" means "injective and surjective".
A common way to interpret "injective" in an arbitrary category is "monic", and a common way to interpret "surjective" in an arbitrary category is "epic". So we might interpret "bijective" as "monic and epic". Then JHS's question becomes: is there a name for categories in which every morphism that is both monic and epic is an isomorphism? And the answer is yes: balanced. It's not a particularly inspired choice of name, nor does it seem to be a particularly important concept. But the terminology is quite old and well-established, in its own small way.

Incidentally, you don't have to interpret "injective" and "surjective" in the ways suggested. You could, for instance, interpret "surjective" as "regular epic", and indeed that's often a sensible thing to do. But then the question becomes trivial, since any morphism that's both monic and regular epic is automatically an isomorphism. -

This isn't quite the question you asked, but does address the notion of ''bijective'' morphisms in categories, so I hope you'll forgive this digression. The examples you've mentioned - Set, Gp, Top - are all concrete, meaning they are equipped with a forgetful functor U to Set. We say a morphism f in a concrete category C is injective if its image Uf is injective, i.e., monic in the category Set. Dually, f is surjective if Uf is surjective. One usually thinks of concrete categories as "sets with structure", so these definitions coincide with the common use of such terminology: e.g., we call a map of spaces surjective when the underlying map of sets is. So we have four adjectives to use for arrows in C: monic, epic, injective, surjective. It's an easy exercise to see that all injections are monic and all surjections are epic.
The converse is not true in general, but finding examples of monos that aren't injective and epis that aren't surjective can be tricky, and here's why. Often, particularly in "algebraic" examples, the functor U : C → Set has a left adjoint F. When this is the case, it is an easy exercise to see that every mono must be injective. Dually, if U has a right adjoint, then every epi is surjective. So for example, the forgetful functor U : Top → Set has both adjoints, and hence for spaces the notions injective/surjective and monic/epic coincide, at which point Tom's post answers your question. Here are some examples of concrete categories where these concepts differ, all of which can be found in Francis Borceux's Handbook of Categorical Algebra (I think). In the category of divisible abelian groups, the quotient map $\mathbb{Q} \rightarrow \mathbb{Q}/\mathbb{Z}$ is monic, though it's clearly not injective. In the category of monoids, the inclusion $\mathbb{N} \rightarrow \mathbb{Z}$ is epic, though not surjective. In the category of Hausdorff spaces, the epis are continuous functions with dense image, so also need not be surjective. - "every bijective morphism determines an isomorphism" I think you mean that the forgetful functor reflects invertibility. Let $\bf A$ be the "category of objects with structure" and their structure-preserving maps (homomorphisms), $\bf S$ the category of carriers (maybe sets and functions) and $U:{\bf A}\to{\bf S}$ the "forgetful" functor between them. In fact, just let $U:{\bf A}\to{\bf S}$ be any functor you like. Now let $f:X\to Y$ be any morphism of $\bf A$. You are saying that, whenever $U f:U X\to U Y$ is an isomorphism (such as a bijection) then $f$ was already an isomorphism. The forgetful functor from any category of algebras has this property, as more generally does the right adjoint of any monadic adjunction.
However, the underlying set functor from the usual category of topological spaces does not, because there are many different topologies that can be put on a set. - As other people commented, the language of categories is richer than the language of sets with structures (Bourbaki structures). There are many categories, where objects don't have an underlying set. However, one can restate the property you formulate as follows: the category C admits a faithful conservative functor to Sets. Then we can interpret the fiber of this functor over a given set S as the set of structures on S and call the functor a forgetful functor. By faithfulness the homs in C will be subsets of those in Sets, and we can say that the homs in C preserve the structure. -
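The monoid example mentioned in the answers above (the inclusion $\mathbb{N} \rightarrow \mathbb{Z}$ is epic but not surjective) rests on a small fact that can be checked concretely: in any monoid, two-sided inverses are unique, so a monoid homomorphism out of $\mathbb{Z}$ is already determined by its values on $\mathbb{N}$. A Python sketch of that uniqueness step, using $S_3$ as an illustrative finite target monoid (the representation by tuples is a choice made here, not anything from the thread):

```python
from itertools import permutations

# Elements of the symmetric group S3, viewed as a finite monoid:
# a permutation p sends i to p[i], and composition is the operation.
S3 = list(permutations(range(3)))
identity = (0, 1, 2)

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

# Why N -> Z is epic in monoids: a hom f: Z -> M is forced on the
# negatives, because f(-1) must be a two-sided inverse of f(1), and
# two-sided inverses in a monoid are unique.  Check uniqueness in S3:
for g in S3:
    inverses = [h for h in S3
                if compose(g, h) == identity and compose(h, g) == identity]
    assert len(inverses) == 1, (g, inverses)

print("every element of S3 has exactly one two-sided inverse")
```

The general argument is one line: if $h$ and $h'$ are both two-sided inverses of $g$, then $h = h(gh') = (hg)h' = h'$.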
http://cotpi.com/
## Puzzle of the week Sunday, March 31, 2013 ## Boolean financial advisors Alex and Bob work as financial advisors for the same company. They draw equal salaries from the company. They behave well at the office. Both work on similar assignments. Each assignment requires a yes-no decision. The company uses the decisions made by them to make profits. After the recession hit the company very badly, one of them has to be fired. Both Alex and Bob have worked on almost the same number of assignments in the last ten years. Alex has been consistently taking about 80% decisions correctly every year. Bob, on the other hand, has been taking only about 5% correct decisions every year. Assuming that the performances of Alex and Bob would remain the same in future, who should the company fire to maximize its profits in the years to come? Why? [SOLVED] ## Previous puzzle Sunday, March 3, 2013 ## Composite factorial plus one How many positive integers $$n$$ are there such that $$n! + 1$$ is composite? [SOLVED] ## Random puzzle from the past Sunday, April 1, 2012 ## Coins heads up You are blindfolded and taken into a room with two tables. There are coins scattered on one table. You are told the number of coins which are heads up on this table. The second table is empty. You are allowed to move coins from one table to another or flip them. Before you leave the room, there must be an equal number of coins heads up on each table. How can you do it? [SOLVED]
http://math.stackexchange.com/questions/298647/what-is-binary-operation-is-division-a-binary-operation/298667
# What is Binary Operation — is division a binary operation? I was reading the definition of a binary operation here. The thing I don't understand is how division is a binary operation. If you consider division with pairs in $\mathbb{N}_{>0} \times \mathbb{N}_{>0}$, you do not necessarily get an element in $\mathbb{N}_{>0}$. E.g. $(2,3) \in \mathbb{N}_{>0} \times \mathbb{N}_{>0}$ but $2/3 \not\in \mathbb{N}_{>0}$. So how is division a binary operation? - Of course you are right. That page is a bit sloppy in speaking about addition, subtraction, multiplication and division on an arbitrary nonempty set. – Andreas Caranti Feb 9 at 10:45 Division is a binary operation on, say, the positive rationals, or the positive reals. But as you note it's not a binary operation on the positive integers. – Gerry Myerson Feb 9 at 11:26 ## 1 Answer As the definition demonstrates, you can only talk about a binary operation on a given set $A$. To say any given operation is a binary operation, you need to specify what the set $A$ is. For your example, division is a binary operation on $\mathbb{Q}\setminus\{0\}$ for example (it is also a binary operation on $\mathbb{R}\setminus\{0\}$), but it is not a binary operation on $\mathbb{N}_{> 0}$, as you point out. As Andreas Caranti mentions in his comment, the following sentence (found on the linked page) is a bit sloppy. "Examples of binary operation on $A$ from $A\times A$ to $A$ include addition $(+)$, subtraction $(-)$, multiplication $(\times)$ and division $(\div)$." They probably should have said something along the lines of: Addition $(+)$, subtraction $(-)$, multiplication $(\times)$ and division $(\div)$ are examples of binary operations (for the appropriate choice of set $A$ in each case).
A binary operation on a non-empty set $A$ is a function $f : A\times A \to A$, so technically the set $A$ is specified implicitly by $f$; however, the words addition, subtraction, multiplication, and division do not implicitly specify a particular set. -
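As the answer says, whether division counts as a binary operation depends entirely on the chosen set $A$, and closure is what can fail. For finite samples this is mechanically checkable; a small Python sketch (the helper `closed_under` and the sample sets are illustrative choices, not anything from the thread):

```python
from fractions import Fraction

def closed_under(op, elements):
    """Check whether op(a, b) stays inside `elements` for every pair.

    A finite sample can refute closure, though it cannot prove it
    for an infinite set such as the nonzero rationals.
    """
    s = set(elements)
    return all(op(a, b) in s for a in s for b in s)

# Division on the positive integers {1, ..., 6}: 2/3 escapes the set,
# so division is not a binary operation here.
print(closed_under(lambda a, b: Fraction(a, b), range(1, 7)))  # False

# Multiplication mod 7 on {0, ..., 6} *is* a binary operation:
print(closed_under(lambda a, b: (a * b) % 7, range(7)))        # True

# Even addition fails on a finite chunk of N (5 + 5 = 10 escapes),
# which is why the set A matters as much as the operation.
print(closed_under(lambda a, b: a + b, range(6)))              # False
```

Note that `Fraction(2, 1)` compares (and hashes) equal to the integer `2`, so integer-valued quotients like `4/2` are still found inside the set; only genuinely fractional results break closure.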
http://stats.stackexchange.com/questions/tagged/statistical-significance?page=3&sort=newest&pagesize=15
# Tagged Questions Statistical significance refers to the probability that, if the true effect in the population from which the sample was drawn were 0 (or some hypothesized value), a test statistic as extreme as or more extreme than the one obtained in the sample could have occurred. 0answers 81 views ### How to rank the elements for use in Spearman rank correlation? If I have 50 items and I want to compare the top 30, I have several cases to do the ranking, so which one is correct? For simplicity I will replace in the example below 50 items with 5 (A, B, C, D, E) ... 0answers 41 views ### which test to use if you only have summary statistics? (non-normal) Here's the basic problem: visitors to our website were randomly put into one of 7 different groups, and we want to compare the activity of users in the different groups on a downstream site. The ... 1answer 47 views ### How to prove that test 1 produces more results than test 2? If I have 2 tests applied on a set of data, both tests either produce a result (assume = 1) or no result (NA, assume = 0). Is there a way to prove that test 1 produces more results than test 2 with ... 2answers 98 views ### Samples that will result in a large p-value Consider a one-sample t-test regarding the population mean time to complete a task with a one-sided alternative hypothesis of Ha: μ < 10 minutes. A random sample of times to complete the ... 2answers 145 views ### Interpreting p-values I am a little confused about what p-values mean under Fisher's significance testing & Neyman-Pearson hypothesis testing. Fisher uses p-values as a continuous measure of evidence against a null ... 1answer 102 views ### Population mean and hypothesis testing The national average price of a gallon of regular gasoline is 3.71 dollars. Linda would like to assess if the average gas price in her city is significantly higher than the national average. ...
1answer 33 views ### Learning about a population mean Assuming $H_0$ is true, what is the distribution of the test statistic? t(29) Assuming $H_0$ is true, what is the expected value of the test statistic? 87.70 The sample mean of 87.7 was _ standard ... 1answer 48 views ### Significance testing question If 2 distributions of data have the same mean value but the standard deviation of sample 1 is twice the value of sample 2, explain what the difference in standard deviation value would indicate about the ... 0answers 72 views ### How to compare two survival percentages for two different populations? I calculated the 5-year survival of two populations keeping a different variable constant each time and got percentages. I wanted to determine if these percentages were significantly different from each ... 2answers 93 views ### How do I find if there is a significant difference in the number of males and females in a single population? Also, how do I find if there is a significant difference in the number of males in two populations? In SPSS please! 1answer 135 views ### Statistical significance in an underpowered study, false positive? So, I'm actually a biologist trying to wrap my head around the idea of power of analysis to help design an experiment with the proper sample size. I understand that power of analysis is used to help ... 0answers 76 views ### What test of independence of sample proportions is appropriate when n is very large and p is very small? I have the (not necessarily veridical) impression that typical textbook independence tests of proportions are intended for use when the proportions involved are not super small. In my case n may be 10^8 ... 1answer 89 views ### Choosing a better data-set I have two data-sets for the same samples. But they are produced using two different instruments. I want to choose one data-set for further analysis. How can I find/prove which data-set is better? To ...
0answers 37 views ### What is a meaningful significance level when considering a pre-test and post-test model to assess learning in a college course? When using a pre-test/post-test assessment to measure college learning, what is a meaningful significance level? I realize that p<.05 is the minimum criterion, yet with a correlated-groups t-test, ... 0answers 70 views ### Can I trust the result of a one-way ANOVA with many treatments each with very few replicates I am doing a one-way ANOVA of a response variable (Y) on a treatment factor (T) of 7 levels. However, for each treatment, I have only 2 observations (or replicates). The ANOVA result shows that the ... 1answer 58 views ### Spearman rank recommended n size Is there a recommended n to test for Spearman's rank correlation? I have 2 lists of around 10,000 items. I read that Spearman's rank correlation is fine between 10-30; can I use Spearman to compare ... 0answers 99 views ### Significance testing for different groups in multiple regression with dummy-coded interaction variables Consider a multiple regression predicting outcome Y using a continuous predictor P and an interaction between ... 0answers 46 views ### Statistical significance of results in conversion data on group of Adwords keywords I am trying to optimize my Google Adwords campaign (online advertising on Google search), and running into the following 'statistical significance' question. I ran a campaign with many keywords ... 0answers 187 views ### Determining significant difference between two sets of means Apologies if this question may have been asked in a different format somewhere else on the site but I'm not much of a statistician and the terminology confuses me somewhat. I'm currently writing a ... 1answer 66 views ### Statistical significance of ordered binary vectors I have a program to predict some values for people.
For validation, I keep track of whether the prediction is correct or not, which gives me a binary vector with a length of about 600. To test if it ... 3answers 160 views ### Is it possible to have a variable significant in multiple regression but not significant in stepwise regression? I have run a stepwise regression and found that some of the selected variables are not significant, yet in a multiple regression with all variables included in the model those variables were ... 1answer 74 views ### What type of statistics to use to show correlation between two discrete sets of data? I have two variables, each with the same set of 5 unranked possible values (let's call them A/B/C/D/E), and a set of data such as the following (A/A, A/D, B/B, D/D, E/E, B/A etc), where the first ... 1answer 66 views ### Can I convert a statistical significance into a z-score using a normal distribution? Suppose I want to turn a statistical significance of 0.05 into a z-score of ~1.64. Can I use a normal distribution to convert these? 1answer 119 views ### Standard Error used in Hypothesis Testing and Confidence Interval construction In the excellent Practical Statistics for Medical Research Douglas Altman writes on page 235: "Because the standard error used for calculating the confidence interval differs from that used in ... 1answer 76 views ### What is the explanation for a regressor losing statistical significance when a high leverage point is dropped? I'm currently working on an Econometrics project and I've come to a point where I've dropped a high leverage point as identified by Cook's distance and a leverage plot (had observations that were ... 2answers 133 views ### Statistical Difference from Zero I have a set of data that represents periodic readings. The data shows an upward trend but I need to test for a statistical difference from zero. I believe I should use the t-test, two-tailed, but what ... 0answers 20 views ### Enrichment score calculation I have the following ...
0answers 51 views ### Test for whether a change is different in direction for subgroups? I have a (large) population which I randomly split into groups, and subjected one group to an experimental condition (call it change A) which demonstrated a significant improvement in a target ... 1answer 80 views ### study on multiple outcomes - do I adjust p-values or not? I did a study comparing 2 groups on multiple outcomes/characteristics. I am still learning the ropes when it comes to statistics, so I failed to specify to adjust the p-values. Some of the results ... 1answer 179 views ### Simple multiple choice for test statistic and significance level A study aims to determine if a new tutoring program (method 1) is better than a standard preparatory program (method 2) to prepare students for the SAT. It measures the proportion of students who score ... 0answers 30 views ### how to evaluate the importance of different levels of one predictor This question is a little bit different. Say, I have 4 predictors (all categorical variables with multiple levels), and 1 response variable (binary 0/1). I used a logistic regression model, and found ... 1answer 79 views ### Comparing means of three groups When I want to compare the means between two groups I use a t-test. But how do I check whether the means of three groups are equal or statistically different? E.g. compare the average labour ... 1answer 125 views ### How to compare Likert ratings for multiple methods in a usability study I am designing a questionnaire to compare the user experience of two prototypes of a software product, prototype A and B. Users will rate prototype A for a task and then prototype B for performing the ... 0answers 81 views ### Which Statistical Test I have a variable (VAR1) with two possibilities, let's call them V and N. I asked my experiment participants to detect the VAR1 value. My aim is to compare the correctness rate of answers by ...
1answer 230 views ### Finding the z score and p-value of a binomial distribution Emily is a big fan of Lady Gaga, and 20% of the songs on her iPod are Lady Gaga songs. Suppose Emily has her iPod on shuffle and repeat mode, which can be assumed to mean that each song to be played ... 2answers 107 views ### Why doesn't this represent a normal approximation to the binomial? Suppose the registrar's office at a college reports 58% of the students live on campus. An intern working in the administration building is unaware of this 58% parameter value. He designs a study in ... 0answers 38 views ### combining binomial and normal values into one significance test I have conducted an experiment in which a participant completes memory tasks (3 levels, within) while exploring a virtual environment (2 levels, within). The performance on these memory tasks is ... 0answers 21 views ### Literature to support effect size over significance In a large sample (N = 3,265,506 for my study) tests are always significant. Is there an article in the literature I can cite to justify using effect size instead of significance testing? 0answers 106 views ### Interpreting significance in hierarchical regression I've run a Hierarchical Multiple Regression to understand the unique predictive value of three different maternal variables (maternal PTSD, maternal depression, and quality of the parent-child ... 1answer 57 views ### How to report significance of a variable that is dependent on two others? Cost per click is defined by the cost per impression and the click through rate. Given a period of time, divide number of clicks by the sum of the cost per impression for all ads in the same time ... 0answers 55 views ### Equations solving and software help for making math patterns I would like to convert the collected data into mathematical equations by introducing a parameter for each requirement. In this way, I will get some equations in unknowns. We know a method of ...
2answers 49 views ### I want to compare two groups of coin tosses with different numbers of tosses I have a list of results of two different groups of coin tosses generated by http://www.wolframalpha.com/input/?i=10+coin+tosses. But the number of tosses is different: There are 10 in one group and ... 1answer 108 views ### Finding the test statistic for a majority hypothesis In a survey of 1070 Ann Arbor residents, 59% supported a ban on bicycles on downtown sidewalks in certain areas with high pedestrian traffic. A city administrator wants to determine if a majority of ... 1answer 162 views ### Comparing change in average in two groups: Z-test? I'm about to start up a project (as a high school teacher) that involves comparing the change in average between two different groups. It's an experiment, and the setup is the following: Obtain a ... 2answers 263 views ### How to test (in STATA) whether the gender distribution of employees to jobs differs across two companies? I have data on several companies where some are headed by a male CEO while others by a female CEO. As you can imagine, the jobs within these companies have different gender compositions. What I am ... 0answers 66 views ### Statistical Test for Differences in Groups Based on Multivariate Function/Map I've recently run into an issue where I am trying to test for a statistical difference between two groups, where each element of a group is itself a data object. For any pair of objects, I can ... 1answer 119 views ### Compare means of two groups with a variable that has multiple sub-groups I was unable to understand how to compare multiple mean values in each group. Within the subgroup the mean values influence each other (if one changes, the other changes automatically). I collected ... 0answers 98 views ### Frequency data differences test across two groups I am looking into contrasting two groups with regard to frequency differences over a few time points.
To make it clearer: I have, for group 1, the frequency differences 5; 4; 1; 2; 3 and for group 2: ... 1answer 90 views ### Finding parameter $p$ of the Bernoulli distribution which is exactly 99% significant I ran some experiments first. Afterwards, I looked for the parameter $p_{max}$ for which I can claim the chance a matrix has the ESP-property is $p_{max}$ or lower to a significance of exactly 99%. Is ... 1answer 92 views ### Dropping insignificant predictor in Poisson regression I am doing a Poisson regression analysis and have found that the type of programme a student is enrolled on does not have a significant effect on the outcome. When it comes to writing out the ...
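Several of the questions listed above ask how to compare two observed proportions (conversion rates across ad groups, male/female counts in two populations, and so on). For large samples the standard tool is a two-proportion z-test with a pooled variance estimate. A minimal standard-library sketch; the function name and the counts below are illustrative, not taken from any of the threads:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2 with a pooled estimate.

    Returns (z, p_value).  The normal approximation needs reasonably
    large counts (a rule of thumb: at least ~10 successes and ~10
    failures in each group).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF, written with erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up counts: 120/1000 "successes" in group A vs 90/1000 in group B.
z, p = two_proportion_z_test(120, 1000, 90, 1000)
print(round(z, 3), round(p, 4))
```

A library alternative is the two-sample proportions z-test in statsmodels, if that dependency is acceptable; for small counts, Fisher's exact test is the safer choice.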
http://math.stackexchange.com/questions/220395/matlab-how-to-plot-circular-plot-with-mixed-euclidean-and-polar-coordinate-para?answertab=oldest
# Matlab: How to plot circular plot with mixed Euclidean and polar coordinate parameters How can I plot the following equations using Matlab? $$x^2+y^2 = l_1^2+l_2^2+2l_1l_2\cos\theta_2$$ and $$(x-l_1\cos\theta_1)^2+(y-l_1\sin\theta_1)^2=l_2^2$$ I'm guessing I have to fix $l_1$ and $l_2$ to do this. Is that correct? Please excuse me if I posted this question in the wrong place. I didn't see a way to write the equation with LaTeX over at Stack Overflow, so I came here to ask. Thanks! - – natan Oct 25 '12 at 3:35 @nate thank you very much! – Kashif Oct 26 '12 at 6:06
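Yes, $l_1$ and $l_2$ have to be fixed first (they are the link lengths if one reads these as the forward-kinematics equations of a planar two-link arm). One convenient way to plot is then parametric: sweep the angles, compute the point directly, and hand the resulting `x`, `y` vectors to `plot`. A Python sketch that generates sample points and verifies they satisfy both equations; the values $l_1 = 2$, $l_2 = 1$ are arbitrary illustrative choices:

```python
import math

# Illustrative link lengths; these must be fixed before plotting.
L1, L2 = 2.0, 1.0

def point(theta1, theta2):
    """A point satisfying both equations, read as the tip position
    of a planar two-link arm with joint angles theta1, theta2."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def residuals(x, y, theta1, theta2):
    """Left side minus right side of each of the two equations."""
    r1 = x**2 + y**2 - (L1**2 + L2**2 + 2 * L1 * L2 * math.cos(theta2))
    r2 = ((x - L1 * math.cos(theta1))**2
          + (y - L1 * math.sin(theta1))**2 - L2**2)
    return r1, r2

# Sweeping theta1 at fixed theta2 traces the circle described by the
# first equation (its radius depends only on theta2); every sampled
# point also satisfies the second equation.
theta2 = math.pi / 3
for k in range(100):
    t1 = 2 * math.pi * k / 100
    x, y = point(t1, theta2)
    r1, r2 = residuals(x, y, t1, theta2)
    assert abs(r1) < 1e-9 and abs(r2) < 1e-9

print("100 sampled points satisfy both equations")
```

In Matlab the same idea is vectorized `cos`/`sin` followed by `plot(x, y)`; in Python, pass the sampled points to `matplotlib.pyplot.plot`.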
http://physics.stackexchange.com/questions/22839/what-is-surface-fluid-adhesion-energy?answertab=active
# What is “surface fluid adhesion energy”? This is related to my previous question. Pardon me for asking so many questions recently. My physics knowledge is not that good, and some answers are hard to find. In the question in the link, I asked what the correct interpretation for $\beta_i$ is in the energy formulation: $$\mathcal{F}(S_1,S_2,S_3)= \sum_{1\leq i<j\leq 3}\sigma_{ij}P_\Omega(S_i \cap S_j)+\sum_{i=1}^3 \beta_i P_{\partial \Omega}(S_i)+\sum_{i=1}^3 g \rho_i\int_{S_i} z\, dV$$ I got an answer which said that the correct interpretation for $\beta_i$ is surface fluid adhesion energy. I tried googling the term, but I did not find anything clear. My question is: What is surface fluid adhesion energy and how is it related to the fluid-solid interfacial tension? What references are there on this subject? Here is the context: So, in my hypothesis the container $\Omega$ is partitioned into $S_1,S_2,S_3$, the three fluids, with prescribed volumes $v_i$ and prescribed densities $\rho_i$. I took into account in the formulation of the energy the interfacial tensions, the gravity, and the contact of the fluids with the container $\Omega$ (these are not my ideas; they are taken from other similar mathematical articles). I will denote by $P_\Omega(S)$ the perimeter of $S$ situated in the interior of $\Omega$ and by $P_{\partial \Omega}(S)$ the perimeter of $S$ situated on the boundary of $\Omega$. I will not be very rigorous in what I'm about to write: I will write, for example, $P_\Omega(S_i\cap S_j)$ for the perimeter of the intersection $S_i\cap S_j$ even if, as a set-theoretic intersection, this is void. Still, I think the idea will be clear. - No need to apologise for asking many questions--if they're good questions, we want MORE! (i dunno if your questions are good or bad--im not knowledgeable in this area of physics--but they look good) – Manishearth♦ Mar 26 '12 at 14:17 This could really have been a comment on the answer.
The "energies" in this case are really "free energies" (meaning logarithm of probabilities times the temperature in energy units), and adhesion energy is extra probability for making a fluid stick to a surface. This can physically be a binding energy between the surface and the molecules, since, when you are looking microscopically, the log probability is just the ordinary energy. – Ron Maimon Mar 26 '12 at 18:41 @RonMaimon: Can an inequality of the type $|\beta_i-\beta_j| \leq \sigma_{ij}$ be proved for these surface adhesion energies? – Beni Bogosel Mar 29 '12 at 7:19 ## 1 Answer Surface fluid adhesion energy is the free energy per unit area of a fluid in contact with a surface. It can be defined by taking a given bulk of fluid in contact with a container and asking how much work it takes to add surface, for example by tilting a non-symmetric container and measuring the work very precisely. You can understand this in a microscopic model by taking a two-state lattice (an Ising model) with a box boundary, with extra energy for lattice sites which are "1" near the edge. The surface fluid adhesion energy at any temperature is the difference between the free energy of the lattice with an edge and the free energy of the lattice with periodic boundaries. -
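The Ising-model picture in the answer can be made concrete at toy scale: exactly enumerate all states of a tiny two-state lattice and compute the free energy $F = -T\ln Z$ with and without an extra energy gain for occupied sites at the wall. This is a slight variant of the answer's construction (same open box with and without the wall term, rather than open versus periodic boundaries), and every parameter value below is an illustrative choice:

```python
import math
from itertools import product

# Tiny lattice-gas / Ising model on an N x N open grid.  Site
# variables s in {0, 1}; J couples nearest neighbours, and h is an
# extra energy gain for occupied sites on the wall of the box.
N, J, T = 3, 1.0, 1.5

def energy(config, h):
    E = 0.0
    for i in range(N):
        for j in range(N):
            s = config[i][j]
            if i + 1 < N:
                E -= J * s * config[i + 1][j]
            if j + 1 < N:
                E -= J * s * config[i][j + 1]
            if i in (0, N - 1) or j in (0, N - 1):  # wall site
                E -= h * s
    return E

def free_energy(h):
    """Exact F = -T ln Z by enumerating all 2^(N*N) configurations."""
    Z = 0.0
    for bits in product((0, 1), repeat=N * N):
        config = [bits[i * N:(i + 1) * N] for i in range(N)]
        Z += math.exp(-energy(config, h) / T)
    return -T * math.log(Z)

F0, Fwall = free_energy(0.0), free_energy(0.5)
print(F0, Fwall)
assert Fwall < F0  # an attractive wall lowers the free energy
```

Dividing `Fwall - F0` by the number of wall sites gives a per-site wall free energy, the toy analogue of the adhesion energy per unit area in the answer.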
http://en.wikisource.org/wiki/Page:AbrahamMinkowski1.djvu/8
Page:AbrahamMinkowski1.djvu/8 From Wikisource We interpret the vectors $\mathfrak{E}',\ \mathfrak{H}'$ as the forces acting on moving electric and magnetic unit poles. The vectors $\mathfrak{D,\ B}$ we call, using the terminology of the "Enzyklopädie der mathematischen Wissenschaften", the "electric and magnetic excitation". It corresponds to the meaning of the vector $\mathfrak{E}'$ to make the following ansatz for the heat developed per unit time and unit volume of the moving matter: (III) $Q=\mathfrak{JE}'$ To this third main equation a fourth one is added, which connects the relative ray with the vectors $\mathfrak{E}',\ \mathfrak{H}'$: (IV) $\mathfrak{S}'=c[\mathfrak{E}'\mathfrak{H}']$ Finally we need an approach which expresses the quantity $P'$ defined in equation (13), and with it the relative stresses, in terms of the vectors $\mathfrak{E'},\mathfrak{H'},\mathfrak{D},\mathfrak{B}$. We put (V) $P'=\mathfrak{E}'(\mathfrak{D}\nabla)\mathfrak{w}+\mathfrak{H}'(\mathfrak{B}\nabla)\mathfrak{w}-\frac{1}{2}\{\mathfrak{E'D+H'B}\}\,\mathrm{div}\,\mathfrak{w}$ and thus we obtain for the relative stresses: (Va) $\begin{cases} X'_{x}=\mathfrak{E}'_{x}\mathfrak{D}_{x}+\mathfrak{H}'_{x}\mathfrak{B}_{x}-\frac{1}{2}\{\mathfrak{E'D+H'B}\},\\ X'_{y}=\mathfrak{E}'_{x}\mathfrak{D}_{y}+\mathfrak{H}'_{x}\mathfrak{B}_{y},\\ X'_{z}=\mathfrak{E}'_{x}\mathfrak{D}_{z}+\mathfrak{H}'_{x}\mathfrak{B}_{z};\\ Y'_{x}=\mathfrak{E}'_{y}\mathfrak{D}_{x}+\mathfrak{H}'_{y}\mathfrak{B}_{x},\\ Y'_{y}=\mathfrak{E}'_{y}\mathfrak{D}_{y}+\mathfrak{H}'_{y}\mathfrak{B}_{y}-\frac{1}{2}\{\mathfrak{E'D+H'B}\},\\ Y'_{z}=\mathfrak{E}'_{y}\mathfrak{D}_{z}+\mathfrak{H}'_{y}\mathfrak{B}_{z};\\ Z'_{x}=\mathfrak{E}'_{z}\mathfrak{D}_{x}+\mathfrak{H}'_{z}\mathfrak{B}_{x},\\ Z'_{y}=\mathfrak{E}'_{z}\mathfrak{D}_{y}+\mathfrak{H}'_{z}\mathfrak{B}_{y},\\ Z'_{z}=\mathfrak{E}'_{z}\mathfrak{D}_{z}+\mathfrak{H}'_{z}\mathfrak{B}_{z}-\frac{1}{2}\{\mathfrak{E'D+H'B}\}.\end{cases}$ For the
case of rest, the known formulas for the fictitious stresses follow from that. The choice of expressions (IV) and (V) appears to be at first sight as totally arbitrary. Yet it is the simplest generalization of the laws valid in resting bodies, which only uses the vectors occurring in the two first main-equations. Incidentally, it follows from (Va): $Y'_{x}-X'_{y}=\mathfrak{D}_{x}\mathfrak{E}'_{y}-\mathfrak{D}_{y}\mathfrak{E}'_{x}+\mathfrak{B}_{x}\mathfrak{H}'_{y}-\mathfrak{B}_{y}\mathfrak{H}'_{x}.$ According to this, the torque of relative stresses is: (Vb) $\mathfrak{R}'=[\mathfrak{DE}']+[\mathfrak{BH}']$ The mechanical principles laid out in the previous paragraph, and the fife main-equations, are the foundations upon which our system of electrodynamics of moving bodies is resting.
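As a quick numerical cross-check of (Va) and (Vb) (my own illustration, not part of the Wikisource text; the vector values are arbitrary), the stress components and the torque can be computed directly:

```python
# Sketch of the relative stress tensor (Va): rows indexed by X', Y', Z',
# columns by x, y, z; E and H stand in for the primed fields E', H'.
def stress_tensor(E, H, D, B):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    s = 0.5 * (dot(E, D) + dot(H, B))  # the diagonal term (E'D + H'B)/2
    return [[E[i] * D[j] + H[i] * B[j] - (s if i == j else 0.0)
             for j in range(3)] for i in range(3)]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

E, H = [1.0, 2.0, 3.0], [0.0, 1.0, 0.0]
D, B = [2.0, 0.0, 1.0], [1.0, 1.0, 4.0]
T = stress_tensor(E, H, D, B)
# (Vb): the z-component of R' = [D x E'] + [B x H'] equals Y'_x - X'_y
torque_z = cross(D, E)[2] + cross(B, H)[2]
assert abs((T[1][0] - T[0][1]) - torque_z) < 1e-12
```

The assertion checks that the antisymmetric part $Y'_x - X'_y$ of the stress tensor reproduces the $z$-component of $\mathfrak{R}'=[\mathfrak{DE}']+[\mathfrak{BH}']$, as stated below (Va).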
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916321873664856, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/99395-another-group-theory-problem.html
# Thread: 1. ## [Solved] Another Group Theory problem... This time, it's about proving 2 elements have the same order:- Let G be a group and x be an element of G. The order of x is the least positive number such that $x^n$ = e. That is, n > 1 satisfies $x^n$ = e and $x^m$ =/= e for all 1<m<n. For x in G prove that x and $x^{-1}$ have the same order. So, I've had a fair whack at the question, but I'm not sure if I'm going in the right direction. $x^{n/2}$. $x^{n/2}$ = $x^n$= e $x.x^{-1}$ = e $x.x^{-1}$ = $x.x^-1$ = $x^{1-1}$ = $x^0$ =1 ---> e (identity is 1) $x^n$ = e $(x^{n})^{-1} = e^{-1}$ $x^{-n} = e^{-1}$ Note: $e^{-1}$ = $e$ Therefore, $x^{-n} = e = x^{n}$ $x^{-n} = x^{n}$ $x^{n}/x^{-n} = e$ $x^{n}.x^{n} = e$ $x^{2n} = 1 = e$ $e = 1 = x^{0}= x^{-n}= x^{n}= x^{2n}$ THUS. If $x^{n}$ = 1 = e $x^{n}.x^{-n}$ = 1 = e $x^{-n}$ = 1 = e $x^{n}$ = $x^{-n}$ Therefore they have the same order. That is huge im sorry, but yeh. I dunno if it's correct. Any pointers? 2. Hi! You formulated the problem well, but very few lines of your solution attempt make sense (what is $x^{n/2}$ ??, why two symbols for an identity element?, what does the symbol "/" mean? ). In the language of groups, there are originally no 'powers'. $x^n$ is just our mere shorthand for $x \cdot x \cdot \ldots \cdot x$ with $n$ occurrences of $x$, $n \in \mathbb{N}$, with agreement that $x^0 = e$. You included the definition of order of an element, so let's use it to solve our problem. Let $n \geq 1$ be the order of $x$.
$(x^{-1})^n \cdot x^n = \underbrace{(x^{-1} \cdot x^{-1} \cdot \ldots \cdot x^{-1})}_{n} \cdot \underbrace{(x \cdot x \cdot \ldots \cdot x)}_{n} = \underbrace{x^{-1} \cdot x^{-1} \cdot \ldots \cdot (x^{-1}}_{n} \cdot \underbrace{x) \cdot x \cdot \ldots \cdot x}_{n} = \underbrace{(x^{-1} \cdot x^{-1} \cdot \ldots \cdot x^{-1})}_{n-1} \cdot \underbrace{(x \cdot x \cdot \ldots \cdot x)}_{n-1} = \ldots = x^{-1} \cdot x = e$ because the group operation is associative. This shows that $(x^{-1})^n = (x^n)^{-1}$; but $(x^n)^{-1} = e^{-1} = e$, so we verified that $(x^{-1})^n = e$, which is the first requirement for $n$ to be the order of $x^{-1}$. Now let's verify the second requirement. We argue by contradiction. Assume that there is a number $m$, $1 \leq m < n$, such that $(x^{-1})^m = e$. Then, in the same way as in the first step, we show that $(x^{-1})^m \cdot x^m = e$. But $(x^{-1})^m = e$, thus $x^m = e$. And this contradicts our assumption that $n$ is the order of $x$. 3. Thank you for your help. The "x $n/2$" was to denote 0.5x, and the "/" was to denote fractions (except for when it was used as =/= which was my attempt at "does not equal"). Thank you VERY much for the clarification. 4. Originally Posted by exphate Thank you for your help. The "x $n/2$" was to denote 0.5x, and the "/" was to denote fractions (except for when it was used as =/= which was my attempt at "does not equal"). Thank you VERY much for the clarification. Well, even if you are using real numbers, $x^{n/2}$ does not mean the same thing as $0.5x$. But the more serious problem here is that $0.5x$ has no meaning whatsoever in an arbitrary group $G$. As for fractions, they could be defined in an appropriate way by defining $x/y$ as $xy^{-1}$, but you should be aware that such a notation is never used in group theory and you would do well to drop it altogether.
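A quick computational sanity check of the result (my own sketch, not from the thread): in the finite group $(\mathbb{Z}/13\mathbb{Z})^{\times}$ under multiplication mod 13, every element has the same order as its inverse.

```python
def order(x, mul, e):
    """Least n >= 1 with x^n = e (exists in any finite group)."""
    y, n = x, 1
    while y != e:
        y, n = mul(y, x), n + 1
    return n

p = 13
mul = lambda a, b: (a * b) % p
inv = lambda a: pow(a, p - 2, p)  # inverse via Fermat's little theorem

# x and x^{-1} always have the same order
for x in range(1, p):
    assert order(x, mul, 1) == order(inv(x), mul, 1)
```

For instance, 3 and its inverse 9 both have order 3 mod 13 ($3^3 = 27 \equiv 1$, $9^3 = 729 \equiv 1$).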
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 60, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9716138243675232, "perplexity_flag": "head"}
http://medlibrary.org/medwiki/Quasi-arithmetic_mean
# Quasi-arithmetic mean

In mathematics and statistics, the quasi-arithmetic mean or generalised f-mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function $f$. It is also called Kolmogorov mean after Russian scientist Andrey Kolmogorov.

## Definition

If f is a function which maps an interval $I$ of the real line to the real numbers, and is both continuous and injective, then we can define the f-mean of two numbers $x_1, x_2 \in I$ as $M_f(x_1,x_2) = f^{-1}\left( \frac{f(x_1)+f(x_2)}2 \right).$ For $n$ numbers $x_1, \dots, x_n \in I$, the f-mean is $M_f(x_1, \dots, x_n) = f^{-1}\left( \frac{f(x_1)+ \cdots + f(x_n)}n \right).$ We require f to be injective in order for the inverse function $f^{-1}$ to exist. Since $f$ is defined over an interval, $\frac{f\left(x_1\right) + f\left(x_2\right)}2$ lies within the domain of $f^{-1}$. Since f is injective and continuous, it follows that f is a strictly monotonic function, and therefore that the f-mean is neither larger than the largest number of the tuple $x$ nor smaller than the smallest number in $x$.

## Examples

• If we take $I$ to be the real line and $f = \mathrm{id}$ (or indeed any linear function $x\mapsto a\cdot x + b$, $a$ not equal to 0), then the f-mean corresponds to the arithmetic mean.
• If we take $I$ to be the set of positive real numbers and $f(x) = \log(x)$, then the f-mean corresponds to the geometric mean. According to the f-mean properties, the result does not depend on the base of the logarithm as long as it is positive and not 1.
• If we take $I$ to be the set of positive real numbers and $f(x) = \frac{1}{x}$, then the f-mean corresponds to the harmonic mean.
• If we take $I$ to be the set of positive real numbers and $f(x) = x^p$, then the f-mean corresponds to the power mean with exponent $p$.

## Properties

• Partitioning: The computation of the mean can be split into computations of equal sized sub-blocks. $M_f(x_1,\dots,x_{n\cdot k}) = M_f(M_f(x_1,\dots,x_{k}), M_f(x_{k+1},\dots,x_{2\cdot k}), \dots, M_f(x_{(n-1)\cdot k + 1},\dots,x_{n\cdot k}))$
• Subsets of elements can be averaged a priori, without altering the mean, given that the multiplicity of elements is maintained. With $m=M_f(x_1,\dots,x_k)$ it holds $M_f(x_1,\dots,x_k,x_{k+1},\dots,x_n) = M_f(\underbrace{m,\dots,m}_{k \text{ times}},x_{k+1},\dots,x_n)$
• The quasi-arithmetic mean is invariant with respect to offsets and scaling of $f$: $\forall a\ \forall b\ne0\ \big((\forall t\ g(t)=a+b\cdot f(t)) \Rightarrow \forall x\ M_f (x) = M_g (x)\big)$.
• If $f$ is monotonic, then $M_f$ is monotonic.
• Any quasi-arithmetic mean $M$ of two variables has the mediality property $M(M(x,y),M(z,w))=M(M(x,z),M(y,w))$ and the self-distributivity property $M(x,M(y,z))=M(M(x,y),M(x,z))$. Moreover, any of those properties is essentially sufficient to characterize quasi-arithmetic means; see Aczél–Dhombres, Chapter 17.
• Any quasi-arithmetic mean $M$ of two variables has the balancing property $M\big(M(x, M(x, y)), M(y, M(x, y))\big)=M(x, y)$. An interesting problem is whether this condition (together with fixed-point, symmetry, monotonicity and continuity properties) implies that the mean is quasi-arithmetic. Georg Aumann showed in the 1930s that the answer is no in general,[1] but that if one additionally assumes $M$ to be an analytic function then the answer is positive.[2]

## Homogeneity

Means are usually homogeneous, but for most functions $f$, the f-mean is not.
Indeed, the only homogeneous quasi-arithmetic means are the power means and the geometric mean; see Hardy–Littlewood–Pólya, page 68. The homogeneity property can be achieved by normalizing the input values by some (homogeneous) mean $C$: $M_{f,C}(x) = C(x) \cdot f^{-1}\left( \frac{f\left(\frac{x_1}{C(x)}\right) + \cdots + f\left(\frac{x_n}{C(x)}\right)}{n} \right)$ However, this modification may violate monotonicity and the partitioning property of the mean.

## References

1. Aumann, Georg (1937). "Vollkommene Funktionalmittel und gewisse Kegelschnitteigenschaften". 176: 49–55.
2. Aumann, Georg (1934). "Grundlegung der Theorie der analytischen Mittelwerte". Sitzungsberichte der Bayerischen Akademie der Wissenschaften: 45–81.

• Aczél, J.; Dhombres, J. G. (1989). Functional Equations in Several Variables. With Applications to Mathematics, Information Theory and to the Natural and Social Sciences. Encyclopedia of Mathematics and its Applications, 31. Cambridge Univ. Press, Cambridge.
• Andrey Kolmogorov (1930). "On the Notion of Mean", in Mathematics and Mechanics (Kluwer 1991), pp. 144–146.
• Andrey Kolmogorov (1930). Sur la notion de la moyenne. Atti Accad. Naz. Lincei 12, pp. 388–391.
• John Bibby (1974). "Axiomatisations of the average and a further generalisation of monotonic sequences", Glasgow Mathematical Journal, vol. 15, pp. 63–65.
• Hardy, G. H.; Littlewood, J. E.; Pólya, G. (1952). Inequalities. 2nd ed. Cambridge Univ. Press, Cambridge.
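The definition and the examples above can be sketched directly in code (my own illustration; `quasi_mean` is a hypothetical helper name, not from the article):

```python
import math

def quasi_mean(f, finv, xs):
    # M_f(x_1, ..., x_n) = f^{-1}( (f(x_1) + ... + f(x_n)) / n )
    return finv(sum(f(x) for x in xs) / len(xs))

xs = [1.0, 4.0, 16.0]
arith = quasi_mean(lambda x: x, lambda y: y, xs)                 # f = id
geom = quasi_mean(math.log, math.exp, xs)                        # f = log
harm = quasi_mean(lambda x: 1 / x, lambda y: 1 / y, xs)          # f = 1/x
power3 = quasi_mean(lambda x: x**3, lambda y: y**(1 / 3), xs)    # f = x^p, p = 3
# every f-mean lies between min(xs) and max(xs)
assert min(xs) <= harm <= geom <= arith <= power3 <= max(xs)
```

For `xs = [1, 4, 16]` the arithmetic mean is 7 and the geometric mean is exactly 4, matching the usual closed forms.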
Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Quasi-arithmetic mean", available in its original form here: http://en.wikipedia.org/w/index.php?title=Quasi-arithmetic_mean
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.80656898021698, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/60631-trig-word-problem-help-me-please.html
# Thread: 1. ## Trig word problem. Help me please.. A magician standing on a platform subtends an angle of 15 degrees at a point on the ground. The angle of elevation of the platform from the same point is 45 degrees. The magician is 180 cm high. Find the height of the platform. --- 2. Hello, fayeorwhatsoever! We need a good sketch . . . A magician standing on a platform subtends an angle of 15° at a point on the ground. The angle of elevation of the platform from the same point is 45 degrees. The magician is 180 cm high. Find the height of the platform. Code: ```
            * A
           *|
          * |
         *  | 180
        *   |
       *    |
      *   * B
     *  *   |
    *15°*   |
   * *      | h
  **  45°   |
 P * - - - -* C
       h
``` The magician is: $AB = 180$ The height of the platform is: $h = BC$ The observation point is $P\!:\;\;\angle APB = 15^o,\;\angle BPC = 45^o$ Since $\angle BPC = 45^o,\;\Delta BCP$ is an isosceles right triangle: . $PC = BC = h$ Since $\angle APC = 45^o + 15^o = 60^o$, in $\Delta ACP\!:\;\;\tan60^o \:=\:\frac{h+180}{h} \quad\Rightarrow\quad \sqrt{3} \:=\:\frac{h+180}{h}\quad\Rightarrow\quad h \:=\:\frac{180}{\sqrt{3}-1}$ Therefore: . $h \;=\;245.8845... \;\approx\;246\text{ cm}$
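A numeric check of the solution above (my own sketch, not from the thread):

```python
import math

# PC = BC = h from the 45° isosceles right triangle;
# the full angle of elevation of A from P is 45° + 15° = 60°.
h = 180 / (math.sqrt(3) - 1)

# verify tan 60° = (h + 180) / h
assert abs(math.tan(math.radians(60)) - (h + 180) / h) < 1e-9
print(round(h, 2))  # about 245.88 cm, i.e. roughly 246 cm
```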
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7904613018035889, "perplexity_flag": "middle"}
http://cms.math.ca/Reunions/ete11/abs/ds
2011 CMS Summer Meeting, University of Alberta, Edmonton, June 3 - 5, 2011 www.smc.math.ca//Reunions/ete11 Dynamical Systems Org: Arno Berger and Hao Wang (Alberta) [PDF] ARNO BERGER, University of Alberta Digits and dynamics - an update  [PDF] This talk will present some recent results concerning the distribution of significant digits and significands, with an emphasis on data generated by dynamical processes (deterministic or random, discrete or continuous). The results are put in perspective by comparison with classical facts. Several intriguing open problems will be mentioned as they pertain to analysis, probability and number theory. CHRISTOPHER BOSE, University of Victoria Rigorous uniform approximation of invariant densities for interval maps  [PDF] Various techniques have been developed for rigorous $L^1$-approximation of the invariant probability density associated to a nonsingular map $T$ acting on a compact interval of the real line. Different discretization schemes may be used, including piecewise constant (Ulam), linear, quadratic, etc. For uniform approximation, only piecewise linear or higher order schemes are applicable. We show how to establish rigorous approximations in this context. Our work is motivated by some escape rate formulae due to Keller and Liverani that are based on pointwise data for the invariant density of the associated closed system. We will explain this background. This is joint work with Wael Bahsoun, School of Mathematics, Loughborough University. YONGFENG LI, Universities Space Research Association Nonlinear Oscillation and Multiscale Dynamics in a Closed Chemical Reaction  [PDF] In this talk, we present the damped nonlinear oscillation and multi-scale dynamics in a closed isothermal chemical reaction system described by the reversible Lotka–Volterra model. This is a three-dimensional, dissipative, singular perturbation to the conservative Lotka–Volterra model, with the free energy serving as a global Lyapunov function.
We will show that there is a natural distinction between oscillatory and non-oscillatory regions in the phase space, that is, while orbits ultimately reach the equilibrium in a non-oscillatory fashion, they exhibit damped, oscillatory behaviors as interesting transient dynamics. This is joint work with Hong Qian and Yingfei Yi. WILLIAM MANCE, The Ohio State University Normal numbers with respect to the Cantor series expansions  [PDF] We will discuss extending the concept of normality to the $Q$-Cantor series expansions by defining two notions that are equivalent for $b$-ary expansions: $Q$-normality and $Q$-distribution normality. Much of the theory of $Q$-normal numbers and $Q$-distribution normal numbers is similar to the classical theory of normal numbers. For example, almost every real number is $Q$-distribution normal and many sets of non-$Q$-normal or non-$Q$-distribution normal numbers are residual sets with full Hausdorff dimension. Surprisingly, $Q$-normality and $Q$-distribution normality are no longer equivalent. We will provide recent constructions that demonstrate this fact. JAMES MULDOWNEY, University of Alberta Lyapunov functions and exponential dichotomies for differential equations  [PDF] This talk will discuss sufficient conditions, as well as necessary conditions, for a system of linear differential equations to have an exponential dichotomy. The criteria are expressed in terms of pairs of associated scalar functions. The approach seems to be amenable to the discussion of the behaviour of non-linear non-autonomous equations near an equilibrium. A question, recently raised by Arno Berger, about an equation whose linearization about an equilibrium has an exponential dichotomy on a compact time interval will be considered.
CECILIA GONZALEZ TOKMAN, University of Victoria Semi-invertible Oseledets theorem for compositions  [PDF] Semi-invertible multiplicative ergodic theorems provide the existence of an Oseledets splitting for cocycles of non-invertible linear operators over an invertible base. We present a constructive approach to semi-invertible multiplicative ergodic theorems, and give an application to random composition of maps. This is joint work with Anthony Quas. HAO WANG, University of Alberta Global analysis of a stoichiometric producer-grazer model with Holling-type functional responses  [PDF] Cells, the basic units of organisms, consist of multiple essential elements such as carbon, nitrogen, and phosphorus. The scarcity of any of these elements can strongly restrict cellular and organismal growth. During recent years, ecological models incorporating multiple elements have been rapidly developed in many studies, which form a new research field of mathematical and theoretical biology. Among these models, the one proposed by Loladze et al. (Bull Math Biol 62:1137-1162, 2000) is prominent and has been highly cited. However, the global analysis of this nonsmooth model has never been done. In this talk, I will provide the complete global analysis for the model with Holling type I functional response and a bifurcation analysis for the model with Holling type II functional response.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.876544713973999, "perplexity_flag": "middle"}
http://www.citizendia.org/New_Foundations
In mathematical logic, New Foundations (NF) is an axiomatic set theory, conceived by Willard Van Orman Quine as a simplification of the theory of types of Principia Mathematica. Quine first proposed NF in a 1937 article titled "New Foundations for Mathematical Logic"; hence the name. Much of this entry discusses NFU, an important variant of NF due to Jensen (1969) and exposited in Holmes (1998).

The type theory TST

The primitive predicates of TST, a streamlined version of the theory of types, are equality and membership. TST has a linear hierarchy of types: type 0 consists of individuals otherwise undescribed. For each (meta-) natural number n, type n+1 objects are sets of type n objects; sets of type n have members of type n-1. Objects connected by identity must have the same type. The following two atomic formulas succinctly describe the typing rules: $x^{n} = y^{n}$ and $x^{n} \in y^{n+1}$. The axioms of TST are:

• Extensionality: sets of the same (positive) type with the same members are equal;
• An axiom schema of comprehension, namely: If $\phi(x^n)$ is a formula, then the set $\{x^n \mid \phi(x^n)\}^{n+1}$ exists.
In other words, given any formula $\phi(x^n)$, the formula $\exists A^{n+1} \forall x^n [x^n \in A^{n+1} \leftrightarrow \phi(x^n)]$ is an axiom where $A^{n+1}$ represents the set $\{x^n \mid \phi(x^n)\}^{n+1}$. This type theory is much less complicated than the one first set out in the Principia Mathematica, which included types for relations whose arguments were not necessarily all of the same type. In 1914, Norbert Wiener showed how to code the ordered pair as a set of sets, making it possible to eliminate relation types in favor of the linear hierarchy of sets described here.

Quinian set theory: axioms and stratification

New Foundations (NF) is obtained from TST by abandoning the distinctions of type. The axioms of NF are:

• Extensionality: Two objects with the same elements are the same object;
• A comprehension schema: All instances of TST Comprehension but with type indices dropped (and without introducing new identifications between variables).
By convention, NF's Comprehension schema is stated using the concept of stratified formula and making no direct reference to types. A formula φ is said to be stratified if there exists a function f from pieces of syntax to the natural numbers, such that for any atomic subformula $x \in y$ of φ we have f(y) = f(x) + 1, while for any atomic subformula $x = y$ of φ, we have f(x) = f(y). Comprehension then becomes: $\{x \mid \phi \}$ exists for each stratified formula φ. Even the indirect reference to types implicit in the notion of stratification can be eliminated. Theodore Hailperin showed in 1944 that Comprehension is equivalent to a finite conjunction of its instances,[1] so that NF can be finitely axiomatized without any reference to the notion of type. Comprehension may seem inconsistent with naive set theory, but this is not the case.
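Deciding whether a formula is stratified is a small constraint-solving task: assign an integer level to each variable so that every $x \in y$ forces $f(y) = f(x) + 1$ and every $x = y$ forces $f(x) = f(y)$. A minimal sketch (my own illustration; the atom encoding is an assumption, not standard notation):

```python
from collections import defaultdict, deque

def stratified(atoms):
    """Decide stratifiability: atoms are ("in", x, y) for x ∈ y, or
    ("eq", x, y) for x = y. Search for f with f(y) = f(x) + 1 on each
    membership atom and f(x) = f(y) on each equality atom."""
    adj = defaultdict(list)
    for kind, x, y in atoms:
        d = 1 if kind == "in" else 0
        adj[x].append((y, d))
        adj[y].append((x, -d))
    level = {}
    for start in list(adj):          # propagate levels component by component
        if start in level:
            continue
        level[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, d in adj[u]:
                if v not in level:
                    level[v] = level[u] + d
                    queue.append(v)
                elif level[v] != level[u] + d:
                    return False     # conflicting level constraint
    return True

# The Russell atom x ∈ x forces f(x) = f(x) + 1, so it is not stratified;
# x ∈ y, y ∈ z is stratified with f(x), f(y), f(z) = 0, 1, 2.
assert not stratified([("in", "x", "x")])
assert stratified([("in", "x", "y"), ("in", "y", "z")])
```

The first assertion mirrors the Russell class discussed below: the single atom $x \in x$ already admits no stratification.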
For example, the impossible Russell class $\{x \mid x \not\in x\}$ is not an NF set, because $x \not\in x$ cannot be stratified.

Ordered pairs

Relations and functions are defined in TST (and in NF and NFU) as sets of ordered pairs in the usual way. The usual definition of the ordered pair, first proposed by Kuratowski in 1921, has a serious drawback for NF and related theories: the resulting ordered pair necessarily has a type two higher than the type of its arguments (its left and right projections). Hence for purposes of determining stratification, a function is three types higher than the members of its field. If one can define a pair in such a way that its type is the same type as that of its arguments (resulting in a type-level ordered pair), then a relation or function is merely one type higher than the type of the members of its field. Hence NF and related theories usually employ Quine's set-theoretic definition of the ordered pair, which yields a type-level ordered pair. Holmes (1998) takes the ordered pair and its left and right projections as primitive. Fortunately, whether the ordered pair is type-level by definition or by assumption (i.e., taken as primitive) usually does not matter.
The existence of a type-level ordered pair implies Infinity, and NFU + Infinity interprets NFU + "there is a type-level ordered pair" (they are not quite the same theory, but the differences are inessential). Conversely, NFU + Infinity + Choice proves the existence of a type-level ordered pair.

Admissibility of useful large sets

NF (and NFU + Infinity + Choice, described below and known consistent) allow the construction of two kinds of sets that ZFC and its proper extensions disallow because they are "too large" (some set theories admit these entities under the heading of proper classes):

• The universal set V. Because x = x is a stratified formula, the universal set V = {x | x = x} exists by Comprehension. An immediate consequence is that all sets have complements, and the entire set-theoretic universe under NF has a Boolean structure.
• Cardinal and ordinal numbers. In NF (and TST), the set of all sets having n elements (the circularity here is only apparent) exists.
Hence Frege's definition of the cardinal numbers works in NF and NFU: a cardinal number is an equivalence class of sets under the relation of equinumerosity: the sets A and B are equinumerous if there exists a bijection between them, in which case we write $A \sim B$. Likewise, an ordinal number is an equivalence class of well-ordered sets under the relation of similarity.

The consistency problem and related partial results

The outstanding problem with NF is that it is not known to be consistent relative to anything. NF disproves Choice, and so proves Infinity (Specker, 1953). But it is also known (Jensen, 1969) that the seemingly minor
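For finite sets the Frege construction can be illustrated concretely (my own sketch, not from the article): equinumerosity classes of finite sets are exactly the classes of sets of equal size, since a bijection between finite sets exists if and only if the sizes match.

```python
def frege_cardinals(sets):
    # Group finite sets into equivalence classes under equinumerosity;
    # for finite sets, A ~ B iff |A| = |B|.
    classes = {}
    for s in sets:
        classes.setdefault(len(s), []).append(s)
    return classes

family = [{1, 2}, {"a", "b"}, set(), {3}, {(), (1,)}]
cardinals = frege_cardinals(family)
assert len(cardinals[2]) == 3  # three two-element sets share one Frege cardinal
```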
modification of allowing urelements (objects lacking members which are distinct from the empty set and from one another) yields NFU, a theory that is consistent relative to Peano arithmetic, with and without added Infinity and Choice. (NFU corresponds to a type theory TSTU, where positive types may contain urelements.) There are other consistent variants of NF. NFU is, roughly speaking, weaker than NF because in NF the power set of the universe is the universe itself, while in NFU the power set of the universe may be strictly smaller than the universe (the power set of the universe contains only sets, while the universe may also contain urelements). Specker has shown that NF is equiconsistent with TST + Amb, where Amb is the axiom scheme of typical ambiguity which asserts $\phi \leftrightarrow \phi^+$ for any formula φ, $\phi^+$ being the formula obtained by raising every type index in φ by one. NF is also equiconsistent with the theory TST augmented with a "type-shifting automorphism", an operation which raises type by one, mapping each type onto the next higher type, and preserves equality and membership relations (and which cannot be used in instances of Comprehension: it is external to the theory). The same results hold for various fragments of TST in relation to the corresponding fragments of NF.
In the same year (1969) that Jensen proved NFU consistent, Grishin proved NF3 consistent. NF3 is the fragment of NF with full extensionality (no urelements) and those instances of Comprehension which can be stratified using just three types. This theory is a very awkward medium for mathematics (although there have been attempts to alleviate this awkwardness), largely because there is no obvious definition for an ordered pair. Despite this awkwardness, NF3 is very interesting because every infinite model of TST restricted to three types satisfies Amb. Hence for every such model there is a model of NF3 with the same theory. This does not hold for four types: NF4 is the same theory as NF, and we have no idea how to obtain a model of TST with four types in which Amb holds. In 1983, Marcel Crabbé proved consistent a system he called NFI, whose axioms are unrestricted extensionality and those instances of Comprehension in which no variable is assigned a type higher than that of the set asserted to exist. This is a predicativity restriction, though NFI is not a predicative theory: it admits enough impredicativity to define the set of natural numbers (defined as the intersection of all inductive sets; note that the inductive sets quantified over are of the same type as the set of natural numbers being defined). Crabbé also discussed a subtheory of NFI, in which only parameters (free variables) are allowed to have the type of the set asserted to exist by an instance of Comprehension.
He called the result "predicative NF" (NFP); it is, of course, doubtful whether any theory with a self-membered universe is truly predicative. Holmes has shown that NFP has the same consistency strength as the predicative theory of types of Principia Mathematica without the Axiom of Reducibility.

How NF(U) avoids the set-theoretic paradoxes

NF steers clear of the three well-known paradoxes of set theory. That NFU, a (relatively) consistent theory, also avoids these paradoxes increases our confidence in this fact.

The Russell paradox: an easy matter; $x \not\in x$ is not a stratified formula, so the existence of $\{x \mid x \not\in x\}$ is not asserted by any instance of Comprehension. Quine presumably constructed NF with this paradox uppermost in mind.

Cantor's paradox of the largest cardinal number exploits the application of Cantor's theorem to the universal set.
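Stratification is a purely syntactic and decidable condition. As an illustration (not from the article), the following sketch checks whether a conjunction of atomic formulas of the forms x ∈ y and x = y admits a stratification, by treating each atom as a difference constraint on integer types ('∈' forces the right-hand side one type higher, '=' forces equal types). The function name and formula encoding are hypothetical.

```python
from collections import defaultdict, deque

def is_stratified(atoms):
    """atoms: list of ("in", x, y) meaning x ∈ y, or ("eq", x, y) meaning x = y.
    Returns True iff integer types can be assigned to the variables so that
    every 'in' atom raises type by exactly one and every 'eq' atom preserves it."""
    adj = defaultdict(list)
    for kind, x, y in atoms:
        d = 1 if kind == "in" else 0
        adj[x].append((y, d))     # constraint: type(y) = type(x) + d
        adj[y].append((x, -d))
    level = {}
    for v in list(adj):
        if v in level:
            continue
        level[v] = 0              # anchor each connected component at type 0
        queue = deque([v])
        while queue:
            u = queue.popleft()
            for w, d in adj[u]:
                if w not in level:
                    level[w] = level[u] + d
                    queue.append(w)
                elif level[w] != level[u] + d:
                    return False  # conflicting type constraints
    return True

# x ∈ y is stratified; the Russell condition x ∈ x is not:
print(is_stratified([("in", "x", "y")]))  # True
print(is_stratified([("in", "x", "x")]))  # False
```

In NF itself no type indices appear in the language; stratifiability is only a side condition on which instances of Comprehension are axioms.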
Cantor's theorem says (given ZFC) that the power set P(A) of any set A is larger than A (there can be no injection (one-to-one map) from P(A) into A). Now of course there is an injection from P(V) into V, if V is the universal set! The resolution requires that we observe that | A | < | P(A) | makes no sense in the theory of types: the type of P(A) is one higher than the type of A. The correctly typed version (which is a theorem in the theory of types for essentially the same reasons that the original form of Cantor's theorem works in ZF) is | P1(A) | < | P(A) |, where P1(A) is the set of one-element subsets of A. The specific instance of this theorem that interests us is | P1(V) | < | P(V) |: there are fewer one-element sets than sets (and so fewer one-element sets than general objects, if we are in NFU). The "obvious" bijection $x \mapsto \{x\}$ from the universe to the one-element sets is not a set; it is not a set because its definition is unstratified. Note that in all known models of NFU it is the case that | P1(V) | < | P(V) | << | V |; Choice allows one not only to prove that there are urelements but that there are many cardinals between | P(V) | and | V |. We now introduce some useful notions.
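Restating the typed version symbolically (with $P_1(A)$ the set of singletons of elements of A, which has the same type as $P(A)$):

```latex
% Stratified form of Cantor's theorem, and its instance at the universe V:
|P_1(A)| < |P(A)|, \qquad |P_1(V)| < |P(V)|.
% The map x \mapsto \{x\} is unstratified: \{x\} is one type above x,
% so it does not give a set bijection between V and P_1(V).
```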
A set A which satisfies the intuitively appealing | A | = | P1(A) | is said to be cantorian: a cantorian set satisfies the usual form of Cantor's theorem. A set A which satisfies the further condition that $(x \mapsto \{x\})\lceil A$, the restriction of the singleton map to A, is a set is not only cantorian but strongly cantorian.

The Burali-Forti paradox of the largest ordinal number goes as follows. We define (following naive set theory) the ordinals as equivalence classes of well-orderings under similarity. There is an obvious natural well-ordering on the ordinals; since it is a well-ordering it belongs to an ordinal Ω. It is straightforward to prove (by transfinite induction) that the order type of the natural order on the ordinals less than a given ordinal α is α itself. But this means that Ω is the order type of the ordinals < Ω and so is strictly less than the order type of all the ordinals -- but the latter is, by definition, Ω itself! The solution to the paradox in NF(U) starts with the observation that the order type of the natural order on the ordinals less than α is of a higher type than α.
Hence the order type is two types higher than α if a type-level ordered pair is used, and four types higher with the usual Kuratowski pair. For any order type α, we can define an order type T(α) one type higher: if $W \in \alpha$, then T(α) is the order type of the order $W^{\iota} = \{(\{x\},\{y\}) \mid xWy\}$. The triviality of the T operation is only a seeming one; it is easy to show that T is a strictly monotone (order preserving) operation on the ordinals. We can now restate the lemma on order types in a stratified manner: the order type of the natural order on the ordinals < α is $T^2(\alpha)$ or $T^4(\alpha)$ depending on which pair is used (we assume the type-level pair hereinafter). From this we deduce that the order type on the ordinals < Ω is $T^2(\Omega)$, from which we deduce $T^2(\Omega) < \Omega$. Hence the T operation is not a (set) function; we cannot have a strictly monotone set map from ordinals to ordinals which sends an ordinal downward! Since T is monotone, we have $\Omega > T^2(\Omega) > T^4(\Omega)\ldots$, a "descending sequence" in the ordinals which cannot be a set. Some have asserted that this result shows that no model of NF(U) is "standard", since the ordinals in any model of NFU are externally not well-ordered. We do not take a position on this, but we note that it is also a theorem of NFU that any set model of NFU has non-well-ordered "ordinals"; NFU does not conclude that the universe V is a model of NFU, despite V being a set, because the membership relation is not a set relation. For a further development of mathematics in NFU, with a comparison to the development of the same in ZFC, see implementation of mathematics in set theory.
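The resolution of the Burali-Forti paradox above can be summarized in display form (type-level pair assumed):

```latex
% T raises the type of an order type by one:
W \in \alpha \;\Longrightarrow\;
T(\alpha) = \operatorname{ot}\bigl(\{(\{x\},\{y\}) \mid x W y\}\bigr).
% Stratified lemma on order types, instantiated at \Omega:
\operatorname{ot}(\{\beta \mid \beta < \alpha\}) = T^2(\alpha)
\;\Longrightarrow\; T^2(\Omega) < \Omega.
% Iterating (T is strictly monotone but not a set function):
\Omega > T^2(\Omega) > T^4(\Omega) > \cdots
```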
The set theory of the 1940 first edition of Quine's Mathematical Logic married NF to the proper classes of NBG set theory, and included an axiom schema of unrestricted comprehension for proper classes. In 1942, J. Barkley Rosser proved that Quine's set theory was subject to the Burali-Forti paradox. Rosser's proof does not go through for NF(U). In 1950, Hao Wang showed how to amend Quine's axioms so as to avoid this problem, and Quine included the resulting axiomatization in the 1951 second and final edition of Mathematical Logic.

Models of NFU

There is a fairly simple method for producing models of NFU in bulk. Using well-known techniques of model theory, one can construct a nonstandard model of Zermelo set theory (nothing nearly as strong as full ZFC is needed for the basic technique) on which there is an external automorphism j (not a set of the model) which moves a rank $V_{\alpha}$ of the cumulative hierarchy of sets.
We may suppose without loss of generality that j(α) < α. We talk about the automorphism moving the rank rather than the ordinal because we do not want to assume that every ordinal in the model is the index of a rank. The domain of the model of NFU will be the nonstandard rank $V_{\alpha}$. The membership relation of the model of NFU will be

• $x \in_{NFU} y \equiv_{def} j(x) \in y \wedge y \in V_{j(\alpha)+1}.$

We now prove that this actually is a model of NFU. Let φ be a stratified formula in the language of NFU. Choose an assignment of types to all variables in the formula which witnesses the fact that it is stratified. Choose a natural number N greater than all types assigned to variables by this stratification. Expand the formula φ into a formula $\phi_1$ in the language of the nonstandard model of Zermelo set theory with automorphism j, using the definition of membership in the model of NFU. Application of any power of j to both sides of an equation or membership statement preserves its truth value because j is an automorphism.
Make such an application to each atomic formula in $\phi_1$ in such a way that each variable x assigned type i occurs with exactly N − i applications of j. This is possible thanks to the form of the atomic membership statements derived from NFU membership statements, and to the formula being stratified. Each quantified sentence $(\forall x \in V_{\alpha}.\psi(j^{N-i}(x)))$ can be converted to the form $(\forall x \in j^{N-i}(V_{\alpha}).\psi(x))$ (and similarly for existential quantifiers). Carry out this transformation everywhere and obtain a formula $\phi_2$ in which j is never applied to a bound variable. Choose any free variable y in φ assigned type i. Apply $j^{i-N}$ uniformly to the entire formula to obtain a formula $\phi_3$ in which y appears without any application of j. Now $\{y \in V_{\alpha} \mid \phi_3\}$ exists (because j appears applied only to free variables and constants), belongs to $V_{\alpha+1}$, and contains exactly those y which satisfy the original formula φ in the model of NFU. $j(\{y \in V_{\alpha} \mid \phi_3\})$ has this extension in the model of NFU (the application of j corrects for the different definition of membership in the model of NFU). This establishes that Stratified Comprehension holds in the model of NFU. To see that weak Extensionality holds is straightforward: each nonempty element of $V_{j(\alpha)+1}$ inherits a unique extension from the nonstandard model, the empty set inherits its usual extension as well, and all other objects are urelements. The basic idea is that the automorphism j codes the "power set" $V_{\alpha+1}$ of our "universe" $V_{\alpha}$ into its externally isomorphic copy $V_{j(\alpha)+1}$ inside our "universe." The remaining objects not coding subsets of the universe are treated as urelements.
If α is a natural number n, we get a model of NFU which claims that the universe is finite (it is externally infinite, of course). If α is infinite and Choice holds in the nonstandard model of ZFC, we obtain a model of NFU + Infinity + Choice.

Self-sufficiency of mathematical foundations in NFU

For philosophical reasons, it is important to note that it is not necessary to work in ZFC or any related system to carry out this proof. A common argument against the use of NFU as a foundation for mathematics is that our reasons for relying on it have to do with our intuition that ZFC is correct. We claim that it is sufficient to accept TST (in fact TSTU). We outline the approach: take the type theory TSTU (allowing urelements in each positive type) as our metatheory and consider the theory of set models of TSTU in TSTU (these models will be sequences of sets $T_i$ (all of the same type in the metatheory) with embeddings of each $P(T_i)$ into $P_1(T_{i+1})$ coding embeddings of the power set of $T_i$ into $T_{i+1}$ in a type-respecting manner). Given an embedding of $T_0$ into $T_1$ (identifying elements of the base "type" with subsets of the base type), one can define embeddings from each "type" into its successor in a natural way. This can be generalized to transfinite sequences $T_{\alpha}$ with care.
Note that the construction of such sequences of sets is limited by the size of the type in which they are being constructed; this prevents TSTU from proving its own consistency (TSTU + Infinity can prove the consistency of TSTU; to prove the consistency of TSTU + Infinity one needs a type containing a set of cardinality $\beth_{\omega}$, which cannot be proved to exist in TSTU + Infinity without stronger assumptions). Now the same results of model theory can be used to build a model of NFU and verify that it is a model of NFU in much the same way, with the $T_{\alpha}$'s being used in place of $V_{\alpha}$ in the usual construction. The final move is to observe that since NFU is consistent, we can drop the use of absolute types in our metatheory, bootstrapping the metatheory from TSTU to NFU.

Facts about the automorphism j

The automorphism j of a model of this kind is closely related to certain natural operations in NFU. For example, if W is a well-ordering in the nonstandard model (we suppose here that we use Kuratowski pairs so that the coding of functions in the two theories will agree to some extent) which is also a well-ordering in NFU (all well-orderings of NFU are well-orderings in the nonstandard model of Zermelo set theory, but not vice versa, due to the formation of urelements in the construction of the model), and W has type α in NFU, then j(W) will be a well-ordering of type T(α) in NFU.
In fact, j is coded by a function in the model of NFU. The function in the nonstandard model which sends the singleton of any element of $V_{j(\alpha)}$ to its sole element becomes in NFU a function which sends each singleton {x}, where x is any object in the universe, to j(x). Call this function Endo and let it have the following properties: Endo is an injection from the set of singletons into the set of sets, with the property that Endo( {x} ) = {Endo( {y} ) | y∈x} for each set x. This function can define a type-level "membership" relation on the universe, one reproducing the membership relation of the original nonstandard model.

Strong axioms of Infinity

In this section we mainly discuss the effect of adding various "strong axioms of infinity" to our usual base theory, NFU + Infinity + Choice. This base theory, known consistent, has the same strength as TST + Infinity, or Zermelo set theory with Separation restricted to bounded formulas (Mac Lane set theory). One can add to this base theory strong axioms of infinity familiar from the ZFC context, such as "there exists an inaccessible cardinal," but it is more natural to consider assertions about cantorian and strongly cantorian sets. Such assertions not only bring into being large cardinals of the usual sorts, but strengthen the theory on its own terms.
The weakest of the usual strong principles is:

• Rosser's Axiom of Counting. The set of natural numbers is a strongly cantorian set.

To see how natural numbers are defined in NFU, see set-theoretic definition of natural numbers. The original form of this axiom given by Rosser was "the set {m | 1≤m≤n} has n members, for each natural number n". This intuitively obvious assertion is unstratified: what is provable in NFU is "the set {m | 1≤m≤n} has $T^2(n)$ members" (where the T operation on cardinals is defined by T( | A | ) = | P1(A) |; this raises the type of a cardinal by one). For any cardinal number (including natural numbers), to assert T( | A | ) = | A | is equivalent to asserting that the sets A of that cardinality are cantorian (by a usual abuse of language, we refer to such cardinals as "cantorian cardinals"). It is straightforward to show that the assertion that each natural number is cantorian is equivalent to the assertion that the set of all natural numbers is strongly cantorian.

Counting is consistent with NFU, but increases its consistency strength noticeably; not, as one would expect, in the area of arithmetic, but in higher set theory. NFU + Infinity proves that each $\beth_n$ exists, but not that $\beth_{\omega}$ exists; NFU + Counting (easily) proves Infinity, and further proves the existence of $\beth_{\beth_n}$ for each n, but not the existence of $\beth_{\beth_{\omega}}$. (See beth numbers.)
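For reference, the beth numbers appearing here are obtained by iterating cardinal exponentiation:

```latex
\beth_0 = \aleph_0, \qquad \beth_{n+1} = 2^{\beth_n}, \qquad
\beth_{\omega} = \sup_{n < \omega} \beth_n .
```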
Counting implies immediately that one does not need to assign types to variables restricted to the set N of natural numbers for purposes of stratification; it is a theorem that the power set of a strongly cantorian set is strongly cantorian, so it is further not necessary to assign types to variables restricted to any iterated power set of the natural numbers, or to such familiar sets as the set of real numbers, the set of functions from reals to reals, and so forth. The set-theoretical strength of Counting is less important in practice than the convenience of not having to annotate variables known to have natural number values (or related kinds of values) with singleton brackets, or to apply the T operation in order to get stratified set definitions. Counting implies Infinity; each of the axioms below needs to be adjoined to NFU + Infinity to get the effect of strong variants of Infinity; Ali Enayat has investigated the strength of some of these axioms in models of NFU + "the universe is finite". A model of the kind constructed above satisfies Counting just in case the automorphism j fixes all natural numbers in the underlying nonstandard model of Zermelo set theory.

The next strong axiom we consider is the

• Axiom of strongly cantorian separation: For any strongly cantorian set A and any formula φ (not necessarily stratified!) the set {x∈A | φ} exists.

Immediate consequences include Mathematical Induction for unstratified conditions (which is not a consequence of Counting; many but not all unstratified instances of induction on the natural numbers follow from Counting). This axiom is surprisingly strong.
Unpublished work of Robert Solovay shows that the consistency strength of the theory NFU* = NFU + Counting + Strongly Cantorian Separation is the same as that of Zermelo set theory + $\Sigma_2$ Replacement. This axiom holds in a model of the kind constructed above (with Choice) if the ordinals which are fixed by j and dominate only ordinals fixed by j in the underlying nonstandard model of Zermelo set theory are standard, and the power set of any such ordinal in the model is also standard. This condition is sufficient but not necessary.

Next is the

• Axiom of Cantorian Sets: Every cantorian set is strongly cantorian.

This very simple and appealing assertion is extremely strong. Solovay has shown the precise equivalence of the consistency strength of the theory NFUA = NFU + Infinity + Cantorian Sets with that of ZFC + a schema asserting the existence of an n-Mahlo cardinal for each concrete natural number n. Ali Enayat has shown that the theory of cantorian equivalence classes of well-founded extensional relations (which gives a natural picture of an initial segment of the cumulative hierarchy of ZFC) interprets the extension of ZFC with n-Mahlo cardinals directly. A permutation technique can be applied to a model of this theory to give a model in which the hereditarily strongly cantorian sets with the usual membership relation model the strong extension of ZFC.
This axiom holds in a model of the kind constructed above (with Choice) just in case the ordinals fixed by j in the underlying nonstandard model of ZFC are an initial (proper class) segment of the ordinals of the model.

Next consider the

• Axiom of Cantorian Separation: For any cantorian set A and any formula φ (not necessarily stratified!) the set {x∈A | φ} exists.

This combines the effect of the two preceding axioms and is actually even stronger (precisely how is not known). Unstratified mathematical induction enables proving that there are n-Mahlo cardinals for every n, given Cantorian Sets, which gives an extension of ZFC that is even stronger than the previous one, which only asserts that there are n-Mahlos for each concrete natural number (leaving open the possibility of nonstandard counterexamples). This axiom will hold in a model of the kind described above if every ordinal fixed by j is standard, and every power set of an ordinal fixed by j is also standard in the underlying model of ZFC. Again, this condition is sufficient but not necessary.

An ordinal is said to be cantorian if it is fixed by T, and strongly cantorian if it dominates only cantorian ordinals (this implies that it is itself cantorian).
In models of the kind constructed above, cantorian ordinals of NFU correspond to ordinals fixed by j (they are not the same objects because different definitions of ordinal numbers are used in the two theories).

Equal in strength to Cantorian Sets is the

• Axiom of Large Ordinals: For each noncantorian ordinal α, there is a natural number n such that $T^n(\Omega) < \alpha$.

Recall that Ω is the order type of the natural order on all ordinals. This only implies Cantorian Sets if we have Choice (but is at that level of consistency strength in any case). It is remarkable that one can even define $T^n(\Omega)$: this is the nth term $s_n$ of any finite sequence of ordinals s of length n + 1 such that $s_0 = \Omega$ and $s_{i+1} = T(s_i)$ for each appropriate i. This definition is completely unstratified. The uniqueness of $T^n(\Omega)$ can be proved (for those n for which it exists) and a certain amount of common-sense reasoning about this notion can be carried out, enough to show that Large Ordinals implies Cantorian Sets in the presence of Choice. In spite of the knotty formal statement of this axiom, it is a very natural assumption, amounting to making the action of T on the ordinals as simple as possible. A model of the kind constructed above will satisfy Large Ordinals if the ordinals moved by j are exactly the ordinals which dominate some $j^{-i}(\alpha)$ in the underlying nonstandard model of ZFC.

• Axiom of Small Ordinals: For any formula φ, there is a set A such that the elements of A which are strongly cantorian ordinals are exactly the strongly cantorian ordinals such that φ.

Solovay has shown the precise equivalence in consistency strength of NFUB = NFU + Infinity + Cantorian Sets + Small Ordinals with Morse–Kelley set theory plus the assertion that the proper class ordinal (the class of all ordinals) is a weakly compact cardinal.
This is very strong indeed! Moreover, NFUB-, which is NFUB with Cantorian Sets omitted, is easily seen to have the same strength as NFUB. A model of the kind constructed above will satisfy this axiom if every collection of ordinals fixed by j is the intersection of some set of ordinals with the ordinals fixed by j, in the underlying nonstandard model of ZFC.

Even stronger is the theory NFUM = NFU + Infinity + Large Ordinals + Small Ordinals. This is equivalent to Morse–Kelley set theory with a predicate on the classes which is a κ-complete nonprincipal ultrafilter on the proper class ordinal κ; in effect, this is Morse–Kelley set theory + "the proper class ordinal is a measurable cardinal"! The technical details here are not the main point, which is that reasonable and natural (in the context of NFU) assertions turn out to be equivalent in power to very strong axioms of infinity in the ZFC context. This fact is related to the correlation between the existence of models of NFU, described above and satisfying these axioms, and the existence of models of ZFC with automorphisms having special properties.
References

• Hailperin, T., "A set of axioms for logic," Journal of Symbolic Logic 9, pp. 1-19.
• Crabbé, Marcel, 1982, "On the consistency of an impredicative fragment of Quine's NF," The Journal of Symbolic Logic 47: 131-136.
• Holmes, Randall, 1998. Elementary Set Theory with a Universal Set. Academia-Bruylant. The publisher has graciously consented to permit diffusion of this introduction to NFU via the web. Copyright is reserved.
• Jensen, R. B., 1969, "On the Consistency of a Slight(?) Modification of Quine's NF," Synthese 19: 250-63. With discussion by Quine.
• Quine, W. V., 1980, "New Foundations for Mathematical Logic" in From a Logical Point of View, 2nd ed., revised. Harvard Univ. Press: 80-101. The definitive version of where it all began, namely Quine's 1937 paper in the American Mathematical Monthly.
http://physics.stackexchange.com/questions/29208/when-would-the-proposed-black-hole-at-the-centre-of-milky-way-gulp-in-our-solar/29210
# When would the proposed black hole at the centre of the Milky Way gulp in our solar system?

I've heard and read that our solar system lies near the peripheral region of the Galaxy. Accordingly, wouldn't we eventually be gulped down by the super-massive black hole? If so, how long would that take?

## 1 Answer

We don't have to worry about falling into the black hole because we have far too much angular momentum from our motion about the galactic center. This can be made quantitative by considering the sign of the effective potential of our orbit. For reference, the sun is about 27000 light years from the galactic center, and its orbital speed is about 220 km/s. You'll find that the centrifugal term overwhelms the gravitational term by a factor of about $10^5$. Also, on a galactic scale, the mass of the central black hole is tiny. It is a few million times as massive as the sun. On the other hand, the total mass of all the stars in the galaxy is about $10^5$ times larger, and the mass of all the dark matter is another factor of 10 or so larger than that. Careers in astrophysics have been built on the question opposite of the one posed here: how does material ever shed its angular momentum to feed the black hole and allow it to grow to the size we observe? The answer is quite detailed and complex, and is still an active research topic.
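The angular-momentum argument can be checked with a back-of-the-envelope calculation. This Python sketch compares the centrifugal term of the effective potential (per unit mass) against the gravitational term of the black hole, using the distance and orbital speed quoted in the answer; the black-hole mass of about four million solar masses is my assumed value for "a few million".

```python
# Effective potential per unit mass: L^2/(2 r^2) - G*M/r, with L = v*r,
# so the centrifugal term is v^2/2 and the gravitational term is G*M/r.

G = 6.674e-11                  # m^3 kg^-1 s^-2
M_sun = 1.989e30               # kg
M_bh = 4e6 * M_sun             # assumed ~4 million solar masses
ly = 9.461e15                  # meters per light year
r = 27000 * ly                 # Sun's distance from the galactic center
v = 220e3                      # Sun's orbital speed, m/s

centrifugal = v**2 / 2         # J/kg
gravitational = G * M_bh / r   # J/kg
ratio = centrifugal / gravitational
```

With these numbers the ratio comes out of order $10^4$; the answer quotes $10^5$, and the exact factor depends on the assumed black-hole mass, but either way the centrifugal term dominates by many orders of magnitude.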
http://mathschallenge.net/full/a_radical_proof
# mathschallenge.net

## A Radical Proof

#### Problem

The radical of $n$, $\text{rad}(n)$, is the product of distinct prime factors of $n$. For example, $504 = 2^3 \times 3^2 \times 7$, so $\text{rad}(504) = 2 \times 3 \times 7 = 42$. Given any triplet of relatively prime positive integers $(a, b, c)$ for which $a + b = c$ and with $a \lt b \lt c$, it is conjectured, but not yet proved, that the largest element of the triplet satisfies $c \lt \text{rad}(abc)^2$. Assuming that this conjecture is true, prove that $x^n + y^n = z^n$ has no integer solutions for $n \ge 6$.

#### Solution

Let $a = x^n$, $b = y^n$, and $c = z^n$. Since $x$, $y$, and $z$ are coprime, the radical of $abc$ can be at most $xyz$. Therefore $\text{rad}(abc) \le xyz$, and as $z$ is the greatest in value, it follows that $\text{rad}(abc) \lt z^3$. By the conjecture, $c \lt \text{rad}(abc)^2 \lt (z^3)^2$; that is, $c = z^n \lt z^6$. Hence $n \lt 6$, and we conclude that $x^n + y^n = z^n$ has no integer solutions for $n \ge 6$. The cases of $n$ = 3, 4, and 5 all have elementary proofs, so if this conjecture were true, it would provide for an elegant completion of the proof of FLT (Fermat's Last Theorem). Of course the importance of this conjecture not yet being proved cannot be overstressed. It may turn out that the proof of this conjecture is more difficult than the current proof for FLT. Also note that the proof of FLT does not provide proof of this conjecture. However, mathematicians are still encouraged to find a proof for this conjecture, and other results relating to the radical function, as its usefulness is far reaching into many other areas of current mathematical research. Problem ID: 284 (23 Jul 2006)     Difficulty: 3 Star
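The radical function itself is easy to compute by trial division. This Python sketch (the helper name `rad` is mine, not from the problem) reproduces the worked example $\text{rad}(504) = 42$ and spot-checks the conjectured inequality on the coprime triplet $1 + 8 = 9$:

```python
def rad(n):
    """Product of the distinct prime factors of n."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p              # record the prime once...
            while n % p == 0:
                n //= p         # ...then strip all its powers
        p += 1
    if n > 1:
        r *= n                  # leftover factor is prime
    return r

print(rad(504))                  # 2 * 3 * 7 = 42
# Spot-check c < rad(abc)^2 for (a, b, c) = (1, 8, 9): rad(72) = 6, and 9 < 36.
print(9 < rad(1 * 8 * 9) ** 2)   # True
```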
http://mathhelpforum.com/algebra/170075-factor-completely.html
# Thread:

1. ## Factor Completely

$x^4 - y^4 = (x^2 + y^2)(x^2 - y^2) = (x^2 + y^2)(x + y)(x - y)$

I'm ok with this until the last part. It looks like, as the second term is a difference of squares, it has been factored as well. I'm just a bit fuzzy on the use of brackets here. Can anyone help? Thanks.

2. Yes, each factorisation is a difference of two squares...
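A quick numeric sanity check of the full factorisation, looping over a grid of small integer values in plain Python:

```python
# Verify x^4 - y^4 == (x^2 + y^2)(x + y)(x - y) on a grid of test values.
for x in range(-5, 6):
    for y in range(-5, 6):
        assert x**4 - y**4 == (x**2 + y**2) * (x + y) * (x - y)
print("identity holds")
```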
http://physics.aps.org/articles/print/v4/71
# Viewpoint: Excited atoms spin out of equilibrium , School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom Published September 12, 2011  |  Physics 4, 71 (2011)  |  DOI: 10.1103/Physics.4.71 Excited cold atoms in Rydberg states behave similarly to certain spin systems, providing us with a versatile toolbox with which to study nonequilibrium phenomena. A promising approach towards probing matter far from equilibrium, a fundamental challenge in physics, calls for the usage of cold atoms. Gases of such atoms possess temperatures of a few microkelvin, or even lower, and can be almost perfectly isolated from the environment. Furthermore, the interaction strength, as well as their external trapping potential, can be controlled to an outstandingly high precision [1]. They provide a platform not only for studying the dynamics of closed quantum systems but also for detailed exploration of the competition between coherent and dissipative dynamics within open systems with tailor-made properties [2]. In a paper in Physical Review A [3], Tony Lee at the California Institute of Technology, Pasadena, and colleagues theoretically investigate such an open quantum system. Their setting is a gas of alkali metal atoms confined to a regular lattice with exactly one atom per site—a setup that can be achieved experimentally with appropriately shaped laser beams or magnetic fields. What makes the setting different from traditional experiments with cold atoms, and therefore particularly interesting, is the use of atoms in highly excited states—so-called Rydberg states [4]. An alkali-metal atom, with its single active electron, shares many properties with the hydrogen atom. Excited states form a Rydberg series whose states can be labeled, just like in hydrogen, by the principal quantum number $n$. 
Interesting physics emerges in the presence of more than one Rydberg atom, as the large distance between the nucleus and the valence electron renders these atoms into electric dipoles. Depending on the particular Rydberg state, the interaction between two such atoms is then either determined by a van der Waals or a dipole-dipole potential. The authors consider the former potential, which is, in principle, also present between ground-state atoms. The striking difference, however, is that the interaction between atoms in Rydberg states is enhanced by a factor of up to $n^{11}$. For values of the principal quantum number typically used in experiments, $n=40\ldots80$, this means an increase of $10$ orders of magnitude, i.e., the interaction affects even atoms that are separated by several micrometers. This is in contrast to the contact potential usually present between ground-state atoms. In the most extreme case, interaction-induced level shifts are so huge that a simultaneous excitation of two nearby atoms to Rydberg states is virtually impossible [for an illustration see Fig. 1(a)]. This so-called Rydberg blockade mechanism [5] lies behind a number of exciting phenomena that make Rydberg atoms useful for applications ranging from quantum information processing and quantum simulation to nonlinear quantum optics and ultracold chemistry. A recent development is the use of Rydberg atoms to realize and explore the physics of strongly correlated spin systems, a direction that is also pursued by Lee et al. They evoke a scenario in which atoms are modeled by only two internal states, which is a huge simplification because an atom has infinitely many electronic levels. This two-level approximation is valid if the frequency of the laser used to excite Rydberg levels is closely resonant with only a single electronic transition. In this situation, the electronic ground state can be regarded as the down state and the Rydberg level as the up state of a pseudospin.
Shifts in electronic levels caused by the presence of two or more Rydberg atoms then directly translate to a spin-spin interaction. The coupling to the excitation laser produces an effective magnetic field. Recent work has shown that these Rydberg pseudospin systems form a versatile toolbox for the study of critical phenomena [6], exotic quantum phases [7], dynamical crystallization [8], order-disorder phase transitions [9], as well as the thermalization of closed quantum systems [10]. Lee et al. include a final but important ingredient—dissipation. Like all excited states, Rydberg states are prone to spontaneous emission of photons. This aspect is often disregarded, as the lifetime of Rydberg states can be about $100μs$, longer than the typical duration of an experiment. Here, however, dissipation is rendered into a feature rather than a problem, which together with the involved coherent processes, i.e., the laser excitation and the interaction between Rydberg atoms [see Fig. 1(a)], produces an intricate dynamical behavior. Mathematically, the dynamics of this open spin system is governed by a Lindblad master equation. This equation captures the coherent quantum mechanical evolution and at the same time permits the inclusion of incoherent processes. Lee et al. solve it with a mean-field approach, where each spin experiences a fictitious averaged interaction potential—the mean-field—produced by the spins on the remaining lattice sites. In a first attempt, all lattices sites are assumed to be equivalent. This ansatz leads to a system of coupled equations that are similar to the optical Bloch equations but contain a nonlinearity due to the Rydberg-Rydberg interaction. Lee et al. perform an analysis of the steady state of these equations. Such a steady state always exists but here turns out to be unstable for certain combinations of the laser parameters and the interaction strength. 
A closer look reveals that, in particular, perturbations with a wavelength twice the lattice spacing trigger these instabilities. This decisive hint guides the authors to an augmented mean-field ansatz in which they break the system into two sublattices, each of which is described by its own mean field. Subsequent analysis of the resulting coupled equations reveals the existence of two stable fixed points, i.e., time-independent, steady-state solutions. One type of fixed point corresponds to a spatially uniform excitation of Rydberg atoms, while the other one shows an unequal population of the two sublattices. The broken sublattice symmetry of the system is reminiscent of an antiferromagnetic state. Lee and his colleagues continue by exploring the structure of the fixed points as a function of the laser detuning parameter ∆, which tells one how far off resonance the excitation laser frequency is with respect to the electronic transition from the ground state to the Rydberg state. For very large and negative $Δ$, the nonlinear equations have just one fixed point that corresponds to a uniform density. This uniform fixed point becomes unstable with increasing $Δ$ and stable nonuniform fixed points emerge from it. In an experiment, this should become visible in a continuous transition from a uniform density to an unequal occupation of the two sublattices. In their numerical treatment, Lee et al. also observe that the nonuniform fixed points can become unstable via what is known in the theory of nonlinear dynamical systems as a Hopf bifurcation. Here the system undergoes a transition to a stable limit cycle in which the population of the two sublattices oscillates periodically in time. In total, the system therefore exhibits three distinct phases [see the densities sketched in Fig. 1(b)-(d)]: a uniform phase, a nonuniform or antiferromagnetic phase, and an oscillatory phase. 
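To make the uniform mean-field ansatz concrete, here is a minimal Python sketch. It is not the model of Lee et al.: their analysis uses the full time-dependent Bloch equations, whereas this sketch reduces them to their steady state for brevity, and all parameter values are invented. The point it illustrates is the self-consistency at the heart of the mean-field treatment: the Rydberg population $\rho$ shifts the effective detuning by an amount proportional to $\rho$ itself, so the steady state must be solved for self-consistently.

```python
def rydberg_population(omega, gamma, delta, v, n_iter=500):
    """Fixed-point iteration for a uniform mean-field steady state.

    rho = (s/2) / (1 + s)  with saturation  s = (omega^2/2) / (d_eff^2 + gamma^2/4)
    and interaction-shifted detuning       d_eff = delta - v * rho.
    """
    rho = 0.0
    for _ in range(n_iter):
        d_eff = delta - v * rho                       # mean-field level shift
        s = (omega**2 / 2.0) / (d_eff**2 + gamma**2 / 4.0)
        rho = 0.5 * s / (1.0 + s)                     # two-level steady state
    return rho

rho = rydberg_population(omega=1.0, gamma=1.0, delta=-2.0, v=0.5)
```

For weak interaction the iteration converges to a unique uniform solution with $0 < \rho < 1/2$; the antiferromagnetic and oscillatory phases found by Lee et al. only appear once the two-sublattice ansatz, or the full dynamics, is used, which this sketch deliberately omits.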
These findings provide a glimpse of the rich physics that should be accessible with Rydberg states of ultracold atoms. Experiments are about to catch up with the type of theoretical advance outlined in this paper; recently a first experimental implementation of a spin lattice system with Rydberg atoms was achieved [11].

### References

1. I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
2. S. Diehl, A. Tomadin, A. Micheli, R. Fazio, and P. Zoller, Phys. Rev. Lett. 105, 015702 (2010).
3. T. Lee, H. Häffner, and M. Cross, Phys. Rev. A 84, 031402 (2011).
4. T. Gallagher, Rydberg Atoms (Cambridge University Press, Cambridge, 1984).
5. M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010); M. D. Lukin, M. Fleischhauer, R. Côté, L. M. Duan, D. Jaksch, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 87, 037901 (2001).
6. H. Weimer, R. Löw, T. Pfau, and H. P. Büchler, Phys. Rev. Lett. 101, 250601 (2008).
7. H. Weimer, M. Müller, I. Lesanovsky, P. Zoller, and H. P. Büchler, Nature Phys. 6, 382 (2010).
8. T. Pohl, E. Demler, and M. D. Lukin, Phys. Rev. Lett. 104, 043002 (2010).
9. S. Ji, C. Ates, and I. Lesanovsky, Phys. Rev. Lett. 107, 060406 (2011); E. Sela, M. Punk, and M. Garst, Phys. Rev. B 84, 085434 (2011).
10. I. Lesanovsky, B. Olmos, and J. P. Garrahan, Phys. Rev. Lett. 105, 100603 (2010).
11. M. Viteau, M. G. Bason, J. Radogostowicz, N. Malossi, D. Ciampini, O. Morsch, and E. Arimondo, Phys. Rev. Lett. 107, 060402 (2011).

### Highlighted article

#### Antiferromagnetic phase transition in a nonequilibrium lattice of Rydberg atoms

Tony E. Lee, H. Häffner, and M. C. Cross

Published September 12, 2011
http://unapologetic.wordpress.com/2011/04/12/tangent-vectors-geometrically/?like=1&_wpnonce=34114a6dfa
The Unapologetic Mathematician Tangent Vectors Geometrically Now we’re in a position to tie our notion of tangent vectors back into geometry. Let $M$ be a manifold containing a point $p$. Now consider a smooth curve $c$ in $M$ that passes through this point. Without loss of generality, we can let $c:(-1,1)\to M$, and let $c(0)=p$. This will simplify a lot of our discussion by standardizing some of the details. We know that $c$ gives us a tangent vector $c'(0)\in\mathcal{T}_pM$. The important thing for our purposes is that this tangent vector only depends on the germ of $c$ at $0$. That is, it is a very local property of the curve at $0$ and not in some particular neighborhood of $0$. Indeed, if we pick some coordinate patch $(U,x)$ around $p$ we can write out $c'(0)$ in components. If $d$ is another smooth curve whose derivative has the same components at $0$, then it makes sense to say that $c$ and $d$ have the same tangent vector at $0$. Tangent vectors, then, are equivalence classes of curves under this relation. We have to be careful, though. Does this definition depend on the coordinates we use? No, and our algebraic approach makes it easy to see why: if $y$ is another set of coordinates around $p$ we use the Jacobian of the transition function $y\circ x^{-1}$ to transform tangent vectors from one coordinate basis to another. Thus if $c'(0)$ and $d'(0)$ have the same components with respect to one coordinate system the same must be true with respect to all coordinate systems. Proving this from the geometric definition gets hairier. Now it’s clear that every geometric tangent vector — every equivalence class of curves — gives rise to a unique algebraic tangent vector — a certain linear functional on $\mathcal{O}_p$. 
Indeed, we can turn around our calculation of the derivative $c'(0)$ and use it as a definition:

$\displaystyle\left[c'(0)\right](f)=\frac{d}{dt}(f\circ c)\bigg\vert_0$

Given a coordinate map $x$ we can write this out

$\displaystyle\begin{aligned}\left[c'(0)\right](f)&=\frac{d}{dt}\left((f\circ x^{-1})\circ(x\circ c)\right)\bigg\vert_0\\&=\sum\limits_iD_i(f\circ x^{-1})\frac{d}{dt}(x^i\circ c)\bigg\vert_0\\&=\sum\limits_ic'(0)^i\left[\frac{\partial}{\partial x^i}(p)\right](f)\end{aligned}$

where we have used the multivariable chain rule to pass to the second line. Thus if two curves have the same components with respect to some local coordinate map (and thus with respect to all of them) they define the same operator on germs $f$.

The flip side is where it gets a little hairier. Just because a geometric tangent vector gives a well-defined algebraic tangent vector, do all algebraic tangent vectors arise in this way? That is, given a vector $v\in\mathcal{T}_pM$, is there guaranteed to be some smooth curve $c$ passing through $c(0)=p$ with tangent vector $c'(0)=v$? Given such a vector $v$ and local coordinates $x$ at $p$ — without loss of generality we can pick $x(p)=0\in\mathbb{R}^n$ — we get components $v^i$. Now we just define a curve $\gamma$ in $\mathbb{R}^n$ whose $i$th component is $\gamma^i(t)=tv^i$, and define $c=x^{-1}\circ\gamma$. Now it's clear that the component $c'(0)^i=v^i$, so $c'(0)=v$, as desired.

Thus defining tangent vectors algebraically as we have done gives the same result as defining them geometrically. The geometric intuition had to wait, but it made establishing our desired results significantly easier.

Posted by John Armstrong | Differential Topology, Topology
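The surjectivity construction is easy to check numerically: build the curve $\gamma^i(t)=tv^i$ and verify by finite differences that $\frac{d}{dt}(f\circ\gamma)\big\vert_0 = \sum_i v^i D_if(0)$. A small Python sketch, where the particular $f$ and $v$ are arbitrary choices of mine:

```python
def f(x, y):
    # an arbitrary smooth function (a representative of a germ at the origin)
    return x**2 * y + y

v = (2.0, 3.0)     # desired tangent-vector components
h = 1e-5

# d/dt f(gamma(t)) at t = 0 by a central difference along gamma(t) = (t*v1, t*v2)
deriv = (f(v[0] * h, v[1] * h) - f(-v[0] * h, -v[1] * h)) / (2 * h)

# By the chain rule this should equal v1*D1f(0,0) + v2*D2f(0,0) = 2*0 + 3*1 = 3
print(abs(deriv - 3.0) < 1e-6)   # True
```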
http://mathematica.stackexchange.com/questions/10320/selecting-polynomial-roots-and-plotting-against-parameters/10324
# Selecting polynomial roots and plotting against parameters

An implicit function $f(x,k)=0$ is quadratic in $x$ and contains one parameter $k$ which I must vary. Using `Solve`, I get two real solutions for each specific $k$: $x_1$ and $x_2$, of which I must choose the one that lies within $[0, 0.5]$. I must do this for a continuum of parameters $k$ between, say, $0$ and $1$. I then must plot the relevant $x$ solution against the parameter $k$ as a smooth curve.

Note: I also tried the inglorious method, i.e. letting $k$ take discrete values, where I thought I could manually select, say, 30 points and interpolate. But no curve-fitting command I tried, using various polynomial and exponential expansions, could give me a smooth curve. Of course I hope to learn the elegant method, but under time pressure anything that gives me a smooth curve is welcome!

- 1 A lot easier if you show what `f` looks like. – b.gatessucks Sep 7 '12 at 7:44
- Well, `FindRoot[]` supports the option of root bracketing; if all you want is an approximate root, it should be fine. Otherwise, since you say it's quadratic in $x$, one could always manipulate the quadratic formula... – J. M.♦ Sep 7 '12 at 8:14

## 1 Answer

As J. M. says, `FindRoot` allows two options: either root bracketing, or the simpler choice of starting with an initial approximation for the root that is close enough. Since you said your root is unique in the [0, 0.5] interval, and your function is smooth, you can expect that a starting value of 0.25 will usually give you the root you're looking for.

````
f[x_] := Expand[(x - 0.5*Sin[k^2])*(x - 3*k - 1)]
Plot[x /. FindRoot[f[x], {x, 0.25}], {k, 0, 1}]
````

- 2 Additionally, as a guarantee that the iteration never veers off the brackets you have set up, use Brent's method (i.e. append the option setting `Method -> "Brent"` to `FindRoot[]`). – J. M.♦ Sep 7 '12 at 9:48
- Thank you, it works.
The only thing is, when the desirable solution x*(k) approaches 0 as the parameter k moves toward one end of the range, the numerical solution becomes discontinuous, although I know there does exist a finite, smooth x(k). – user2297 Sep 14 '12 at 9:18
@ridwandrusli we have no way of helping you more if you don't post your function (or better, your current code) – F'x Sep 14 '12 at 9:35
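For completeness, the same bracketed search works outside Mathematica. A stdlib-only Python sketch using bisection on $[0, 0.5]$, with the example $f$ from the answer: for this $f$ the root inside the interval is exactly $0.5\sin(k^2)$ (the other root, $3k+1$, lies outside), so the bisection output can be checked against that.

```python
import math

def f(x, k):
    return (x - 0.5 * math.sin(k**2)) * (x - 3*k - 1)

def bisect(g, lo, hi, tol=1e-12):
    """Bisection on [lo, hi]; assumes g(lo) and g(hi) have opposite signs."""
    flo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * g(mid) <= 0:
            hi = mid                  # sign change in [lo, mid]
        else:
            lo, flo = mid, g(mid)     # sign change in [mid, hi]
    return 0.5 * (lo + hi)

# Root in [0, 0.5] for a sample of k values in (0, 1]
curve = [(k / 10, bisect(lambda x: f(x, k / 10), 0.0, 0.5)) for k in range(1, 11)]
```

Plotting `curve` (e.g. with matplotlib) then gives the smooth branch directly, with no curve fitting needed.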
http://mathhelpforum.com/differential-geometry/131145-prove-print.html
# prove...

Printable View

• February 28th 2010, 01:11 AM
flower3

Prove that if $f$ is continuous on $[a,b]$ with $f(x) \geq 0 \ , \forall x \in [a,b]$, and $g$ is strictly increasing on $[a,b]$ with $\int_a^b f \, dg =0$, then $f(x)=0$ $\forall x \in [a,b]$.

• February 28th 2010, 03:25 AM
Laurent

Quote:

Originally Posted by flower3
Prove that if $f$ is continuous on $[a,b]$ with $f(x) \geq 0 \ , \forall x \in [a,b]$, and $g$ is strictly increasing on $[a,b]$ with $\int_a^b f \, dg =0$, then $f(x)=0$ $\forall x \in [a,b]$.

The proof is the same as for the usual Riemann integral. Proceed by contradiction: assume that $f$ is not identically zero on $[a,b]$. Using continuity, prove that there exists $\epsilon>0$ and $a\leq u<v\leq b$ such that $f(x)\geq \epsilon$ when $x\in[u,v]$. Then, justify the following: $\int_a^b f\, dg\geq \int_u^v f\, dg\geq \epsilon \int_u^v dg=\epsilon(g(v)-g(u))>0$. And conclude.
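The key inequality, namely that $\int_u^v f\,dg \geq \epsilon(g(v)-g(u))$ whenever $f \geq \epsilon$ on $[u,v]$ and $g$ is increasing, can be illustrated with a direct Riemann-Stieltjes sum in Python. The choice of $f$ and $g$ below is mine; for it, $\int_0^1 x^2\,d(x^3) = \int_0^1 3x^4\,dx = 3/5$.

```python
def rs_sum(f, g, a, b, n=20000):
    """Left-endpoint Riemann-Stieltjes sum of f with respect to g on [a, b]."""
    s, x0 = 0.0, a
    for i in range(1, n + 1):
        x1 = a + (b - a) * i / n
        s += f(x0) * (g(x1) - g(x0))   # each increment g(x1) - g(x0) is > 0
        x0 = x1
    return s

f = lambda x: x * x          # continuous and nonnegative
g = lambda x: x ** 3         # strictly increasing on [0, 1]
S = rs_sum(f, g, 0.0, 1.0)   # should approach 3/5
```

On any subinterval where $f \geq \epsilon$, the sum is at least $\epsilon$ times the total increase of $g$ there, because every increment of $g$ is positive; that is exactly the step Laurent asks the reader to justify.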
http://mathoverflow.net/questions/74362?sort=votes
## Non isomorphic finite rings with isomorphic additive and multiplicative structure

About a year ago, a colleague asked me the following question:

Suppose $(R,+,\cdot)$ and $(S,\oplus,\odot)$ are two rings such that $(R,+)$ is isomorphic, as an abelian group, to $(S,\oplus)$, and $(R,\cdot)$ is isomorphic (as a semigroup/monoid) to $(S,\odot)$. Does it follow that $R$ and $S$ are isomorphic as rings?

I gave him the following counterexample: take your favorite field $F$, and let $R=F[x]$ and $S=F[x,y]$, the rings of polynomials in one and two (commuting) variables. They are not isomorphic as rings, yet $(R,+)$ and $(S,+)$ are both isomorphic to the direct sum of countably many copies of $F$, and $(R-\{0\},\cdot)$ and $(S-\{0\},\cdot)$ are both isomorphic to the direct product of $F-\{0\}$ and a direct sum of $\aleph_0|F|$ copies of the free monoid in one letter (and we can add a zero to both and maintain the isomorphism).

He mentioned this example in a colloquium yesterday, which got me to thinking:

Question. Is there a counterexample with $R$ and $S$ finite?

- 1 There might be a counterexample of two 3-nilpotent rings. More specifically, take two non-isomorphic nilpotent of class 3 semigroups S,T of the same size, and consider algebras ${\mathbb F}_2S$ and ${\mathbb F}_2T$. Then these algebras are isomorphic as Abelian (additive) groups but might be non-isomorphic and might have isomorphic multiplicative semigroups. I do not have concrete examples, but I would search in this direction. – Mark Sapir Sep 2 2011 at 23:45
- @Mark: Thank you for the suggestion; I'll give it some thought! – Arturo Magidin Sep 3 2011 at 4:17
- 1 If you assume that your finite rings are algebras over a finite field $F$, then their additive groups are isomorphic as soon as they have the same cardinality since they are then vector spaces of the same dimension.
Thus in this case one can ask is it true that two finite dimensional algebras over a finite field are isomorphic iff their multiplicative monoids are isomorphic. Seems hard to believe. – Benjamin Steinberg Oct 13 2011 at 19:36 ## 1 Answer Here are some initial thoughts. Put $$X(R)=\{e\in R: e^2=e \text{ and } er=re \text{ for all } r\in R\}$$ We can partially order this by declaring that $e\leq f$ iff $ef=e$. We then put $$Y(R)=\{e\in X(R): 0\lt e \text{ and there is no } f\in X(R) \text{ with } 0 \lt f \lt e \}$$ One can check that $X(R)$ is a finite Boolean algebra under this order (with meet operation $e\wedge f=ef$ and join $e\vee f=e+f-ef$) so it is isomorphic to the lattice of subsets of its set of atoms, which is $Y(R)$. In particular, if $|Y(R)|=n$ then $|X(R)|=2^n$. For $e\in Y(R)$ we put $$R[e] = Re = \{ x\in R : ex=xe=x\}$$ We can then define $p:R\to\prod_{e\in Y(R)}R[e]$ by $p(x)_e=ex$. It is standard that this is an isomorphism of rings. Next, by hypothesis we have a bijection $f:R\to S$ that preserves multiplication. It follows that $f$ gives an isomorphism $X(R)\to X(S)$ of posets, and thus a bijection $Y(R)\to Y(S)$. As the sets $R[e]$ and the maps $p_e$ are defined using only the multiplicative structure, we see that $f$ gives an isomorphism $R[e]\to S[f(e)]$ of multiplicative monoids for each $e\in Y(R)$. However, we do not obviously have an additive isomorphism from $R[e]$ to $S[f(e)]$, so this does not succeed in reducing the problem to the indecomposable case. Nonetheless, it is worth thinking about the ring structure of $R[e]$. The quotient by the Jacobson radical is a finite simple ring and so is a matrix algebra over a finite division ring, but finite division rings are fields by a theorem of Wedderburn, so this quotient is quite tractable. - Is it correct to say that elements of the set $X(R)$, i.e. 
the central idempotents, correspond to subsets of the set of connected components of $\mathrm{Spec}(R)$ and elements of $Y(R)$ (positive minimal central idempotents) correspond to individual connected components? – Qfwfq Sep 2 2011 at 23:24 @Neil: Interesting; I'll think about this as well. Thanks! – Arturo Magidin Sep 3 2011 at 4:19
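The poset $X(R)$ and its atoms $Y(R)$ from the answer above can be checked by brute force for small commutative rings. Here is a minimal sketch for $R=\mathbb{Z}/n\mathbb{Z}$ (where every element is central, so $X(R)$ is simply the set of idempotents); the function names are mine, not from the answer:

```python
def idempotents(n):
    # X(R) for R = Z/nZ: all e with e*e = e (centrality is automatic here)
    return [e for e in range(n) if (e * e) % n == e]

def atoms(n):
    # Y(R): minimal nonzero idempotents under  e <= f  iff  e*f == e (mod n)
    X = idempotents(n)
    leq = lambda e, f: (e * f) % n == e
    return [e for e in X
            if e != 0 and not any(f not in (0, e) and leq(f, e) for f in X)]

X6, Y6 = idempotents(6), atoms(6)
# For Z/6: X = [0, 1, 3, 4], Y = [3, 4], and |X| = 2**|Y| as claimed
```

The final assertion of the answer, $|X(R)| = 2^{|Y(R)|}$, holds for every $\mathbb{Z}/n\mathbb{Z}$, reflecting the Chinese remainder decomposition into prime-power factors.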
http://mathoverflow.net/revisions/75129/list
## Return to Answer

A very simple and important notion of duality is the following. Start with a collection $F$ of subsets of a ground set $X$. Now define the blocker $F^*$ of $F$ as follows: $F^* = \{X \setminus A : A \notin F\}$. In words, we take the complements of all sets not in $F$. This notion is very important in combinatorial optimization and polyhedral combinatorics. It is also a simple manifestation of Alexander duality from algebraic topology.

Addendum (Adam Bjorndahl): This construction can be viewed as a generalization of the quantifier duality $$\forall \equiv \lnot \exists \lnot.$$ As above, fix a set $X$. For $F \subseteq 2^{X}$, define the formula $(\text{F}x) \ \phi(x)$ to mean that $$\{x \in X : \phi(x)\} \in F.$$ So $(\text{F}x) \ \phi(x)$ might be read "for $F$-many $x$, property $\phi$ holds". Three special cases deserve some attention.

• When $F = \{X\}$, we recover the usual "for all" quantifier. Succinctly, $\forall = \{X\}$.
• Dualizing, we obtain $$\lnot (\text{F}x) \lnot \phi(x) \iff \{x \in X : \lnot \phi(x)\} \notin F;$$ thus if $A = \{x \in X : \phi(x)\}$, we have $$\lnot (\text{F}x) \lnot \phi(x) \iff A \in F^{*} \iff (\text{F}^{*}x) \phi(x),$$ where $F^{*}$ is the blocker of $F$.
• Finally, if $U \subset 2^{X}$ is an ultrafilter on $X$, then $$\lnot (\text{U}x) \lnot \phi(x) \iff (\text{U}x)\phi(x),$$ which exhibits ultrafilters as self-dual quantifiers, a perspective I find appealing.
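The blocker and the quantifier duality it encodes can be checked on a tiny ground set; a throwaway Python sketch (the function names are mine, not from the answer):

```python
from itertools import combinations

def powerset(X):
    # All subsets of X, as frozensets
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def blocker(F, X):
    # F* = { X \ A : A not in F }
    return {X - A for A in powerset(X) if A not in F}

X = frozenset({1, 2, 3})
U = {A for A in powerset(X) if 1 in A}   # principal ultrafilter at 1
forall = {X}                             # the "for all" quantifier, F = {X}
exists = {A for A in powerset(X) if A}   # "there exists": all nonempty sets

assert blocker(U, X) == U                # ultrafilters are self-dual
assert blocker(forall, X) == exists      # forall* = exists
```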
http://mathhelpforum.com/algebra/151743-express-fraction-simplest-form.html
Thread: 1. express the fraction in simplest form (1+(1/(x^2-1)))/((1/x)-(x/(x+1))) 2. Start by making common denominators in both top and bottom of the fraction. You follow? 3. yup i did do that. the answer i got was x^3/(-x^3+2x^2-x-1) but the answer sheet says its x^3/(-x^3+2x^2-1) 4. Thank you for using brackets, but this is still very hard to read... Is it $\frac{1 + \frac{1}{x^2 - 1}}{\frac{1}{x} - \frac{x}{x + 1}}$? If so, start by making some common denominators $\frac{1 + \frac{1}{x^2 - 1}}{\frac{1}{x} - \frac{x}{x + 1}} = \frac{\frac{x^2 - 1}{x^2 - 1} + \frac{1}{x^2 - 1}}{\frac{x + 1}{x(x + 1)} - \frac{x^2}{x(x + 1)}}$ $= \frac{\frac{x^2}{x^2 - 1}}{\frac{-x^2 + x + 1}{x(x + 1)}}$ $= \frac{\frac{x^2}{(x - 1)(x + 1)}}{\frac{-x^2 + x + 1}{x(x + 1)}}$ $= \frac{x^3(x + 1)}{(x - 1)(x + 1)(-x^2 + x + 1)}$ $= \frac{x^3}{(x - 1)(-x^2 + x + 1)}$. 5. yup i've got it now. thanks!
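The simplification can be sanity-checked numerically at a few non-singular points; a throwaway sketch, not part of the thread:

```python
def original(x):
    # (1 + 1/(x^2 - 1)) / (1/x - x/(x + 1))
    return (1 + 1 / (x**2 - 1)) / (1 / x - x / (x + 1))

def simplified(x):
    # The answer-sheet form: x^3 / (-x^3 + 2x^2 - 1)
    return x**3 / (-(x**3) + 2 * x**2 - 1)

# Avoid x = 0, x = ±1, and roots of the simplified denominator
for x in (2.0, 3.5, -4.0, 0.5):
    assert abs(original(x) - simplified(x)) < 1e-9
```

Note that $(x-1)(-x^2+x+1) = -x^3+2x^2-1$, which is why the thread's factored answer agrees with the answer sheet.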
http://physics.stackexchange.com/questions/tagged/metric-tensor?sort=active
# Tagged Questions The variables used in general relativity to describe the shape of spacetime. If your question is about metric units, use the tag "units", and/or "si-units" if it is about the SI system specifically. 1answer 86 views ### Would this be a metric? would a matrix $M$ with diagonal entries not necessarily equal 1, i.e. diag $M = (a,1,1,1)$ be a metric if $a \neq 1$ or $\neq 0$? I.e. in this case would this be like some sort of more general ... 1answer 75 views ### The most general form of the metric for a homogeneous, isotropic and static space-time What is the most general form of the metric for a homogeneous, isotropic and static space-time? For the first 2 criteria, the Robertson-Walker metric springs to mind. (I shall adopt the (-+++) ... 1answer 59 views ### Evaluating the Ricci tensor effectively If given a metric of the form $$ds^2=\alpha^2(dr^2+r^2d\theta^2)$$ where $\alpha=\alpha(r)$, then can one immediately conclude that $$R_{\theta\theta}=r^2R_{rr}$$ where $R_{ab}$ is the Ricci tensor, ... 0answers 33 views ### metric extension outside the light cone Could anyone explain what "extending the solution" beyond the past light cone means? Say, for example, if I have a metric (no coordinate singularities), how can I extend it to the outside of the past ... 1answer 85 views ### When a variation of a tensor is not a tensor? In a comment about variation of metric tensor it was shown that $$\delta g_{\mu\nu}=-g_{\mu\rho}g_{\nu\,\sigma}\delta g^{\rho\,\sigma}$$ which is contrary to the usual rule of lowering indeces of a ... 1answer 32 views ### What is the Lorentz tensor with a superscript and subscript index? I have been reading about symmetries of systems' actions, e.g. the Polyakov action, and I have encountered Lorentz transformations of the form: $\Lambda^{\mu}_{\nu} X^{\nu}$. I am moderately familiar ... 
1answer 110 views ### Christoffel symbol for Schwarzschild metric I know that the christoffel (second kind) can be defined like this: \Gamma^m_{ij} = \frac{1}{2} g^{mk}(\frac{\partial g_{ki}}{\partial U^j}+\frac{\partial g_{jk}}{\partial U^i}-\frac{\partial ... 1answer 64 views ### Derivation of the volume element (which uses the metric tensor)? I have often seen $\sqrt{-g}$ in integrals, especially actions, where $g=\mathrm{det}(g_{\mu \nu})$. Does anyone know of a derivation that shows that this is indeed the volume element which must be ... 1answer 51 views ### Why vary the action with respect to the inverse metric? Whenever I have read texts which employ actions that contain metric tensors, such as the Nambu-Goto, Polyakov or Einstein-Hilbert action, the equations of motion are derived by varying with respect to ... 0answers 35 views ### Null vector fields given Bondi metric I'm trying to understand how to compute the null future-directed vector fields if I have a given (Bondi) metric $g=-e^{2\nu}du^{2}-2e^{\nu+\lambda}dudr+r^{2}d\Omega$ with $d\Omega$-standard metric ... 1answer 46 views ### Parallel transport of a vector along a closed curve in curvilinear coordinates There is an expression indicating the change of the vector parallel translation along a closed infinitesimal curve in curvilinear coordinates (one way of introducing curvature tensor): \Delta A_{k} ... 1answer 73 views ### Plane waves in QFT Suppose we work in the metric $(-1,+1)$. How do we describe an incoming particle with a plane wave; $\exp(-\mathrm ikx)$ or $\exp(+\mathrm ikx)$? What's the difference? Does it change if we work in ... 2answers 89 views ### Ricci tensor for a 3-sphere without Math packets Let's have the metric for a 3-sphere: $$dl^{2} = R^{2}\left(d\psi ^{2} + sin^{2}(\psi )(d \theta ^{2} + sin^{2}(\theta ) d \varphi^{2})\right).$$ I tried to calculate Riemann or Ricci tensor's ... 
0answers 49 views ### Singularities in Schwarzchild space-time Can anyone explain when a co-ordinate and geometric singularity arise in Schwarzschild space-time with the element ... 0answers 98 views ### How to calculate Riemann and Ricci tensors for a sphere? [closed] Let's have the metric for a sphere: $$dl^{2} = R^{2}\left(d\psi ^{2} + sin^{2}(\psi )(d \theta ^{2} + sin^{2}(\theta ) d \varphi^{2})\right).$$ I tried to calculate Riemann or Ricci tensor's ... 0answers 25 views ### How to prove the derive the expression for space part of Riemann tensor for homogeneous and isotropic space-time? It's not a homework!! For spheric, hyperbolic and flat case $$dl^{2} = R^{2}\left(d \psi^{2} + sin^{2}(\psi )(d \theta^{2} + sin^{2}(\theta )d \varphi^{2})\right),$$ dl^{2} = R^{2}\left(d ... 1answer 38 views ### Contraction of the metric tensor This is perhaps a simple tensor calculus problem -- but I just can't see why... I have notes (in GR) that contains a proof of the statement In space of constant sectional curvature, $K$ is ... 0answers 49 views ### The interior of a cylinder as an Einstein manifold The interior of a curved cylinder is an Einstein manifold (the Ricci Curvature Tensor is proportional to the Metric $R_{\mu\nu}=kg_{\mu\nu}$) since it has a constant curvature. However, I was unable ... 3answers 585 views ### Why is the covariant derivative of the metric tensor zero? I've consulted several books for the explanation of why $$\nabla _{\mu}g_{\alpha \beta} = 0,$$ and hence derive the relation between metric tensor and affine connection \$\Gamma ^{\sigma}_{\mu ... 3answers 113 views ### How scalar curvature of following spacetime can be equal to zero? For an interval of this spacetime, $$ds^{2} = c^{2}dt^{2} - c^{2}t^{2}(d \psi^{2} + sh^{2}(\psi )(d \theta^{2} + sin^{2}(\theta )d \varphi^{2})),$$ scalar curvature is equal to zero. Also, Ricci ... 
1answer 45 views ### Change of variables in an interval expression This question is a continuation of How to calculate a scalar curvature fast? . Let's have Lorentz-Fock spacetime with an interval d \hat {s}^{2} = \frac{t_{0}^{2}R^{2}}{\hat {t}^{4}}\left( d \hat ... 3answers 63 views ### Combining metric tensors/curvature tensors I was thinking about the following scenario: Consider a particle which causes a metric $g_{\mu\nu}$ on an otherwise Minkowski spacetime (or any manifold). Now, consider another particle, somewhere in ... 0answers 83 views ### How to calculate a scalar curvature fast? [closed] Let's have a metric tensor g^{\alpha \beta} = \frac{1}{\left( 1 + \frac{ct}{R} \right)^{2}}\begin{bmatrix} \frac{1 - \frac{r^{2}}{R^{2}}}{\left(1 + \frac{ct}{R}\right)^{2}} & ... 2answers 122 views ### What is metric of spherical coordinates $(t,r,\theta,\phi)$? In spherical coordinates the flat space-time metric takes: $$ds^2=-c^2dt^2+dr^2+r^2d\Omega^2$$ where $r^2d\Omega^2$ come from when the signature of metric $g_{\mu\nu}$ is (-,+,+,+)? what is ... 2answers 149 views ### Null geodesic given metric I (desperately) need help with the following: What is the null geodesic for the space time $$ds^2=-x^2 dt^2 +dx^2?$$ I don't know how to transform a metric into a geodesic...! There is no need to ... 3answers 104 views ### What is meant when it is said that the universe is homogeneous and isotropic? It is sometimes said that the universe is homogeneous and isotropic. What is meant by each of these descriptions? Are they mutually exclusive, or does one require the other? And what implications rise ... 0answers 46 views ### When is spacetime homogenous and isotropic? When is spacetime homogenous and isotropic? For example, some metric $g_{\mu \nu}$ is homogeneous and isotropic. We now construct effective metric n_{\mu \nu} ~\rightarrow~ g_{\mu \nu} + ... 2answers 56 views ### Changing the scalar curvature (k = 0,+1,-1) with coordinate transformations? 
I would like to prove that I can (or can't) change curvature of space, k = 0,+1,-1, via general coordinate transformations, which in principle can mix space and time coordinates together. 2answers 74 views ### Coordinate and conformal transformations of the FRW metric I'm considering a metric of the following form (signature $(+,-,-,-)$): $$ds^2 = (F(r,t)-G(r,t))dt^2 - (F(r,t)+G(r,t))dr^2 - r^2(d\Omega)^2$$ where $F(r,t)$ and $G(r,t)$ are arbitrary scalar ... 3answers 138 views ### How do you tell if a metric is curved? I was reading up on the Kerr metric (from Sean Carroll's book) and something that he said confused me. To start with, the Kerr metric is pretty messy, but importantly, it contains two constants - ... 0answers 40 views ### Switching from an accelerated frame of reference to a locally inertial reference system Using the equivalence principle, show that the interval for an accelerated observer ($\textbf{g}$ uniform and constant) has the form ds^2|_{\text{first order in ... 2answers 106 views ### Difference between slanted indices on a tensor In my class, there is no distinction made between, $$C_{ab}{}^{b}$$ and $$C^{b}{}_{ab}.$$ All I know, and read about so far, is the distinction of covariant and contravariant, form/vector, etc. ... 2answers 136 views ### How to find a curvature of the space-time by having $g^{\alpha \beta}$ in the following case without cumbersome calculations? The metric tensor for Fock-Lorentz space-time, \mathbf r_{||}{'} = \frac{\gamma (u)(\mathbf r_{||} - \mathbf u t)}{\lambda \gamma (u) (\mathbf u \cdot \mathbf r) + \lambda c^{2} (1 - \gamma (u))t + ... 2answers 67 views ### metric tensor of expanding universe Why is the metric tensor of a expanding universe a function of time? Why is it not a function of distance between the galaxies? I heard this from a lecture. Can anyone help me understand? 1answer 40 views ### Non-diagonal elements when switching metric signature? 
Considering a metric tensor with the signature $(-,+,+,+)$: \$g_{\mu\nu}= \begin{pmatrix} -c^2 & g_{01} & g_{02} & g_{03}\\ g_{10} & a^2 & g_{12} & g_{13}\\ g_{20} & g_{21} ... 2answers 87 views ### What is the link between the metric signature of spacetime and fundamental field equations? The signature of Minkowski spacetime is 2, as is explained here: metric signature explanation. The signature is related to the form the fundamental equations take, but I'm not totally clear on the ... 2answers 85 views ### Sign convention for basic Dirac equation The dirac equation;$$(i\gamma^\mu\partial_{\mu} - m)\psi=0$$ is just; $$(i\gamma^{0}\partial_{0} - i\gamma^{i}\partial_{i} - m)\psi=0$$ in a (+,---) metric right? 1answer 77 views ### Spacelike slicing of Schwarzschild geometry I am having trouble understanding how to obtain a spacelike slicing of the Schwarchild black hole. I understand there is not a globally well defined timelike killing vector, so we can define t=cte ... 3answers 114 views ### Relation between the determinants of metric tensors Recently I have started to study the classical theory of gravity. In Landau, Classical Theory of Field, paragraph 84 ("Distances and time intervals") , it is written We also state that the ... 2answers 71 views ### Why can certain functions be absorbed into the Schwarzschild metric, while others can't? Another question about the Schwarzschild solution of General Relativity: In the derivation (shown below) of the Schwarzschild metric from the vacuum Einstein Equation, at the step marked "HERE," we ... 1answer 170 views ### Polyakov action: difference induced metric and dynamical metric The Polyakov action is given by: S_p ~=~ -\frac{T}{2}\int d^2\sigma \sqrt{-g}g^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}\eta_{\mu\nu} ~=~ -\frac{T}{2}\int d^2\sigma ... 
1answer 117 views ### Material strain from spacetime curvature Let's say that you moved an object made of rigid materials into a place with extreme tidal forces. Materials have a modulus of elasticity and a yield strength. Does the corresponding 3D geometric ... 1answer 51 views ### Constraint on a metric Given a metric of the form $$ds^2=dr^2+a^2\tanh^2(r/b)d\theta^2$$ why does it follow that $a=b$? I can't quite spot a constraint condition... 1answer 187 views ### Covariant derivative I would very much appreciate some help in The following: What is 2nd order covariant derivative $$\nabla_i\nabla_jf(r)$$ in terms of $r,\theta, g(r)$ and partial derivative, given that the metric ... 2answers 245 views ### Question about proper time in general relativity I think I may have some fundamental misunderstanding about what $dt, dx$ are in general relativity. As I understand it, in special relativity, $ds^2=dt^2-dx^2$, we call this the length because it is ... 2answers 171 views ### Does Kaluza-Klein Theory Require an Additional Scalar Field? I've seen the Kaluza-Klein metric presented in two different ways., cf. Refs. 1 and 2. In one, there is a constant as well as an additional scalar field introduced: \tilde{g}_{AB}=\begin{pmatrix} ... 1answer 113 views ### Lorentz transformation problem In the equation (1.18) they omitted the translation vector $a^\mu$, but why? 2answers 125 views ### Metric coefficients in rotating coordinates Let $(t,x,y,z)$ be the standard coordinates on $\mathbb{R}^4$ and consider the Minkowski metric $$ds^2 = -dt^2+dx^2+dy^2+dz^2.$$ I am trying to compute the metric coefficients under the change of ... 1answer 87 views ### Question from Schutz's In q. 22 in page 141, I am asked to show that if $U^{\alpha}\nabla_{\alpha} V^{\beta} = W^{\beta}$, then $U^{\alpha}\nabla_{\alpha}V_{\beta}=W_{\beta}$. Here's what I have done: \$V_{\beta}=g_{\beta ... 
1answer 104 views ### Covariant derivative with upper index I just need clarification, that is, to see that I'm doing the right thing. When calculating central charge for certain metric, I need to solve an integral that contains Lie brackets etc. And I have ...
http://mathoverflow.net/revisions/40728/list
Return to Question

Algebraic integers on the unit circle

Consider a set of algebraic integers which lie on the unit circle; they will generate a multiplicative subgroup of $\mathbb S^1$. Do these objects have a name? I would guess they contain useful arithmetic/number-theoretic information, for example if the generating set is the set of roots of an irreducible polynomial. Is it really so? What kind of information would they contain? Has the group structure of the elements of a number field which lie on the unit circle $\mathbb S^1$, and the subgroup of it which is made up of algebraic integers, been studied? Would greatly appreciate if you could suggest a reference. Regards Vagabond

PS It would be real nice if you could answer keeping in mind that I do not know much algebra/commutative algebra/algebraic geometry. But I hope that does not stop you from answering.
http://www.citizendia.org/Sawtooth_wave
The sawtooth wave (or saw wave) is a kind of non-sinusoidal waveform. It is named a sawtooth based on its resemblance to the teeth on the blade of a saw. The usual convention is that a sawtooth wave ramps upward as time goes by and then sharply drops. However, there are also sawtooth waves in which the wave ramps downward and then sharply rises; the latter type is called a 'reverse sawtooth wave' or 'inverse sawtooth wave'. The two orientations of sawtooth wave sound identical when other variables are controlled.

A bandlimited sawtooth wave pictured in the time domain (top) and frequency domain (bottom); the fundamental is at 220 Hz (A2).

The piecewise linear function $x(t) = t - \operatorname{floor}(t)$, based on the floor function of time $t$, is an example of a sawtooth wave with period 1. A more general form, in the range −1 to 1 and with period $a$, is $x(t) = 2 \left( {t \over a} - \operatorname{floor} \left ( {t \over a} + {1 \over 2} \right ) \right )$ This sawtooth function has the same phase as the sine function. A sawtooth wave's sound is harsh and clear, and its spectrum contains both even and odd harmonics of the fundamental frequency.
Because it contains all the integer harmonics, it is one of the best waveforms to use for constructing other sounds, particularly strings, using subtractive synthesis. A sawtooth can also be constructed using additive synthesis. The infinite Fourier series $x_\mathrm{sawtooth}(t) = \frac {2}{\pi}\sum_{k=1}^{\infty} \frac {\sin (2\pi kft)}{k}$ converges to an inverse sawtooth wave, while a conventional sawtooth can be constructed using $x_\mathrm{sawtooth}(t) = -\frac {2}{\pi}\sum_{k=1}^{\infty} \frac {\sin (2\pi kft)}{k}$ In digital synthesis, these series are only summed over $k$ such that the highest harmonic, $N_\mathrm{max}$, is less than the Nyquist frequency (half the sampling frequency). This summation can generally be more efficiently calculated with a fast Fourier transform. If the waveform is digitally created directly in the time domain using a non-bandlimited form, such as $y = x - \operatorname{floor}(x)$, infinite harmonics are sampled and the resulting tone contains aliasing distortion.
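The bandlimited summation can be sketched directly; a minimal Python example, where the function name and the choice of sample rate are illustrative, not from the article:

```python
import math

def bandlimited_sawtooth(f, t, sample_rate):
    """Conventional (upward-ramping) sawtooth at time t:
    -(2/pi) * sum of sin(2*pi*k*f*t)/k over harmonics below Nyquist."""
    n_max = int((sample_rate / 2) // f)  # highest harmonic below Nyquist
    s = sum(math.sin(2 * math.pi * k * f * t) / k
            for k in range((1), n_max + 1))
    return -2 / math.pi * s

# One quarter period after the discontinuity, the ideal upward-ramping
# sawtooth equals -0.5; the truncated series lands close to that.
v = bandlimited_sawtooth(100.0, 0.0025, 44100.0)  # approximately -0.5
```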
Animation of the additive synthesis of a sawtooth wave with an increasing number of harmonics.

Audio demonstration: sawtooth waves played bandlimited (non-aliased) and aliased at 440 Hz (A4), 880 Hz (A5), and 1760 Hz (A6).

## Applications

• The sawtooth wave, along with the square wave, is one of the most common starting points used to create sounds with subtractive analog and virtual analog music synthesizers.
• The sawtooth wave is the form of the vertical and horizontal deflection signals used to generate a raster on a CRT-based television or monitor. Oscilloscopes also use a sawtooth wave for their horizontal deflection, though they typically use electrostatic deflection.
• On the wave's "ramp", the magnetic field produced by the deflection yoke drags the electron beam across the face of the CRT, creating a scan line.
• On the wave's "cliff", the magnetic field suddenly collapses, causing the electron beam to return to its resting position as quickly as possible.
• The voltage applied to the deflection yoke is adjusted through various means (transformers, capacitors, center-tapped windings) so that the half-way voltage on the sawtooth's cliff is at the zero mark, meaning that a negative voltage will cause deflection in one direction and a positive voltage will produce deflection in the other direction, allowing the whole screen to be covered by a center-mounted deflection yoke. The horizontal deflection frequency is 15.75 kHz on NTSC and 15.625 kHz for PAL and SECAM.
• The vertical deflection system operates the same way as the horizontal, though at a much lower frequency (60 Hz on NTSC, 50 Hz for PAL and SECAM).
• The ramp portion of the wave must be perfectly linear; if it isn't, it's an indication that the voltage isn't increasing linearly, and therefore that the magnetic field produced by the deflection yoke won't be linear. As a result, the electron beam will accelerate during the non-linear portions. On a television picture, this would result in the image being "squished" in the direction of the non-linearity. Extreme cases will show obvious brightness increases, since the electron beam spends more time on that side of the picture.
• Most TV sets used to have manual adjustments for vertical and/or horizontal linearity, though these have generally disappeared due to the greater temporal stability of modern electronic components.

## See also

• Sine, square, triangle, and sawtooth waveforms
• Square wave
• Triangle wave
http://mathhelpforum.com/algebra/195534-help-deriving-equation.html
# Thread: 1. ## Help deriving an equation. I believe that the following two equations are equivalent, but I am unable to algebraically derive one from the other. For computing a running average, I believe the following formula is correct where n = number of values in oldMean: $newMean = ((oldMean * n) + newVal) / (n + 1)$. Similarly, I found this equation which I think does the same thing: $newMean = oldMean + (newVal - oldMean) / (n + 1)$. Are these equations in fact equivalent? They both seem to work when calculating a running average. If they are equivalent, how do I derive one from the other? Thank you. This seems like an easy problem, but I've managed to confuse myself. 2. ## Re: Help deriving an equation. Let's denote oldMean by m, newMean by m' and newVal by v. Then $m'=\frac{mn+v}{n+1}=\frac{mn+m-m+v}{n+1}=\frac{m(n+1)+v-m}{n+1}=m+\frac{v-m}{n+1}$
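For readers implementing the running average, here is an illustrative sketch (function names are mine, not from the thread). The two forms are algebraically identical, as the derivation shows, but the incremental one avoids forming the large intermediate product oldMean * n:

```python
def running_mean_v1(old_mean, n, new_val):
    """newMean = ((oldMean * n) + newVal) / (n + 1)"""
    return (old_mean * n + new_val) / (n + 1)

def running_mean_v2(old_mean, n, new_val):
    """newMean = oldMean + (newVal - oldMean) / (n + 1)"""
    return old_mean + (new_val - old_mean) / (n + 1)

values = [3.0, 1.0, 4.0, 1.0, 5.0]
m1 = m2 = 0.0
for n, v in enumerate(values):
    m1 = running_mean_v1(m1, n, v)
    m2 = running_mean_v2(m2, n, v)

print(m1, m2)  # both equal sum(values) / len(values) = 2.8
```

The incremental form is the usual choice in streaming code: it never multiplies the mean by a potentially huge count, which matters for overflow and floating-point precision when n grows large.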
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525937438011169, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/177266-rearrange-formula.html
# Thread: 1. ## Rearrange formula Not sure if i have posted this in the correct Forum so sorry if it is incorrect But i am desperate to get this answered. I have an existing formula that i use in work to find out the OD of a coil from the weight, width and ID of the coil but i need to change it so i can find the weight from the ID, OD and width of the coil. The Formula i use is: OD=(SquareRoot((Coil Weight*1000/(24.66*Coil Width))+(Coil ID/2000*Coil ID/2000)))*2000 2. Originally Posted by chriscmartin Not sure if i have posted this in the correct Forum so sorry if it is incorrect But i am desperate to get this answered. I have an existing formula that i use in work to find out the OD of a coil from the weight, width and ID of the coil but i need to change it so i can find the weight from the ID, OD and width of the coil. The Formula i use is: $OD=2000\left(\sqrt{\dfrac{1000W}{24.66c_w}}+\dfrac{ID}{2000} \cdot \dfrac{ID}{2000}\right)$ I've edited the quote into its LaTeX form to make it easier to read. However, I may have made a mistake, is that the correct equation? edit: $W$ is coil weight and $c_w$ is coil width 3. To be honest i havent got a clue, i havent done math for ages since leaving school, it looks good to me. Basically im trying to work this out so i can enter it in to excel so in work i can enter the OD, ID & Width and it works out the weight for me. Thanks for your help so far 4. Start by isolating that square root and tidy up some terms $\dfrac{OD}{2000} = \sqrt{\dfrac{1000W}{24.66c_w}} + \dfrac{(ID)^2}{4 \cdot 10^6}$ $\dfrac{OD}{2000} - \dfrac{(ID)^2}{4 \cdot 10^6}= \sqrt{\dfrac{1000W}{24.66c_w}}$ To make the next step easier I will multiply $\dfrac{OD}{2000}$ by 2000/2000: $\dfrac{2000(OD)}{4 \cdot 10^6} - \dfrac{(ID)^2}{4 \cdot 10^6}= \sqrt{\dfrac{1000W}{24.66c_w}}$ This gives the same denominator so I can sum them now: $\dfrac{2000(OD) - (ID)^2}{4 \cdot 10^6}= \sqrt{\dfrac{1000W}{24.66c_w}}$ Do you know how to continue? 5.
No i dont know how to continue, sorry im a complete novice 6. Square both sides $\left(\dfrac{2000(OD) - (ID)^2}{4 \cdot 10^6}\right)^2 \cdot 24.66c_w = 1000W$ Now it's just left to isolate W. If this is for an excel formula I wouldn't bother expanding out 7. Thank you very much for your help so far i realy am greatful but could you expand it out like i first wrote it as i find it easier to understand? I know im being a lil cheeky but thanks for your help. 8. Thread closed because the OP needs someone in person to do this for them, not anonymous internet helpers.
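Since the thread was closed before the final expression was written out, here is a sketch of post 6's last step solved for W (variable names are mine; the formula inherits the thread's LaTeX transcription of the original, including whatever units it assumes):

```python
def coil_weight(od, coil_id, width):
    """Post 6's equation, solved for W:
    ((2000*OD - ID^2) / (4*10^6))^2 * 24.66 * width = 1000 * W
    """
    lhs = (2000.0 * od - coil_id ** 2) / 4.0e6
    return lhs ** 2 * 24.66 * width / 1000.0

# made-up example values, just to show the call
print(coil_weight(1000.0, 500.0, 100.0))  # ≈ 0.472
```

In a spreadsheet, the same step would read something like `=((2000*OD - ID^2)/4000000)^2 * 24.66 * Width / 1000` with cell references substituted for the names.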
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357559084892273, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/89859/a-concept-of-dynamical-coherence
## A concept of dynamical coherence I'm trying to make an overview of the study of partial hyperbolicity, and there is an interesting concept of dynamical coherence which appears there. Some call it mild (see the thesis of Pablo Carrasco, Compact Dynamical Foliations, 2010), some call it strong and unnatural (see the work of Amy Wilkinson and Keith Burns, Dynamical coherence, accessibility and center bunching). The most common definition is that the local center-unstable $E^{cu}$ and center-stable $E^{cs}$ bundles integrate to foliations $W^{cu}$ and $W^{cs}$. Suppose now that we are in the normally hyperbolic case, i.e. we already have an $E^c$ that integrates to a foliation $F$ along which some diffeomorphism is normally hyperbolic. My question is how a normally hyperbolic (i.e. partially hyperbolic along a foliation) system could be dynamically incoherent, and whether this concept is related to the concept of local product structure. My question is: what is the simplest example of a normally hyperbolic foliation for which $E^{cu}$ and $E^{cs}$ do not integrate to foliations? And how "often" does this happen in the world of normally hyperbolic foliations? PS. Updated after a useful remark of Rafael Potrie; the definition of dynamical coherence is now more precise. - ## 1 Answer In general, it is not known whether a partially hyperbolic diffeomorphism must be dynamically coherent. There are two obstructions to integrability of the center bundle. One is that the distributions may not be integrable (the Frobenius condition fails), and the other is that the distributions may lack differentiability (and so uniqueness of integral leaves may fail). For the first obstruction, Wilkinson noted that in diffeomorphisms with high-dimensional center (e.g. Anosov automorphisms on nilmanifolds) the bracket condition fails.
In the absolute partially hyperbolic setting, for diffeomorphisms of $T^3$ dynamical coherence has been obtained by Brin, Burago and Ivanov (http://www.pdmi.ras.ru/~svivanov/papers/coherence.pdf). This has been extended to nilmanifolds by Hammerlindl and Parwani (http://arxiv.org/abs/1103.3724, http://arxiv.org/abs/1001.1029). For pointwise partially hyperbolic systems (a weaker condition), recent examples have been constructed by Rodriguez-Hertz, Rodriguez-Hertz and Ures showing that dynamical coherence may fail even in $T^3$. On the other hand, I have recently proved that if the partially hyperbolic diffeomorphism of $T^3$ is transitive (or volume preserving) then it must be dynamically coherent (see http://www.mat.puc-rio.br/edai/textos/potrie.pdf). Local product structure I believe has something to do with plaque-expansiveness, which allows one to show robustness of dynamical coherence thanks to the work of Hirsch, Pugh and Shub, but in general it is not enough to show the existence of an invariant foliation tangent to the center. - Let's say there is a normally hyperbolic and plaque expansive foliation, isn't it always dynamically coherent? I'm not interested in the partially hyperbolic case, since we have the Foliation stability theorem and the diffeomorphism remains normally hyperbolic. I just have the impression that all normally hyperbolic and plaque expansive diffeomorphisms are dynamically coherent, maybe I do not feel the definition very well.. – nikutaibi Mar 19 2012 at 21:54 I believe we have slightly different definitions. If a *foliation* is normally hyperbolic and plaque expansive for a diffeomorphism $f$, then $f$ is (as a partially hyperbolic diffeomorphism) robustly dynamically coherent. The problem with your second sentence is that in order to define plaque expansiveness, at least for the definition I know (of Hirsch-Pugh-Shub), one needs that $f$ be dynamically coherent.
If there is no plaque expansiveness, it is open whether a diffeomorphism having a normally hyperbolic invariant foliation is robustly dynamically coherent. – rpotrie Mar 22 2012 at 20:22 @rpotrie Plaque expansiveness can be (at least formally) defined whenever the center foliation exists. And this is the definition in many references. Why would we need the dynamical coherence assumption to define PE? Also, does the following work: if $E^c_f$ integrates to $W^c_f$, then $W^c_f$ is normally hyperbolic. So for $g$ close to $f$, $E^c_g$ also integrates to some $W^c_g$ (close to $W^c_f$). Thank you! – Pengfei Mar 25 2012 at 14:10 @Pengfei: I know that the definition of dynamical coherence varies within the literature, but in the question, the definition given is the existence of the center foliation, so that is why I said that the definition of plaque-expansiveness required dynamical coherence. As for your second question, I believe that you need plaque-expansiveness or some other condition to get the persistence of integrability of the center direction (although there are no counterexamples for your statement). – rpotrie Mar 25 2012 at 21:10 1 Ok, I believe I understand your question now. As far as I know, it is not known in all generality if integrability of the center implies integrability of the center-stable and center-unstable. However, you may find the work of Brin-Burago-Ivanov pdmi.ras.ru/~svivanov/papers/bbi-parthyp.pdf (see Page 4 and Proposition 3.4) and the following work by Burago and Ivanov pdmi.ras.ru/~svivanov/papers/foliations.pdf (Proposition 3.1) helpful. They prove that unique integrability of the center implies unique integrability of both the center-stable and center-unstable in dimension 3. – rpotrie Mar 30 2012 at 14:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9128925800323486, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/208288/fourier-transform-why-this-gives-incorrect-answer?answertab=votes
# Fourier transform: why does this give an incorrect answer? Let $f(x) = \begin{cases}e^{-x} & ,0<x<1\\0 & ,\text{Otherwise}\end{cases}$ I'm trying to calculate the Fourier transform of $xf(x)$, by using the fact that $xf(x) = -\frac{d}{da}f(a x)\bigg|_{a=1}$ and $\mathscr{F}\{f(a x)\} = \frac{1}{a}\hat{f}(\frac{\omega}{a}), \quad a>0$. The Fourier transform should be: $$\mathscr{F}\left\{-\frac{d}{da}f(a x)\bigg|_{a=1}\right\} = - \frac{d}{da}\left(\frac{1}{a}\hat{f}(\frac{\omega}{a})\right)\bigg|_{a=1}$$ This gives the answer: $$\frac{-e+(1+i\omega+\omega^2)\cos(\omega)+i(1+i\omega+\omega^2)\sin(\omega)}{e\sqrt{2\pi}\,(i+\omega)^2}$$ But the correct answer is: $$\frac{-e+(2-i\omega)\cos(\omega)+(2i+\omega)\sin(\omega)}{e\sqrt{2\pi}\,(i+\omega)^2}$$ The answer is close to the right one; the difference is $\frac{e^{i\omega-1}}{\sqrt{2 \pi}}$, but why is it not correct? - ## 1 Answer The reason is that your derivative $x f(x) = - \frac{d}{da} f(ax)|_{a=1}$ is not quite valid, because the function you are differentiating is discontinuous. If you interpret it in the distribution sense, you get an additional term of the form $\frac1e \delta_1(x)$, where $\delta_1$ is the Dirac distribution with mass at $x=1$. The Fourier transform of this extra term is exactly the difference between your solution and the correct one. - Thanks, this makes perfect sense. – Ttl Oct 6 '12 at 18:22
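The accepted explanation can be checked numerically. The sketch below is mine, not from the thread; it uses the convention $\hat{f}(\omega)=\frac{1}{\sqrt{2\pi}}\int f(x)e^{i\omega x}\,dx$ (which matches the quoted answers) and compares a quadrature evaluation of $\mathscr{F}\{xf(x)\}$ with the two closed forms and with the Fourier transform of the missing $\frac1e\delta_1$ term:

```python
import numpy as np

SQ2PI = np.sqrt(2.0 * np.pi)

def ft_xf_numeric(w, n=200_001):
    # (1/sqrt(2 pi)) * integral_0^1 x e^{-x} e^{i w x} dx, trapezoid rule
    x = np.linspace(0.0, 1.0, n)
    y = x * np.exp(-x) * np.exp(1j * w * x)
    h = x[1] - x[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1])) / SQ2PI

def ft_correct(w):
    # the answer quoted as correct in the question
    num = -np.e + (2 - 1j * w) * np.cos(w) + (2j + w) * np.sin(w)
    return num / (np.e * SQ2PI * (1j + w) ** 2)

def ft_attempt(w):
    # the questioner's (incomplete) result
    num = (-np.e + (1 + 1j * w + w ** 2) * np.cos(w)
           + 1j * (1 + 1j * w + w ** 2) * np.sin(w))
    return num / (np.e * SQ2PI * (1j + w) ** 2)

def ft_delta_term(w):
    # Fourier transform of (1/e) * delta_1, the distributional correction
    return np.exp(1j * w - 1) / SQ2PI

w = 1.3
print(abs(ft_xf_numeric(w) - ft_correct(w)))                  # ~ 0
print(abs(ft_attempt(w) - ft_correct(w) - ft_delta_term(w)))  # ~ 0
```

The second line confirms the answer's point: the attempt differs from the correct transform by exactly the transform of $\frac1e\delta_1$.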
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299225807189941, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/132330/use-a-power-series-to-approximate-the-definite-integral-to-6-decimal-places-hel
# Use a power series to approximate the definite integral to 6 decimal places, help Use a power series to approximate the definite integral to 6 decimal places $$\int_0^{0.3} \frac{x^2}{1+x^4} dx$$ - 1 What have you tried? Where are you getting stuck? – Matthew Conroy Apr 16 '12 at 1:30 1 Welcome to math.SE: since you are fairly new, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are so far; this will prevent people from telling you things you already know, and help them write their answers at an appropriate level. If this is homework, please add the [homework] tag; people will still help, so don't worry. Also, many find the use of imperative ("Find", "Show") to be rude when asking for help; please consider rewriting your post. – Arturo Magidin Apr 16 '12 at 1:32 This is an exercise in a math book and I don't see any other elements around here that can give more context. – Kleon Kita Apr 16 '12 at 1:37 1 Can you find the power series for $\frac{x^2}{1+x^4}$? Would you know what to do if you already had the power series? Have you solved other problems like this before? Answers to all of those questions help provide context (as would the nature of the book, and whether this is part of a course, review on your part, or something else). – Arturo Magidin Apr 16 '12 at 1:56 No, that's what I can't find. I can't find the power series. – Kleon Kita Apr 16 '12 at 2:21 ## 2 Answers There is actually a fairly quick way to compute the power series. Remember the geometric sum: $$\sum_{n=0}^\infty r^n = \frac{1}{1-r}$$ if $|r| < 1$. This may not seem, at first, to be what we want, but do the following: $$\frac{x^2}{1+x^4} = x^2\frac{1}{1 - (-x^4)} = x^2 \sum_{n=0}^\infty (-x^4)^n = \sum_{n=0}^\infty (-1)^n x^{4n+2}$$ Integrate the series termwise. You'll get: $$\int_0^{0.3} \frac{x^2}{1+x^4}\, dx = \left.
\sum_{n=0}^\infty (-1)^n \frac{x^{4n+3}}{4n+3}\right|_{0}^{0.3} = \sum_{n=0}^\infty (-1)^n \frac{(0.3)^{4n+3}}{4n+3}$$ Now, suppose $$S = \sum_{n=0}^\infty (-1)^n b_n$$ is a convergent alternating series with $b_n$ positive and decreasing to $0$; then, by the alternating series estimation theorem, if we let $$S_N = \sum_{n=0}^N (-1)^n b_n$$ then the error $$|E_N| = |S - S_N| \leq b_{N+1}$$ So to finish the problem, just look for the smallest value of $N$ so that $\frac{(0.3)^{4N+3}}{4N+3} \leq 10^{-7}$. This would guarantee that the $N$th partial sum would be accurate to within 6 decimal places. For the sake of completeness, you likely only need to add up the first 2 terms, but to be safe (and since the 3rd term is not terribly difficult to compute), you may want to go ahead and sum up the first 3 terms to be certain. -
Note that the series obtained after integrating is an alternating series and that $(.3)^{11}/11<1.62\cdot 10^{-7}$. By a standard result on estimating a convergent alternating series with a partial sum, the desired approximation is $$\int_0^{.3} {u^2\over 1+u^4}\,du \approx {(.3)^3\over 3} -{(.3)^7\over 7} .$$ -
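Both answers reduce the integral to the first two terms of an alternating series. As an illustrative check (my code, not from the thread), the snippet below sums the series and compares it against a direct trapezoid-rule integration:

```python
import numpy as np

def series_approx(n_terms, x=0.3):
    # sum_{n=0}^{N-1} (-1)^n * x^(4n+3) / (4n+3)
    return sum((-1) ** n * x ** (4 * n + 3) / (4 * n + 3)
               for n in range(n_terms))

def numeric_integral(n=200_001, upper=0.3):
    # composite trapezoid rule for integral_0^0.3 x^2 / (1 + x^4) dx
    x = np.linspace(0.0, upper, n)
    y = x ** 2 / (1 + x ** 4)
    h = x[1] - x[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

two_terms = series_approx(2)   # (0.3)^3/3 - (0.3)^7/7
print(two_terms)               # 0.008968757...
print(abs(two_terms - numeric_integral()))
```

The gap between the two-term sum and the numeric value is indeed below the next term $(0.3)^{11}/11 \approx 1.6\times10^{-7}$, exactly as the alternating series bound promises.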
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9384258389472961, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=599446
Physics Forums ## Field-strength renormalization problem (13.1) in Srednicki Hi- I've just completed problem 13.1 in Srednicki in which he tells us to relate the field-strength renormalization $Z_{\phi}$ to the spectral density $\rho(s)$ that appears in the Lehmann representation of the exact propagator. It seems straightforward-- I follow the hint, insert unity using the 0, 1, and multi-particle states of the interacting theory, and make use of canonical equal-time commutation relations to get \begin{equation} \frac{1}{Z_{\phi}} = 1+ \int_{4m^2}^{\infty} ds \rho(s) \end{equation} My question is this-- How do I reconcile the above result, which implies that $0 < Z < 1$, with the divergent expressions one obtains for $Z_{\phi}$ in standard perturbative calculations? I'm almost positive I did the problem correctly (in many-body theory there are similar bounds one derives for the residue at the quasi-particle pole of the Green function). Would the bound on Z be restored if I could do a non-perturbative summation of diagrams? Blog Entries: 1 Recognitions: Science Advisor Isn't it just that this Z is the reciprocal of the infinite one? So this Z goes to zero. Sorry Bill_K, I don't follow. Let me rephrase my question in case the original wasn't clear. In problem 13.1, we use non-perturbative arguments to derive a sum rule that relates the field-strength renormalization $Z_{\phi}$ to the spectral density $\rho(s)$: \begin{equation} Z_{\phi} = \frac{1}{1+\int_{4m^2}^{\infty}ds\, \rho(s)} \end{equation} where $\rho(s)$ is defined in Eq. 13.11. Since the integral is positive-definite, this implies that $0 ≤ Z_{\phi} ≤ 1$. Now look at section 14 where Srednicki calculates the 1-loop contribution to the self-energy in phi**3 theory.
In $\overline{MS}$, he finds that the counterterm vertex $A=Z_{\phi}-1$ diverges as $1/\epsilon$ where $\epsilon = 6-d$ (see Eq. 14.37). My question is how do I reconcile the fact that the non-perturbative sum rule implies $0 ≤ Z_{\phi} ≤ 1$, whereas any perturbative calculation gives a divergent $Z_{\phi}$? Is this just an artifact of perturbation theory that would disappear if I summed all diagrams to all orders? Blog Entries: 1 Recognitions: Science Advisor ## Field-strength renormalization problem (13.1) in Srednicki Thanks, it was clear. It is $Z^{-1}$ that diverges, not $Z$. The fact that $0 ≤ Z ≤ 1$ implies that the renormalized charge is less than the bare charge. In electrodynamics, and most other theories, $Z^{-1}$ diverges, so that the matrix element of the field operator $\varphi$ between a one-particle state and the vacuum state, $\langle \Psi | \varphi | p \rangle = Z^{1/2}/(2\pi)^{3/2}$, vanishes. One therefore defines a new, renormalized operator $\varphi_R = Z^{-1/2} \varphi$ which has the property that its matrix element between the vacuum and the one-particle state is finite. Quote by Scott1137 My question is how do I reconcile the fact that the non-perturbative sum rule implies $0 ≤ Z_{\phi} ≤ 1$, whereas any perturbative calculation gives a divergent $Z_{\phi}$? Is this just an artifact of perturbation theory that would disappear if I summed all diagrams to all orders? Great question, I wondered about that before, too. How is the renormalization constant in the exact two-point function related to the renormalization constants in the perturbative loop calculations? Why is it in the former case something between zero and one, and in the latter infinite? If you do not own Srednicki's book, take a look at Peskin and Schroeder, equations (10.14) to (10.18). My uneducated guess is that since renormalization constants are unobservable, they are only required to fulfill some renormalization conditions and apart from that can be whatever they want. Quote by Bill_K Thanks, it was clear. It is $Z^{-1}$ that diverges, not $Z$.
The fact that $0 ≤ Z ≤ 1$ implies that the renormalized charge is less than the bare charge. In electrodynamics, and most other theories, $Z^{-1}$ diverges, so that the matrix element of the field operator $\varphi$ between a one-particle state and the vacuum state, $\langle \Psi | \varphi | p \rangle = Z^{1/2}/(2\pi)^{3/2}$, vanishes. One therefore defines a new, renormalized operator $\varphi_R = Z^{-1/2} \varphi$ which has the property that its matrix element between the vacuum and the one-particle state is finite. Ok, sorry to be so dense, but I don't understand your statement that it is $1/Z$ and not $Z$ that diverges. This seems to be in conflict with what's in Srednicki and every other QFT book I've glanced at. To 1-loop order, in Eq. 14.37 Srednicki finds (in phi**3 theory in $d = 6-\epsilon$ spacetime) \begin{equation} Z_{\phi} = 1 - \frac{\alpha}{6}\,\bigl[1/\epsilon + \log(\mu/m) + 1/2 + \kappa_A\bigr] \end{equation} where \begin{equation} \alpha = \frac{g^2}{(4\pi)^3}\,. \end{equation} This diverges, no? Blog Entries: 1 Recognitions: Science Advisor They mention this point in a Google Book. Go here and see the paragraph following Eq. (10.15.12). Edit: I think the point is the distinction between perturbative and nonperturbative. You can have something which produces infinite terms in the perturbation expansion but is actually zero nonperturbatively. For example $Z = 1 - \infty + \infty^2 - \ldots = (1 + \infty)^{-1}$ Quote by Bill_K They mention this point in a Google Book. Go here and see the paragraph following Eq. (10.15.12). Edit: I think the point is the distinction between perturbative and nonperturbative. You can have something which produces infinite terms in the perturbation expansion but is actually zero nonperturbatively. For example $Z = 1 - \infty + \infty^2 - \ldots = (1 + \infty)^{-1}$ Thanks. Nevertheless, this reasoning makes me very uneasy. This seems to say that the Z-factors are 1 for free field theory, but discontinuously jump to 0 with an interacting theory, even if the coupling in the interacting theory is arbitrarily small.
This conflicts with my experience with interacting many-fermion systems, where the analogous quantity (the residue at poles in the Green function near the Fermi surface) decreases continuously from 1, staying within $0 ≤ Z ≤ 1$, as one cranks up the strength of the interactions. There, moderate values of Z (i.e., not too close to zero) would indicate a system that qualitatively resembles the free theory, albeit with renormalized couplings and masses (this is Landau's Fermi liquid theory), whereas Z ~ 0 would indicate that the relevant degrees of freedom bear no resemblance at all to the free theory. I thought that much of the success of QFT relied on the fact that the interacting states bear a qualitative resemblance to the states of the zeroth-order free theory. I don't see how this can possibly be the case with Z = 0. From the book that Bill_K gave the link to: "Z is somewhere between 0 and unity, with Z = 1 for a free field. Actually, for an interacting theory Z is a divergent function of a regularization parameter." (below 10.15.12) So what now, is Z for an interacting theory smaller than unity or is it divergent? Just for starters, what is the field renormalization factor anyway? The probability to create one-particle states from the vacuum, or a constant that relates physical fields to bare fields? Thanks
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904237687587738, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/227794/in-field-f-cdot-how-can-i-prove-x2-1-implies-x-1-1?answertab=oldest
In a field $(F, +, \cdot)$, how can I prove $x^2 =1\implies x=1,-1$? I'm really confused about fields. I know that it means $x$ is the reciprocal element of itself, and I can easily show that $1^2=1$ (not as trivial for $(-1)^2$ though), but I'm not sure how it helps me. edit: oh... I can only approve one answer. Well Rankeya was first (by a very short time) so I guess I'll approve his though, I don't really have any idea what it means. Thanks to both Brian M. Scott and Rankeya for the help. - 1 Dear @Nescio: You accept an answer that you feel is most helpful to you. It does not have to be the first answer that is posted. (But, make sure that you always accept answers if you feel you are satisfied with them. It encourages people to answer your questions, and also brings a sense of completeness/closure.) – Rankeya Nov 3 '12 at 1:19 2 Answers A field is a domain, which in particular means that $ab = 0 \Rightarrow a = 0$ or $b = 0$. Write $x^2 = 1$ as $x^2 - 1 = 0$, and try to proceed from there. Also, welcome to MSE! - wow, that was so simple I feel stupid now... Thanks – Nescio Nov 2 '12 at 20:26 It happens to the best of mathematicians. So, don't worry too much about it. – Rankeya Nov 2 '12 at 20:27 Actually, I have been told by other mathematicians that it happens to the best of mathematicians. I often disbelieve them when they say this, and continue to feel stupid :) – Rankeya Nov 2 '12 at 20:30 @Rankeya: The best way to believe it is to see enough people who are more experienced and smarter than you do the same. Then again, even if you do believe it, it doesn't mean it won't make you feel stupid when it happens to you. – tomasz Nov 2 '12 at 20:42 HINT: $x^2=1$ if and only if $x^2-1=0$. In any field $x^2-1=(x-1)(x+1)$, so $x^2=1$ if and only if $(x-1)(x+1)=0$. Now prove that for any $a,b\in F$, $ab=0$ if and only if at least one of $a$ and $b$ is $0$. - 2 +1 But you solved the problem for him. :) – Babak S. Nov 2 '12 at 20:24 @Babak: He may not agree.
:-) – Brian M. Scott Nov 2 '12 at 20:25 still feeling stupid... Thanks for the quick response both of you. – Nescio Nov 2 '12 at 20:27 @Andrey: You’re welcome. And don’t worry about it: it’s happened to all of us. – Brian M. Scott Nov 2 '12 at 20:28
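The zero-product property the answers rely on is easy to see computationally: it holds in $\mathbb{Z}/p\mathbb{Z}$ for prime $p$ (a field), but fails in $\mathbb{Z}/8\mathbb{Z}$, which is exactly why $x^2=1$ picks up extra solutions there. An illustrative sketch (my code, not from the thread):

```python
def square_roots_of_one(n):
    """All x in Z/nZ with x^2 = 1 (mod n)."""
    return sorted(x for x in range(n) if (x * x) % n == 1)

print(square_roots_of_one(7))  # [1, 6] -- i.e. 1 and -1 mod 7
print(square_roots_of_one(8))  # [1, 3, 5, 7] -- Z/8Z has zero divisors
```

Modulo 8, for instance, $(3-1)(3+1) = 2 \cdot 4 = 8 \equiv 0$ with neither factor zero, so the factorization argument breaks down precisely where the domain property fails.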
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9670200347900391, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/algebraic-curves+complex-analysis
# Tagged Questions 0answers 39 views ### $C^{\infty}$ 1-form on a Riemann surface is unique. Let $X$ be a Riemann surface and $\mathcal{A}$ be a complex atlas on $X$. Suppose that $C^{\infty}$ 1-forms are given for each chart of $\mathcal{A}$, which transform to each other on their common ... 2answers 220 views ### Can there be a point on a Riemann surface such that every rational function is ramified at this point? Let $X$ be a compact connected Riemann surface, and let $S\subset X$ be a finite subset. Does there exist a morphism $f:X\to \mathbf{P}^1(\mathbf{C})$ which is unramified at the points of $S$? I'm ... 1answer 121 views ### Modular functions of weight zero The following question was suggested by Sasha's answer to the following question : Is the derivative of a modular function a modular function . Question. What are the modular functions with respect ... 0answers 277 views ### An argument on page 62 of Griffith's book, “Introduction to Algebraic Curves” I am a bit confused about some of the things that Griffiths says on page 62 of his book, Introduction to Algebraic Curves. I am not sure how I can reproduce the text here. I can see that GoogleBooks ... 0answers 86 views ### Singularity type and number of irreducible local analytic curve components Let $V$ be an irreducible complex plane algebraic curve, $V=V(f)$, and let $\mathcal{O}_p$ be the local ring of holomorphic functions defined in some neighborhood of $p$. If $p=(0,0)$ is a smooth ... 0answers 257 views ### The Milnor Conjecture on the Unknotting Number of a Torus Knot Let $f \colon (\mathbb{C}^{n},\mathbf{0}) \to (\mathbb{C},0)$ be a complex analytic function with isolated critical point at the origin. Define the singular hypersurface \$V_{f, \kappa} = ...
http://physics.stackexchange.com/questions/28921/why-is-a-book-on-a-table-not-an-example-of-newtons-third-law/45278
# Why is a book on a table not an example of Newton's third law?

My textbook explains Newton's Third Law like this:

> If an object A exerts a force on object B, then object B exerts an equal but opposite force on object A

It then says:

> Newton's 3rd law applies in all situations and to all types of force. But the pair of forces are always the same type, eg both gravitational or both electrical.

And:

> If you have a book on a table, the book is exerting a force on the table (weight due to gravity), and the table reacts with an equal and opposite force.

But the force acting on the table is due to gravity (is this the same as a gravitational force?), and the force acting from the table on the book is a reaction force. So one is gravitational, and the other is not. Therefore this is not Newton's Third Law, as the forces must be of the same type.

- 1 You've been given a rather confusing and imprecise explanation. The answer to this question is wrapped up in the same issues as the answer to your question about the ball. The Newtonian pair are the force of the book on the table and the force of the table on the book. They are both equal in magnitude to the weight of the book, but that is because the problem is static (nothing undergoing acceleration). I recommend that you try to understand the other question first, and then come back to this one. – dmckee♦ May 24 '12 at 20:12

Sorry, I got the question slightly wrong: gravity is acting on the book, and the table pushing upwards is acting on the book. So they are both acting on the book. – Jonathan. May 24 '12 at 20:15

@dmckee, I have edited my question and I think it is different? – Jonathan. May 24 '12 at 20:18

Yes. And because the book is not accelerating you know that $F_g = -F_N$. You also know that the table feels a force from the book equal to $-F_N = F_g$. Got it? – dmckee♦ May 24 '12 at 20:19

@dmckee, I've ended up getting confused so I rewrote the question from scratch. – Jonathan.
May 24 '12 at 20:26

## 5 Answers

> And: If you have a book on a table, the book is exerting a force on the table (weight due to gravity),

That's where you went wrong. The force that the book exerts on the table is not a gravitational force; it's a normal force.

> and the table reacts with an equal and opposite force.

That's also a normal force. So the book exerts a (normal) force on the table, and the table exerts a (normal) force on the book.

> But the force acting on the table is due to gravity (is this the same as a gravitational force?),

No, it's not, and in fact this force (the normal force) is only indirectly due to gravity. The only relevant gravitational force is the force exerted by the Earth on the book. And the book also exerts a gravitational force back on the Earth, but because the Earth is so heavy, that force has no noticeable effect. (The Earth also exerts a gravitational force on the table, and the table on the Earth, but those don't matter so much in this particular scenario.) -

Newton's third law is about pairs of objects interacting. The force that acts on one object is equal and opposite to the force acting on the other object. So you can never have a third-law pair acting on the same object. The equality of the reaction force and the weight force has nothing to do with the third law; it is just a result of the first law applied to the forces acting on the book. Let's look at some third-law pairs in this scenario:

1. The weight of the book and the weight of the earth. Yup, the earth is pulled up by the book, but because $F=ma$ and the earth is more than a little heavier, it doesn't result in a great deal of movement on the earth's part when the book is released!
2. The normal force of the table on the book and the book on the table. The force that the book exerts on the table is a normal force, not a weight force. (The book's weight doesn't act on the table, it acts on the book.)
It's equal in magnitude to the weight of the book, again, because of the first law. The book and the table press on each other. It's probably better to think of the normal force as being generated by the electromagnetic forces between molecules in the table and the book. You get a normal pair like this in the man-leaning-on-wall example.
3. The normal forces between the desk and the earth
4. The weight forces between the desk and the earth
5. (The gravitational forces between the book and the table are negligible.)

Force 1 = Force 2 in magnitude by law 1, not by law 3. (Same for forces 3 and 4.) -

This is a common misconception with my students too, and the only way to understand it is to draw all the forces that act on both objects (five forces in total)! In order to make things clearer, I will label the force with which the table acts on the book as $F_{12}$ and not $F_\text{N}$! Also suppose that the $z$ axis points vertically up, so positive forces push upward and negative forces push downward. There are two forces acting on the book: its gravitational force $-F_\text{g,book}$ (downward) and the force of the table on the book, $F_{12}$ (upward). According to the first Newton law for the book, they are equal in magnitude: $$F_{12} - F_\text{g,book} = 0.$$ According to the third Newton law, the book must be acting on the table with the force $-F_{12}$ (downward). So there are three forces acting on the table: its gravitational force $-F_\text{g,table}$, the force of the book $-F_{12}$ (both downward), and the force of the ground $F_\text{N}$ (upward)! Now let's write the first Newton's law for the table: $$F_\text{N} - F_{12} - F_\text{g,table} = 0.$$ Consequently $$F_\text{N} = F_{12} + F_\text{g,table} = F_\text{g,book} + F_\text{g,table}.$$ The ground force must support both book and table! Isn't that obvious? Conclusion: the third Newton's law is perfectly valid for this case as well!
If you still do not understand, draw on paper the book, the table, and all five forces (two acting on the book and three acting on the table). -

Why aren't $F_g$ and $F_N$ the same force, as gravity causes the book to push down on the table? – Jonathan. May 24 '12 at 20:28

$-F_\text{g,book}$ is the gravitational (downward) force of the book and $F_\text{N}$ is the (upward) force of the table. According to the first Newton's law, they are equal in magnitude and opposite in direction. These are two separate forces. – Pygmalion May 24 '12 at 20:32

@Jonathan. I edited the answer to distinguish between the inter-force $F_{12}$ between book and table and the ground force on the table. – Pygmalion May 24 '12 at 20:36

One way to make it obvious is to think about how the down-momentum is flowing. The book is getting down-momentum from the Earth (through action-at-a-distance gravity), and this down-momentum then flows downwards to the table, and across the table to the legs, then through the legs of the table back down to the Earth, making a closed circuit of down-momentum, like a closed electrical circuit. Each time momentum leaves an object A and enters another object B, we say a force is acting from A to B, and simultaneously that a reaction force is acting from B to A (since the momentum gained by B is the momentum lost by A). This is Newton's third law. In this circuit, the down-momentum goes Earth $\rightarrow$ book $\rightarrow$ table $\rightarrow$ Earth. So there is an action/reaction pair from the Earth to the book (the Earth is pulling the book and transferring down-momentum to it, and the book is pulling the Earth, transferring an equal amount of negative down-momentum--- or up momentum--- to the Earth).
There is an action/reaction pair from the book to the table (the book is transferring down-momentum to the table through a contact normal force, and the table is transferring negative down-momentum to the book by the same contact normal force), then the table has an action/reaction pair with the Earth (the table sends the down-momentum into the Earth, and the Earth sends negative down-momentum into the table). Each of these flows is describing how a conserved quantity, namely down-momentum, is going from place to place. It is easiest to sort this out with flows of charge, because unlike charge, momentum is a vector. -

A lot of questions here talk about "normal force", but I get the feeling that you're still confused about what that is. First consider the book: whether it is resting on the table or not, it has a weight. Here weight is different from mass. The weight is the mass $m$ times the acceleration due to Earth's gravity $g$, or more familiarly $$F = mg$$ The same goes for the table. Now this is the important part: the weight isn't gravitational force. The gravitational force that you are thinking of is expressed as $$F_g = \frac{Gm_1 m_2}{r^2}$$ and that is the force due to the gravitational attraction between two bodies. In the case of the table and the book, the gravitational attraction is absolutely negligible, since they are both so tiny. The force that the table experiences because of the book is what is being called normal force. The table then exerts an equal and opposite force. This is also clearly seen, because if the table didn't exert an equal and opposite force, the book would be accelerating downward. But the whole system is at rest, therefore the total force on the book-table system must be zero. EDIT: @AndrewC has mentioned in the comments below why my earlier reasoning was wrong. Basically, normal force is only indirectly due to gravity. Khan Academy has a brilliant explanation of these concepts.
- Nonono, the "if the table didn't exert an equal and opposite force" argument is Newton's first law. If that's what Newton's third law said (every action has an equal and opposite reaction), it would mean nothing ever moved! My trailer exerts an equal and opposite tension force on my car, even when I'm accelerating. – AndrewC Nov 28 '12 at 21:38

Would you like to explain your interesting statement about weight force not being gravitational force? – AndrewC Nov 28 '12 at 21:39

Newton's first law says that anything that's moving keeps moving, and anything that's at rest stays at rest, unless you have an external force. In this case, the external force is gravity, which is trying to pull the book down. That force is nicely cancelled by the force the table exerts on the book. – Kitchi Nov 29 '12 at 5:57

My point is that your last paragraph sounds like it's talking about Newton's third law by using the phrase equal and opposite, but you're actually using Newton's first law. That's exactly the confusion the textbook was trying to avoid and the question is trying to unpick, so it's unhelpful in this context. – AndrewC Nov 29 '12 at 22:44

I thought you were making an interesting point in distinguishing weight force from gravitational force (perhaps about the discrepancy between $g=9.81m/s^2$ and $Gm_E/r_E^2$ in practice), but actually I think you were just making a mistake. Weight is the force due to gravity in the sense you're using it in your answer; calling the distinction important is misleading in this context. – AndrewC Nov 29 '12 at 22:45
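The force bookkeeping in the answers above is easy to check numerically. A tiny sketch (the masses are made-up values; $g = 9.81\ \mathrm{m/s^2}$) of the two first-law balance equations, for the book and for the table, showing that the ground ends up supporting the combined weight:

```python
g = 9.81                      # m/s^2, gravitational acceleration
m_book, m_table = 1.5, 20.0   # illustrative masses in kg

F_g_book = m_book * g         # gravity on the book (downward)
F_12 = F_g_book               # table on book, from the book's force balance
F_g_table = m_table * g       # gravity on the table (downward)
# Table's force balance: ground force = book's push (third-law partner of
# F_12) plus the table's own weight.
F_N = F_12 + F_g_table

print(F_N)                    # ~210.9 N: the ground carries book + table
```

The third-law pairs never appear in the same balance equation; each equation mixes one member of each pair, which is exactly the point the answers are making.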
http://mathhelpforum.com/number-theory/122336-cant-solve-these-congruences.html
# Thread:

1. ## Can't solve these... Congruences!!

Hello, I want to share some problems that I found on the internet, but I cannot verify the solutions, and others I don't know how to solve. Here:

a) If n is an odd natural number, show that a + b divides a^n + b^n.
b) Does there exist a natural number n such that 1955 divides n^2 + n + 1?
c) Find the remainder when 4444^4444 is divided by 9.
d) p and q are distinct primes; show that
1) p^q + q^p is congruent to p + q (mod pq),
2) (p^q + q^p)/(pq) is even, if neither p nor q equals 2.

A step-by-step solution would be great, please. Thank you!

2. a) If n is an odd natural number, then $a^n + b^n$ factors into $(a+b)(a^{n-1}-a^{n-2}b+\cdots+b^{n-1})$. So $$\frac{a^n+b^n}{a+b} = a^{n-1}-a^{n-2}b+\cdots+b^{n-1}.$$ -Andy

3. Hello, guidol92!

c) Find the remainder when $4444^{4444}$ is divided by 9.

We are working in modulo 9. We find that $4444 \equiv 7 \pmod 9$. Hence $4444^{4444} \equiv 7^{4444} \pmod 9$. Then we see that:
$$\begin{array}{cccc} 7^1 & \equiv & 7 & \pmod 9 \\ 7^2 & \equiv & 4 & \pmod 9 \\ 7^3 & \equiv & 1 & \pmod 9 \\ 7^4 & \equiv & 7 & \pmod 9 \\ \vdots & & \vdots \end{array}$$
The powers of 7 have a three-step cycle. Since $4444 = 3(1481) + 1$, we have $$7^{4444} \equiv 7^{3(1481)+1} \equiv 7^{3(1481)}\cdot 7^1 \equiv \left(7^3\right)^{1481}\cdot 7 \equiv 1^{1481}\cdot 7 \equiv \boxed{7}.$$

4. For (b): there is no natural number satisfying the condition. Since 5 divides 1955 but $n^2+n+1\equiv 0 \pmod 5$ has no solution, there is no natural number n such that 1955 divides $n^2+n+1$. For (d)(1): applying Fermat's little theorem, we get that pq divides both $p^q - p$ and $q^p - q$. For (d)(2): that is impossible, according to (1).
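The claims in this thread are quick to sanity-check by brute force; a short script, using Python's three-argument pow for modular exponentiation:

```python
# (c) 4444^4444 mod 9 by fast modular exponentiation -- matches the cycle argument.
assert pow(4444, 4444, 9) == 7

# (b) n^2 + n + 1 mod 5 depends only on n mod 5, and is never 0,
# so 1955 = 5 * 17 * 23 can never divide n^2 + n + 1.
assert all((n * n + n + 1) % 5 != 0 for n in range(5))

# (a) spot-check that a + b divides a^n + b^n for odd n.
assert all((a ** n + b ** n) % (a + b) == 0
           for a in range(1, 6) for b in range(1, 6) for n in (1, 3, 5, 7))

# (d)(1) spot-check with p = 3, q = 7: p^q + q^p is congruent to p + q (mod pq).
p, q = 3, 7
assert (p ** q + q ** p) % (p * q) == (p + q) % (p * q)
```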
http://mathoverflow.net/questions/60905/approximate-equation-involving-elliptic-integrals
## approximate equation involving elliptic integrals

Dear Reader: Let $K(k)$ and $E(k)$ be the complete elliptic integrals of the first and second kind, respectively, where $k$ is the elliptic modulus and $k'=\sqrt{1-k^2}$ is the complementary elliptic modulus. I happened to encounter the following numerical "fact" (when solving an engineering problem regarding energy conversion): When I chose a $k$ such that $K(k) [ K(k')-E(k') ] = \pi/2$, then seemingly $K(k)/K(k')$ is quite close to $\pi/4$, if not exactly. I wonder whether there is an expansion like $K(k)/K(k')=\pi/4+(\text{small terms})$ for this particular $k$. I am just curious. Does someone know? Thank you! Best regards, Hiroshi Okamoto -

## 1 Answer

Given the Legendre relation, your question is equally about K - E. This is a difference of hypergeometric function values (see http://en.wikipedia.org/wiki/Elliptic_integral for all of this). You seem to be setting a condition on k that also is simpler when read out of the Legendre relation, on E and K'. I would think the truth would come out of the power series in k, though I haven't looked at details. -

Dear Dr. Matthews, Thank you for your helpful comment. Yes, the condition on $k$ could be simplified to $E(k)K(k')=\pi$. On the other hand, the first few terms in the power series of $E(k)$ and $K(k)$ result in $E(k)K(k)=\pi^2/4+O(k^4)$. Dividing the latter expression by the former, I obtained $K(k)/K(k') \sim \pi/4$, as desired. Would you mind if I mention your name in a paper I am writing up? Anyways, thanks again. Hiroshi Okamoto – a guy on the street Apr 8 2011 at 6:18
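The numerical "fact" in the question can be reproduced without special-function libraries: $K$ and $E$ follow from the standard arithmetic-geometric-mean iteration, and a bisection on $k$ pins down the stated condition. A sketch (function names are mine; the AGM formulas are the classical ones, e.g. Abramowitz & Stegun 17.6):

```python
import math

def K_E(k):
    """Complete elliptic integrals K(k), E(k) via the AGM iteration."""
    a, b, c = 1.0, math.sqrt(1.0 - k * k), k
    s, p = 0.5 * c * c, 0.5            # running sum of 2^(n-1) * c_n^2
    while abs(c) > 1e-15:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        p *= 2.0
        s += p * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

def condition(k):
    """K(k) [K(k') - E(k')] - pi/2, zero at the k described in the question."""
    kp = math.sqrt(1.0 - k * k)
    K, _ = K_E(k)
    Kp, Ep = K_E(kp)
    return K * (Kp - Ep) - 0.5 * math.pi

# Bisect for the root; condition() changes sign on [0.1, 0.9].
lo, hi = 0.1, 0.9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if condition(lo) * condition(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
k = 0.5 * (lo + hi)
ratio = K_E(k)[0] / K_E(math.sqrt(1.0 - k * k))[0]
# ratio lands close to pi/4 ~ 0.7854, but not exactly on it.
```

By the Legendre relation, this is the same root as $E(k)K(k')=\pi$ from the comment, so the ratio's closeness to $\pi/4$ matches the $O(k^4)$ power-series argument given there.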
http://math.stackexchange.com/questions/10820/how-to-test-whether-spherical-caps-intersect
# How to test whether spherical caps intersect? I have a unit sphere, on the surface of which are defined spherical caps. I typically characterize the caps by the unit vector $n$ from the center of the sphere to the top of the cap, and the angle $\theta$. My question is: given a pair of spherical caps, how can I determine whether they intersect? - – J. M. Nov 18 '10 at 10:11 To add to that previous comment: you will also have to check if the line made by the intersection of the two planes goes through the sphere. – J. M. Nov 18 '10 at 10:35 ## 1 Answer The angle between $n_1$ and $n_2$ is the analogue of the "distance" between the centres of the two caps. The caps intersect if this angle is less than the sum of $\theta_1$ and $\theta_2$, assuming the $\theta_i$ play the role of "radius" and not "diameter" in your notation. -
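The accepted criterion translates directly into code. A minimal sketch (the function name and the clamping of the dot product are my own choices; caps are given by unit axis vectors and angular radii in radians):

```python
import math

def caps_intersect(n1, theta1, n2, theta2):
    """True if the spherical caps (axis n_i, angular radius theta_i) overlap.

    n1, n2 must be unit vectors; theta_i is the cap's angular *radius*.
    """
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))        # guard against rounding outside [-1, 1]
    gamma = math.acos(dot)                # angular distance between cap centres
    return gamma <= theta1 + theta2

# Caps at opposite poles, each of angular radius 0.5 rad: no overlap.
print(caps_intersect((0, 0, 1), 0.5, (0, 0, -1), 0.5))   # False
# Caps 90 degrees apart with radii 0.8 rad each: 1.6 > pi/2, so they overlap.
print(caps_intersect((0, 0, 1), 0.8, (1, 0, 0), 0.8))    # True
```

As the answer notes, this assumes the $\theta_i$ are radii; halve them first if your caps are described by full opening angles.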
http://mathoverflow.net/revisions/99594/list
Faltings' almost purity theorem. The proof given, for the smooth case, in $p$-adic Hodge theory has some problems, and the proof of the general case in the Asterisque paper Almost Étale Extensions is completely unreadable (at least to me) and also contains some mistakes. We now finally have a very good proof (by Peter Scholze), but the almost purity theorem has been used as a black box for years.
http://en.wikipedia.org/wiki/Wake
# Wake

Kelvin wake pattern generated by a small boat.

A wake is the region of recirculating flow immediately behind a moving or stationary solid body, caused by the flow of surrounding fluid around the body.

## Fluid dynamics

In fluid dynamics, a wake is the region of disturbed flow (usually turbulent) downstream of a solid body moving through a fluid, caused by the flow of the fluid around the body. In incompressible fluids (liquids) such as water, a bow wake is created when a watercraft moves through the medium; as the medium cannot be compressed, it must be displaced instead, resulting in a wave. As with all wave forms, it spreads outward from the source until its energy is overcome or lost, usually by friction or dispersion. The formation of these waves in liquids is analogous to the generation of shockwaves in compressible flow, such as those generated by rockets and aircraft traveling supersonically through air (see also Lighthill equation). The non-dimensional parameter of interest is the Froude number. For a blunt body in subsonic external flow, for example the Apollo or Orion capsules during descent and landing, the wake is massively separated, and behind the body is a reverse-flow region where the flow is moving toward the body. This phenomenon is often observed in wind tunnel testing of aircraft, and is especially important when parachute systems are involved, because unless the parachute lines extend the canopy beyond the reverse-flow region, the chute can fail to inflate and thus collapse. Parachutes deployed into wakes suffer dynamic pressure deficits which reduce their expected drag forces. High-fidelity computational fluid dynamics simulations are often undertaken to model wake flows, although such modeling has uncertainties associated with turbulence modeling (for example RANS versus LES implementations), in addition to unsteady flow effects.
Example applications include rocket stage separation and aircraft store separation.

• Wave cloud pattern in the wake of the Île Amsterdam (lower left, at the "tip" of the triangular formation of clouds) in the southern Indian Ocean.
• Cloud wakes from the Juan Fernández Islands.

## Wake pattern of a boat

Waterfowl and boats moving across the surface of water produce a wake pattern, first explained mathematically by Lord Kelvin and known today as the Kelvin wake pattern. This pattern consists of two wake lines that form the arms of a V, with the source of the wake at the point. Each wake line is offset from the path of the wake source by around 19° and is made up of feathery wavelets angled at roughly 53° to the path. The interior of the V is filled with transverse curved waves, each of which is an arc of a circle centered at a point lying on the path at a distance twice that of the arc to the wake source. This pattern is independent of the speed and size of the wake source over a significant range of values. The angles in this pattern are not intrinsic properties of water; any isentropic and incompressible liquid with low viscosity will exhibit the same phenomenon. This phenomenon has nothing to do with turbulence. Everything discussed here is based on the linear theory of an ideal fluid. This pattern follows from the dispersion relation of deep water waves, which is often written as, $\omega = \sqrt{g k},$ where $g$ is the strength of the gravity field and "deep" means that the depth is greater than half of the wavelength. This formula has two implications: first, the speed of the wave scales with the wavelength and second, the group velocity of a deep water wave is half of its phase velocity. As a surface object moves along its path at a constant velocity $v$, it continuously generates a series of small disturbances corresponding to waves with a wide spectrum of wavelengths.
Those waves with the longest wavelengths have phase speeds above $v$ and simply dissipate into the surrounding water without being easily observed. Only the waves with phase speeds at or below $v$ get amplified through the process of constructive interference and form visible shock waves. In a medium like air, where the dispersion relation is linear, i.e. $\omega = c k,\,$ the phase velocity c is the same for all wavelengths and the group velocity has the same value as well. The angle $\theta$ of the shock wave thus follows from simple trigonometry and can be written as, $\theta = \arcsin \left( \frac{c}{v} \right).$ This angle is dependent on $v$, and the shock wave only forms when $v > c$. In deep water, however, shock waves always form even from slow-moving sources because waves with short enough wavelengths move still more slowly. These shock waves also manifest themselves at sharper angles than one would naively expect because it is group velocity that dictates the area of constructive interference and, in deep water, the group velocity is only half of the phase velocity. By a simple accident in geometry, all shock waves that should have had angles between 33° and 72° get compressed into a narrow band of wake with angles between 15° and 19° with the strongest constructive interference occurring at the outer edge, resulting in the two arms of the V in the Kelvin wake pattern. This can be seen easily in the diagram on the left. Here, we consider waves generated at point C by the source which has now moved to point A. These waves would have formed a shock wave at the line AB, with the angle CAB = 62° because the phase velocity of the wave has been chosen to be $\sin \left( 62^\circ \right)$ = 0.883 of the boat velocity. But the group velocity is only half of the phase velocity, so the wake actually forms along the line AD, where D is the midpoint on the segment BC, and the wake angle CAD turns out to be 19°. 
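This construction can be checked numerically. For a wavelet emitted at angle $\alpha$ to the boat's track, the stationary-phase condition gives phase speed $c = v\cos\alpha$, and because the energy travels at the group speed $c/2$, the wake point it produces sits at an angle $\beta$ off the track with $\tan\beta = \sin\alpha\cos\alpha/(2-\cos^2\alpha)$; maximizing over $\alpha$ recovers the Kelvin half-angle of about 19°. (This parametrization is a standard restatement of the construction above; the variable names are mine.)

```python
import math

def wake_point_angle(alpha):
    """Off-track angle of the energy from a wavelet emitted at angle alpha."""
    c_over_v = math.cos(alpha)          # stationary-phase condition: c = v cos(alpha)
    # Emitted one time unit ago, a distance v behind the boat; the energy has
    # moved only c/2 because the group velocity is half the phase velocity.
    x = 1.0 - 0.5 * c_over_v * math.cos(alpha)   # along-track lag (units of v)
    y = 0.5 * c_over_v * math.sin(alpha)         # off-track offset (units of v)
    return math.atan2(y, x)

# Scan emission angles; the largest off-track angle is the Kelvin half-angle.
beta_max = max(wake_point_angle(i * (math.pi / 2) / 10000) for i in range(1, 10000))
print(math.degrees(beta_max))   # ~19.47 degrees, the edge of the visible wake
```

The maximum occurs at $\cos^2\alpha = 2/3$, giving $\beta = \arctan\!\left(1/2\sqrt{2}\right) \approx 19.47°$, consistent with the roughly 19° quoted for the wake angle CAD.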
The wavefronts of the wavelets in the wake coming from the wave components in our example still maintain an angle of 62° to the AC line. In reality, all the waves with would-be shock-wave angles between 33° and 72° contribute to the same narrow wake band, and the wavelets exhibit an angle of 53°, which is roughly the average of 33° and 72°. The wave components with would-be shock-wave angles between 73° and 90° dominate the interior of the V. Again, the waves that should have joined together and formed a wall, similar to the phenomenon in a sonic boom, end up half-way between the point of generation and the current location of the wake source. This explains the curvature of the arcs. Those very short waves with would-be shock-wave angles below 33° lack a mechanism to reinforce their amplitudes through constructive interference and are usually perceived by the naked eye as small ripples on top of the interior transverse waves.

• Wake from a small motorboat with an outboard motor.
• Wake of a boat crossing an alpine lake.
• The wakes of two slow-moving boats. The nearer boat has made a striking series of ruler-straight waves.

## Other effects

The above describes an ideal wake, where the body's means of propulsion has no other effect on the water. In practice the wave pattern between the V-shaped wavefronts is usually mixed with the effects of propeller backwash and eddying behind the boat's (usually square-ended) stern. Germany's only consistent surf spot is a giant wake from a ship.

## Recreation

"No wake zones" may prohibit wakes in marinas, near moorings and within some distance of shore[1] in order to facilitate recreation by other boats, and reduce the damage wakes cause. Powered narrowboats on British canals are not permitted to create a breaking wash (a wake large enough to create a breaking wave) along the banks, as this erodes them. This rule normally restricts these vessels to 4 statute miles per hour or less. Wakes are occasionally used recreationally.
Swimmers, people riding personal watercraft, and aquatic mammals such as dolphins can ride the leading edge of a wake. In the sport of wakeboarding the wake is used as a jump. The wake is also used to propel a surfer in the sport of wakesurfing. In the sport of water polo, the ball carrier can swim while advancing the ball, propelled ahead with the wake created by alternating armstrokes in crawl stroke, a technique known as dribbling. • Sunseeker wake on the Indian River looking at the 17th Street Bridge • Wake behind a ferry in the Baltic Sea • Wake of a boat in the Hawaiian Islands • Wake of a ferryboat just off British Columbia, Canada.
http://mathoverflow.net/questions/21858/expected-values-over-binomial-distributions/41076
## expected values over binomial distributions

In some works of economics/risk analysis etc., I have seen situations where people take the expected value of a function (such as a utility function/cost function) over a binomial distribution: $$F(n) = \sum_{k=0}^n \binom{n}{k} p^k(1 - p)^{n - k} f(k)$$ This expected value operation seems to have a lot of nice properties with respect to differentiation (for instance, in this economics paper (Sah 1991) where the author proved some nice properties of these functions to deduce other things). So I suspect that this must be a named and well-studied phenomenon in the combinatorics/probability/convex optimization theory literature. But I couldn't find any discussion of it in the places I looked. (I tried "binomial distribution transformation", "binomial distribution transform", "expected value over binomial distribution", "expected utility function over binomial distribution", "convolution with binomial distribution", etc., but all the results I got were in applied economics/statistics). Any ideas for where or under what name this might have been studied? - 2 What exactly do you mean by "this"? The properties in the article you mentioned are just some auxiliary identities and inequalities that particular author needed for his model and none of them is anything an average mathematician would have any difficulty proving if he had needed them for anything. There is little point in trying to figure out all such properties of $F(n)$, much less in publishing them, so I doubt it has ever been done. On the other hand, if you need something particular and have trouble proving it, just post your question and we'll see what we can do. – fedja Apr 19 2010 at 17:26

## 2 Answers

This is a generalization of the binomial transform of the function $f(k)$.
See, for instance, the Wikipedia article on binomial transform, and, in particular, the generalizations given therein. The Prodinger reference deals specifically with your expression for $F(n)$. Or, if you rewrite it as $$F(n) = (1-p)^n \sum_{k=0}^n \binom{n}{k} \left(\frac{p}{1-p}\right)^k f(k),$$ then you have a scaled version of the rising $k$-binomial transform of $f(k)$ as described in my 2006 paper with Laura Steil. At any rate, it appears the term you want is "binomial transform," and there is a small literature on its properties. - This should have been a comment but I don't have enough reputation points to post comments. The expression for $F(n)$ looks very similar to the Bernstein approximation (or Bernstein polynomial) to the function $f(.)$. Actually, it would be the Bernstein polynomial with respect to $p$ if the values $f(k)$ are replaced with $f(k/n)$. -
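Since $F(n)$ is exactly the expected value $\mathbb{E}[f(X)]$ for $X \sim \mathrm{Bin}(n,p)$, it is easy to evaluate and sanity-check numerically. A minimal sketch (the function name is mine, not from the thread):

```python
from math import comb

def binomial_expectation(f, n, p):
    """F(n) = sum_{k=0}^n C(n, k) p^k (1-p)^(n-k) f(k) = E[f(X)], X ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * f(k) for k in range(n + 1))

# Known binomial moments make a quick check:
# E[X] = n*p and E[X^2] = n*p*(1-p) + (n*p)^2.
n, p = 10, 0.3
print(binomial_expectation(lambda k: k, n, p))       # ≈ 3.0
print(binomial_expectation(lambda k: k * k, n, p))   # ≈ 11.1
```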
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428975582122803, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=519753
Physics Forums ## Electrostatic Potential Concept My textbook says that Electrostatic potential is the work done on a unit charge to bring it from infinity to a point from a given charge without acceleration, against the electric field present due to the given charge. So, as it says that there will be no acceleration, does it imply that there will be no change in the velocity of the charge that we are moving? And thus, does this imply that there is no change in the kinetic energy of the charge? Quote by thebiggerbang So, as it says that there will be no acceleration, does it imply that there will be no change in the velocity of the charge that we are moving? And thus, does this imply that there is no change in the kinetic energy of the charge? Yes that's correct. Another way to put this is to say that for its entire journey from infinity to its location, the charge is in equilibrium or passes through a series of equilibrium states. go well Also, when a charge is accelerated, it emits electromagnetic radiation. Moving the charge without acceleration implies that no work is wasted in generating these propagating modes of the electromagnetic field. BTW, this definition assumes that the electric potential at infinity is taken as zero (electric potential, like potential energy, is determined up to an additive constant). This is possible only for fields that decay fast enough at infinity, like, for example, the field of a point charge (that decays inversely proportional to the square of the distance). However, the field from a uniformly charged infinite line decays inversely proportional to the distance from the line and the potential is logarithmically divergent. A uniform electric field gives a linearly divergent potential.
Although all fields generated by real bodies decay sufficiently fast, sometimes it makes sense to take into account such idealized field sources for which your definition is not directly applicable. Quote by thebiggerbang And thus, does this imply that there is no change in the kinetic energy of the charge? Quote by Studiot Yes that's correct. Is the absolute value of kinetic energy not relevant? Any speed will do, as long as it remains constant? In many textbooks I read "slowly". What is slowly? This is because you are dealing with electrostatics. If the charge is moving rapidly, you are dealing with electrodynamics and magnetic fields. I don't really know what exactly is slow enough to be described by electrostatic theorems, though. Quote by rbnvrw This is because you are dealing with electrostatics. ... I don't really know what exactly is slow enough to be described by electrostatic theorems, though. My point is that (KE) velocity cannot be left vague, because if v is halved, then time t is doubled. Then also the total work done on the charge (PE) is double, or more. So what is the correct Electrostatic Potential Energy? If you travel from infinity with any finite speed, the time is infinite. If you halve the speed, time remains the same, namely, infinite. Quote by Dickfore If you travel from infinity with any finite speed, the time is infinite. If you halve the speed, time remains the same, namely, infinite. So, what is E-PE? It seems we cannot calculate it! b) And what if you don't start from infinity? Quote by formal And what if you don't start from infinity? But, look at the definition you quoted. It says taken from infinity.
That's my quote 1 and 2: how do we calculate E-PE from infinity to a point if time is infinite? b) if we do not start from infinity, is the absolute value of velocity not relevant? Again, what is the threshold? Quote by formal My point is that (KE) velocity cannot be left vague, because if v is halved, then time t is doubled. Then also the total work done on the charge (PE) is double, or more. So what is the correct Electrostatic Potential Energy? You are making a mistake here: the work done on the particle doesn't depend on its velocity nor on the time it takes, because the force that does this work doesn't depend on velocity (the electric field is only due to the given charge, which is considered stationary, while we consider the field from the moving unit charge negligible) nor on time (it is constant with time). Quote by formal That's my quote 1 and 2: how do we calculate E-PE what does this have to do with anything? Quote by Delta² the work done on the particle doesn't depend on its velocity nor on the time it takes, because the force that does this work doesn't depend on velocity but I surely know that the longer you expose anything to a force the greater is the KE acquired. Think about gravity! Quote by formal If one says that PE is calculated by bringing a test charge from infinity to anywhere and the time required to do this is infinite, then he is saying simply we cannot calculate it, at any speed. Am I wrong? Quote by formal If one says that PE is calculated by bringing a test charge from infinity to anywhere and the time required to do this is infinite, then he is saying simply we cannot calculate it. Am I wrong? Yes. The work done by a conservative force is equal to the difference between the initial potential energy and the final potential energy: [tex] W_{\mathrm{cons}} = (E_{p})_{i} - (E_{p})_{f} [/tex] The work-energy theorem tells us that the total work done on an object is equal to the change in kinetic energy.
In our case, since there is no acceleration, the velocity of the object remains the same, therefore the change in kinetic energy is zero, regardless of the speed of the object. Furthermore, there are two forces acting on the object at any time: the electrostatic force (no Lorentz force since the speed of the object is infinitely small) and the external force that counters it. Therefore, the work-energy theorem gives: [tex] W_{\mathrm{ext}} + W_{\mathrm{cons}} = 0 [/tex] Substituting for the work done by the conservative (electrostatic) force: [tex] W_{\mathrm{ext}} + \left((E_{p})_{i} - (E_{p})_{f}\right) = 0 [/tex] Solving for the final potential energy: [tex] (E_{p})_{f} = (E_{p})_{i} + W_{\mathrm{ext}} [/tex] Now, we choose the normalization that $(E_{p})_{i} = (E_{p})_{\infty} = 0$. Then: [tex] (E_{p})_{f} = W_{\mathrm{ext}} [/tex] This is the mathematical formulation of the sentence stated in the OP. Quote by formal Is the absolute value of kinetic energy not relevant? Any speed will do, as long as it remains constant? In many textbooks I read "slowly". What is slowly? Potential energy in any force system is independent of time and dependent solely on position. That includes gravity. We are discussing electric potential energy here. The definition is specifically worded to run from infinity to the position because we cannot calculate the work of separating charges. The calculation is easy and shown on post 40 of this thread http://www.physicsforums.com/showthr...ight=potential Your book says 'slowly' instead of 'equilibrium' or 'without acceleration' I expect. They are all equivalent statements. They represent the desired condition that as the charge is brought in from infinity none of the work done against the electric force is converted to kinetic energy - it is all stored as electric potential energy. This condition gives validity to the calculation described above.
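The derivation above can be checked numerically. The sketch below (illustrative values of my own choosing, not from the thread) computes the external work $W_{\mathrm{ext}} = \int_R^\infty kQq/r^2\,dr$ by quadrature and compares it with the closed form $kQq/R$ — the result depends only on the endpoints, not on how slowly the charge is moved:

```python
# Numerical check: the external work needed to bring a test charge q from far
# away to distance R from a point charge Q equals k*Q*q/R, independent of the
# speed of transport.
k = 8.9875e9            # Coulomb constant, N m^2 / C^2
Q, q, R = 1e-6, 1e-9, 0.05

# W = integral of k*Q*q/r^2 from R to "infinity", truncated at r_max and
# evaluated with the trapezoid rule on a log-spaced grid.
r_max, N = 1e7, 50_000
rs = [R * (r_max / R) ** (i / N) for i in range(N + 1)]
f = [k * Q * q / r ** 2 for r in rs]
W = sum((rs[i + 1] - rs[i]) * (f[i] + f[i + 1]) / 2 for i in range(N))

analytic = k * Q * q / R
print(W, analytic)      # the two values agree closely
```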
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351267218589783, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/09/04/enriched-adjunctions/?like=1&source=post_flair&_wpnonce=d0199d491e
# The Unapologetic Mathematician ## Enriched Adjunctions When I started in on adjoint functors, I gave the definition in terms of a bijection of hom-sets. Then I showed that we can also specify it in terms of its unit and counit. Both approaches (and their relationship) generalize to the enriched setting. Given a functor $F:\mathcal{C}\rightarrow\mathcal{D}$ and another $G:\mathcal{D}\rightarrow\mathcal{C}$, an adjunction is given by natural transformations $\eta:1_\mathcal{C}\rightarrow G\circ F$ and $\epsilon:F\circ G\rightarrow1_\mathcal{D}$. These transformations must satisfy the equations $(1_G\circ\epsilon)\cdot(\eta\circ1_G)=1_G$ and $(\epsilon\circ1_F)\cdot(1_F\circ\eta)=1_F$. By the weak Yoneda Lemma, this is equivalent to giving a $\mathcal{V}$-natural isomorphism $\phi_{C,D}:\hom_\mathcal{D}(F(C),D)\rightarrow\hom_\mathcal{C}(C,G(D))$. Indeed, a $\mathcal{V}$-natural transformation in this direction must be of the form $\phi_{C,D}=\hom_\mathcal{C}(\eta_C,1_{G(D)})\circ G_{F(C),D}$, and one in the other direction must be of the form $\varphi_{C,D}=\hom_\mathcal{D}(1_{F(C)},\epsilon_D)\circ F_{C,G(D)}$. The equations $\phi\circ\varphi=1$ and $\varphi\circ\phi=1$ are equivalent, by the weak Yoneda Lemma, to the equations satisfied by the unit and counit of an adjunction. The $2$-functor $\mathcal{V}\mathbf{-Cat}\rightarrow\mathbf{Cat}$ that sends an enriched category to its underlying ordinary category sends an enriched adjunction to an ordinary adjunction. The function underlying the $\mathcal{V}$-natural isomorphism $\phi$ is the bijection of this underlying adjunction. As we saw before, a $\mathcal{V}$-functor $G$ has a left adjoint $F$ if and only if $\hom_\mathcal{C}(C,G(\underline{\hphantom{X}}))$ is representable for each $C\in\mathcal{C}$. Also, an enriched equivalence is an enriched adjunction whose unit and counit are both $\mathcal{V}$-natural isomorphisms.
Just as for ordinary adjunctions, we have transformations between enriched adjunctions, a category of enriched adjunctions between two enriched categories, enriched adjunctions with parameters, and so on. Posted by John Armstrong | Category theory
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9134265184402466, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/216/adaptive-sampling-for-slow-to-compute-functions-in-2d/
# Adaptive sampling for slow to compute functions in 2D

EDIT: Although I have posted an answer based on my current progress, this is incomplete. Please see the "open issues" section in the answer.

Most plotting functions in Mathematica adjust the sampling density dynamically, based on the slope of the function:

````
DensityPlot[1/(1 + Exp[10 (Sqrt[x^2 + y^2] - 3)]), {x, 0, 5}, {y, 0, 5},
 Mesh -> All, PlotRange -> All]
````

Unfortunately the internal algorithm that does this is not directly callable.

Question: I have a slow to compute 2D function (takes up to 10-40 seconds for a single point even though it's in optimized C++ called through LibraryLink) that I need to sample. How can I sample it adaptively, in a convenient and controlled way? Since the function is so slow to compute,

• I'd like to be able to take the existing points, and refine them more if needed (i.e. continue the computation using the existing results)
• I can't use DensityPlot directly because I can't control how many points it is going to compute, and I need an upper limit on that (i.e. an upper limit on computation time). It also can't be interrupted and continued later.

I am looking for

1. effective methods to do this
2. implementations in Mathematica (either as an answer or pointers to libraries)

The messy details: I am trying to compute a phase diagram and map the boundaries between the phases precisely. So I don't need the function value everywhere, only where it very suddenly drops. The function is either of magnitude 1 (say, between $0.1 \div 1$), or very small (close to zero). The function is computed using Monte Carlo methods, so at a small scale it doesn't appear smooth, and I might get inconsistent results close to the phase boundaries on subsequent runs of the function. This should give you an idea of what sort of function I'd like to apply this to, which might be important when choosing a method.

- 1 Why not threshold to get 0-1 values? Could do a bit map, say.
Then do something more extensive in (typically thin) regions that contain both 0s and 1s. – Daniel Lichtblau Jan 18 '12 at 21:24 I'd say you have to code this manually, i.e. generate a crude grid, define points between the existing function values, convolve the thing with some flattener and find out whether there's still a lot of fluctuation going on, and then decide to calculate the value for those fluctuating intermediate points. I mean in the end what you've got is more of a data list generation than a plotting problem. (I don't think you can do more with `Plot` than changing `MaxRecursion` and `PlotPoints`.) – David Jan 19 '12 at 1:40 If the function is computed with Monte Carlo, do you actually use ListDensityPlot for a set of points then? Or somehow the function has symbolic form? – Vitaliy Kaurov Jan 19 '12 at 4:31

## 1 Answer

Update: I described an alternative approach based on built-in plotting functions in this answer. That approach is not very practical here though because I need to be able to handle points at arbitrary positions while built-in functions work with a rectangle-based mesh. I am still looking for improvements.

I came up with this very naive approach and implementation (I know that the implementation is not optimal at all): First let's define a test function (same one as in the question):

````
fun[{x_, y_}] := 1/(1 + Exp[10 (Norm[{x, y}] - 3)])
````

These functions will subdivide lines in the Delaunay triangulation of the points if 1. the points are further apart than a threshold (i.e. the resolution is controlled) and 2. the function values at the two points also differ by more than another threshold.
````
<< ComputationalGeometry`

makeLines[tri_] := Union[Sort /@ Flatten[Thread /@ tri, 1]]

subdivision[points_, values_, valueThreshold_, distanceThreshold_] :=
 Module[{tri, lines, linesToDivide},
  tri = DelaunayTriangulation[points];
  lines = makeLines[tri];
  linesToDivide = Pick[lines,
    (Abs[values[[#1]] - values[[#2]]] > valueThreshold &&
       Norm[points[[#1]] - points[[#2]]] > distanceThreshold) & @@@ lines];
  Mean /@ (linesToDivide /. n_Integer :> points[[n]])
  ]
````

Let's define an initial point grid to compute the function in:

````
points = Tuples[Range[0, 5, 1], 2];
````

We can iterate this function to add more and more points and recursively subdivide the grid (evaluate the following commands together repeatedly):

````
values = fun /@ N[points];
newpoints = subdivision[points, values, 0.1, 0.1];
ListDensityPlot[Flatten /@ Thread[{points, values}], InterpolationOrder -> 0,
 Mesh -> All, ColorFunction -> "MintColors",
 Epilog -> {PointSize[Large], Point[points], Red, Point[newpoints]}]
points = Join[points, newpoints];
````

The result after several iterations:

````
values = fun /@ N[points];
ListDensityPlot[Flatten /@ Thread[{points, values}], InterpolationOrder -> 0,
 Mesh -> All, ColorFunction -> "MintColors"]
````

Open question: My aim is to minimize the number of points I need to compute while getting a precise approximation. This is probably not the best subdivision method for it. What are some easy-to-implement better methods? I think ideally the decision for refining the grid should be made based on some sort of curvature.
Take for example the following function:

````
ContourPlot[Erf[1/(1 + 20 x^2) - y], {x, -3, 3}, {y, -3, 3}]
````

Using a `valueThreshold` of `0.3` and `distanceThreshold` of `0.1`, and a starting grid with a spacing of `0.5` produces this: Let's turn on interpolation (because I can't turn interpolation off in `DensityPlot`) and compare it with a `DensityPlot` made using similar options (`PlotPoints -> 12, MaxRecursion -> 15`): The curvature-based `DensityPlot` (right) is clearly much better. Furthermore, my method won't properly detect "fjord-like" structures (similar to the one in this example). It tends to "jump" over them, which is why some artefacts are visible in the middle of the plot. Thanks to @ruebenko for the hints and ideas he sent me! - very nice. Here is an idea for an improvement. Compute an initial mesh. Subdivide everything and compute the new points. Then it should be possible to estimate if and where the next subdivision is necessary. You could also look at the gradient of change. – ruebenko Jan 20 '12 at 7:48 If you're going to mesh a symmetric function I'd expect the result to be symmetric as well. In your last mesh plot the result is clearly asymmetric, so perhaps there's another angle of attack. – Sjoerd C. de Vries Mar 31 '12 at 6:27
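The subdivision criterion in the answer (split an edge when the endpoint values differ by more than `valueThreshold` while the edge is still longer than `distanceThreshold`) is not Mathematica-specific. Here is a simplified one-dimensional analogue as a sketch — the function and parameter names are mine:

```python
import math

# Simplified 1-D analogue of the answer's edge-subdivision rule: split an
# interval when the endpoint values differ by more than value_threshold AND
# the interval is still wider than distance_threshold. The 2-D version applies
# the same test to the edges of a Delaunay triangulation instead.
def adaptive_sample(f, a, b, value_threshold=0.1, distance_threshold=0.01, n0=11):
    xs = [a + (b - a) * i / (n0 - 1) for i in range(n0)]
    while True:
        mids = [(x0 + x1) / 2 for x0, x1 in zip(xs, xs[1:])
                if abs(f(x1) - f(x0)) > value_threshold
                and (x1 - x0) > distance_threshold]
        if not mids:
            return xs
        xs = sorted(xs + mids)

# The question's test function restricted to a line through the origin:
f = lambda x: 1 / (1 + math.exp(10 * (x - 3)))
pts = adaptive_sample(f, 0.0, 5.0)
print(len(pts))   # more samples than the initial 11, clustered near x = 3
```

Running it concentrates the samples near the transition at $x \approx 3$ while leaving the flat regions coarse, which is exactly the behavior the answer's 2-D code shows on the circular boundary.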
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.914861261844635, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/51264/smallest-prime-that-does-not-divide-the-vandermonde-determinant/51268
## Smallest prime that does not divide the Vandermonde determinant [closed] Let $V = \Pi_{1 \le i < j \le n} (a_j - a_i)$ be the determinant of the Vandermonde matrix where $1 = a_1 < \cdots < a_n = d$ (with $d >> n$). What is the smallest prime $p$ (or the lower bound) such that $p \nmid V$? Preferably $p < n$. - 3 I don't understand the question. Do you have a particular choice of a_i in mind? – Qiaochu Yuan Jan 5 2011 at 23:51 1 I still don't understand the question. Do you want the smallest prime which does not have to divide V? – Qiaochu Yuan Jan 6 2011 at 0:05 Yes. Consider the primes in the range [2 .. d]. Some of them will divide V, others won't. I want the smallest prime p that does not divide V. – M.S. Jan 6 2011 at 0:07 2 @M.S.: you do not specify what $a_i$ are. Integers? What are the quantifiers? Should we fix $a_i$, then find $p$ or should we find a $p$ that works for every collection of $a_i$. The question as stated does not make sense. I voted to close. – Mark Sapir Jan 6 2011 at 1:11 ## 4 Answers Not really clear about what is being asked. If the $a_i$ are all divisible by the same p (choose one) then this p does divide V. Suppose the $a_i$ are 1 ... n; then if $p < n$, with $a_i=1$ there is an $a_j$ with $a_j-a_i=p$. If $p \ge n$ then $p \nmid V$. If $p < n$ then in any n numbers there are two with the same residue mod p (pigeonhole) so $p \mid V$. Is a more general context intended?
d can be as large as desired and the prime factors remain lower than n. – Mark Bennet Jan 6 2011 at 0:42 Sorry - last comment could allow n=p as a factor. – Mark Bennet Jan 6 2011 at 0:46 It was my bedtime - use (i-1) in place of i as a factor – Mark Bennet Jan 6 2011 at 8:29 To extend Mark Bennet's answer, one could have a_2 = a_1 + P_m, the mth primorial, giving that V is a multiple of P_m. So without parameters, there is no bound. If you want something in terms of V or the a_i, you might start with the idea that such a prime need be not much larger than the largest of (a_i - a_j), and is likely to be smaller. - 1 In the above, I assume V is nonzero. Gerhard "Ask Me About System Design" Paseman, 2011.01.05 – Gerhard Paseman Jan 5 2011 at 23:57 Now that I see the conditions involving d, I'd say Mark's answer is the best you will get, namely n is near the lower bound for nonzero V. It is easy to construct examples which realize this bound. Gerhard "Ask Me About System Design" Paseman, 2011.01.05 – Gerhard Paseman Jan 6 2011 at 0:05 1 It is also easy to extend the above idea: let a_{i+1} = a_i + Q_i, where Q_i is a product of small primes with size of Q_i comparable to P_m. Many of the Q_i will sum to form multiples of larger primes. All that can be guaranteed is again from Mark's answer, that n <= p <= a_n + epsilon, which is likely less than d. Gerhard "Ask Me About System Design" Paseman, 2011.01.05 – Gerhard Paseman Jan 6 2011 at 0:19 If $p<n$ then it must be that $p \mid V$. However if $p \ge n$ then it can be arranged that $p \nmid V$. If you set $a_i=2^m(i-1)+1$ then no prime greater than $n-1$ divides $V$. You could replace $2^m$ by $(n-1)!$ or anything else with all divisors less than $n$.
- Considered as a polynomial in the $a_i$'s, $V$ is never divisible by p, since the monomial $a_1^{n-1}a_2^{n-2}\cdots a_{n-1}$ always appears with coefficient 1. However, by the magic of Fermat's little theorem, it can be that all of its values are divisible by p, even if the polynomial itself isn't. As Mark points out, this happens if and only if $p < n$. - Thanks for the addendum, Andres. Not sure what happened there. – Ben Webster♦ Jan 6 2011 at 3:44
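Both halves of the argument — every prime $p < n$ divides $V$ by pigeonhole, and Mark Bennet's construction $a_i = 2^m(i-1)+1$ keeps all prime factors of $V$ below $n$ — are easy to check by direct computation. A sketch (variable names are mine):

```python
from math import prod

def vandermonde(a):
    """V = product of (a_j - a_i) over all pairs i < j."""
    n = len(a)
    return prod(a[j] - a[i] for i in range(n) for j in range(i + 1, n))

# Pigeonhole: among any n integers two share a residue mod p whenever p < n,
# so every prime p < n divides V.
a = [1, 4, 9, 16, 30]                    # arbitrary increasing integers, n = 5
V = vandermonde(a)
print(V % 2 == 0, V % 3 == 0)            # True True

# Mark Bennet's construction a_i = 1 + (i-1)*2^m: every difference is
# (j-i)*2^m, so the only prime factors of V are 2 and the primes of (n-1)!.
m, n = 20, 5
b = [1 + i * 2**m for i in range(n)]     # 1 = a_1 < ... < a_n = d with d >> n
W = vandermonde(b)
for p in (2, 3):                         # the primes below n = 5
    while W % p == 0:
        W //= p
print(W)                                 # 1: no prime factor >= n remains
```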
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9308686256408691, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/5808/what-would-the-internet-be-like-without-public-key-cryptography/5816
# What would the Internet be like without public-key cryptography? The functioning of the Internet as we know it nowadays depends very heavily on public-key cryptography, including several key root systems that depend on its asymmetric properties. But what would it be like to not have that tool? Could a sufficiently advanced but less convenient private-key cryptography system still make up for what public-key cryptography does? Would the Internet still be able to function in roughly the way it currently does? - 1 Do you allow hash signatures? You can create a digital signature scheme using nothing apart from cryptographic hashes. – CodesInChaos Dec 21 '12 at 9:37 ## 1 Answer You can create a public key cryptosystem from any trapdoor function, and you need some form of trapdoor to design a public key cryptosystem. Do you mean "those trapdoor functions which are currently USED in Internet standards"? The most prominent ones are discrete log of integers, factorization and discrete log in elliptic curves. If so: yes, it might be possible to set up new protocols based on other hard problems, although I can only think of lattice problems at the moment (SVP, CVP, LWE, etc.) and McEliece (based on linear codes). It is easy to use these instead of any factorization-based public key cryptosystem. However, the discrete log problem is a different matter: the group structure does not depend on a specific private key, and several (pk, sk) key pairs can operate on the same group. This is required for several primitives, e.g. DH key exchange. I can't think of any hard problem with this property other than discrete log on integers and elliptic curves. If you want to disallow all known public key cryptosystems (in the sense of all protocols based on trapdoor functions)... it would be VERY different. While you could communicate over secure channels with symmetric key crypto, there is no way to exchange a key.
If you want to use a key for symmetric encryption, both sides have to know it previously or send it in the clear. In this case, some parts of the Internet would still work: you could store a secret key for every single party with whom you interacted. Based on this key, you can set up a new secure connection again. Authentication might still work with hash functions, etc. But assuming you want to visit a website and interact over a secure channel: you have no known secret and have to agree on something. Whatever you send, the routers between you and the website will know. If you can trust ALL of them, you are okay. But any of them could just listen in on your secret channel. To sum it up: if the Internet were based on trustworthy infrastructure (providers, exchange points, etc.), it would be similar. However, that would not be the Internet we have today. The current Internet would just not work. - I meant in the sense of disallowing all trapdoor functions. So your second explanation is the one I wanted. – Joe Z. Dec 22 '12 at 5:00 "The current internet" just needs secure key agreement, not necessarily trapdoor functions. (Although I would count secure key agreement as "public-key cryptography".) – Ricky Demer Dec 22 '12 at 10:34 How would you create a public-key cryptosystem from hashes? – Joe Z. Dec 25 '12 at 16:09
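To illustrate CodesInChaos's comment about hash-only signatures (and Joe Z.'s closing question), here is a sketch of a Lamport one-time signature, which needs nothing but a cryptographic hash and a source of randomness. Note it provides authentication, not key agreement, so it does not solve the key-exchange problem discussed in the answer; all function names below are mine:

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

# Lamport one-time signature (sketch). Private key: 256 pairs of random
# 32-byte strings. Public key: their hashes. Signing reveals, per bit of the
# message digest, one preimage from each pair. A key pair must be used for
# ONE message only -- reuse leaks more preimages and allows forgeries.
def keygen():
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))   # True
print(verify(pk, b"other", sig))   # False (with overwhelming probability)
```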
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430068731307983, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=411839
Physics Forums ## A question on proof of Riesz Representation Theorem when p=1 This question comes from the proof of the Riesz Representation Theorem in Bartle's "The Elements of Integration and Lebesgue Measure", pages 90-91, as the image below shows. Equation (8.10) is $$G(f)=\int fg\,d\mu$$. The definition of the $$L^\infty$$ space is as follows: My question is: why is the $g$ determined by the Radon-Nikodym Theorem in $$L^\infty$$? I can only prove that it is Lebesgue integrable, that is, belongs to the $$L^1$$ space, but the proof says nothing about why it is in $$L^\infty$$, that is, bounded a.e. Could you please tell me how to prove this? Thanks! Recognitions: Science Advisor Quote by zzzhhh I can only prove that it is Lebesgue integrable, that is, belongs to $$L^1$$ space The $L^1$ property already comes from Radon-Nikodym, right? Radon-Nikodym says such a $g$ in $L^1$ exists. To prove it is also in $L^\infty$, what about this: suppose $g$ is not a.e. bounded; then for every $n$ we can find $A_n$ with $0<\mu(A_n)<\infty$ such that for all $x\in A_n$ we have $|g(x)|>n$. Now take $$f_0:=\frac{1_{A_n}|g|}{g}.$$ Then $$\|G\|=\sup\frac{|Gf|}{\|f\|}\geq \frac{|Gf_0|}{\|f_0\|}=\frac{1}{\mu(A_n)}\left|\int f_0 g\right|=\frac{1}{\mu(A_n)}\int_{A_n} |g|>n,$$ in contradiction with $G$ being bounded. Yes! This is the proof! Thank you for the ingenious construction, I got it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121976494789124, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/77016/tail-bound-on-binomial/77018
## Tail Bound on Binomial

Context: circuit complexity argument: How do I show that $$\sum_{i=0}^{n/2- \sqrt{n}} {n \choose i} \geq 2^n/50$$ (as $n$ goes to infinity)? [This shows up in proving Mod2 is not in ACC(3).] The standard approach would be to use Chernoff bounds, but they bound the wrong direction. Thanks!

## 1 Answer

If $X_n$ is a binomial random variable with parameters $n$ and $p=1/2$, this inequality says $P(X_n \le n/2 -\sqrt{n}) \ge 1/50$. By the de Moivre-Laplace theorem, $Z_n = (2X_n - n)/\sqrt{n}$ converges in distribution as $n \to \infty$ to a standard normal distribution, and thus $P(X_n \le n/2 - \sqrt{n}) = P(Z_n \le -2) \to \Phi(-2) \approx .0227501320 > 1/50$ where $\Phi$ is the cumulative distribution function of the standard normal distribution. So your inequality is true for sufficiently large $n$.

Yes indeed, and the Berry-Esseen inequality will give an explicit value of $n$ beyond which it is true, small enough that the gap can be easily filled by computation. Actually $n=23$ is the largest value for which the inequality fails. – Brendan McKay Oct 3 2011 at 13:13

No, there are many values greater than 23 for which it fails. The largest is 307, I think: for $n = 307$, $n/2 - \sqrt{n} \approx 135.9785845$ and $P(X_{307} \le 135) \approx .01987042485$. – Robert Israel Oct 4 2011 at 6:38
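The claims in the comments can be verified with exact integer arithmetic. This sketch (added for illustration, not part of the thread) checks $50\sum_{i=0}^{\lfloor n/2-\sqrt n\rfloor}\binom{n}{i}\ge 2^n$ directly:

```python
from math import comb, floor, sqrt

def holds(n):
    # Check 50 * sum_{i=0}^{floor(n/2 - sqrt(n))} C(n, i) >= 2**n exactly.
    # (Float sqrt is accurate enough at this scale to get the floor right.)
    k = floor(n / 2 - sqrt(n))
    return 50 * sum(comb(n, i) for i in range(k + 1)) >= 2 ** n

assert not holds(307)             # the largest failing n, per the comments
assert holds(308) and holds(1000) # holds from 308 onward
```

This agrees with the second comment: the inequality fails at $n=307$ and holds for all larger $n$ tested.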
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8956908583641052, "perplexity_flag": "head"}
http://www.physicsforums.com/library.php?do=view_item&itemid=23
Physics Forums

## linear ordinary differential equation

Definition/Summary

An nth-order linear ordinary differential equation (ODE) is a differential equation of the form $$\sum_{i=0}^n a_i(x)y^{(i)}(x)\ =\ b(x)$$ where $y^{(i)}(x)$ denotes the ith derivative of y with respect to x. The difference between any two solutions is a solution of the homogeneous part: $$\sum_{i=0}^n a_i(x)y^{(i)}(x)\ =\ 0$$ This equation has n independent solutions, and every solution is a linear combination of them. So the general solution of a linear ODE is the sum of one particular solution of the whole equation plus a linear combination of n independent solutions of the homogeneous part (in other words: a particular solution plus any homogeneous solution).

Scientists: Joseph Louis Lagrange (1736-1813)

See Also: recurrence relation

Extended explanation

Homogeneous linear ODE: A homogeneous linear ODE is a linear ODE in which $b(x)=0.$ Homogeneous solutions: These are linear combinations of solutions of the form $e^{\lambda x}$ or $x^ie^{\lambda x}$. Any solution of the form $e^{\lambda x}$ can be found by the method shown in the following example. For the general method of solution, see the method of characteristic polynomial, at the foot of this page. Example of homogeneous solution: Find the general solution to $y'' - 2y' -8y = \sin{x}+ \cos{x}$ First we must find the general homogeneous solution, $y_h$. Let's guess that $y_h$ is of the form $y_h=e^{\lambda x}$ - this seems like a good guess, doesn't it?
So, plugging in, we have $$\left(\lambda^2 - 2\lambda -8\right) e^{\lambda x} = 0$$ Now, since $e^{\lambda x}$ is never $0$, we can solve the above equation as a quadratic (the characteristic quadratic), and we find that either $\lambda=4$ or $\lambda=-2$. Therefore the general homogeneous solution is $$y_h = C_1 e^{4x} + C_2 e^{-2x}$$ where $C_1$ and $C_2$ are arbitrary constants. Particular solutions: We only need one particular solution. Finding one is generally a combination of common sense and guesswork, based on the nature of the function $b(x)$. Let's see how to do it with the above example, in which $b(x)\ =\ \sin{x}+ \cos{x}$, using the method of undetermined coefficients … There is also the more general variation of parameters method, but this is not usually needed in examination questions. Example of Particular Solution by the Method of Undetermined Coefficients: Let's guess a particular solution in the form $y_p\ =\ A\sin{x} + B\cos{x}$. The method of undetermined coefficients is to solve for $A$ and $B$ by plugging $y\ =\ y_p$ into the original equation, giving: $$\sin{x}(-9A\ +\ 2B)\ +\ \cos{x}(-2A\ -\ 9B)\ =\ \sin{x}\ +\ \cos{x}$$ which, dealing with the coefficients of $\sin{x}$ and $\cos{x}$ separately, gives: $$A\ =\ -11/85\ \ \ \ B = -7/85$$ and so $y_p\ =\ -(11\sin{x}\ +\ 7\cos{x})/85$ is a solution to the original equation.
Using this as the particular solution, the general solution of the original equation is: $$y= C_1 e^{4x} + C_2 e^{-2x} - (11\sin{x}\ +\ 7\cos{x})/85$$ Homogeneous solution by characteristic polynomial: This is not the same as the characteristic polynomial of a matrix or matroid. In a linear differential equation, the derivative may be replaced by an operator, D, giving a polynomial equation in D: $$\sum_{n\,=\,0}^m\,a_n\,\frac{d^ny}{dx^n}\ =\ 0\ \mapsto\ \left(\sum_{n\,=\,0}^m\,a_n\,D^n\right)y\ =\ 0$$ If this polynomial has distinct (different) roots $\lambda_1,\dots,\lambda_m$, the equation factors as: $$a_m\prod_{n\,=\,1}^m(D\,-\,\lambda_n)\,y\ =\ 0$$ so the general solution is a linear combination of the solutions of each of the equations: $$\left(D\,-\,\lambda_n\right)y\ =\ 0$$ which are the same as $$\frac{dy}{dx}\ =\ \lambda_n\,y$$ and so the general solution is of the form: $$y\ =\ \sum_{n\,=\,1}^m\,C_n\,e^{\lambda_nx}$$ For a pair of complex roots (they always come in conjugate pairs) $p\ \pm\ iq$, the corresponding pair $C_1e^{(p+iq)x}\,+\,C_2e^{(p-iq)x}$ may be replaced by $$e^{px}(A\,\cos(qx)\,+\,B\,\sin(qx))$$ (the analogous form for a recurrence relation, with complex roots $r\,e^{\pm is}$, is $r^k(A\,\cos(sk)\,+\,B\,\sin(sk))$). However, if the polynomial has repeated roots, say distinct roots $\lambda_1,\dots,\lambda_q$ where $\lambda_n$ has multiplicity $m_n$: $$\prod_{n\,=\,1}^q(D\,-\,\lambda_n)^{m_n}\,y\ =\ 0$$ then the general solution is of the form: $$y\ =\ \sum_{n\,=\,1}^q\sum_{p\,=\,0}^{m_n-1}C_{n,p}\,x^{p}\,e^{\lambda_nx}$$

Commentary

paulojomaje @ 03:27 AM Dec18-10: too difficult

tiny-tim @ 11:47 AM Jan16-09: foxjwill's original draft of this page, which got lost in the preview process, is now restored, with amendments to Definition and other minor editing. Method of characteristic polynomial added.

foxjwill @ 09:57 PM Jun19-08: yes, it is. Actually, when I wrote it originally, it said that I wasn't able to post it so, until today, I actually hadn't known it was here.
mathwonk @ 01:12 PM May10-08 the simplest linear homogeneous constant coeff equation has form (D-c)f = 0, where D denotes differentiation, and the most complicated one is merely a formal product of these. Since such products commute with each other, it suffices to solve this one and its iterates, of form (D-c)^n f = 0. clearly, the solution of Df = 0 is a constant, and the solution of (D-c)f = 0 is a constant multiple of e^(ct). Moreover a linear combination of solutions is again a solution, so at least when all factors (D-c) occur simply in our operator, the solutions are linear combinations of the functions e^(ct). as an exercise, check then that solutions of (D-c)^2 f = 0, are exactly solutions of (D-c)f = constant times e^(ct), or linear combinations of te^(ct) and e^(ct). FunkyDwarf @ 07:30 AM May10-08 Err is this meant to be incomplete or...
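The worked example in this entry can be checked numerically. This sketch (added for illustration) verifies that $e^{4x}$ and $e^{-2x}$ solve the homogeneous equation and that $y_p = -(11\sin x + 7\cos x)/85$ satisfies $y''-2y'-8y=\sin x+\cos x$ at sample points:

```python
import math

def residual(x):
    # plug y_p = -(11 sin x + 7 cos x)/85 into y'' - 2y' - 8y - (sin x + cos x)
    yp  = -(11 * math.sin(x) + 7 * math.cos(x)) / 85
    yp1 = -(11 * math.cos(x) - 7 * math.sin(x)) / 85  # first derivative
    yp2 =  (11 * math.sin(x) + 7 * math.cos(x)) / 85  # second derivative
    return yp2 - 2 * yp1 - 8 * yp - (math.sin(x) + math.cos(x))

def homog(x, lam):
    # y = e^{lam x} gives y'' - 2y' - 8y = (lam^2 - 2*lam - 8) * e^{lam x}
    return (lam**2 - 2 * lam - 8) * math.exp(lam * x)

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(residual(x)) < 1e-12                  # particular solution checks out
    assert homog(x, 4) == 0 and homog(x, -2) == 0    # roots of the characteristic quadratic
```

The residual vanishing at every sample point confirms the undetermined-coefficients computation $A=-11/85$, $B=-7/85$ above.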
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387275576591492, "perplexity_flag": "head"}
http://mathoverflow.net/questions/123194/research-level-applications-of-row-rank-column-rank/123222
## Research level applications of “row rank = column rank”?

No less an authority than Gilbert Strang frames "row rank equals column rank" (and a couple of other facts) as "The Fundamental Theorem of Linear Algebra." I'd simply like to assemble (for teaching purposes) a list of research-level applications of this basic fact. Applications can be theoretical or practical, and I would particularly appreciate learning what value this fact has in the minds of physicists. (It goes without saying) please do not start a debate concerning the centrality of this fact (that's not what MO is for). My question merely seeks insights or pointers to the literature that would support making a positive case for centrality. So if you use linear algebra all the time but never this fact, no need to chime in. "Row rank equals column rank" has the consequence for square matrices that $A$ singular makes $A^T$ singular; I'm sure this case comes up everywhere. Here I'm specifically looking for applications of the on-the-nose numerical equality of ranks. Feel free to offer a philosophical take on linear algebra that would support the centrality of "row rank equals column rank" even if that philosophy isn't grounded in specific results. (For example, does this simple statement offer a hidden paradigm for whole sophisticated theories?)

8 Perhaps it's worth remarking that measuring the failure of this theorem for linear operators on infinite-dimensional spaces has lots of applications: this is the basis of Fredholm index theory. – Paul Siegel Feb 28 at 15:29

1 (I took the liberty of editing the title because, adding the " " ) – Qfwfq Feb 28 at 17:29

2 To me, the statement that column rank is equal to row rank is equivalent to the statement that the dimension of the quotient space of the domain by the kernel is equal to the dimension of the image.
Or that the dimension of the domain minus the dimension of the kernel is equal to the dimension of the range minus the dimension of the cokernel. This is used almost everywhere in mathematics, notably homology and cohomology theories. – Deane Yang Mar 1 at 5:44

## 3 Answers

There is this proof of the De Bruijn-Erdös theorem: given $p$ points in the plane, not all on the same line, at least $p$ lines go through at least two of the points. The linear algebraic proof goes like this: let $A$ be the incidence matrix of points versus lines (each row is labeled by a point, each column by a line going through at least two of the points, and the $ij$ coefficient is $1$ if the given point is on the given line, $0$ otherwise). Then it is easily seen that $\det(AA^T)\neq0$. In particular the rank of $A$ is $p$, and since this is its column rank the number of columns must be at least $p$.

beautiful proof! – Delio Mugnolo Feb 28 at 22:27

In some sense you can view the singular value decomposition as a sharpening of this theorem (for real and complex matrices, anyway). This, in turn, is useful all over the place.

Let $M$ be a finite monoid (e.g. a finite group, etc.) and $k$ a field. A function $f$ in the algebra $k[M]$ of $M$ is said to be of rank $m$ if the space spanned by its shifts (i.e. for $y\in M$, $y^{-1}f,fy^{-1}$ are the "shifted" functions defined by $y^{-1}f(x):=f(yx)$, $fy^{-1}(x):=f(xy)$) has dimension $m$.
The fact that the "right rank" equals the "left rank" is an incarnation of the equality of the (row-column) ranks, by means of the Hankel matrix indexed by $M\times M$ and defined by $$(x,y)\to f(xy)$$ This holds even for infinite monoids when one considers the functions whose shifts span a finite-dimensional space [1] (for example, with $M=(\mathbb{R},+,0)$, the functions $\sin$ and $\cos$ are of rank 2, and $\exp$ has rank 1). The interest of this notion is when the monoid is NOT commutative, and then the Hankel matrix may not be symmetric. [1] For people who are familiar with these matters, this is Sweedler's dual of $k[M]$ for the comultiplication of the monoid.
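For readers who want to see the row rank = column rank equality concretely, here is a small self-contained sketch (my own, not from the thread): an exact Gaussian-elimination rank over the rationals, applied to a matrix and its transpose.

```python
from fractions import Fraction

def rank(M):
    # row-echelon rank via exact Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # bring pivot row up
        for i in range(r + 1, len(M)):    # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def transpose(M):
    return [list(col) for col in zip(*M)]

# row 2 = 2 * row 1, and column 3 = column 1 + column 2: both ranks are 2
A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]
assert rank(A) == rank(transpose(A)) == 2
```

Row reduction only ever finds the *row* rank; that the same number comes out for the transpose is exactly the theorem under discussion.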
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269991517066956, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/geometric-optics+reflection
# Tagged Questions

2answers 151 views ### Light Ray Reflection from concave mirror Suppose a ray of light hits a concave mirror and is parallel to the principal axis but far away from it, such that it doesn't follow the paraxial-ray approximation. Will it pass through the focus or between the focus ...

2answers 74 views ### All mirrors always shrink to 50% scale? I have this geometric optics exercise here, in which a man is looking at himself in a mirror. Determine the minimum height at which the bottom of the mirror must be placed so the man can see his ...

0answers 51 views ### Perimeter of Image of a Square A concave mirror of focal length = 10 cm is placed 15 cm from a square. The square lies on the principal axis, i.e., one of its sides coincides with the principal axis. What is the perimeter of the image? How ...

2answers 162 views ### Redirecting light beams from beam splitters I'm doing a project where I am taking a laser beam and sending it through a beam splitter. As I understand, approximately 50% of the light will pass through and 50% will be reflected. So this means ...

2answers 170 views ### How much of himself a person can see in the mirror? [closed] A man who is $6$ ft tall is standing in front of a plane mirror that is $2$ ft in length. The mirror is placed lengthwise with its bottom edge $4$ ft above the floor on a wall that is $5$ ft ...

3answers 426 views ### Virtual images in (plane) mirrors? The following image is taken from the teaching physics lecture Was man aus virtuellen Bildern lernen kann ("What one can learn from virtual images", in German): Now the cited paper claims that the left-hand side is the correct picture to ...

14answers 41k views ### A mirror flips left and right, but not up and down Why is it that when you look in the mirror left and right directions appear flipped, but not the up and down?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9330338835716248, "perplexity_flag": "middle"}
http://physics.aps.org/articles/print/v4/3
# Viewpoint: In tight spaces , Slow Positron Facility, Institute of Materials Structure Science, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Published January 10, 2011  |  Physics 4, 3 (2011)  |  DOI: 10.1103/Physics.4.3 Laser spectroscopy of positronium confined to nanoscale pores is a tool to probe the size of buried cavities and a step toward the long-term goal of a positronium BEC. Positronium ($Ps$) is the bound state of an electron and its antiparticle, the positron [1]. It is made purely of leptons and, as such, provides a unique opportunity for studying bound-state, two-body quantum electrodynamics and the effect of virtual annihilation that occurs between a particle and an antiparticle in a bound state. So far, experimentalists performing precision measurements [2] of the energy levels and the intrinsic lifetime of $Ps$ have made every effort to eliminate the effects of the environment so they could compare their results with theoretical predictions [3]. In a paper appearing in Physical Review Letters, David Cassidy and colleagues at the University of California, Riverside, and San Diego State University present a spectroscopic study of the $Ps$ atom in a completely different situation [4]. They measure the line shape of the Lyman-$α$ atomic transition (the $1S$-$2P$ transition) of $Ps$ confined within the roughly $5nm$ diameter pores of a porous silica film. This is the first time that spectroscopy has been done on atoms inside such small pores, and it shows that the $1S$-$2P$ transition energy of $Ps$ confined to the nanometer length scale is significantly shifted compared to “free” $Ps$. One of the long-term goals of studying $Ps$ in confinement is to produce a positronium Bose-Einstein condensate (BEC). With positronium’s small mass (and long de Broglie wavelength), it should be possible to form a $Ps$ BEC at higher temperatures than with more massive atoms like sodium or rubidium [5].
There are, of course, obvious difficulties in doing this, which stem from the complexities of preparing antiparticles and cooling them within their short lifetime of $140ns$. In this context, Cassidy et al.’s results are an important proof of principle that optical spectroscopy of the confined $Ps$ atoms can be performed before they annihilate. Silica is well known as a medium for forming $Ps$. Positrons shot into a silica target can, as they thermalize, interact with electrons in the solid to form $Ps$. If the target is porous silica, an aggregate of silica nanoparticles, or ultrafine silica powders, the $Ps$ atoms are likely to diffuse to a surface from which they can be spontaneously emitted into the pore region or free space between the nanoparticles. (Similarly, some of the positrons that diffuse to the surface may also form $Ps$ by picking up an electron.) Once within these “voids,” a $Ps$ atom will not return to the bulk. This is because the work function of $Ps$ for silica is negative, meaning that it costs more energy for the $Ps$ atom to be in the silica than in the vacuum [6]. Perhaps it is worth mentioning that particle-antiparticle annihilation is a relatively slow process compared to electron and phonon excitations and $Ps$ formation. $Ps$ is unique among the neutral atoms in that it spontaneously emits gamma rays when it annihilates. Analyzing these gamma rays provides a useful probe of a positronium atom’s interaction with other atoms, molecules, or solid surfaces before it annihilates and is an established technique for studying materials. As a result, there are a number of methods for preparing and measuring $Ps$ that consist essentially of detecting the annihilation gamma rays. But in order to perform accurate spectroscopy of $Ps$ in confinement, Cassidy et al. had to first create a very short and dense pulse of positrons to maximize the temporal and spatial overlap of the $Ps$ bunch and the laser beam used in the measurements.
Harnessing a series of existing techniques, they first moderated and trapped positrons that were produced by a sodium-$22$ radioactive source in a so-called Surko trap [7]. They then “dumped” the trapped positrons by applying a parabolic potential along the cavity of the trap to form a pulse with a temporal width of $15$–$20ns$, which was further compressed (with a “buncher”) into a subnanosecond pulse containing $\sim 2\times 10^{7}$ positrons. The pulse was also spatially compressed with a pulsed magnet [8]. Finally, they injected the positron pulse onto a porous silica target having an open pore structure. The lifetime of the ortho-$Ps$ ($Ps$ with the electron and positron spins parallel) in the pore is shorter than its intrinsic lifetime but still longer than about $50ns$, allowing some of the $Ps$ atoms to diffuse through the connected pores into the open space in front of the film surface. The team used a combination of different lasers and specialty optics to both produce light over a range of frequencies near the $Ps$ Lyman-$α$ transition and to photoionize the $2P$ state of $Ps$. They irradiated the target in two configurations: one where the target surface was perpendicular to the positron beam but parallel to the laser light, and a second where the surface was rotated by $\sim 45°$ with respect to the positron beam and the light axes (see schematics, Fig. 1). In the former case, the light did not touch the surface and thus only the $Ps$ coming out of the surface (vacuum $Ps$) was excited. In the latter, the laser entered the silica and the $Ps$ could be excited while inside the film pore. To make their principal measurement, the team used a technique they developed in earlier work [9], called single-shot positron annihilation lifetime spectroscopy, which yields the excitation spectrum of $Ps$. When only those $Ps$ atoms that were in the vacuum were excited, they observed a Doppler-broadened line shape centered at the $1S$-$2P$ wavelength $\lambda_0$ (top plot, Fig. 1).
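As a back-of-envelope check (my own, not from the article): because positronium's reduced mass is $m_e/2$, its level energies are half hydrogen's, which puts the $1S$-$2P$ (Lyman-α) transition near 243 nm, in the ultraviolet.

```python
# Sketch (not from the article): estimate the Ps Lyman-alpha wavelength
RYD_EV = 13.605693      # hydrogen Rydberg energy in eV (infinite-mass nucleus)
HC_EV_NM = 1239.84193   # h*c in eV*nm

ryd_ps = RYD_EV / 2                  # Ps reduced mass m_e/2 halves the Rydberg
e_lyman_alpha = ryd_ps * (1 - 1/4)   # 1S -> 2P energy: R_Ps * (1/1^2 - 1/2^2)
lam0 = HC_EV_NM / e_lyman_alpha      # ~243 nm
```

A transition energy of about 5.1 eV and a wavelength near 243 nm is why UV lasers are needed for the excitation step described here.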
The broadening occurs because the $Ps$ atoms are moving with a distribution of speeds. When the target was rotated so that the light passed through the film, they saw two components in the line shape, whose centers were above and below $\lambda_0$ (bottom plot, Fig. 1). The longer wavelength component corresponds to free $Ps$ in vacuum: the Doppler-broadened line shape is shifted to lower frequencies because the $Ps$ emitted from the tilted target surface is, on average, moving towards the laser. The shorter wavelength component is due to the transition of $Ps$ atoms inside the pore. The width of this component is narrower than that of the vacuum component. Cassidy et al. see an energy shift of about $1.3meV$ for the confined $Ps$, which they attribute to energy level shifts of both the $1S$ and the $2P$ states. A simple model for the center-of-mass energy level shift that would correspond to changes in the size of the $1S$ and $2P$ $Ps$ atoms in confinement would give $\Delta E_0=3.6meV$, almost three times larger than the observed value. By accounting for the repulsive potential the cavity surface exerts on the $Ps$ wave function [10], they show the shift they observe corresponds to a cavity diameter of $\sim 5nm$, consistent with the nominal size of the pore of the sample. The transition width for positronium in confinement is, experimentally, narrower than that of vacuum, but still broader than expected [11]. That said, the line shapes for “free” $Ps$ are already extremely broad because of positronium’s low mass; thus any methods to further narrow them would open up the possibility of, for example, using positronium atoms for gravity interferometry experiments [12]. Since Cassidy et al. attribute the residual broadening they see to disorder in the sample, it may be possible to achieve narrower $Ps$ transitions by using a sample with less disorder. The fact that Cassidy et al.
are able to perform laser spectroscopy of confined positronium before the atoms annihilate provides a new method for the determination of the size of pores in materials, in addition to the already established uses of $Ps$ [13]. It also paves the way for preparing a $Ps$ BEC. Though challenges remain, including finding a way to increase the spatial density of the injected positrons and to effectively cool the $Ps$ atoms, a $Ps$ BEC could be the ultimate source for precision measurements. Though it is an even longer-term goal, there is particular interest in observing stimulated annihilation from a positronium BEC as a first step toward making a gamma-ray laser. ### References 1. A. Rich, Rev. Mod. Phys. 53, 127 (1981). 2. For example, M. S. Fee et al., Phys. Rev. A 48, 192 (1993); R. S. Vallery, P. W. Zitzewitz, and D. W. Gidley, Phys. Rev. Lett. 90, 203402 (2003); O. Jinnouchi, S. Asai, and T. Kobayashi, Phys. Lett. B 572, 117 (2003). 3. S. Adkins et al., Ann. Phys. (N.Y.) 295, 136 (2002), and references therein. 4. D. B. Cassidy, M. W. J. Bromley, L. C. Cota, T. H. Hisakado, H. W. K. Tom, and A. P. Mills, Phys. Rev. Lett. 106, 023401 (2011). 5. D. B. Cassidy and J. A. Golovchenko, in New Directions in Antimatter Chemistry and Physics, edited by C. M. Surko and F. A. Gianturco (Kluwer Academic, Dordrecht, 2001); P. M. Platzman and A. P. Mills, Jr., Phys. Rev. B 49, 454 (1994). 6. Y. Nagashima et al., Phys. Rev. B 58, 12676 (1998). 7. C. M. Surko and R. G. Greaves, Phys. Plasma 11, 2333 (2004). 8. D. B. Cassidy et al., Rev. Sci. Instrum. 77, 073106 (2006). 9. D. B. Cassidy et al., Appl. Phys. Lett. 88, 194105 (2006). 10. J. Mitroy and G. G. Ryzhikh, J. Phys. B 32, 2831 (1999); V. A. Dzuba et al., Phys. Rev. A 60, 3641 (1999). 11. R. H. Dicke, Phys. Rev. 89, 472 (1953); T. Ido and H. Katori, Phys. Rev. Lett. 91, 053001 (2003). 12. A. Kellerbauer et al., Nucl. Instrum. Methods B 266, 351 (2008); A. P. Mills, Jr., and M. Leventhal, Nucl. Instrum. Meth.
B 192, 102 (2002). 13. S. J. Tao, J. Chem. Phys. 56, 5499 (1971); M. Eldrup et al., Chem. Phys. 63, 51 (1981); M. Hasegawa et al., Nucl. Instrum. Meth. B 91, 263 (1994). ### Highlighted article #### Cavity Induced Shift and Narrowing of the Positronium Lyman-α Transition D. B. Cassidy, M. W. J. Bromley, L. C. Cota, T. H. Hisakado, H. W. K. Tom, and A. P. Mills, Jr. Published January 10, 2011 | PDF (free) ### Figures ISSN 1943-2879. Use of the American Physical Society websites and journals implies that the user has read and agrees to our Terms and Conditions and any applicable Subscription Agreement.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 75, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8960632085800171, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/121371-measuring-model-uncertainty.html
# Thread:

1. ## Measuring model uncertainty

I have a model with a number of parameters. I can calculate the posterior distribution of the parameter vector. This, in part, means that I can calculate the expected value and the variance of each parameter. Question. How do I calculate model uncertainty as a single quantity? While I have some ad hoc ideas, I'd like to take a principled approach. One idea is to look at the variance of the likelihood: $Var( L(\theta) )$? I'm not sure how to approach this, other than numerically. Another idea is to compute something like this: $\sum_i \left | \frac {\partial L} {\partial \theta_i} \right | \sigma_i$ Any and all help appreciated.

2. hi, I think you are overcomplicating this: you already know the variances of each parameter. If the parameters are independent, then the system uncertainty would be the sum of the variances of each parameter. If they are correlated, you need to take the correlation into account too. Hope this will help. Merry Christmas. Sincerely Hametceq

3. I thought of the sum of the variances at first. But I do not understand why that should be the correct approach. Why not the sum of the standard deviations, for example? Also, I do think that the likelihood should figure somehow into the calculation, since different parameters might have different importance in the model. An important parameter having a low variance and an unimportant one having a high variance should be better than the important parameter having a high variance and an unimportant one having a low variance. As an extreme example, consider a parameter that has no influence on the model at all. Then its variance (or standard deviation) should not be figured into the sum. That's why I thought of weighting the standard deviations by the derivative of the likelihood with respect to the parameters. In the case of the parameter that has no influence on the model, the derivative would be zero.

4.
I see what the question is now. OK, do the following.

Strategy 1: STEP 1: find the covariance matrix of all the parameters. STEP 2: find the generalized variance (the determinant of the covariance matrix).

Strategy 2 (detailed analysis; I will do this for my analysis): STEP 1: find the covariance matrix of all the parameters. STEP 2: find the eigenvalues and eigenvectors of that matrix (you cannot decide by yourself which parameter is more or less important, so multivariate analysis should be applied). STEP 3: eliminate the less important eigenvalues; correspondingly you will reduce the dimension. STEP 4: find the generalized variance of the covariance matrix restricted to the important components.

Merry Christmas. Sincerely Hametceq

p.s. Q: Why not the sum of the standard deviations, for example? Ans: Not for your case specifically, but generally because variances are additive for independent r.v.s, while SDs are not.
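A tiny numerical sketch of both strategies (illustrative numbers of my own, not from the thread), for a two-parameter posterior covariance:

```python
import math

# toy posterior covariance for two parameters (illustrative numbers)
v1, v2, c = 4.0, 1.0, 0.8   # Var(theta1), Var(theta2), Cov(theta1, theta2)

# Strategy 1: generalized variance = determinant of the covariance matrix
gen_var = v1 * v2 - c * c   # 2x2 determinant

# Strategy 2: eigenvalues of the symmetric 2x2 covariance matrix,
# from the trace and determinant via the characteristic quadratic
trace = v1 + v2
disc = math.sqrt(trace**2 - 4 * gen_var)
lam_big, lam_small = (trace + disc) / 2, (trace - disc) / 2

# sanity check: the product of the eigenvalues recovers the determinant
assert abs(lam_big * lam_small - gen_var) < 1e-9
```

If `lam_small` is judged negligible, Strategy 2 would drop it and report only the dominant component's variance, which is the dimension-reduction step described above.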
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8870530724525452, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51415?sort=oldest
Is it possible to show that an infinite set has a countable (infinite) subset, without using the Axiom of Choice?

Let X be an infinite set. Is it possible to show the existence of a countably infinite subset of X without using the Axiom of Choice? - You may also be interested in this question: mathoverflow.net/questions/51171/… – Andres Caicedo Jan 7 2011 at 18:01

3 Answers

Short answer: No. By countably infinite subset you mean, I guess, that there is a 1-1 map from the natural numbers into the set. If ZF is consistent, then it is consistent to have an amorphous set, i.e., a set whose subsets are all finite or have a finite complement. If you have an embedding of the natural numbers into a set, the image of the even numbers is infinite and has an infinite complement. So the set cannot be amorphous. - How do you show that nice consistency result? I used to know, but it's escaping me now — something with Fraenkel–Mostowski permutation models, is it? – Peter LeFanu Lumsdaine Jan 7 2011 at 17:46 @Peter: Yes, the set $A$ in the basic Fraenkel model is amorphous, see A. Levy, "The independence of various definitions of finiteness", Fund. Math. 46 (1958), 1–13. But you can also see this directly in the original Cohen model for not-AC. – Andres Caicedo Jan 7 2011 at 18:16

No. A set which has a countably infinite subset is called Dedekind-infinite. Clearly every Dedekind-infinite set is infinite; the statement that every infinite set is Dedekind-infinite is not provable in ZF (assuming ZF is consistent, of course). You don't need full AC, though. In fact, the equivalence isn't even as strong as countable choice.
- It’s perhaps worth noting that this may not be the most familiar definition of Dedekind-infinite (“X is D-infinite if it is bijective to some proper subset of itself”), but that these two definitions are equivalent in ZF. – Peter LeFanu Lumsdaine Jan 7 2011 at 17:45 @Peter: If $X$ is amorphous (as defined by Stefan in his answer) is it not Dedekind-infinite with some bijection to a cofinite subset? – Asaf Karagila Jan 7 2011 at 18:09 1 @Asaf: Amorphous sets are Dedekind finite. The point is that if you remove a point $a$ from a set $X$, and the result has the same size as $X$, you can iterate, and get an injection of ${\mathbb N}$: Keep track of the orbit of $a$ as you iterate the bijection `$X\to X\setminus\{a\}$`. – Andres Caicedo Jan 7 2011 at 18:19 Andres: So simple! Many thanks :-) – Asaf Karagila Jan 7 2011 at 19:39 The following (nicely written) paper might be relevant: http://arxiv.org/abs/math.LO/0605779 Division by three Peter G. Doyle, John Horton Conway We prove without appeal to the Axiom of Choice that for any sets A and B, if there is a one-to-one correspondence between 3 cross A and 3 cross B then there is a one-to-one correspondence between A and B. The first such proof, due to Lindenbaum, was announced by Lindenbaum and Tarski in 1926, and subsequently `lost'; Tarski published an alternative proof in 1949. We argue that the proof presented here follows Lindenbaum's original. -
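The iteration argument sketched in Andres Caicedo's comment can be spelled out; it is standard and uses no choice:

```latex
Let $f\colon X\to X\setminus\{a\}$ be a bijection, witnessing that $X$ is
Dedekind-infinite in the ``proper subset'' sense. Define
$g\colon\mathbb{N}\to X$ by $g(n)=f^{n}(a)$, the $n$-th iterate of $f$
applied to $a$. If $g(m)=g(n)$ with $m<n$, then applying the injectivity
of $f$ a total of $m$ times gives $a=f^{\,n-m}(a)\in X\setminus\{a\}$,
a contradiction. Hence $g$ is injective, and $g[\mathbb{N}]$ is a
countably infinite subset of $X$; no appeal to choice is needed, because
$f$ itself does all the selecting.
```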
http://www.physicsforums.com/showthread.php?t=376677
Physics Forums — Vector spaces

Hi. Please, anyone, help me with vector spaces and the way to prove the axioms, like proving that (-1)u = -u in a vector space.

Ok, here goes: -u is the unique element such that u + (-u) = 0 = (-u) + u, so all we have to do is show that (-1)u has this property. That's not too bad: u + (-1)u = (1 + -1)u = 0u = 0. (The other case is identical.) The first equality follows from the distributivity of scalar multiplication. The third equality follows from this computation: 0u = (0 + 0)u = 0u + 0u, so adding -0u to both sides, we get 0u = 0.

Just a word about the jargon: axioms are rules that are given and don't need proving. Theorems are what you prove from the axioms. For example, rochfor1's proof uses axioms such as the distributivity of scalar multiplication to prove the theorem that (-1)u = -u.

Thanks, now I know that I don't need to prove axioms; they are given. I'm still worried about this vector thing, and I'll try to prove that the negative of a vector in V is unique. Thanks all :)

Mentor: I recommend that you start by proving that x+y=x+z implies y=z. The uniqueness of the additive inverse follows from that.

I would also like to make sure you understand the difference between $(-1)\vec{u}$ and $-\vec{u}$ and why we need to prove they are equal. $(-1)\vec{u}$ is the additive inverse of the multiplicative identity in the field of scalars (the real numbers, if you like) multiplied by the vector $\vec{u}$. $-\vec{u}$ is the additive inverse of the vector $\vec{u}$. It is not at all obvious that those two things have to be the same!
To show that they are the same you use the basic properties (axioms) of vector spaces: specifically that -1 and 1 are additive inverses in the field of scalars, that $1\vec{u}= \vec{u}$, that $0\vec{u}= \vec{0}$, and the distributive law $(a+ b)\vec{u}= a\vec{u}+ b\vec{u}$. $(1+ -1)\vec{u}= 0\vec{u}= \vec{0}$ and $(1+ -1)\vec{u}= 1\vec{u}+ (-1)\vec{u}= \vec{u}+ (-1)\vec{u}$. Since those are both equal to $(1+ -1)\vec{u}$ they are equal to each other, and $\vec{u}+ (-1)\vec{u}= \vec{0}$, which is precisely the definition of "additive inverse": $(-1)\vec{u}$ is equal to the additive inverse of $\vec{u}$.

Mentor:

Quote by HallsofIvy: $0\vec{u}= \vec{0}$

This step requires proof as well. We have
$$0\vec u=(0+0)\vec u=0\vec u+0\vec u$$
which implies that
$$0\vec u+\vec 0=0\vec u+0\vec u$$
and now the result $0\vec u=\vec 0$ follows from the theorem I mentioned in #6.

If you are required to prove that -(-u)=-u, can you say that proving that condition is equivalent to proving that -(-u)+(-u)=0, since you would have added a negative of the vector -u and worked it through until you arrive there?

Mentor: There are too many minus signs in what you wrote. You probably want to prove that $-(-\vec u)=\vec u$, i.e. that the additive inverse of $-\vec u$ is $\vec u$. The axiom $-\vec u+\vec u=\vec 0$ says that $\vec u$ is an additive inverse of $-\vec u$, and if you have already proved that the additive inverse is unique, you can safely conclude that $\vec u$ is the additive inverse of $-\vec u$. As I said before, the uniqueness follows from #6. Have you proved that one yet?
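The cancellation lemma the Mentor keeps invoking (#6) can be written out in full:

```latex
Suppose $\vec x+\vec y=\vec x+\vec z$. Adding $-\vec x$ on the left and
using associativity,
\[
\vec y = \vec 0+\vec y = (-\vec x+\vec x)+\vec y = -\vec x+(\vec x+\vec y)
       = -\vec x+(\vec x+\vec z) = (-\vec x+\vec x)+\vec z
       = \vec 0+\vec z = \vec z .
\]
Uniqueness of the additive inverse follows: if
$\vec u+\vec v=\vec 0=\vec u+\vec w$, then $\vec v=\vec w$.
```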
http://medlibrary.org/medwiki/De_Broglie%E2%80%93Bohm_theory
# De Broglie–Bohm theory

The de Broglie–Bohm theory, also called the pilot-wave theory, Bohmian mechanics, and the causal interpretation, is an interpretation of quantum theory. In addition to a wavefunction on the space of all possible configurations, it also includes an actual configuration, even when unobserved. The evolution over time of the configuration (that is, of the positions of all particles or the configuration of all fields) is defined by the wave function via a guiding equation. The evolution of the wavefunction over time is given by Schrödinger's equation. The de Broglie–Bohm theory is explicitly non-local: The velocity of any one particle depends on the value of the guiding equation, which depends on the whole configuration of the universe. Because the known laws of physics are all local, and because non-local interactions combined with relativity lead to causal paradoxes, many physicists find this unacceptable. This theory is deterministic. Most (but not all) variants of this theory that support special relativity require a preferred frame. Variants which include spin and curved spaces are known. It can be modified to include quantum field theory. Bell's theorem was inspired by Bell's discovery of the work of David Bohm and his subsequent wondering if the obvious non-locality of the theory could be eliminated.
This theory results in a measurement formalism, analogous to thermodynamics for classical mechanics, which yields the standard quantum formalism generally associated with the Copenhagen interpretation. The measurement problem is resolved by this theory since the outcome of an experiment is registered by the configuration of the particles of the experimental apparatus after the experiment is completed. The familiar wavefunction collapse of standard quantum mechanics emerges from an analysis of subsystems and the quantum equilibrium hypothesis. The theory has a number of equivalent mathematical formulations and has been presented under a number of different names. The de Broglie wave has a macroscopic analogue termed the Faraday wave.[1]

## Overview

De Broglie–Bohm theory is based on the following postulates:

• There is a configuration $q$ of the universe, described by coordinates $q^k$, which is an element of the configuration space $Q$. The configuration space is different for different versions of pilot wave theory. For example, this may be the space of positions $\mathbf{Q}_k$ of $N$ particles, or, in case of field theory, the space of field configurations $\phi(x)$. The configuration evolves (for spin=0) according to the guiding equation

$m_k\frac{d q^k}{dt} (t) = \hbar \nabla_k \operatorname{Im} \ln \psi(q,t) = \hbar \operatorname{Im}\left(\frac{\nabla_k \psi}{\psi} \right) (q, t)$.

Here, $\psi(q,t)$ is the standard complex-valued wavefunction known from quantum theory, which evolves according to Schrödinger's equation

$i\hbar\frac{\partial}{\partial t}\psi(q,t)=-\sum_{i=1}^{N}\frac{\hbar^2}{2m_i}\nabla_i^2\psi(q,t) + V(q)\psi(q,t)$

This already completes the specification of the theory for any quantum theory with Hamilton operator of type $H=\sum \frac{1}{2m_i}\hat{p}_i^2 + V(\hat{q})$.

• The configuration is distributed according to $|\psi(q,t)|^2$ at some moment of time $t$, and this consequently holds for all times. Such a state is named quantum equilibrium.
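Not part of the theory's presentation, but the guiding equation can be integrated directly once $\psi$ is known in closed form. A minimal Python sketch for a single spinless particle in one dimension, assuming the standard free Gaussian packet $\psi(x,t)\propto\exp\!\big(-x^2/(4\sigma_0 s_t)\big)$ with $s_t=\sigma_0+i\hbar t/(2m\sigma_0)$, so that $\psi'/\psi=-x/(2\sigma_0 s_t)$:

```python
# Sketch: integrate the guiding equation dx/dt = (hbar/m) * Im(psi'/psi)
# for a free 1D Gaussian packet. The closed form for psi is an assumption
# of this example, not one of the theory's postulates.

HBAR = 1.0
M = 1.0
SIGMA0 = 1.0  # initial packet width

def velocity(x: float, t: float) -> float:
    """Bohmian velocity field v = (hbar/m) * Im(psi'/psi).

    For psi(x,t) ~ exp(-x**2 / (4*SIGMA0*s_t)) with
    s_t = SIGMA0 + 1j*HBAR*t/(2*M*SIGMA0), the logarithmic
    derivative is psi'/psi = -x / (2*SIGMA0*s_t).
    """
    s_t = SIGMA0 + 1j * HBAR * t / (2 * M * SIGMA0)
    return (HBAR / M) * (-x / (2 * SIGMA0 * s_t)).imag

def trajectory(x0: float, t_final: float, dt: float = 1e-4) -> float:
    """Euler-integrate the guiding equation from (x0, t=0) to t_final."""
    x, t = x0, 0.0
    while t < t_final:
        x += velocity(x, t) * dt
        t += dt
    return x
```

For this packet the exact trajectories simply scale with the spreading width, $x(t)=x_0\,\sigma(t)/\sigma_0$ with $\sigma(t)^2=\sigma_0^2\big(1+\hbar^2 t^2/(4m^2\sigma_0^4)\big)$, so `trajectory(1.0, 2.0)` lands near $\sqrt{2}\approx 1.414$ in the units above.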
With quantum equilibrium, this theory agrees with the results of standard quantum mechanics.

### Two-slit experiment

*Figure: The Bohmian trajectories for an electron going through the two-slit experiment. A similar pattern was also extrapolated from weak measurements of single photons.[2]*

The double-slit experiment is an illustration of wave-particle duality. In it, a beam of particles (such as electrons) travels through a barrier with two slits cut in it. If one puts a detector screen on the other side, the pattern of detected particles shows interference fringes characteristic of waves; however, the detector screen responds to particles. The system exhibits behaviour of both waves (interference patterns) and particles (dots on the screen). If we modify this experiment so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. We can also arrange to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When we do that, the interference pattern disappears. The Copenhagen interpretation states that the particles are not localised in space until they are detected, so that, if there is no detector on the slits, there is no matter of fact about which slit the particle has passed through. If one slit has a detector on it, then the wavefunction collapses due to that detection. In de Broglie–Bohm theory, the wavefunction travels through both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes are determined by the initial position of the particle. Such an initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection.
The wave function interferes with itself and guides the particles in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen. To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space.

## The Theory

### The ontology

The ontology of de Broglie-Bohm theory consists of a configuration $q(t)\in Q$ of the universe and a pilot wave $\psi(q,t)\in\mathbb{C}$. The configuration space $Q$ can be chosen differently, as in classical mechanics and standard quantum mechanics. Thus, the ontology of pilot wave theory contains the trajectory $q(t)\in Q$ known from classical mechanics, as well as the wave function $\psi(q,t)\in\mathbb{C}$ of quantum theory. So, at every moment of time there exists not only a wave function, but also a well-defined configuration of the whole universe. The correspondence to our experiences is made by the identification of the configuration of our brain with some part of the configuration of the whole universe $q(t)\in Q$, as in classical mechanics. While the ontology of classical mechanics is part of the ontology of de Broglie–Bohm theory, the dynamics are very different. In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space.
In de Broglie–Bohm theory, the velocities of the particles are given by the wavefunction, which exists in a 3N-dimensional configuration space, where N corresponds to the number of particles in the system;[3] Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction.[4] Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are not necessarily localized at the position of the particle in de Broglie-Bohm theory.[5] The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrodinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles".[6] P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory".[7] It should be noted however that Holland has later called this a merely apparent lack of back reaction, due to the incompleteness of the description.[8] In what follows below, we will give the setup for one particle moving in $\mathbb{R}^3$ followed by the setup for $N$ particles moving in 3 dimensions. In the first instance, configuration space and real space are the same while in the second, real space is still $\mathbb{R}^3$, but configuration space becomes $\mathbb{R}^{3N}$. While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space which is how particles are entangled with each other in this theory. 
Extensions to this theory include spin and more complicated configuration spaces. We use variations of $\mathbf{Q}$ for particle positions while $\psi$ represents the complex-valued wavefunction on configuration space.

### Guiding equation

For a spinless single particle moving in $\mathbb{R}^3$, the particle's velocity is given by

$\frac{d \mathbf{Q}}{dt} (t) = \frac{\hbar}{m} \operatorname{Im} \left(\frac{\nabla \psi}{\psi} \right) (\mathbf{Q}, t)$.

For many particles, we label them as $\mathbf{Q}_k$ for the $k$th particle, and their velocities are given by

$\frac{d \mathbf{Q}_k}{dt} (t) = \frac{\hbar}{m_k} \operatorname{Im} \left(\frac{\nabla_k \psi}{\psi} \right) (\mathbf{Q}_1, \mathbf{Q}_2, \ldots, \mathbf{Q}_N, t)$.

The main fact to notice is that this velocity field depends on the actual positions of all of the $N$ particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe.

### Schrödinger's equation

The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on $\mathbb{R}^3$. The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function $V$ on $\mathbb{R}^3$:

$i\hbar\frac{\partial}{\partial t}\psi=-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi$

For many particles, the equation is the same except that $\psi$ and $V$ are now on configuration space, $\mathbb{R}^{3N}$:

$i\hbar\frac{\partial}{\partial t}\psi=-\sum_{k=1}^{N}\frac{\hbar^2}{2m_k}\nabla_k^2\psi + V\psi$

This is the same wavefunction as in conventional quantum mechanics.

### Relation to the Born Rule

In Bohm's original papers [Bohm 1952], he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by $|\psi|^2$.
And that distribution is guaranteed to be true for all time by the guiding equation if the initial distribution of the particles satisfies $|\psi|^2$. For a given experiment, we can postulate this as being true and verify experimentally that it does indeed hold true, as it does. But, as argued in Dürr et al.,[9] one needs to argue that this distribution for subsystems is typical. They argue that $|\psi|^2$ by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. They then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., $|\psi|^2$) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born rule behavior is typical. The situation is thus analogous to the situation in classical statistical physics. A low entropy initial condition will, with overwhelmingly high probability, evolve into a higher entropy state: behavior consistent with the second law of thermodynamics is typical. There are, of course, anomalous initial conditions which would give rise to violations of the second law. However, absent some very detailed evidence supporting the actual realization of one of those special initial conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy. Similarly, in the de Broglie–Bohm theory, there are anomalous initial conditions which would produce measurement statistics in violation of the Born rule (i.e., in conflict with the predictions of standard quantum theory). But the typicality theorem shows that, absent some particular reason to believe one of those special initial conditions was in fact realized, Born rule behavior is what one should expect. 
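The equivariance claim above can be checked numerically in a toy setting. A sketch (assumptions: free 1D Gaussian packet with $\hbar=m=\sigma_0=1$ and the analytic logarithmic derivative $\psi'/\psi=-x/(2s_t)$, $s_t=1+it/2$ — none of this is from the article): sample starting points from $|\psi(\cdot,0)|^2$, transport each along its Bohmian trajectory, and compare the final spread with $|\psi(\cdot,T)|^2$:

```python
import numpy as np

# Toy check of equivariance for a free 1D Gaussian packet (hbar = m = sigma0 = 1):
# an ensemble started in |psi(x,0)|^2 and moved by the guiding equation
# should end up distributed as |psi(x,T)|^2.

def velocity(x, t):
    # v = Im(psi'/psi) with psi'/psi = -x / (2*s_t), s_t = 1 + 1j*t/2
    s_t = 1.0 + 0.5j * t
    return np.imag(-x / (2.0 * s_t))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=20_000)  # |psi(x,0)|^2 is N(0, sigma0^2)

T, dt, t = 2.0, 1e-3, 0.0
while t < T:
    x = x + velocity(x, t) * dt
    t += dt

sigma_T = np.sqrt(1.0 + (T / 2.0) ** 2)  # width of |psi(x,T)|^2
print(x.std(), sigma_T)                  # the two should agree to ~1%
```

The sample standard deviation at $t=T$ tracks $\sigma(T)=\sigma_0\sqrt{1+\hbar^2T^2/(4m^2\sigma_0^4)}$, as equivariance requires for this packet.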
It is in that qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate. It can also be shown that a distribution of particles that is not distributed according to the Born rule (that is, a distribution 'out of quantum equilibrium') and evolving under the de Broglie-Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as $|\psi|^2$. See, for example, Ref. [10]. A video of the electron density in a 2D box evolving under this process is available online.

### The conditional wave function of a subsystem

In the formulation of the de Broglie–Bohm theory, there is only a wave function for the entire universe (which always evolves by the Schrödinger equation). However, once the theory is formulated, it is convenient to introduce a notion of wave function also for subsystems of the universe. Let us write the wave function of the universe as $\psi(t,q^{\mathrm I},q^{\mathrm{II}})$, where $q^{\mathrm I}$ denotes the configuration variables associated to some subsystem (I) of the universe and $q^{\mathrm{II}}$ denotes the remaining configuration variables. Denote, respectively, by $Q^{\mathrm I}(t)$ and by $Q^{\mathrm{II}}(t)$ the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wave function of subsystem (I) is defined by:

$\psi^{\mathrm I}(t,q^{\mathrm I})=\psi(t,q^{\mathrm I},Q^{\mathrm{II}}(t)). \,$

It follows immediately from the fact that $Q(t)=(Q^{\mathrm I}(t),Q^{\mathrm{II}}(t))$ satisfies the guiding equation that also the configuration $Q^{\mathrm I}(t)$ satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wave function $\psi$ replaced with the conditional wave function $\psi^{\mathrm I}$.
Also, the fact that $Q(t)$ is random with probability density given by the square modulus of $\psi(t,\cdot)$ implies that the conditional probability density of $Q^{\mathrm I}(t)$ given $Q^{\mathrm{II}}(t)$ is given by the square modulus of the (normalized) conditional wave function $\psi^{\mathrm I}(t,\cdot)$ (in the terminology of Dürr et al.[11] this fact is called the fundamental conditional probability formula). Unlike the universal wave function, the conditional wave function of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wave function factors as: $\psi(t,q^{\mathrm I},q^{\mathrm{II}})=\psi^{\mathrm I}(t,q^{\mathrm I})\psi^{\mathrm{II}}(t,q^{\mathrm{II}}) \,$ then the conditional wave function of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm I}$ (this is what Standard Quantum Theory would regard as the wave function of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II) then $\psi^{\mathrm I}$ does satisfy a Schrödinger equation. More generally, assume that the universal wave function $\psi$ can be written in the form: $\psi(t,q^{\mathrm I},q^{\mathrm{II}})=\psi^{\mathrm I}(t,q^{\mathrm I})\psi^{\mathrm{II}}(t,q^{\mathrm{II}})+\phi(t,q^{\mathrm I},q^{\mathrm{II}}), \,$ where $\phi$ solves Schrödinger equation and $\phi(t,q^{\mathrm I},Q^{\mathrm{II}}(t))=0$ for all $t$ and $q^{\mathrm I}$. Then, again, the conditional wave function of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm I}$ and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), $\psi^{\mathrm I}$ satisfies a Schrödinger equation. 
The fact that the conditional wave function of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of Standard Quantum Theory emerges from the Bohmian formalism when one considers conditional wave functions of subsystems.

## Extensions

### Spin

To incorporate spin, the wavefunction becomes complex-vector valued. The value space is called spin space; for a spin-½ particle, spin space can be taken to be $\mathbb{C}^2$. The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers. The Schrödinger equation is modified by adding a Pauli spin term.

$\frac{d \mathbf{Q}_k}{dt} (t) = \frac{\hbar}{m_k} \operatorname{Im} \left(\frac{(\psi,D_k \psi)}{(\psi,\psi)} \right) (\mathbf{Q}_1, \mathbf{Q}_2, \ldots, \mathbf{Q}_N, t)$

$i\hbar\frac{\partial}{\partial t}\psi = \left(-\sum_{k=1}^{N}\frac{\hbar^2}{2m_k}D_k^2 + V - \sum_{k=1}^{N} \mu_k \mathbf{S}_{k}/{S}_{k} \cdot \mathbf{B}(\mathbf{q}_k) \right) \psi$

where $\mu_k$ is the magnetic moment of the $k$th particle, $\mathbf{S}_{k}$ is the appropriate spin operator acting in the $k$th particle's spin space, ${S}_{k}$ is the spin of the particle (${S}_{k} = 1/2$ for the electron), $D_k=\nabla_k-\frac{ie_k}{c\hbar}\mathbf{A}(\mathbf{q}_k)$, $\mathbf{B}$ and $\mathbf{A}$ are, respectively, the magnetic field and the vector potential in $\mathbb{R}^{3}$ (all other functions are fully on configuration space), $e_k$ is the charge of the $k$th particle, and $(\cdot,\cdot)$ is the inner product in spin space $\mathbb{C}^d$,

$(\phi,\psi) = \sum_{s=1}^d \phi_s^* \psi_s.$

For an example of a spin space, a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form $\psi: \mathbb{R}^{9}\times \mathbb{R} \to \mathbb{C}^{2}\otimes \mathbb{C}^{2} \otimes \mathbb{C}^{3}$. That is, its spin space is a 12-dimensional space.
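The $2\cdot 2\cdot 3=12$ bookkeeping can be sanity-checked with NumPy's Kronecker product (purely illustrative):

```python
import numpy as np

# Operators on C^2 (x) C^2 (x) C^3 act on a 12-dimensional joint spin space.
id_half = np.eye(2)  # identity on a spin-1/2 space C^2
id_one = np.eye(3)   # identity on a spin-1 space C^3
joint = np.kron(np.kron(id_half, id_half), id_one)
print(joint.shape)  # (12, 12)
```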
### Curved space To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of Schrödinger's equation. For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space and the potential in Schrödinger's equation becomes a local self-adjoint operator acting on that space.[12] ### Quantum field theory In Dürr et al.,[13][14] the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space. Hrvoje Nikolić [15] introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place. ### Exploiting nonlocality Antony Valentini[16] has extended the de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. 
This violates orthodox quantum theory but it has the virtue that it makes the parallel universes of the chaotic inflation theory observable in principle. Unlike de Broglie–Bohm theory, Valentini's theory has the wavefunction evolution also depend on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary. ### Relativity Pilot wave theory is explicitly nonlocal. As a consequence, most relativistic variants of pilot wave theory need a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depends on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time. However, this way (which explicitly breaks the relativistic covariance) is not the only way. It is also possible that a rule which defines instantaneousness is contingent, by emerging dynamically from relativistic covariant laws combined with particular initial conditions. In this way, the need for a preferred foliation can be avoided and relativistic covariance can be saved. There has been work in developing relativistic versions of de Broglie–Bohm theory. 
See Bohm and Hiley: The Undivided Universe, and [3], [4], and references therein. Another approach is given in the work of Dürr et al.[17] in which they use Bohm-Dirac models and a Lorentz-invariant foliation of space-time. Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically.[18] In 1996, Partha Ghose had presented a relativistic quantum mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons).[18] In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either the Bohmian mechanics or the Nelson stochastic mechanics.[19] The same year, Ghose worked out Bohmian photon trajectories for specific cases.[20] Subsequent weak measurement experiments yielded trajectories which coincide with the predicted trajectories.[21][22] Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wave functions.[23] He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory,[15][24][25] in which $|\psi|^2$ is no longer a probability density in space, but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time. His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings.[26] ## Results Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of the standard predictions of quantum mechanics in so far as the latter has predictions. 
However, while standard quantum mechanics is limited to discussing the results of 'measurements', de Broglie–Bohm theory is a theory which governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell[27]). The basis for agreement with standard quantum mechanics is that the particles are distributed according to $|\psi|^2$. This is a statement of observer ignorance, but it can be proven[9] that for a universe governed by this theory, this will typically be the case. There is apparent collapse of the wave function governing subsystems of the universe, but there is no collapse of the universal wavefunction. ### Measuring spin and polarization According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or -1, meaning that it is aligned the opposite way. For an ensemble of particles, if we expect the particles to be aligned, the results are all 1. If we expect them to be aligned oppositely, the results are all -1. For other alignments, we expect some results to be 1 and some to be -1 with a probability that depends on the expected alignment. For a full explanation of this, see the Stern–Gerlach experiment. In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible[28] to modify the setup so that the trajectory of the particle is unaffected, but that the particle registers as spin up with one setup and as spin down with the other. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wave function of the particle in relation to the particular device being used to measure the spin.
This is an illustration of what is sometimes referred to as contextuality, and is related to naive realism about operators.[29] ### Measurements, the quantum formalism, and observer independence De Broglie–Bohm theory gives the same results as quantum mechanics. It treats the wavefunction as a fundamental object in the theory, since the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories. This section outlines the ideas as to how the standard quantum formalism arises out of de Broglie–Bohm theory. References include Bohm's original 1952 paper and Dürr et al.[9] #### Collapse of the wavefunction De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe. In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with Schrödinger's equation and the guiding equation, with an initial $|\psi|^2$ distribution for the particles in the system (see the section on the conditional wave function of a subsystem for details). It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states.
But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding to the measurement results. Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by Schrödinger's equation and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger's equation. As this is an effective description of the system, it is a matter of choice what to include in the experimental system, and this choice will affect when "collapse" occurs. #### Operators as observables In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem.[30] A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction. In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables.
If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant. There have also been claims that experiments reject the Bohm trajectories [5] in favor of the standard QM lines. But as shown in [6] and [7], such experiments only disprove a misinterpretation of the de Broglie–Bohm theory, not the theory itself. There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. Nevertheless, it is distributed according to $|\psi|^2$ and no contradiction with experimental results can be detected. The treatment of operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen.
Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al.[31] for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators. #### Hidden variables De Broglie–Bohm theory is often referred to as a "hidden variable" theory. The alleged applicability of the term "hidden variable" comes from the fact that the particles postulated by Bohmian mechanics do not influence the evolution of the wavefunction. The argument is that, because adding particles does not have an effect on the wavefunction's evolution, such particles must not have effects at all and are, thus, unobservable, since they cannot have an effect on observers. There is no analogue of Newton's third law in this theory. The idea is supposed to be that, since particles cannot influence the wavefunction, and it is the wavefunction that determines measurement predictions through the Born rule, the particles are superfluous and unobservable. Bohm and Hiley have stated that they found their own choice of terms of an "interpretation in terms of hidden variables" to be too restrictive. In particular, a particle is not actually hidden but rather "is what is most directly manifested in an observation", even if position and momentum of a particle cannot be observed with arbitrary precision.[32] Put in simpler words, the particles postulated by the de Broglie–Bohm theory are anything but "hidden" variables: they are what the objects we see in everyday experience are made of; it is the wavefunction itself which is "hidden" in the sense of being invisible and not-directly-observable. Even a whole particle trajectory can be measured by a weak measurement. Such a measured trajectory coincides with the de Broglie–Bohm trajectory. In this sense, de Broglie–Bohm trajectories are not hidden variables. 
Or at least they are not more hidden than the wave function, in the sense that both can only be experimentally determined through a large number of measurements on an ensemble of equally prepared systems. ### Heisenberg's uncertainty principle The Heisenberg uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. As an example, if one measures the position with an accuracy of $\Delta x$, and the momentum with an accuracy of $\Delta p$, then $\Delta x\Delta p\gtrsim h.$ If we make further measurements in order to get more information, we disturb the system and change the trajectory into a new one depending on the measurement setup; therefore, the measurement results are still subject to Heisenberg's uncertainty relation. In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum). It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can likewise be derived (in the epistemic sense mentioned above) in the de Broglie–Bohm theory. To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation.
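As an illustration (not drawn from any source cited here), the epistemic spreads can be computed directly from a wavefunction sampled on a grid; the Gaussian packet below, with an arbitrarily chosen width, saturates the sharp Kennard form of the bound, $\Delta x\,\Delta p = \hbar/2$:

```python
import numpy as np

hbar = 1.0
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Gaussian wave packet psi(x) ~ exp(-x^2/(4 sigma^2)); its Born density
# |psi|^2 is a Gaussian of standard deviation sigma (arbitrary choice).
sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Position spread from the Born density |psi|^2
rho = np.abs(psi)**2
mean_x = np.sum(x * rho) * dx
dx_spread = np.sqrt(np.sum((x - mean_x)**2 * rho) * dx)

# Momentum spread from the Fourier transform of psi (p = hbar k)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
rho_k = np.abs(np.fft.fft(psi))**2
rho_k /= np.sum(rho_k)                         # discrete normalization
mean_p = np.sum(hbar * k * rho_k)
dp_spread = np.sqrt(np.sum((hbar * k - mean_p)**2 * rho_k))

print(dx_spread * dp_spread)                   # approx hbar/2 = 0.5
```

Because the product is computed entirely from $\psi$, the same calculation applies unchanged whether $\psi$ is read as a Copenhagen state or as a de Broglie–Bohm guiding wave.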
For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that it describes it from the viewpoint of the Copenhagen interpretation. ### Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem,[33] which in turn led to the Bell test experiments. In the Einstein–Podolsky–Rosen paradox, the authors describe a thought-experiment one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory.[34] Decades later John Bell proved Bell's theorem (see p. 14 in Bell[27]), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality". Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated—meaning that the relevant quantum mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent non-locality of the effect. The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics.
It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored."[35] The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. An analysis of exactly what kind of nonlocality is present and how it is compatible with relativity can be found in Maudlin.[36] Note that in Bell's work, and in more detail in Maudlin's work, it is shown that the nonlocality does not allow for signaling at speeds faster than light. ### Classical limit Bohm's classical-looking formulation of de Broglie–Bohm theory has the merit that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al.[37] for steps towards a rigorous analysis. ### Quantum trajectory method Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time-step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation.
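A minimal sketch of the trajectory idea is below. It is not Wyatt's actual scheme: instead of re-synthesizing the wavefunction from the moving points, it evolves $\psi$ on a fixed grid with a standard split-step Fourier method and merely advects a few sample points along the guiding equation $\mathbf{v} = (\hbar/m)\,\mathrm{Im}(\nabla\psi/\psi)$; the packet, grid, and time step are arbitrary illustrative choices.

```python
import numpy as np

hbar = m = 1.0
N, L, dt, steps = 1024, 40.0, 0.002, 500
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Free Gaussian packet with momentum kick p0 (illustrative test case)
sigma, p0 = 1.0, 2.0
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

V = np.zeros_like(x)                             # free particle
expV = np.exp(-1j * V * dt / (2 * hbar))         # half-step potential factor
expT = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # full-step kinetic factor

# A proper ensemble would be drawn from |psi|^2; three points near
# the packet centre suffice to illustrate the trajectories.
pts = np.array([-1.0, 0.0, 1.0])

def bohm_velocity(psi):
    # guiding equation: v = (hbar/m) Im(grad(psi)/psi)
    return (hbar / m) * np.imag(np.gradient(psi, dx) / psi)

for _ in range(steps):
    # split-step Fourier evolution of psi (standard, not Wyatt's mesh scheme)
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    # advect the sample points along the guiding equation (Euler step)
    pts = pts + np.interp(pts, x, bohm_velocity(psi)) * dt

# The central point drifts by p0/m * steps*dt = 2.0; the outer points
# additionally spread apart as the packet spreads.
print(pts)
```

For this free Gaussian packet the central trajectory drifts at $p_0/m$ while the flanking trajectories spread apart with the packet without ever crossing, the single-valuedness property whose violation near wavefunction nodes is discussed next.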
(QuickTime movies of this for H+H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.) This approach has been adapted, extended, and used by a number of researchers in the Chemical Physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of the Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "Computational Bohmian Dynamics". Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses a Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique has been used to estimate quantum effects in the heat capacity of small neon clusters Ne$_n$ for $n \approx 100$. There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where $R^{-1}\nabla^2R\rightarrow\infty.$ This results in an infinite force on the sample particles, forcing them to move away from the node and often to cross the paths of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged. These methods, as does Bohm's Hamilton-Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account. ### Occam's razor criticism Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations.
When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave-packet).[38] No particle (in the Bohm sense of having a defined position and velocity) exists, according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Talking of Bohm's 1952 approach, Everett says: “ Our main criticism of this view is on the grounds of simplicity - if one desires to hold the view that $\psi$ is a real field then the associated particle is superfluous since, as we have endeavored to illustrate, the pure wave theory is itself satisfactory.[39] ” In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument of Everett's is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor.[40] Many authors have expressed critical views of the de Broglie-Bohm theory by comparing it to Everett's many worlds approach. Many (but not all) proponents of the de Broglie-Bohm theory (such as Bohm and Bell) interpret the universal wave function as physically real. According to some supporters of Everett's theory, if the (never collapsing) wave function is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory.
In the Everettian view the role of the Bohm particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"[38]); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers.[38] H. Dieter Zeh comments on these "empty" branches: “ It is usually overlooked that Bohm's theory contains the same "many worlds" of dynamically separate branches as the Everett interpretation (now regarded as "empty" wave components), since it is based on precisely the same . . . global wave function . . .[41] ” David Deutsch has expressed the same point more "acerbically":[38] “ pilot-wave theories are parallel-universe theories in a state of chronic denial.[42] ” According to Brown & Wallace[38] the de Broglie-Bohm particles play no role in the solution of the measurement problem. These authors claim[38] that the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. These authors also claim[38] that a standard tacit assumption of the de Broglie-Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. This conclusion has been challenged by Valentini[43] who argues that the entirety of such objections arises from a failure to interpret de Broglie-Bohm theory on its own terms. According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function.[44] ## Derivations De Broglie–Bohm theory has been derived many times and in many ways. 
Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory. • Schrödinger's equation can be derived by using Einstein's light quanta hypothesis: $E = \hbar \omega \;$ and de Broglie's hypothesis: $\mathbf{p} = \hbar \mathbf{k}\;$. The guiding equation can be derived in a similar fashion. We assume a plane wave: $\psi(\mathbf{x},t) = Ae^{i(\mathbf{k}\cdot\mathbf{x}- \omega t)}$. Notice that $i\mathbf{k}= \nabla\psi /\psi$. Assuming that $\mathbf{p} = m \mathbf{v}$ for the particle's actual velocity, we have that $\mathbf{v}= \frac{\hbar}{m} \mathrm{Im} \left(\frac{\nabla\psi}{\psi}\right)$. Thus, we have the guiding equation. Notice that this derivation does not use Schrödinger's equation. • Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method which generalizes to many possible alternative theories. The starting point is the continuity equation $-\frac{\partial \rho}{\partial t} = \nabla \cdot (\rho v^{\psi})$ for the density $\rho=|\psi|^2$. This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle. • A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform Schrödinger's equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows: Decomposition: $\psi(\mathbf{x},t) = R(\mathbf{x},t)e^{i S(\mathbf{x},t) / \hbar}.$ Note $R^2(\mathbf{x},t)$ corresponds to the probability density $\rho (\mathbf{x},t) = |\psi (\mathbf{x},t)|^2$.
Continuity Equation: $-\frac{\partial \rho(\mathbf{x},t)}{\partial t} = \nabla \cdot \left(\rho (\mathbf{x},t)\frac{\nabla S(\mathbf{x},t)}{m}\right)$ Hamilton–Jacobi Equation: $\frac{\partial S(\mathbf{x},t)}{\partial t} = -\left[ V + \frac{1}{2m}(\nabla S(\mathbf{x},t))^2 -\frac{\hbar ^2}{2m} \frac{\nabla ^2R(\mathbf{x},t)}{R(\mathbf{x},t)} \right].$ The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential $V-\frac{\hbar ^2}{2m} \frac{\nabla ^2 R}{R}$ and velocity field $\frac{\nabla S}{m}.$ The potential $V$ is the classical potential that appears in Schrödinger's equation and the other term involving $R$ is the quantum potential, terminology introduced by Bohm. This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by $\frac{\nabla S}{m}$ which is a symptom of this being a first-order theory, not a second-order theory. • A fourth derivation was given by Dürr et al.[9] In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that Schrödinger's equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis. • A fifth derivation, given by Dürr et al.[13] is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then given the Hamiltonian operator $H$, the equation to satisfy for all functions $f$ (with associated multiplication operator $\hat{f}$) is $(v(f))(q) = \mathrm{Re} \frac{(\psi, \frac{i}{\hbar} [H,\hat f] \psi)}{(\psi,\psi)}(q)$ where $(v,w)$ is the local Hermitian inner product on the value space of the wavefunction. 
This formulation allows for stochastic theories such as the creation and annihilation of particles. • A further derivation has been given by Peter R. Holland, on which he bases the entire work presented in his quantum physics textbook The Quantum Theory of Motion, a main reference book on the de Broglie–Bohm theory. It is based on three basic postulates and an additional fourth postulate that links the wave function to measurement probabilities:[45] 1. A physical system consists in a spatiotemporally propagating wave and a point particle guided by it; 2. The wave is described mathematically by a solution $\psi$ to Schrödinger's wave equation; 3. The particle motion is described by a solution to $\dot{\mathbf{x}}(t) = [\nabla S (\mathbf{x}(t),t)]/m$ in dependence on the initial condition $\mathbf{x}(t=0)$, with $S$ the phase of $\psi$. The fourth postulate is subsidiary yet consistent with the first three: 4. The probability $\rho (\mathbf{x}(t))$ to find the particle in the differential volume $d^3 x$ at time $t$ equals $|\psi(\mathbf{x}(t))|^2$. ## History De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference. ### Pilot-wave theory De Broglie presented his pilot wave theory at the 1927 Solvay Conference,[46] after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild mannerism left the impression that Pauli's objection was valid.
He was eventually persuaded to abandon this theory nonetheless in 1932 due to both the Copenhagen school's more successful P.R. efforts and his own inability to understand quantum decoherence. Also in 1932, John von Neumann published a paper,[47] claiming to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades. In truth, von Neumann's proof is based on invalid assumptions, such as the assumption that quantum physics can be made local, and it does not really disprove the pilot-wave theory. De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al.[48][49] Around this time Erwin Madelung also developed a hydrodynamic version of Schrödinger's equation, which is incorrectly considered as a basis for the density current derivation of the de Broglie–Bohm theory.[50] The Madelung equations, being quantum Euler equations (fluid dynamics), differ philosophically from the de Broglie–Bohm mechanics[51] and are the basis of the hydrodynamic interpretation of quantum mechanics. Peter R. Holland has pointed out that, in 1927, Einstein had submitted a preprint with a related proposal but, not convinced, had withdrawn it before publication.[52] According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them."[53] This entity is the quantum potential.
After publishing a popular textbook on Quantum Mechanics which adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's theorem. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It extended the original Pilot Wave Theory to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm Theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993]. This stage applies to multiple particles, and is deterministic. The de Broglie–Bohm theory is an example of a hidden variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local. Bohm's paper was largely ignored by other physicists. Even Albert Einstein did not consider it a satisfactory answer to the quantum non-locality question. The rest of the contemporary objections, however, were ad hominem, focusing on Bohm's sympathy with liberals and supposed communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee. Eventually the cause was taken up by John Bell. 
In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden variables theories (which include Bohm's). Bell showed that von Neumann's objection amounted to showing that hidden variables theories are nonlocal, and that nonlocality is a feature of all quantum mechanical systems. ### Bohmian mechanics This term is used to describe the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton-Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent in so far as the Hamilton-Jacobi formulation applies, i.e., spin-less particles. The papers of Dürr et al. popularized the term. All of non-relativistic quantum mechanics can be fully accounted for in this theory. ### Causal interpretation and ontological interpretation Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that "causal" sounded too much like "deterministic" and preferred to call his theory the Ontological Interpretation. The main reference is 'The Undivided Universe' [Bohm, Hiley 1993]. This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not, strictly speaking, a formulation of the de Broglie–Bohm theory. However, it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and the de Broglie–Bohm theory.
An in-depth analysis of possible interpretations of Bohm's model of 1952 was given in 1996 by philosopher of science Arthur Fine.[54]

## Notes

1. David Bohm (1957). Causality and Chance in Modern Physics. Routledge & Kegan Paul and D. Van Nostrand. ISBN 0-8122-1002-6, p. 117.
2. D. Bohm and B. Hiley: The undivided universe: An ontological interpretation of quantum theory, p. 37.
3. H. R. Brown, C. Dewdney and G. Horton: Bohm particles and their detection in the light of neutron interferometry, Foundations of Physics, 1995, Volume 25, Number 2, pp. 329-347.
4. D. Bohm and B. Hiley: The undivided universe: An ontological interpretation of quantum theory, p. 24
5. Peter R. Holland: The Quantum Theory of Motion: An Account of the De Broglie-Bohm Causal Interpretation of Quantum Mechanics, Cambridge University Press, Cambridge (first published June 25, 1993), ISBN 0-521-35404-8 hardback, ISBN 0-521-48543-6 paperback, transferred to digital printing 2004, Chapter I, section (7) "There is no reciprocal action of the particle on the wave", p. 26
6. P. Holland: Hamiltonian theory of wave and particle in quantum mechanics II: Hamilton-Jacobi theory and particle back-reaction, Nuovo Cimento B 116, 2001, pp. 1143-1172, full text preprint p. 31
7.
8.
9.
10. Valentini, A., 1991, "Signal-Locality, Uncertainty and the Subquantum H-Theorem. II," Physics Letters A 158: 1–8.
11. Partha Ghose: Relativistic quantum mechanics of spin-0 and spin-1 bosons, Foundations of Physics, vol. 26, no. 11, pp. 1441-1455, 1996, doi:10.1007/BF02272366
12. Nicola Cufaro Petroni, Jean-Pierre Vigier: Remarks on Observed Superluminal Light Propagation, Foundations of Physics Letters, vol. 14, no. 4, pp. 395-400, doi:10.1023/A:1012321402475, therein: section 3. Conclusions, page 399
13. Sacha Kocsis, Boris Braverman, Sylvain Ravets, Martin J. Stevens, Richard P. Mirin, L. Krister Shalm, Aephraim M.
Steinberg: Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer, Science, vol. 332, no. 6034, pp. 1170-1173, 3 June 2011, doi:10.1126/science.1202218 (abstract)
14. Hrvoje Nikolić: Relativistic quantum mechanics and the Bohmian interpretation, Foundations of Physics Letters, vol. 18, no. 6, November 2005, pp. 549-561, doi:10.1007/s10702-005-1128-1
15. Hrvoje Nikolić, arXiv:0811.1905v2 (submitted 12 November 2008 (v1), revised 12 Jan 2009)
16. Hrvoje Nikolić, arXiv:1002.3226v2 [quant-ph] (submitted on 17 Feb 2010, version of 31 May 2010)
17. Hrvoje Nikolić, 2007 J. Phys.: Conf. Ser. 67 012035
18. Bell, John S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. ISBN 0521334950.
19. Albert, D. Z., 1992, Quantum Mechanics and Experience, Cambridge, MA: Harvard University Press
20. David Bohm, Basil Hiley: The Undivided Universe: An Ontological Interpretation of Quantum Theory, edition published in the Taylor & Francis e-library 2009 (first edition Routledge, 1993), ISBN 0-203-98038-7, p. 2
21. Bell, J. S. (1964). "On the Einstein Podolsky Rosen Paradox" (PDF). Physics 1: 195.
22. Einstein; Podolsky; Rosen (1935). "Can Quantum Mechanical Description of Physical Reality Be Considered Complete?". Physical Review 47 (10): 777–780. doi:10.1103/PhysRev.47.777.
23. Bell, page 115
24. Maudlin, T. (1994). Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics. Cambridge, MA: Blackwell. ISBN 0631186093.
25. Harvey R. Brown and David Wallace, Solving the measurement problem: de Broglie-Bohm loses out to Everett, Foundations of Physics 35 (2005), pp. 517-540. Abstract: "The quantum theory of de Broglie and Bohm solves the measurement problem, but the hypothetical corpuscles play no role in the argument. The solution finds a more natural home in the Everett interpretation."
26.
See section VI of Everett's thesis: Theory of the Universal Wavefunction, pp. 3-140 of Bryce Seligman DeWitt, R. Neill Graham, eds., The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X
27. Daniel Dennett (2000). With a little help from my friends. In D. Ross, A. Brook, and D. Thompson (eds.), Dennett's Philosophy: A Comprehensive Assessment. MIT Press/Bradford, ISBN 0-262-68117-X.
28. David Deutsch, Comment on Lockwood. British Journal for the Philosophy of Science 47, 222–228, 1996
29. Peter R. Holland: The quantum theory of motion, Cambridge University Press, 1993 (re-printed 2000, transferred to digital printing 2004), ISBN 0-521-48543-6, p. 66 ff.
30. Solvay Conference, 1928, Electrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique tenu à Bruxelles du 24 au 29 octobre 1927 sous les auspices de l'Institut International de Physique Solvay
31. von Neumann, J. 1932 Mathematische Grundlagen der Quantenmechanik
32. Bacciagaluppi, G., and Valentini, A., Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference
33. Madelung, E. (1927). "Quantentheorie in hydrodynamischer Form". Zeitschrift für Physik 40 (3–4): 322–326. doi:10.1007/BF01400372.
34. Peter Holland: What's wrong with Einstein's 1927 hidden-variable interpretation of quantum mechanics?, Foundations of Physics (2004), vol. 35, no. 2, pp. 177–196, doi:10.1007/s10701-004-1940-7, arXiv:quant-ph/0401017, p. 1
35. Peter Holland: What's wrong with Einstein's 1927 hidden-variable interpretation of quantum mechanics?, Foundations of Physics (2004), vol. 35, no. 2, pp. 177–196, doi:10.1007/s10701-004-1940-7, arXiv:quant-ph/0401017, p. 14
36. A. Fine: On the interpretation of Bohmian mechanics, in: J. T. Cushing, A. Fine, S. Goldstein (eds.): Bohmian mechanics and quantum theory: an appraisal, Springer, 1996, pp. 231−250

## References

• Albert, David Z.
(May 1994). "Bohm's Alternative to Quantum Mechanics". Scientific American 270 (5): 58–67. doi:10.1038/scientificamerican0594-58.
• Barbosa, G. D.; N. Pinto-Neto (2004). "A Bohmian Interpretation for Noncommutative Scalar Field Theory and Quantum Mechanics". Physical Review D 69: 065014. arXiv:hep-th/0304105. Bibcode:2004PhRvD..69f5014B. doi:10.1103/PhysRevD.69.065014.
• Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I". Physical Review 85: 166–179. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166. (full text)
• Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables", II". Physical Review 85: 180–193. Bibcode:1952PhRv...85..180B. doi:10.1103/PhysRev.85.180. (full text)
• Bohm, David (1990). "A new theory of the relationship of mind and matter". Philosophical Psychology 3 (2): 271–286. doi:10.1080/09515089008573004.
• Bohm, David; B. J. Hiley (1993). The Undivided Universe: An ontological interpretation of quantum theory. London: Routledge. ISBN 0-415-12185-X.
• Dürr, Detlef; Sheldon Goldstein, Roderich Tumulka and Nino Zanghì (December 2004). "Bohmian Mechanics" (PDF). Physical Review Letters 93 (9): 090402. arXiv:quant-ph/0303156. Bibcode:2004PhRvL..93i0402D. doi:10.1103/PhysRevLett.93.090402. ISSN 0031-9007. PMID 15447078.
• Goldstein, Sheldon (2001). "Bohmian Mechanics". Stanford Encyclopedia of Philosophy.
• Hall, Michael J. W. (2004). "Incompleteness of trajectory-based interpretations of quantum mechanics". Journal of Physics A: Mathematical and General 37: 9549. arXiv:quant-ph/0406054. Bibcode:2004JPhA...37.9549H. doi:10.1088/0305-4470/37/40/015. (Demonstrates incompleteness of the Bohm interpretation in the face of fractal, differentiable-nowhere wavefunctions.)
• Holland, Peter R. (1993). The Quantum Theory of Motion: An Account of the de Broglie–Bohm Causal Interpretation of Quantum Mechanics.
Cambridge: Cambridge University Press. ISBN 0-521-48543-6.
• Nikolic, H. (2004). "Relativistic quantum mechanics and the Bohmian interpretation". Foundations of Physics Letters 18: 549. arXiv:quant-ph/0406173. Bibcode:2005FoPhL..18..549N. doi:10.1007/s10702-005-1128-1.
• Passon, Oliver (2004). Why isn't every physicist a Bohmian?. arXiv:quant-ph/0412119. Bibcode:2004quant.ph.12119P.
• Sanz, A. S.; F. Borondo (2003). "A Bohmian view on quantum decoherence". The European Physical Journal D 44: 319. arXiv:quant-ph/0310096. Bibcode:2007EPJD...44..319S. doi:10.1140/epjd/e2007-00191-8.
• Sanz, A. S. (2005). "A Bohmian approach to quantum fractals". J. Phys. A: Math. Gen. 38: 319. arXiv:quant-ph/0412050. Bibcode:2005JPhA...38.6037S. doi:10.1088/0305-4470/38/26/013. (Describes a Bohmian resolution to the dilemma posed by non-differentiable wavefunctions.)
• Silverman, Mark P. (1993). And Yet It Moves: Strange Systems and Subtle Questions in Physics. Cambridge: Cambridge University Press. ISBN 0-521-44631-7.
• Streater, Ray F. (2003). "Bohmian mechanics is a "lost cause"". Retrieved 2006-06-25.
• Valentini, Antony; Hans Westman (2004). Dynamical Origin of Quantum Probabilities. arXiv:quant-ph/0403034. Bibcode:2005RSPSA.461..253V. doi:10.1098/rspa.2004.1394.
• Bohmian mechanics on arxiv.org

## Further reading

• John S.
Bell: Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy, Cambridge University Press, 2004, ISBN 0-521-81862-1
• David Bohm, Basil Hiley: The Undivided Universe: An Ontological Interpretation of Quantum Theory, Routledge Chapman & Hall, 1993, ISBN 0-415-06588-7
• Detlef Dürr, Sheldon Goldstein, Nino Zanghì: Quantum Physics Without Quantum Philosophy, Springer, 2012, ISBN 978-3-642-30690-7
• Detlef Dürr, Stefan Teufel: Bohmian Mechanics: The Physics and Mathematics of Quantum Theory, Springer, 2009, ISBN 978-3-540-89343-1
• Peter R. Holland: The quantum theory of motion, Cambridge University Press, 1993 (re-printed 2000, transferred to digital printing 2004), ISBN 0-521-48543-6

Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "De Broglie–Bohm theory", available in its original form here: http://en.wikipedia.org/w/index.php?title=De_Broglie%E2%80%93Bohm_theory
http://physics.stackexchange.com/questions/52273/is-it-possible-for-a-physical-object-to-have-a-irrational-length/52351
# Is it possible for a physical object to have an irrational length?

Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational. If I were to use this caliper to measure any small object, would the caliper ever return an irrational number, or would the true dimensions of physical objects be constrained to rational numbers?

This is much more a physics question than a math question, as it is about how space behaves at very small distances. – Lieven Jan 27 at 0:04

Can one have such calipers? (Heisenberg.) – Brian M. Scott Jan 27 at 0:04

We are starting by assuming something that isn't so. And there is no such thing as the true dimension. – André Nicolas Jan 27 at 0:08

What units are you using for length? It's possible that the units in regular use in my country are an irrational multiple of the units used in yours --- then what will you do? – Gerry Myerson Jan 27 at 0:14

The problem I see with your question is that you have defined a hypothetical caliper and are complaining when people are applying it to hypothetical objects. You can only measure physical objects with physical calipers, and hypothetical objects with hypothetical calipers. – Rahul Narain Jan 27 at 0:15

## 11 Answers

The set of irrational numbers densely fills the number line. Even assuming that quantum mechanics doesn't disable the premise of your question, the probability that you will randomly pick an irrational number out of a hat of all numbers is roughly $1 - \frac{1}{\infty} \approx 1$. So the question should be "is it possible to have an object with rational length?"

This is the appropriate question. – KDN Jan 27 at 2:33

All of what you have said makes sense, but isn't $1-1/\infty$ = 1 because $1/\infty$ = 0? – Nick Anderegg Jan 27 at 4:26

But wait, that doesn't actually make sense. There are an infinite number of rational numbers as well.
The irrational numbers may be "densely" packed at whatever precision you choose, but at infinite precision, there would have to be an even distribution of both rational and irrational numbers. – Nick Anderegg Jan 27 at 4:32

@NickAnderegg: yes, there are an infinity of rational numbers. But there is a bigger infinity of irrational numbers. Namely, the number of rational numbers is countably infinite, while the number of irrational numbers is uncountably infinite. – Jerry Schirmer Jan 27 at 10:45

Didn't you mean that the probability of randomly picking an irrational number is $1$? @NickAnderegg Even though the set of rational numbers is dense in $\mathbb{R}$, its measure is zero. This means that if we take any set and remove all rational numbers from it, we cannot tell the difference by measuring it. – Petr Pudlák Jan 27 at 16:26

Is it possible for a physical object to have an irrational length? It's a bit of a philosophical question, but one could say this: Just for fun, assume you have a perfect 45-degree right triangular piece of metal whose base and height are rational. Then its hypotenuse is irrational, because its length is the base times $\sqrt{2}$. So it is possible to have a physical object of irrational length IF you can have a physical object of rational length.

This seems to be where my whole premise falls apart and I'm not able to communicate my thinking clearly. Basically, what I'm asking is whether it is possible for that hypotenuse to exist. Perhaps the base and height cannot both be equal, because then the hypotenuse would be irrational. But otherwise, this makes sense. – Nick Anderegg Jan 27 at 4:30

You also need to be able to assume you can have an object which has a perfect right angle. – RoundTower Jan 27 at 14:48

On very short distances, the position of particles isn't deterministic, but random.
The electrons on the very outer edge of this object won't necessarily be the exact same distance from neighboring electrons, but occupy positions according to a wave function that obeys a number of laws and is based on neighboring electrons. So imagine your calipers are exactly flat and have infinite precision; if the outermost electrons on each end can occupy some non-discrete interval, then they would have to be able to occupy an irrational distance. I'm not, however, a physicist, and even the current state of physics isn't sure about how things act on very short distances. Below one Planck length it doesn't really make sense to measurably distinguish between the locations of two points, and since there is a rational number arbitrarily close to any real number, we will never be able to actually experimentally determine the truth of this statement. So since this is really a physical question and not a mathematical one, and all current physical knowledge points in the direction that this question is unanswerable, I'm going to rule it as such.

Physical objects do not have well-defined lengths (there is this thing called quantum mechanics, conceived in its entirety upon this concept). A more interesting question is whether dimensionless numbers in physics can be irrational, for instance, the ratio between the mass of the electron and the proton. Theoretically, we will need a numerical expansion and some limiting argument to tell to what domain of the reals the limit belongs (irrational, transcendental, rational). Experimentally this can never be asserted, as naturally all experimental numbers are known with a finite number of digits of precision.

Suppose your infinitely precise caliper gives the answer $2.00000000000000\dots$ How would you know whether this is $2$ exactly, or if somewhere past the trillionth decimal it starts to deviate from $2$? How would you read your infinitely precise caliper?

Well, that's just cheating the question.
These are clearly not any sort of calipers in existence. I've modified the question to accommodate this response. – Nick Anderegg Jan 27 at 0:09

You're still assuming, without justification, that there is such a thing as a "precise length" of a physical object. – Gerry Myerson Jan 27 at 0:13

But matter is quantized: atoms/quarks/...strings? Even strings are quantized. If everything is quantized we don't have infinite precision or infinite decimal places. – Raindrop Jan 28 at 2:21

If we assume that the universe is continuous, and say we fix everything at a certain time frame, then everything has an irrational length, regardless of how well we can measure it, simply because we can define a unit of measure whose result would be irrational. For example, measure my foot. Now define the unit of measure $1\ \small\bf Karf$ to be the square root of twice the length. Then my foot would be exactly $\sqrt\frac12\ \small\bf Karf$ long. As we know, $\sqrt\frac12$ is irrational. But this requires the assumption that the universe is continuous and that we can freeze time and measure with infinite precision. If the universe is discrete, or if we cannot measure accurately, then we can't really say too much. Not to mention that everything changes all the time (cells falling off, atoms released, etc.), so there's no constant length to anything large enough.

I think coming up with a scale like that isn't what the question asks for: take the right isosceles triangle in the example. It assumes that you measure the side-lengths with length 1, and thus the hypotenuse has to be $\sqrt{2}$. The question essentially is: given you use an infinitely accurate scale in which one of the sides comes out rational, would, on a physical level, all sides be rational (two of them of minutely different length), or could two of them possibly be exactly the same, making the third side irrational? (Or the third side could be rational and the other two irrational.)
– kram1032 Jan 27 at 23:32

If you are talking about real, physical objects, then your question collapses completely, because such objects are composed of particles which have no definite positions and momenta, according to Heisenberg's uncertainty principle. So let's stick to a stick in classical mechanics; then your caliper can return irrational numbers. But a mathematical line-segment doesn't even have to have rational or irrational length: it could have an even 'finer' scale, a so-called non-standard number.

From the point of view of measure theory, the probability of measuring a rational length is actually zero. Consider, without loss of generality, the interval $[0,1]$. Using the standard Lebesgue measure, the measure of this set (its length) is 1. If we consider the subset which consists of all the rational numbers from this set, its measure is actually 0. This starts to make sense if one considers how minuscule the size of the rational numbers is compared with all the other real numbers. In fact, it turns out that the only subsets of our interval with non-zero measure are continuous ones (e.g. $[a, b]$, where $a<b$ and the measure is $b-a$) and ones that contain so-called normal numbers. Only the normal numbers are said to 'take up any space' on the real number line. That is, virtually all the real numbers are actually normal numbers (which can never be written down on paper), and so the probability of measuring anything that's not a normal number is 0. http://en.wikipedia.org/wiki/Normal_number

Let's take the smallest possible case of such a triangle. It would be made of three atoms of equal size, linked together in an L-shape with a 90° angle in between. If you have an arrangement like that, and something similar might be chemically possible, the centers of mass of the two more distant atoms would be separated by exactly[1] $\sqrt{2}$ times the distance between the directly touching ones.
Presumably, if you take a more rigorous and accurate approach and look at the bonding structure of water (which, of course, won't feature a right angle, but the situation is equivalent), the centers of mass of the two Hydrogen atoms would also be an irrational distance apart compared to the distances of the centers of mass of each Hydrogen to the Oxygen. No matter what scale you use, at least one of the two distances will always be irrational. If you can somehow limit the set of all possible distances to a countable infinity, I'd suspect this set not to be the rationals but rather the algebraic numbers (or at least the subset of them that are positive).

[1]: modulo Heisenberg, but I didn't use proper orbitals either. Let's, for the sake of the argument, define a distance on the quantum level by the distances of the expected values of the corresponding probability clouds.

One can give an argument based on measure theory and the like, but one must not forget that physics is about measurement. The question whether the length can be rational or irrational would need an infinitely precise measurement, which is not possible (measurements bear an error). Hence this question cannot be answered from the physics viewpoint. Any answer will be just speculation.

The hypotenuse of a right-angled triangle with legs 1 is irrational. Alternatively, consider a pyramid. As you take measurements of the 'base length' towards the apex, you get a continuous set of values. One of these must be irrational. Of course, you can then start an argument about what a 'physical' object is, and whether length is truly continuous, or whether it has to be discrete because it is constructed from atoms.

Well, but then you have to find a right-angled triangle, and you have to have the legs equal exactly $1$ of something rational... – Asaf Karagila Jan 27 at 0:09

A theoretic triangle is not a physical object. – Nick Anderegg Jan 27 at 0:10

@NickAnderegg How about your set square?
– Calvin Lin Jan 27 at 0:10

The question is asking more about precision. It's more along the lines of "Can I have a physical triangle with a hypotenuse of $\sqrt{2}$?" Perhaps it wouldn't be possible to construct a triangle with legs that are exactly 1 unit. Perhaps one leg is ever so slightly shorter, in a way that allows a hypotenuse near $\sqrt{2}$. – Nick Anderegg Jan 27 at 0:14
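A point several of these answers circle around — that a digital readout can never literally *be* irrational — can be made concrete: every finite binary floating-point number is exactly a dyadic rational. The short Python check below illustrates this (the choice of `0.1` is just an example):

```python
from fractions import Fraction

# The double nearest to one tenth is not 1/10 but an exact dyadic rational:
x = 0.1
print(Fraction(x))  # 3602879701896397/36028797018963968

# Every finite float is p / 2**k for integers p and k, so any value a
# computer stores from a measurement is rational by construction.
print(Fraction(x).denominator == 2**55)  # True
```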
http://math.stackexchange.com/questions/215492/how-does-the-frobenius-map-permute-the-roots
# How does the Frobenius map permute the roots

How can a Frobenius map permute the roots of an algebraic group? According to Carter (in Finite Groups of Lie Type), a root subgroup $X_{\alpha}$ is the 1-dimensional unipotent subgroup giving rise to the root $\alpha$. The permutation $\rho$ on the roots, which is given by a Frobenius map $F: G \rightarrow G$, is defined by $F(X_{\alpha}) = X_{\rho(\alpha)}$. But I don't understand the permutation. For example, according to the figure on page 37, there is a certain Frobenius map for a group of type $A_l$, $l \geq 2$, such that two of the simple roots are permuted. But I don't know how this can take place.

Now I am considering a special case. Let $K = \bar{\mathbb{F}}_2$, the algebraic closure of the field of two elements, and $G = SL(3,K)$, the special linear group of $3 \times 3$ matrices over $K$, which is of type $A_2$. If $F: G \rightarrow G$, $(a_{ij}) \mapsto (a_{ij}^2)$, and for two simple roots $\alpha, \beta$ of $G$,

$X_{\alpha} =\left\{ \begin{pmatrix} 1 & a & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} | a \in K \right\}$, $X_{\beta} =\left\{ \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & b\\ 0 & 0 & 1 \end{pmatrix} | b \in K \right\}$,

then both $F(X_{\alpha})$ and $F(X_{\beta})$ are mapped to themselves, with no permutation done. From this, I can see no hint as to when the permutation can be caused by a Frobenius map. Is there any concrete example of this permutation? Any type of algebraic group is OK. Thanks very much.
http://mathoverflow.net/questions/91220?sort=oldest
## Doubly covering an even lattice

I have read that there is a way to construct a group which is a double cover of an even lattice. The very tantalizing thing about this is that if the even lattice is chosen to be the Leech lattice, the resulting double cover is supposed to admit a natural action of the Monster group.

(i) What is a good place to read about how to construct this double cover?

(ii) What is a good place to read about how to define the Monstrous action on the double cover of the Leech lattice?

## 2 Answers

Take a look at the references in: Yongchang Zhu, "Modular invariance of characters of vertex operator algebras", J. Amer. Math. Soc. 9 (1996), 237-302.

There is a brief description of the double cover of an even lattice on page 2 of Borcherds's paper Vertex algebras, Kac-Moody algebras and the monster (number 4 on the page). It is introduced there as a set of properties that uniquely define it up to isomorphism, but without a construction. For a detailed construction, see Chapters 5 and 7 of Vertex Operator Algebras and the Monster by Frenkel, Lepowsky, and Meurman. The automorphism group of the double cover of an even lattice $L$ is an extension of $\operatorname{Aut}(L)$ by $(\mathbb{Z}/2\mathbb{Z})^{\text{rank}(L)}$, and it is usually non-split. For the Leech lattice, you do not get the monster, but you get something closely related to a large subgroup. In more detail, the automorphism group of the double cover of Leech naturally acts on the Leech lattice vertex algebra, and this vertex algebra in turn can be used to construct the monster vertex algebra using a "twisted module".
The automorphism groups then yield a diagram of the following form: $$2^{24}.Co_0 \to 2^{24}.Co_1 \leftarrow 2^{1+24}.Co_1 \hookrightarrow \text{Monster}.$$ The leftmost group is the automorphism group of the double cover of Leech, the second group is the image of its action on a fixed point subalgebra of the Leech lattice vertex algebra under an involution, the third group is a central extension that acts on a fixed point submodule of the twisted module, and it is the centralizer of an element of order 2 in the monster. Frenkel, Lepowsky, and Meurman constructed the monster action by extracting extra symmetry from the direct sum of the fixed point subalgebra and the fixed point submodule - this required a large fraction of a book. -
http://en.wikipedia.org/wiki/Loss_of_significance
# Loss of significance Loss of significance is an undesirable effect in calculations using floating-point arithmetic. It occurs when an operation on two numbers increases relative error substantially more than it increases absolute error, for example in subtracting two nearly equal numbers (known as catastrophic cancellation). The effect is that the number of accurate (significant) digits in the result is reduced unacceptably. Ways to avoid this effect are studied in numerical analysis. ## Demonstration of the problem The effect can be demonstrated with decimal numbers. The following example demonstrates loss of significance for a decimal floating-point data type with 10 significant digits: Consider the decimal number ``` 0.1234567891234567890 ``` A floating-point representation of this number on a machine that keeps 10 floating-point digits would be ``` 0.1234567891 ``` which is fairly close – the difference is very small in comparison with either of the two numbers. Now perform the calculation ``` 0.1234567891234567890 − 0.1234567890 ``` The answer, accurate to 10 digits, is ``` 0.0000000001234567890 ``` However, on the 10-digit floating-point machine, the calculation yields ``` 0.1234567891 − 0.1234567890 = 0.0000000001 ``` Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is only accurate in its first nonzero digit. This amounts to loss of significance. ## Workarounds It is possible to do computations using an exact fractional representation of rational numbers and keep all significant digits, but this is often prohibitively slower than floating-point arithmetic. Furthermore, it usually only postpones the problem: What if the data is accurate to only ten digits? The same effect will occur. One of the most important parts of numerical analysis is to avoid or minimize loss of significance in calculations. 
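The 10-digit demonstration above can be reproduced directly with Python's `decimal` module, which lets one set the working precision; a minimal sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10   # simulate a machine keeping 10 significant digits

a = +Decimal("0.1234567891234567890")   # unary + rounds to the context precision
b = +Decimal("0.1234567890")

print(a)      # 0.1234567891
print(a - b)  # 1E-10 -- only the first digit of the true answer survives
```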
If the underlying problem is well-posed, there should be a stable algorithm for solving it. The art is in finding it. ## Loss of significant bits Let x and y be positive normalized floating point numbers. In the subtraction x − y, r significant bits are lost where $q \le r \le p$ $2^{-p} \le 1 - \frac{y}{x} \le 2^{-q}$ for some positive integers p and q. ## Instability of the quadratic equation For example, consider the venerable quadratic equation: $a x^2 + b x + c = 0$, with the two exact solutions: $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$ This formula may not always produce an accurate result. For example, when c is very small, loss of significance can occur in either of the root calculations, depending on the sign of b. The case $a = 1$, $b = 200$, $c = -0.000015$ will serve to illustrate the problem: $x^2 + 200 x - 0.000015 = 0.$ We have $\sqrt{b^2 - 4 a c} = \sqrt{200^2 + 4 \times 1 \times 0.000015} = 200.00000015...$ In real arithmetic, the roots are $( -200 - 200.00000015 ) / 2 = -200.000000075,$ $( -200 + 200.00000015 ) / 2 = 0.000000075.$ In 10-digit floating-point arithmetic, $( -200 - 200.0000001 ) / 2 = -200.00000005,$ $( -200 + 200.0000001 ) / 2 = 0.00000005.$ Notice that the solution of greater magnitude is accurate to ten digits, but the first nonzero digit of the solution of lesser magnitude is wrong. Because of the subtraction that occurs in the quadratic equation, it does not constitute a stable algorithm to calculate the two roots. ## A better algorithm A better algorithm for solving quadratic equations is based on two observations: that one solution is always accurate when the other is not, and that given one solution of the quadratic, the other is easy to find. 
If $\begin{alignat}{3} & x_1 && = \frac{-b + \sqrt{b^2 - 4ac}}{2a} \qquad & \text{(1)} \\ \end{alignat}$ and $\begin{alignat}{3} & x_2 && = \frac{2c}{-b + \sqrt{b^2 - 4ac}} \qquad & \text{(2)} \\ \end{alignat}$ then we have the identity (one of Viète's formulas for a second-degree polynomial) $x_1 x_2 = c/a$. Formulas (1) and (2) avoid cancellation only when the coefficient b is negative: in that case $-b$ is positive, so the numerator adds two positive quantities and no subtraction of nearly equal numbers occurs. When b is positive, the corresponding pair of formulas takes the opposite sign of the square root: $\begin{alignat}{3} & x_1 && = \frac{-b - \sqrt{b^2 - 4ac}}{2a} \qquad & \text{(3)} \\ \end{alignat}$ and $\begin{alignat}{3} & x_2 && = \frac{2c}{-b - \sqrt{b^2 - 4ac}} \qquad & \text{(4)} \\ \end{alignat}$ Here $-b$ is negative and the square root is subtracted from it, so the two magnitudes again add rather than cancel. In our example the coefficient b = 200 is positive, so formulas (3) and (4) apply. The algorithm is as follows. Use the quadratic formula to find the solution of greater magnitude, which does not suffer from loss of precision. Then use the identity $x_1 x_2 = c/a$ to calculate the other root. Since no subtraction is involved, no loss of precision occurs. Applying this algorithm to our problem, and using 10-digit floating-point arithmetic, the solution of greater magnitude, as before, is $x_1 = -200.00000005.$ The other solution is then $x_2 = c / (-200.00000005) = 0.000000075,$ which is accurate.
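The two-branch algorithm translates directly into code. A sketch in Python (double precision rather than 10 decimal digits, but the cancellation issue is the same; the helper name `solve_quadratic` is ours, and it assumes $a \neq 0$ and real, nonzero roots):

```python
import math

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 without catastrophic cancellation.

    Picks the sign of the square root so the numerator adds magnitudes
    (formulas (1)-(2) for b < 0, formulas (3)-(4) for b >= 0), then
    recovers the small root from Viete's identity x1 * x2 = c / a.
    Assumes a != 0 and real, nonzero roots.
    """
    sqrt_disc = math.sqrt(b * b - 4 * a * c)
    if b >= 0:
        x1 = (-b - sqrt_disc) / (2 * a)   # formula (3): magnitudes add
    else:
        x1 = (-b + sqrt_disc) / (2 * a)   # formula (1): magnitudes add
    x2 = c / (a * x1)                     # formula (2)/(4) via Viete
    return x1, x2

# The example from the text: x^2 + 200*x - 0.000015 = 0
x1, x2 = solve_quadratic(1.0, 200.0, -0.000015)
print(x1, x2)   # approximately -200.000000075 and 7.5e-08
```

Both roots come out accurate, with no subtraction of nearly equal quantities anywhere.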
However, there remains a further possible source of cancellation when computing $b^2 - 4ac$, and indeed this can lead to up to half of the significant figures being lost: to correct for this, $b^2 - 4ac$ must be computed in extended precision, with twice the precision of the final result [1] (see quadratic equation for details). ## References 1. Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 10.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9004935026168823, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/83551/list
I believe that the answer is NO. If you look at Gutiérrez, Mauricio A.; Ratcliffe, John G. On the second homotopy group. Quart. J. Math. Oxford Ser. (2) 32 (1981), no. 125, 45–55. Corollary 3 states that a "reduced 2-complex $K(X; R)$ is aspherical if and only if each element of $R$ is independent and not a proper power." Now, "reduced" means that there is (a) only one 0-cell (true in your case), and the one cells represent distinct nontrivial elements of $\pi_1(K^1),$ where $K^1$ is the one-skeleton. Again seems to be true under your assumptions. $R$ are the relations (given by attaching maps of the 2-cells, I imagine), "independent" is too complicated to explain here (look at the paper), but in any case, the "not a proper power" condition is easy to violate. EDIT Actually, independent is not too hard to explain. The definition is: a relator $r$ is independent if, setting $M$ to be the normal closure of $r,$ and $N$ the normal closure of $R - r,$ $M \cap N = [ M, N].$ As @Benjamin points out, above I am answering the complementary question, so to get the example that the OP wants, we need three independent elements in the free group on two generators which are not proper powers.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290184378623962, "perplexity_flag": "head"}
http://mathoverflow.net/questions/76945/threading-pinholes-in-the-wall-of-cylinder-to-pass-through-an-internal-coordinate/76961
## Threading pinholes in the wall of cylinder to pass through an internal coordinate

Imagine I take a sheet of paper and use a pin to generate an $N \times M$ rectangular array of small holes. I then fold the sheet to form a cylinder of radius $r_c$ and height $h_c$, where there are $N$ pinholes around its circumference and $M$ pinholes from the top of the cylinder to the bottom. No edge-effects from the folding process are discernible. Now, say I pick a coordinate, $C$, in the three-dimensional space inside the cylinder. $C$ is some distance from the bottom of the cylinder, $A$, and some distance from the central-axis of the cylinder, $B$. I then proceed to shine a laser, or thread a very thin string between two pinholes, $(p_1, p_2)$, such that the beam or the string is as close as possible to $C$. Here, the laser or the string can be treated as a one-dimensional chord in the interior of the cylinder. How do I choose $(p_1, p_2)$ to generate a line containing a coordinate $C^*$ as close as possible to my chosen coordinate $C$? In general, how well can I do as a function of the density of the pinhole array and the position of the coordinate in terms of $A$ and $B$? Pressing my luck, in terms of minimizing the (straight-line) difference between $C$ and $C^*$, are there better geometries for the pinholes than a rectangular array? Update: First of all, thanks to Joseph O'Rourke for the awesome graphic! Secondly, I would be very interested in an analysis of worst-case delta with excluded regions, say, at the top and bottom of the cylinder (as Gerhard Paseman suggested). Update 2: Joseph O'Rourke states a two-dimensional variant of this problem in Part 2 (P2) of his question "Chord arrangement that avoids confining small or large disks", (http://mathoverflow.net/questions/76980/chord-arrangement-that-avoids-confining-small-or-large-disks).
- Since Joseph O'Rourke was kind enough to provide one picture of your problem, you might ask him for five more: one where the array is based on diamonds instead of a rectangular grid, one using a hexagonal array, and then copies of each of these with the lines given a thickness of delta/4, where delta is the largest distance between any point inside the cylinder and its nearest line. Hopefully he can compute delta for some reasonable spacing of the N vertical points in the three configurations. Gerhard "Ask Me About System Design" Paseman, 2011.10.01 – Gerhard Paseman Oct 2 2011 at 4:51 1 Seems unlikely to have a closed-form solution. Even the 2-dimensional projection (find a diagonal of a regular $N$-gon nearest to a given interior point) leads to an exotic Diophantine problem (more-or-less finding the point $(x/N,y/N)$ nearest to a given transcendental curve). Where does this question arise, and what are typical sizes of $M$ and $N$? A lower bound, and possibly a reasonable approximation, for the typical minimal distance is the radius of cylinders about each string the sum of whose volumes is within a constant factor of the volume of the cylinder. – Noam D. Elkies Oct 2 2011 at 4:59 Thinking some more, it seems to me that the largest distance needed will be at the top or at the bottom of the cylinder: I see this by looking at the graph on 2n vertices mentioned in another comment. If the original poster is willing to exclude such regions, he or she may find delta quite small even for small values of n. Gerhard "Ask Me About System Design" Paseman, 2011.10.01 – Gerhard Paseman Oct 2 2011 at 5:12 I wonder if this problem is motivated by radiation therapy? – Joseph O'Rourke Oct 2 2011 at 13:11 As Noam suggests, this question seems interesting already in 2D. I've taken the liberty of posing a version (actually, two versions) separately.
– Joseph O'Rourke Oct 2 2011 at 15:05 ## 2 Answers This adds nothing to your interesting question, but I couldn't resist illustrating the network of lines ($n=8$, $m=4$): - 1 Actually it brings to mind a work of Poonen et al on the number and arrangement of diagonals of regular n-gons. I do not recall the exact reference, but something about there being at most 6 concurrent diagonals comes to mind, excepting the center. It might say how many interior regions and may give other clues to an answer. Perhaps someone else will provide the reference and more detail. Gerhard "Ask Me About System Design" Paseman , 2011.10.01 – Gerhard Paseman Oct 2 2011 at 4:32 Also, one can devise a coordinate system and break the problem into vertical (2n complete bipartite plus regular grid graph) and horizontal (m complete regular polygonal graph) components. Although I had the idea before, your picture nicely suggests such a break down of the problem. Gerhard "Likes To See Pretty Pictures" Paseman, 2011.10.01 – Gerhard Paseman Oct 2 2011 at 4:39

I shall outline an approach to answering some forms of this question, and leave verification and actual computations to others who are more energetic than I feel at present. The original presentation of points arranged in an n by m rectangular grid makes for a particularly pleasing analysis of the quantity delta, which I define as the maximum over all points p inside the cylinder of d(p) which is the minimum over all lines between two points in the pinhole grid of the distance between p and such a line.
Namely, when you project the point and configuration on the horizontal plane to get a point q in or near a regular n-gon, and choose the three or more nearest n-gon diagonals to the projected point, compute d_i which is q's distance for each of these, and then use these choices to look at the projection of p onto each of the vertical planes containing these lines to determine some distances d_j among the grid graphs with their diagonals. delta can then be figured or at least approximated using d_i and d_j, likely delta^2 = minimum of d_i^2 + d_j^2 over an appropriate set of choices for i and j. I expect the d_is for large n to be O(n^(-3/2)) times the radius of the cylinder and the d_js to be at most O(1/m) times the minimum of height and length of the smallest rectangle in the largest of the grid graph projections used, unless one is looking at the top or bottom region, in which case replace 1/m by 1. For some of the alternate geometries I suggested in a comment, one does not get as nice a decomposition, but they contain one or more copies of the originally suggested geometry, so one can get a rough value of delta using a sub configuration of pinholes, and then refine that estimate by using more pinholes as needed. An O(nm) way of doing that is by picking a pinhole, computing the interesting intersection of the line containing the interior point and the pinhole with the cylinder, and then finding the k closest pinholes to that intersection; doubtless there are refinements to this approach that will allow a speedier estimation of delta. In fact, this approach (peeking through each pinhole at the point p) suggests that maximal delta will be near the center of the cylinder and will still be a value something like 1/4 the minimum of the height and width of the bounding rectangle (the four pinholes one sees as being "nearest" to the image of the chosen point). 
The nice thing about this approach is that it can be used on arbitrary pinhole arrangements, and choosing a random selection of pinholes from which to view will often get a quick and good upper bound on d(p) for a given point p.
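To make the setup concrete, here is a small brute-force sketch (ours, not from the thread; the parameter choices are illustrative): place the $n \times m$ pinholes on the cylinder, then find the chord minimizing the point-to-line distance to a chosen interior point by checking all pairs, the $O((nm)^2)$ baseline that the viewing heuristics above try to beat.

```python
import math
from itertools import combinations

def pinhole_positions(n, m, radius, height):
    """n pinholes around the circumference, m rows from bottom to top (m >= 2)."""
    holes = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        for j in range(m):
            holes.append((radius * math.cos(theta),
                          radius * math.sin(theta),
                          height * j / (m - 1)))
    return holes

def point_line_distance(p, a, b):
    """Distance from p to the infinite line through a and b: |AP x AB| / |AB|."""
    ab = tuple(b[k] - a[k] for k in range(3))
    ap = tuple(p[k] - a[k] for k in range(3))
    cross = (ap[1] * ab[2] - ap[2] * ab[1],
             ap[2] * ab[0] - ap[0] * ab[2],
             ap[0] * ab[1] - ap[1] * ab[0])
    return math.sqrt(sum(c * c for c in cross)) / math.sqrt(sum(v * v for v in ab))

def best_chord(c, holes):
    """Brute-force search over all pinhole pairs for the chord nearest c."""
    return min(combinations(holes, 2),
               key=lambda pair: point_line_distance(c, pair[0], pair[1]))
```

For the centre of an $n=8$, $m=4$ cylinder this finds a chord passing (numerically) straight through the point, since diametrically opposite holes in adjacent rows bracket it.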
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191793203353882, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/31151/can-cannonballs-go-through-water
# Can cannonballs go through water?

In the recent Spielberg/Jackson Tintin movie, there is a scene where Red Rackham and Captain Haddock's ships are fighting, and cannons are fired. The cannonball is shown at one point to go through a wave, and inflict serious damage on the other ship. I know that bullets stop in water; do cannonballs, with their greater weight, continue with enough force to inflict damage? - I don't have the time to work out a complete answer, but generally, yes, the momentum of a cannonball would likely be sufficient to both pass through a wave and damage a ship. Of course, it depends on the relative sizes, but, think of it as the wave serving as a shield; the momentum would be reduced, but the ball likely wouldn't stop. – AdamRedwine Jul 2 '12 at 17:34 2 It also depends on the length of water the cannonball goes through and on the projectile's shape: bullets are far more aerodynamically shaped than cannonballs. – Emilio Pisanty Jul 3 '12 at 9:53 However, keep in mind that ship cannons were originally used to break the other ship's masts or damage its hull below the waterline in order to sink it. – Emilio Pisanty Jul 3 '12 at 9:54 ## 1 Answer What distance can a cannonball traverse through water without losing too much kinetic energy? For a back-of-the-envelope calculation we start from the observation that this distance scales with the ratio of the kinetic energy of the cannonball and the drag force exerted on the cannonball. Let's denote the ball's radius by $R$, its speed by $v$, and its mass density by $\rho_{ball}$. The kinetic energy $E_k$ equals $\frac 1 2 M v^2 = \frac{2 \pi}{3} \rho_{ball} R^3 v^2$. The drag force $F_d$ is given by $\frac 1 2 C_d \rho_{water} v^2 A = \frac {\pi}{2} C_d \rho_{water} v^2 R^2$. Here, $C_d$ denotes the drag coefficient for a sphere. The maximum distance $L_{max}$ that can be traversed by a cannonball $L_{max} = E_k/F_d$ is therefore $\frac 4 3 \frac {R}{C_d} \frac {\rho_{ball}}{\rho_{water}}$.
For typical values ( $\frac{\rho_{ball}}{\rho_{water}} < 8$ and $C_d > 0.1$, see here), we find $L_{max} < 100 R$. In other words, a cannonball loses much of its kinetic energy when it traverses a layer of water larger than about fifty times its diameter. - Thank you so much! – NiceOrc May 13 at 11:49
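The final estimate is easy to play with numerically. A sketch (the default density and drag numbers below are illustrative choices within the answer's stated bounds, not values from the source):

```python
def max_water_range(radius, rho_ball=7800.0, rho_water=1000.0, c_d=0.5):
    """L_max = E_k / F_d for a sphere; the speed v cancels out:

    E_k   = (2*pi/3) * rho_ball * R**3 * v**2
    F_d   = (pi/2) * c_d * rho_water * R**2 * v**2
    L_max = (4/3) * (R / c_d) * (rho_ball / rho_water)
    """
    return (4.0 / 3.0) * (radius / c_d) * (rho_ball / rho_water)

# A 10 cm radius iron ball with a typical sphere drag coefficient:
L = max_water_range(0.10)
print(L)   # about 2.1 m, i.e. roughly 20 radii -- well under the 100 R bound
```

Since $v$ cancels, the stopping distance depends only on the ball's size and the density ratio, which is what makes the $L_{max} < 100R$ bound so clean.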
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408155083656311, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/9329/solving-a-simple-recurrence
# Solving a simple recurrence

I'm having a real hard time solving recurrences using the substitution method. Show that: $T(n) = T(n/2) + 1$ is $O(\lg n)$ I thought this to be relatively easy: We have to show that $T(n) \leq c \lg n$ Substitution gives me: $\qquad \begin{align} T(n) &\leq c \lg(n/2) + 1 \\ &= c \lg n - c \lg 2 + 1 \\ &= c \lg n - c + 1 \\ &\leq c \lg n \end{align}$ for every c. I was under the impression this was it, but when I was looking for an answer, I came across a much more elaborate answer on the web involving subtracting a constant. I don't get why that's needed; I thought I had shown what was needed. Any help would be greatly appreciated, starting Monday I'm enrolled in an algorithms class and I don't want to get behind! We are using the CLRS book (surprise) and though I appreciate the amount of information in it, I'd rather have some more resources. I've really enjoyed a data structures class and I really think I can enjoy this as well, but more resources would be very much appreciated. - 1 We have a reference question with ample material about solving recurrences, in particular this answer. – Raphael♦ Jan 30 at 20:54 Your substitution proves nothing. You use the claim to derive the claim -- that's not very helpful. – Raphael♦ Jan 30 at 20:56 1 @Raphael This is proof by induction. – Yuval Filmus Jan 31 at 0:27 Your solution looks fine, though you'd better write it as an induction, i.e. $T(n) = T(n/2) + 1 \leq c\lg(n/2) + 1$ and so on. You also need to take care of the base case, and notice that you get a condition on $c$ (not all $c$ work). – Yuval Filmus Jan 31 at 0:28 1 @YuvalFilmus No, it's not. What is written there could be used as the inductive step, true. Given the level of the question, I would not assume that Oxymoron knows what happens there; the question even says "substitution method".
– Raphael♦ Jan 31 at 9:34

1 Answer

Luckily I had my algorithms exam two days ago, so I was able to solve your question :-) When solving recurrences, first try to use the Master method; if that doesn't succeed, try other methods. - – Raphael♦ Jan 31 at 9:36
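As a numerical sanity check (not a substitute for the induction, which still needs a base case and the condition $c \ge 1$ noted in the comments), the recurrence can be evaluated directly. With base case $T(1) = 1$ and integer halving, it gives $T(n) = \lfloor \lg n \rfloor + 1$, so $c = 2$ works for all $n \ge 2$:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = T(n/2) + 1 with integer halving and base case T(1) = 1."""
    if n <= 1:
        return 1
    return T(n // 2) + 1

# T(n) = floor(lg n) + 1, so T(n) <= 2 * lg n for every n >= 2.
for n in range(2, 4096):
    assert T(n) == math.floor(math.log2(n)) + 1
    assert T(n) <= 2 * math.log2(n)
```

The closed form also makes the $O(\lg n)$ claim visually obvious: each halving step adds exactly one to the count.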
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9707321524620056, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/209339-sign-sine-3d-rotation.html
# Thread:

1. ## Sign of sine in 3d rotation

(x,y,z) and (X,Y,Z) are two unit vectors. Both are perpendicular to a unit vector (u,v,w) and -1 < Y < 1. Is there any simple method to determine the sign of $\sin(\theta)$ (theta is the angle for rotating (X,Y,Z) about (u,v,w) into (x,y,z))? When I solve the equations for rotating (X,Y,Z) about (u,v,w), I always get a possible division by zero, so I need two conditional expressions. Couldn't there be a simpler method, like the sign of a projection or something like that?

2. ## Re: Sign of sine in 3d rotation

The scalar product of (x,y,z) and (X,Y,Z) gets you the cosine of the angle between them, doesn't it? From that you can deduce the sine. Or am I misreading the question?
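For what it's worth, the dot product fixes only $\cos\theta$, and hence $|\sin\theta|$, not its sign. One standard division-free way to get the signed sine (and so avoid the conditional cases the question mentions) is the scalar triple product $\sin\theta = ((X,Y,Z) \times (x,y,z)) \cdot (u,v,w)$, valid when both vectors are unit length and perpendicular to the unit axis. A sketch (this is our suggestion, not from the thread):

```python
def sin_cos_about_axis(a, b, axis):
    """(sin, cos) of the angle rotating unit vector a into unit vector b
    about the unit axis, assuming a and b are both perpendicular to axis.

    cos(theta) = a . b
    sin(theta) = (a x b) . axis   -- signed, with no division needed
    """
    cos_t = sum(ai * bi for ai, bi in zip(a, b))
    cross = (a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0])
    sin_t = sum(ci * ui for ci, ui in zip(cross, axis))
    return sin_t, cos_t
```

For example, rotating (1,0,0) into (0,1,0) about the axis (0,0,1) yields sin = +1, while rotating it into (0,-1,0) yields sin = -1, so the sign comes out automatically.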
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8879801034927368, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/97417/is-there-constructive-proof-of-the-fact-that-every-recursive-set-a-ne-varnoth/97675
# Is there constructive proof of the fact that every recursive set $A \ne \varnothing$ is recursively enumerable in non-decreasing order?

Every proof I've read of this fact considers two cases, $A$ finite and $A$ infinite, but this case distinction is undecidable. So, is there a constructive proof? - ## 2 Answers How about: $$f(n) = \cases{\max \{i\in A\mid i \le n\} & \text{if }n \ge \min A\\ \min A & \text{otherwise}}$$ where $f(0) \le f(1) \le f(2) \le f(3) \ldots$ enumerate $A$. This can be computed by counting down from $n$ until an element of $A$ is found. If we hit $0$, switch to counting upwards to find the minimum instead. - This was very recently asked on MathOverflow in this question. The answer is that, no, it cannot be proved constructively if phrased as "every nonempty decidable subset of $\mathbb{N}$", but the similar result "every inhabited decidable subset of $\mathbb{N}$ can be enumerated in nondecreasing order" can be proved constructively. (At the very least, it will come down to the exact definition of "constructively", and exactly what principles are allowable.) There are many delicate issues of this sort when we try to move from classical recursion theory to constructive mathematics. Separately, the classical result "every nonempty recursive set can be enumerated in nondecreasing order" can be proved in a completely uniform way; there is no need for cases. What I mean by this is that there is a computable function $f$ that takes an index $e$ that decides membership in the computable set and returns an index $f(e)$ of a non-decreasing enumeration of the set. The index $f(e)$ simply checks whether $\phi_e(0) = 1$, then whether $\phi_e(1) = 1$, then whether $\phi_e(2) = 1$, and so forth, and enumerates in order the $i$ such that $\phi_e(i) = 1$. This is not much different than the method given by Henning Makholm in his answer.
The fact that the classical result can be proved in this uniform way follows, actually, from the fact that the modified version is provable constructively in the way that it is. - How do you distinguish between "nonempty" and "inhabited" here? Is it something like $\neg\neg\exists x\in A$ versus $\exists x\in A$? – Henning Makholm Jan 9 '12 at 23:25 @Henning: Yes, that is exactly the difference. Constructivists tend to use "nonempty" to mean the former, and "inhabited" the latter. The difference between them is a special case of "Markov's principle". – Carl Mummert Jan 10 '12 at 1:37 By the way, funny how the asker at MO and the one here have identical gravatars... – Henning Makholm Jan 10 '12 at 15:40
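Henning Makholm's $f$ transcribes directly into code; a sketch in Python, with the decision procedure for $A$ passed in as a function `chi` (the code assumes $A$ is inhabited, since on the empty set the upward search would never halt):

```python
def f(n, chi):
    """The n-th term of a nondecreasing enumeration of the set A decided
    by chi (chi(i) is True iff i is in A).

    Counts down from n looking for an element of A, which yields
    max{i in A : i <= n}; if none is found by 0, counts upward to
    locate min A instead.  Assumes A is inhabited.
    """
    for i in range(n, -1, -1):
        if chi(i):
            return i            # max{i in A : i <= n}
    i = n + 1
    while not chi(i):
        i += 1
    return i                    # min A (here min A > n)

# f(0), f(1), f(2), ... enumerates A in nondecreasing order, with repeats.
A = {3, 5, 9}
seq = [f(n, lambda i: i in A) for n in range(12)]
print(seq)   # [3, 3, 3, 3, 3, 5, 5, 5, 5, 9, 9, 9]
```

The output is nondecreasing and every element of $A$ appears, which is exactly the enumeration the question asks for.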
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272498488426208, "perplexity_flag": "head"}