| column   | type   | min length | max length |
|----------|--------|------------|------------|
| url      | string | 17         | 172        |
| text     | string | 44         | 1.14M      |
| metadata | string | 820        | 832        |
http://physics.stackexchange.com/questions/7871/angular-momentum-in-string-theory
Angular momentum in string theory

Since strings are extended objects, is all angular momentum in string theory essentially "orbital" angular momentum? Or is there still a kind of intrinsic angular momentum assigned to a string? Either way, is there anything that prevents the "intrinsic spin" of a particle represented by a string from being arbitrarily large?

1 Answer

The orbital angular momentum of a string may be arbitrarily large. Whether it should be called "orbital" or "intrinsic" depends on the perspective. The right answer is given by a formula such as $$J_{ij} = \int d\sigma\, [x_i(\sigma) p_j(\sigma) - p_i(\sigma) x_j(\sigma) + \gamma_{ij}^{ab} \theta_a(\sigma) \theta_b(\sigma)],$$ where I have added a superstring term, too. The $xp$ terms may be viewed as a density of orbital angular momentum along the string; the fermionic term is its most direct fermionic generalization. However, both of these terms, and especially the latter, become "intrinsic angular momentum" when you expand the fields $x, p, \theta$ in Fourier modes and interpret the string as a particle with internal oscillations. In particular, the intrinsic spin-1/2 always comes from the quantization and/or excitations of the fermionic degrees of freedom. The formula says much more than the confusing dogmatic words "intrinsic" vs. "orbital", at least to a person who wants to understand the terms accurately enough, at the level of the mathematics.
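To see concretely how "orbital" becomes "intrinsic", one can quote the standard mode expansion for the bosonic piece. This decomposition is textbook material (it appears, e.g., in Green-Schwarz-Witten) and is added here as a supplement to the answer above, not part of it: substituting the oscillator expansions of $x$ and $p$ into the integral splits $J_{ij}$ into a center-of-mass part and an oscillator part,

$$J_{ij} = x_i p_j - x_j p_i - i \sum_{n=1}^{\infty} \frac{1}{n} \left( \alpha_{-n}^{\,i} \alpha_n^{\,j} - \alpha_{-n}^{\,j} \alpha_n^{\,i} \right),$$

where $x_i, p_i$ now denote the center-of-mass position and momentum and $\alpha_n^{\,i}$ are the oscillator modes; the infinite sum is precisely the part one would call "intrinsic".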
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236936569213867, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/42564/algorithm-for-weierstrass-preparation-theorem-for-formal-power-series
## Algorithm for Weierstrass Preparation Theorem for Formal Power Series

The Weierstrass preparation theorem for formal power series rings guarantees that if a given formal series $f(z) = \sum a_k z^k \in R[[z]]$, where $R$ is a complete local ring with maximal ideal $M$, has $a_k \in M$ for $k < n$ and $a_n \in R^* = M^c$, then $$f = (z^n + b_{n-1}z^{n-1} + \cdots + b_0)u$$ where $b_k \in M$ and $u$ is a unit in $R[[z]]$. I need an explicit algorithm for calculating this Weierstrass polynomial (or distinguished polynomial) for a given $f$. In my case the coefficient ring is $R = \mathbb Z_3[[x]]$, formal power series over the 3-adics. So any algorithm would have to be robust enough to handle these coefficients. Does anyone know of such an algorithm for a math software package?

Just a comment, since I don't know and don't have the time to check: maybe you want to look at the documentation of the constructor WEIER in FriCAS to see whether this is what you want. If so, the file weier.spad.pamphlet in the FriCAS distribution (fricas.sourceforge.net) contains the source. Don't hesitate to ask on fricas-devel@googlegroups.com – Martin Rubey Oct 18 2010 at 7:22

I don't think either MAGMA or SAGE have such functionality built in. But it should be trivial to code in any package which handles power series with not-necessarily-field coefficients. Look at Manin's proof in Ch. 5, Sec. 2 of Lang's "Cyclotomic Fields" (p. 130 of the combined edition), which gives an explicit formula. – Tony Scholl Oct 18 2010 at 14:23

It seems that Manin's method suggests an algorithm to approximate the unit power series (that I called "$u$" above) to any finite degree of precision. But being interested in the polynomial, I would need to calculate $u$ exactly so that I could find $f \cdot u^{-1}$. The package WEIER seems to only work for field coefficients, though I had trouble executing it. Entreaties to the FriCAS development group haven't produced a response. – R. Nendorf Nov 8 2010 at 18:37

## 2 Answers

If you're still interested in the answer to this... I also needed an explicit algorithm for calculating associated Weierstrass polynomials and provide two such algorithms in http://arxiv.org/abs/1107.4860v2, Algorithms 5.2 and 5.4. The first one is the simplest and is based on Manin's method and a result by Sumida.

Here's an algorithm that I use. Let's call $S$ the degree-$n$ shift operation, sending $\sum c_k z^k$ to $\sum c_{n+k} z^k$; in other words, the quotient when you divide a power series by $z^n$. Step 0: divide $f$ by $Sf$, giving you a power series $f_1$ such that $Sf_1 \equiv 1$ modulo $M$. Step $i$, for $i > 0$: repeat, dividing $f_i$ by $Sf_i$. At each stage, you get a power series $f_i$ for which $Sf_i \equiv 1$ modulo $M^i$. For a quicker variant of Step $i$ (for $i > 0$), instead multiply by $2 - Sf_i$. It works because you've constructed a convergent infinite product.

Dear Prof. Lubin, I would also like to thank you for this answer. I am now using this in a small computational project I'm working on. Though, I was wondering, in the "quicker variant", is the rate of convergence faster, or is it just quicker because multiplication is quicker than division? – Dror Speiser Mar 8 2012 at 0:29
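Lubin's iteration is easy to prototype. The following Python sketch is mine, not from the thread: to keep it short it takes $R = \mathbb Z_3$ truncated mod $3^6$ instead of the questioner's $R = \mathbb Z_3[[x]]$ (the same loop works once coefficient arithmetic is replaced by truncated power-series arithmetic), and the input $f = (3 + 3z + z^2)(1 + z)$ is rigged so that the expected distinguished polynomial $3 + 3z + z^2$ is known in advance.

```python
# Sketch of Lubin's algorithm over R = Z_3 truncated mod 3^PREC,
# with series truncated at degree DEG (both are assumptions of this demo).
p, PREC, DEG = 3, 6, 24
MOD = p ** PREC

def mul(a, b):                     # truncated product of two series
    c = [0] * DEG
    for i, ai in enumerate(a):
        for j, bj in enumerate(b[:DEG - i]):
            c[i + j] = (c[i + j] + ai * bj) % MOD
    return c

def inv(a):                        # series inverse; requires a[0] to be a unit
    b = [0] * DEG
    b[0] = pow(a[0], -1, MOD)
    for k in range(1, DEG):
        s = sum(a[j] * b[k - j] for j in range(1, k + 1))
        b[k] = (-b[0] * s) % MOD
    return b

def shift(a, n):                   # S: sum c_k z^k  ->  sum c_{n+k} z^k
    return a[n:] + [0] * n

f = mul([3, 3, 1] + [0] * (DEG - 3), [1, 1] + [0] * (DEG - 2))
n = 2                              # a_0, a_1 lie in M = (3); a_2 is a unit

g = mul(f, inv(shift(f, n)))       # Step 0: now S g = 1 (mod 3)
for _ in range(8):                 # quicker variant: g <- g * (2 - S g)
    t = [(-c) % MOD for c in shift(g, n)]
    t[0] = (t[0] + 2) % MOD
    g = mul(g, t)

print(g[:n + 1])                   # [3, 3, 1]: b_0, b_1, and the leading 1
```

Each pass gains at least one power of $M$ (the error $e = Sg - 1$ is replaced by $-e^2 - S(qe)$, where $q$ is the low-degree part of $g$), so a handful of iterations suffice at this precision. Empirically, both the multiplicative variant and the division variant gain one power of $M$ per step on this example, suggesting, as Dror Speiser guessed, that the "quicker" variant wins because multiplication is cheaper than division, not because of a faster convergence rate; this is an observation about the sketch, not a claim from the thread.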
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342930316925049, "perplexity_flag": "middle"}
http://nrich.maths.org/2074/index
### Be Reasonable

Prove that $\sqrt2$, $\sqrt3$ and $\sqrt5$ cannot be terms of ANY arithmetic progression.

### Janusz Asked

In $y = ax + b$, when are $a$, $-b/a$, $b$ in arithmetic progression? The polynomial $y = ax^2 + bx + c$ has roots $r_1$ and $r_2$. Can $a$, $r_1$, $b$, $r_2$ and $c$ be in arithmetic progression?

### Summats Clear

Find the sum, $f(n)$, of the first $n$ terms of the sequence: 0, 1, 1, 2, 2, 3, 3, ..., $p$, $p$, $p+1$, $p+1$, ... Prove that $f(a + b) - f(a - b) = ab$.

# Polite Numbers

##### Stage: 5 Challenge Level:

A polite number is a number which can be written as the sum of two or more consecutive positive integers. Find the two consecutive sums which produce the polite numbers $544$ and $424$. How would you represent these sums using a number line? Use this visualisation approach to decide which consecutive sums would give rise to the polite numbers $1000$ and $1001$. Do these numbers arise as more than one consecutive sum? How do these numbers relate to the formula for the sum of an arithmetical progression? Can you find any numbers which are not polite? There is actually a rather simple rule which determines whether a given number is polite. Can you find this rule? Can you prove that this is the case?
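Before hunting for the rule, it can help to experiment. The following brute-force search is an illustration added here, not part of the NRICH page: it lists every way to write $n$ as a sum of two or more consecutive positive integers, using the arithmetic-progression formula (a run of length $k$ starting at $a$ sums to $ka + k(k-1)/2$).

```python
# Enumerate runs of k >= 2 consecutive positive integers summing to n.
# A run of length k starting at a sums to k*a + k*(k-1)/2.
def consecutive_sums(n):
    runs = []
    k = 2
    while k * (k + 1) // 2 <= n:        # smallest k-run is 1 + 2 + ... + k
        rem = n - k * (k - 1) // 2
        if rem > 0 and rem % k == 0:    # need a = rem / k to be a positive integer
            a = rem // k
            runs.append((a, a + k - 1))
        k += 1
    return runs

for n in (544, 424, 1000, 1001, 64):
    print(n, consecutive_sums(n))       # e.g. 544 -> [(24, 40)]; 64 -> []
```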
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9040090441703796, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/104438/list
## Return to Answer

5 added 222 characters in body

It is a consequence of the generalized Sato-Tate conjecture that, given a non-CM elliptic curve over $\mathbb Q$, any element of $Gal(\bar{\mathbb Q}/\mathbb Q)$, and a real number in $[-1,1]$, one can construct an ultrafilter on the primes such that Frobenius converges to that element and the angle of Frobenius converges to that real number. Thus any attempt to answer this question must somehow make use of the transcendentals. I have no idea how one might do that.

EDIT: By ACL's answer to my question, the nonstandard angle of Frobenius is totally independent from all first-order statements about the nonstandard elliptic curve.

4 deleted 916 characters in body

[Same text as revision 5, except that in place of the final EDIT it ends: "(I had a different answer earlier, but this comment is a better one.)"]

3 added 449 characters in body

The trace of Frobenius of an elliptic curve and the number of points over the finite field it is defined over are both integers, $a_p$ and $p$, satisfying the relation $a_p^2\leq 4p$. It seems to me that a nonstandard angle should come from a pair of nonstandard numbers, $a_u$ and $u$, satisfying $a_u^2\leq 4u$. Then, with the natural map from nonstandard integers to nonstandard reals, $a_u/2\sqrt{u}$ would be a nonstandard real number in the interval $[-1,1]$, thus a real number, and the $\cos$ of some real number. In particular you're working over an ultrafilter over the set of primes, so you know what you want $u$ to be already. So the question is for ways to find nonstandard numbers $a_u$ satisfying this inequality.

The Tate module allows you to compute, from an element $\sigma \in Gal(\bar{\mathbb Q}/\mathbb Q)$, an element of $\hat{\mathbb Z}$, the trace, which, if $\sigma=Frob_p$, is equal to $a_p$ everywhere except the $\mathbb Z_p$ component. Since $\hat{\mathbb Z}$ is a compactification of the integers, there is a map to it from any notion of nonstandard integers. Thus any notion that sends automorphisms of $\bar{\mathbb Q}$, or $\mathbb C$, to angles or traces of Frobenius should probably form a commutative diagram with the trace of the Tate module and that map. The reason this cannot serve as a definition on its own is that, as far as I know, this map would not be injective, nor would the traces need to satisfy the relevant inequality.

2 added 540 characters in body

[Same first paragraph as revision 3, followed instead by: "Do you want different ways to compute the same number from an elliptic curve, or entirely different ways to compute the number? Do you want to compute it from an elliptic curve over an ultra-finite field? I'm not sure how an automorphism of the complex numbers relates to this."]

1

The Galois group $Gal(\bar{\mathbb Q}/\mathbb Q)$ is a profinite group, thus compact. For each $p$, take a representative of $Frob_p$ in that group. The ultrafilter produces an element $Frob_u$, whose trace in the $l$-adic Galois representation at a smooth prime $l$ (a continuous representation) is some number $a_u$, which we can express as $2 \sqrt{u} \cdot \cos (\theta_u)$, with $u$ and $\theta_u$ both $l$-adic numbers. However these are $l$-adic versions, not real.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9146413803100586, "perplexity_flag": "head"}
http://mathoverflow.net/questions/99777/does-x-embed-in-y-and-y-embed-in-x-always-imply-that-x-isomorphic-on
## Do "$X$ embeds in $Y$" and "$Y$ embeds in $X$" always imply that $X$ is isomorphic to $Y$?

Let $X$ and $Y$ be Banach spaces. The notion of embedded spaces was introduced by D.S. Djordjevic. Say that $X$ embeds in $Y$, and write $X \preceq Y$, if there exists a left invertible operator $J:X \rightarrow Y$. My question is the following: if $X$ embeds in $Y$, and $Y$ embeds in $X$, is $X$ isomorphic to $Y$? PS: The answer is positive when $X$ and $Y$ are Hilbert spaces. But for general Banach spaces I cannot find a solution. Does there exist a counterexample?

The answers below show that this is false, but it is perhaps worth mentioning that, on the positive side, there are versions under additional conditions which are correct, and these are frequently used in the isomorphic theory of Banach spaces. They go under the collective name of "Pelczynski decomposition method". – jbc Apr 4 at 6:45

## 2 Answers

This is called the Schroeder-Bernstein problem, and for Banach spaces there are constructions of nonisomorphic Banach spaces which embed into each other. W. T. Gowers, "A Solution to the Schroeder-Bernstein Problem for Banach Spaces", Bull. London Math. Soc. (1996) 28 (3): 297-304.

4 There is a noteworthy special case of this problem that is still open: if $X$ and $X^{\ast \ast}$ are isomorphic to complemented subspaces of one another, are they in fact isomorphic? – Philip Brooker Jun 16 at 13:31

Thanks Douglas for reminding me of the Schroeder-Bernstein problem. – Qingping Zeng Jun 17 at 3:11

@Philip: I did not know this was open and I'm surprised it is not obviously true. I suppose one might construct a counterexample using a non-reflexive (so-called Jamesification, coined by Spiros) version of Gowers' space. Also, it's at least possible that Spiros has already constructed a counterexample, perhaps without knowing, in one of his papers. Where did you see this question? – Kevin Beanland Jun 18 at 0:17

2 @Kevin: I first came across the question in a paper of Plichko and Wojtowicz, Note on a Banach space having equal linear dimension with its second dual, Extracta Mathematica 18(3) (2003), p. 311-314 (in particular, see the final remark of the paper). I wrote to one of the authors and also to Galego a couple of years ago enquiring as to the status of the problem, and at that time it was still open as far as they knew. More generally, I think it is open whether there exists an integer $n\geq 3$ and a Banach space $X$ such that $X$ is isomorphic to its $n$th dual but not to its $j$th dual – Philip Brooker Jun 18 at 12:47

2 for any $1\leq j \leq n-1$; if such $n$ and $X$ do exist, then $X$ and $X^{\ast\ast}$ are nonisomorphic Banach spaces that are isomorphic to complemented subspaces of one another. – Philip Brooker Jun 18 at 12:48

The Schroeder-Bernstein problem asked about complemented subspaces. It is a much stronger property for a subspace to be complemented. The only spaces in which every subspace is complemented are isomorphic to Hilbert space (Lindenstrauss and Pelczynski). Also, Hilbert space (and its isomorphs) is the only space isomorphic to all of its subspaces. Therefore, if you consider any space which is not isomorphic to Hilbert space but embeds into its subspaces
(e.g. $\ell_p$ for $p \not= 2$; in general, these are called minimal spaces), any non-isomorphic subspace gives a counterexample. Edit: I guess I was confused by the term left invertible. I'll leave my answer up in case someone doesn't know how to do it in this weaker case.

2 @Kevin: The OP's terminology is confusing and, I think, differs from standard Banach space terminology; what he calls an embedding is what I think you or I would call a complemented embedding. In particular, I presume that, in the OP's terminology, a left inverse for $J$ would be a map $K: Y\longrightarrow X$ such that $KJ$ is the identity on $X$, hence $JK$ is an idempotent operator on $Y$ with range isomorphic to $X$. – Philip Brooker Jun 16 at 23:12

Thanks for clarifying. I should have assumed others would have corrected! – Kevin Beanland Jun 16 at 23:16

2 On another note, I'm a bit surprised at how popular this question (and answer) is, given that the solution is simply to quote a major theorem in Banach spaces that (as it happens) was part of the work that won Gowers the Fields. I suppose Banach space theory needs better PR. – Kevin Beanland Jun 16 at 23:31

2 @Kevin, as someone who still gets upvotes for an answer to one question which consisted solely of pointing to work of Knutson and Tao ... so it goes – Yemon Choi Jun 17 at 0:15

1 @Kevin: The theorem you cite about complemented subspaces is not by Lindenstrauss and Pelczynski, but by Lindenstrauss and Tzafriri. Precisely in Lindenstrauss, J. and Tzafriri, L., On the complemented subspaces problem, Israel J. Math. 9 (2) (1971), 263-269. – Valerio Capraro Jun 17 at 5:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9410485625267029, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/82654/derive-mean-and-variance-from-equation
# Derive mean and variance from equation

I am given a simplified one-dimensional Fokker-Planck equation: $\psi(p,t)=\frac{1}{\sqrt{2\pi vt}}\exp(-\frac{p^2}{2vt})$. My thoughts: OK, this looks pretty similar to the Gaussian distribution $f(x)=\frac{1}{\sqrt{2\pi \sigma^2}}\exp(-\frac{(x-\mu)^2}{2\sigma^2})$. Obviously there are parallels... so is $\sigma^2 = vt$ and $\mu = 0$? How do I derive this mathematically?

You are right. Your solution $\psi(p,t)$ takes the same form as that of a Gaussian pdf where the Gaussian has mean $\mu = 0$ and variance $\sigma^2 = vt$. Reaching this conclusion by comparing the Gaussian pdf to your equation is perfectly valid and rigorous mathematically, and there is nothing more you need to "derive" to justify this. – Dinesh Nov 16 '11 at 11:13

For what it's worth, this is the distribution of particle momenta $p$ in a one-dimensional ideal gas with temperature $t$ and particle masses $m=v/k$, where $k$ is the Boltzmann constant $k=1.38\times10^{-23}$ joules/kelvin. – Chris Taylor Nov 16 '11 at 12:15

Beyond saying $\sigma^2 = vt$ and $\mu = 0$, what is there to derive? – Michael Hardy Nov 16 '11 at 14:29
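If one does want to verify the identification by direct integration rather than by pattern matching, the moments can be checked symbolically. A minimal sketch (an added illustration, not from the thread; assumes SymPy is available):

```python
# Check that psi(p, t) integrates to 1, has mean 0, and variance v*t.
import sympy as sp

p = sp.Symbol('p', real=True)
v, t = sp.symbols('v t', positive=True)
psi = sp.exp(-p**2 / (2*v*t)) / sp.sqrt(2*sp.pi*v*t)

print(sp.integrate(psi, (p, -sp.oo, sp.oo)))          # 1   (normalisation)
print(sp.integrate(p * psi, (p, -sp.oo, sp.oo)))      # 0   (mean)
print(sp.integrate(p**2 * psi, (p, -sp.oo, sp.oo)))   # t*v (variance, since mean is 0)
```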
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9249824285507202, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/87279/higher-homotopy-groups-of-slice-disk-complement
Higher homotopy groups of slice disk complement

Let $K \subseteq \mathbb{S}^3=\partial\mathbb{D}^4$ be a non-trivial slice knot, i.e. $K$ bounds a slice disk $\Delta$ in $\mathbb{D}^4$. Let $N(\Delta)$ be a regular neighborhood of $\Delta$ in $\mathbb{D}^4$. What is known about $\pi_i(\mathbb{D}^4-N(\Delta))$, for $i\geq2$? For what it's worth, $\pi_1(\mathbb{D}^4-N(\Delta))$ is known to be normally generated by the meridian of $K$. The knot complement $\mathbb{S}^3 - K$ and $\partial(\mathbb{D}^4-N(\Delta))=M_K$ (the zero-framed surgery on the knot $K$) are both known to be aspherical.

I don't think the non-triviality of $K$ has much impact on your question. For example, connect-sum any slice disc with a $2$-knot such as a Cappell-Shaneson knot. – Ryan Budney Feb 1 2012 at 23:54

Ah, I wanted $K$ to be non-trivial since I believe that $\mathbb{S}^3-K$ is aspherical only for non-trivial knots. – Aru Ray Feb 2 2012 at 2:00

You can also do a boundary connect-sum of two slice discs; that would give the connect sum of the two slice knots on the boundary. – Ryan Budney Feb 2 2012 at 2:26

1 @Arunima Ray: If $K$ is trivial, its complement is an open solid torus, which is aspherical. – Richard Kent Feb 2 2012 at 2:54

Oops, my bad, I meant that the zero surgery on the knot, $M_K$, is not aspherical for a trivial knot; if I've thought this through correctly, the zero surgery on the trivial knot is $\mathbb{S}^2\times\mathbb{S}^1$. – Aru Ray Feb 2 2012 at 3:17

2 Answers

The homotopy groups can be pretty big things. For example, your $\mathbb{D}^4 - N(\Delta)$ class of spaces contains the class of all $2$-knot complements -- simply remove from $S^4$ a $4$-ball neighbourhood that intersects the $2$-knot in an unknotted disc. $2$-knot complements have fairly complicated homotopy groups. For example, Cappell-Shaneson knot complements fiber over $S^1$ with fiber a once-punctured $(S^1)^3$. The universal cover of this space is $\mathbb R \times (\mathbb R^3 - \mathbb Z^3)$, so by the Hilton-Milnor theorem, rationally the homotopy groups are a free Lie algebra with countably-infinitely many generators (up to the action of $\pi_1$ there's just one generator, though). Off the top of my head I don't know if slice disc groups are any more general than knot groups; they're probably not very far from each other. I think Kawauchi may not cover this but the references in his survey book should mention something.

To add to Ryan's answer, $2$-knots usually don't have aspherical complements; see Dyer & Vasquez, The sphericity of higher dimensional knots, Canad. J. Math. 25 (1973), 1132-1136. This suggests a complicated answer for slice disks in general. On the other hand, if the disk is ribbon, then the complement is aspherical; see Asano, Marumoto, and Yanagawa, Ribbon knots and ribbon disks, Osaka J. Math. 18 (1981), 161-174.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228883981704712, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/102799-easy-trig-question-print.html
# easy trig question

• September 17th 2009, 06:57 AM Raj
How would I solve questions such as $\sin(x)=\frac{1}{2}$ or $\cos(3x)=-\frac{\sqrt{3}}{2}$ without using a calculator? Thanks in advance.

• September 17th 2009, 07:16 AM mathaddict
Quote: Originally Posted by Raj: How would I solve questions such as $\sin(x)=\frac{1}{2}$ or $\cos(3x)=-\frac{\sqrt{3}}{2}$ without using a calculator? Thanks in advance.
$\sin x =-\frac{1}{2}$: first of all, the reference angle is 30 degrees. But sin is negative, so it would be in the 3rd and 4th quadrants, i.e. x = 210, 330. For $\cos 3x=-\frac{\sqrt{3}}{2}$: the reference angle is 30 degrees, and since cos is negative, it will be in the 2nd and 3rd quadrants, i.e. 3x = 150, 210, 510, 570, 870, 930, so x = 50, 70, 170, 190, 290, 310. Assuming that 0 < x < 360, then 0 < 3x < 1080.

• September 17th 2009, 07:20 AM Raj
Thanks, but I really have no idea how you got the reference angle.

• September 17th 2009, 07:24 AM mathaddict
Let's say for sin x = -1/2: we just ignore the negative sign first, i.e. sin x = 1/2. So by recalling the special angles, x = 30 degrees. From here we handle the negative sign by deciding which quadrant it should be in. Another example: cos x = -0.445. Same thing: ignore the sign first, then use the calculator to find the reference angle (63.58); then, since it's negative, cos will be in the 2nd and 3rd quadrants. Clear?

• September 17th 2009, 07:27 AM Grandad
Hello Raj. The answer is: using a combination of experience - you simply recognise certain trig ratios - and technique. For instance, you'll need experience - and the ability to memorise certain facts - to know that $\sin\tfrac{\pi}{6} = \tfrac12$ and that $\cos\tfrac{\pi}{6} = \tfrac{\sqrt3}{2}$. Then you need a technique that will enable you to handle negative signs: for instance, how to use the fact that $\cos\tfrac{\pi}{6} = \tfrac{\sqrt3}{2}$ to enable you to find an angle whose cosine is $-\tfrac{\sqrt3}{2}$. (One such angle is $\tfrac{5\pi}{6}$.) Then you'll need a technique that will enable you to find other angles with the same sine or cosine. For example, the fact that $\sin\tfrac{\pi}{6} = \tfrac12$ means that the sine of all these angles will also equal one-half: $\tfrac{5\pi}{6},\tfrac{13\pi}{6},\tfrac{17\pi}{6},\ldots$ Finally, you'll need a technique that will enable you to deal with multiple angles. For example, if $\sin3x = \tfrac12$, we've just seen that the possible values of $3x$ are $\tfrac{\pi}{6},\tfrac{5\pi}{6},\tfrac{13\pi}{6},\tfrac{17\pi}{6},\ldots$ So we'd divide each of these by $3$ to get the values of $x$: $\tfrac{\pi}{18},\tfrac{5\pi}{18},\tfrac{13\pi}{18},\tfrac{17\pi}{18},\ldots$ Practice makes perfect! Grandad

• September 17th 2009, 09:04 AM Raj
@mathaddict Ah special angles, I totally forgot. Thanks.
Quote: Originally Posted by Grandad: The answer is: using a combination of experience - you simply recognise certain trig ratios - and technique. For instance, you'll need experience - and the ability to memorise certain facts - to know that $\sin\tfrac{\pi}{6} = \tfrac12$ and that $\cos\tfrac{\pi}{6} = \tfrac{\sqrt3}{2}$.
Thank you, this clears things up. Technique (check), experience (lacking) :p
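As a quick numerical sanity check of the solution list above (an added illustration, not part of the original thread), one can verify that each listed $x$ indeed gives $\cos(3x) = -\sqrt{3}/2$:

```python
# Spot-check: cos(3x) should be -sqrt(3)/2 ~ -0.866025 for each listed x (degrees).
import math

for x in (50, 70, 170, 190, 290, 310):
    print(x, round(math.cos(math.radians(3 * x)), 6))
```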
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8827047944068909, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/176148-geoemtric-interpretation-norm-bounded-linear-functional-print.html
# Geometric interpretation of the norm of a bounded linear functional

• March 28th 2011, 11:38 PM raed
Dear Colleagues, show that the norm $\|f\|$ of a bounded linear functional $f\neq 0$ on a normed space $X$ can be interpreted geometrically as the reciprocal of the distance $d=\inf\{\|x\| \mid f(x)=1\}$ of the hyperplane $H_{1}=\{x\in X \mid f(x)=1\}$ from the origin. Regards, Raed.

• April 7th 2011, 06:19 PM mr fantastic
Quote: Originally Posted by raed: Show that the norm $\|f\|$ of a bounded linear functional $f\neq 0$ on a normed space $X$ can be interpreted geometrically as the reciprocal of the distance $d=\inf\{\|x\| \mid f(x)=1\}$ of the hyperplane $H_{1}=\{x\in X \mid f(x)=1\}$ from the origin.
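The thread breaks off before a posted solution; here is a sketch of the standard two-inequality argument (an editorial addition, not from the thread). For every $x \in H_1$, $1 = |f(x)| \le \|f\|\,\|x\|$, so $\|x\| \ge 1/\|f\|$; taking the infimum over $H_1$ gives $d \ge 1/\|f\|$. Conversely, if $f(x) \ne 0$ then $x/f(x) \in H_1$, so $d \le \|x\|/|f(x)|$, i.e. $|f(x)| \le \|x\|/d$ (trivially true also when $f(x) = 0$); taking the supremum over $\|x\| \le 1$ gives $\|f\| \le 1/d$. Hence $\|f\| = 1/d$.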
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8736452460289001, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27080/direct-sum-of-anyons?answertab=votes
# direct sum of anyons?

In the topological phase of a fractional quantum Hall fluid, the excitations of the ground state (quasiparticles) are anyons, at least conjecturally. There is then supposed to be a braided fusion category whose irreducible objects are in 1-1 correspondence with the various types of elementary quasiparticles. The tensor product of objects has an obvious physical meaning: it's the operation of colliding (fusing) quasiparticles...

... but what about direct sum?

• The tensor product of two irreducible objects might be a direct sum of irreducible ones: what does this mean physically in terms of the outcome of a collision of quasiparticles?
• Let $X$ be an irreducible object of the fusion category. Is there any physical difference between (the physical states corresponding to) $X$ and $X\oplus X$?

## 4 Answers

The simple objects in the braided fusion category correspond to the possible particle types. In the simplest important example there are two particle types 1 and $\phi$. (Well, 1 is the vacuum, so it's a slightly odd sort of particle type.) The non-simple objects don't have any intrinsic physical meaning; $\phi \oplus \phi$ just means any system "that can be a single particle but in two different ways", but makes no claims about what those two different ways are. The tensor product of simple objects does have an intrinsic meaning: it means looking at a system with several particles in it. Since the underlying category only has finitely many simple objects, any time you have a multi-particle system you can break up the Hilbert space as a direct sum of states where you've fused them all together into a single particle (either 1 or $\phi$). For example, since $\phi \otimes \phi \otimes \phi \cong \phi \oplus \phi \oplus 1$, the Hilbert space for the 3-particle system is 3-dimensional, and splits up into a two-dimensional space of things that behave like a single particle (this is the $\phi \oplus \phi$ part) and a one-dimensional space of things that behave like the vacuum (this is the 1 part). In this case $\phi \oplus \phi$ has a physical meaning imbued by virtue of its appearing as a summand of $\phi^{\otimes 3}$, but other appearances of $\phi \oplus \phi$ inside other tensor products have different physical meanings. In general, the Hilbert space assigned to the system of $k$ particles $X_{a_1} \otimes X_{a_2} \otimes \ldots \otimes X_{a_k}$ is the direct sum over all particle types $X_i$ $$\bigoplus_{X_i} \mathrm{Hom}(X_{a_1} \otimes X_{a_2} \otimes \ldots \otimes X_{a_k}, X_i).$$

From our discussions in the comments on my answer, the Hilbert space of a system with $n$ identical $X$-type particles is not $\mathrm{End}(X^{\otimes n})$ but $\mathrm{Hom}(X^{\otimes n}, I)$. – Peter Shor Oct 16 '11 at 16:28

I totally rewrote the answer based on the above discussion. Hopefully it's less wrong now. – Noah Snyder Oct 16 '11 at 16:30

@Peter Just a minor detail. Not all anyons are self-dual and thus $\text{Hom}(X^{\otimes n},I)$ can be trivial. For fusion rules $X\otimes X = \bigoplus_Y Y$, the full Hilbert space for $n$ anyons of type $X$ is given by $V_{n} = \bigoplus_Y\text{Hom}(X^{\otimes n},Y)$. – Heidar Oct 16 '11 at 16:33

Damn, I'm too slow. – Heidar Oct 16 '11 at 16:34

@Heidar You didn't take your nitpick far enough, as $X^{\otimes n}$ can also contain summands which don't appear in $X \otimes X$. I think the formula in my answer is right: you want to sum over all particle types.
– Noah Snyder Oct 16 '11 at 16:37

This was originally a comment on Joe's excellent answer, but it got too long. I'm trying to address the question of what φ ⊕ φ means. Suppose you look at the equation φ ⊗ φ ⊗ φ = φ ⊕ φ ⊕ I. What this says is that when you fuse three φ particles, there are two different ways of producing φ, and one way of producing I. The two ways are (a) and (b) below; the one way to produce I is (c):

(a) fuse φ ⊗ φ to get I, and then fuse φ ⊗ I to get φ;
(b) fuse φ ⊗ φ to get φ, and then fuse φ ⊗ φ to get φ;
(c) fuse φ ⊗ φ to get φ, and then fuse φ ⊗ φ to get I.

These three states are orthogonal, and you can take them to be basis states of the Hilbert space φ ⊗ φ ⊗ φ. When counting these different ways, you have to keep the order in which you fuse the particles fixed. If you want to change this order, you have to apply what physicists call the F matrix (possibly repeatedly) to make this basis change. One way of thinking about this is that the tensor product corresponds to the joint state of two systems, and the direct sum to the superposition of states. When you fuse particles, you do a measurement. The above equation implies that if you have three Fibonacci anyons, their Hilbert space breaks up into two sectors. In one of these (the two-dimensional one), when you fuse all three anyons, you'll get a Fibonacci anyon. In the other (the one-dimensional one), when you fuse all three anyons, you'll get the vacuum state. What you get when you fuse all three anyons does not depend on the order in which you do it (unless you braid these anyons with other anyons before fusing them, which is how you do quantum computation with anyons).

+1: A far clearer answer than mine. – Joe Fitzsimons Oct 16 '11 at 13:16

This is again a very nice answer indeed and it's getting closer to what I'm looking for. But I still have the feeling that you are more explaining what φ ⊗ φ ⊗ φ means, and not so much what φ ⊕ φ means. I'm looking for something of the form: "If you have a physical system whose Hilbert space is φ ⊕ φ, then you can do the following measurement and you'll get result A, whereas if you have a physical system whose Hilbert space is φ and you do that same measurement you'll get result B". – André Oct 16 '11 at 14:01

2 @André: That's not what the notation means. $\phi + \phi$ is not adding two vectors, but rather creating a 2D vector space from 2 1D vector spaces. – Joe Fitzsimons Oct 16 '11 at 14:38

1 @André: the resulting space $\phi\oplus \phi$ can be thought of as an internal, but non-local, part of the Hilbert space. Certain operations will induce internal rotations in this extra space. That means we can have state vectors in this internal space which are orthogonal. Since probabilities are related to squared amplitudes, this multidimensionality of the Hilbert space will be important when you look at interference-like experiments (much like the double-slit experiment). – Olaf Oct 16 '11 at 15:11

1 The difference with ordinary spin states is that these topological Hilbert spaces are not a localized property of one particle. The internal space is like a topological part of the Hilbert space, for lack of a better phrasing. The state of the internal space cannot be determined with a local measurement, but requires some sort of non-local process, which is usually braiding. – Olaf Oct 16 '11 at 15:13

There is a very nice set of lecture notes on the subject by Jiannis Pachos here.
(see specifically section 1.3 on fusion and braiding properties). As regards the first question, the tensor product and direct sum are basically different ways of divvying up the Hilbert space (see John Baez's illuminating discussion here). When you have a relation like $\phi \otimes \phi = \mathbb{I} \oplus \phi$ (as for Fibonacci anyons), what this is saying is that when two anyons fuse they create either the vacuum or a single anyon. Physically, the direct sum is basically enumerating possibilities, whereas the tensor product is basically describing the single possibility for a system composed of several subsystems. So an equation like this is saying that fusing two anyons produces either a single anyon or the vacuum state. As regards the second question, since the direct sum is constructing the Hilbert space by combining the Hilbert spaces of the arguments, $\phi+\phi$ is not the same as $\phi$, but rather is a larger Hilbert space of single anyons. You may want to look at page 17 of the Fibonacci link. You will notice that $\phi\otimes\phi\otimes\phi = \mathbb{I} \oplus \phi \oplus \phi$, which is a 3-dimensional Hilbert space, whereas $\phi\otimes\phi=\mathbb{I} \oplus \phi$, which is a 2-dimensional Hilbert space.

+1 A much more precise and cleaner answer than mine. – Heidar Oct 16 '11 at 2:01

2 For physical relevance, let me mention that the Fibonacci anyons are conjectured to show up in the $\nu = 12/5$ plateau in FQH systems. This state is much harder to control experimentally than the $\nu = 5/2$ state, because the gap above the ground state is small. But contrary to the anyons I mentioned, these can perform universal quantum computation. – Heidar Oct 16 '11 at 2:12

@Heidar: Sorry, hadn't seen your answer when I posted this. I guess we were writing them at the same time. – Joe Fitzsimons Oct 16 '11 at 2:12

No need to apologize, I like your answer much more than mine! – Heidar Oct 16 '11 at 2:14

Nice answer, and thank you for the references... I am still very confused as to what the physical difference should be between $\phi+\phi$ and $\phi$... (also, I think that the dimension -- rather "statistical dimension" -- of the Hilbert space of $\phi$ should be the golden ratio as opposed to one... whatever that means). Could you explain what an experimental setup might be that distinguishes $\phi+\phi$ from $\phi$? – André Oct 16 '11 at 12:23

If I remember correctly, isomorphism classes of simple objects correspond to different types of particles (of which there are assumed to be finitely many); furthermore, more structure is usually needed than a fusion category, for example braiding (which is the reason why anyons are so interesting). Let me be very concrete. A physically (and experimentally) relevant category has three isomorphism classes of simple objects $(\mathbf 1, \psi, \sigma)$ with the non-trivial fusion rules $$\psi\otimes\psi = \mathbf 1, \quad \psi\otimes\sigma =\sigma\quad \text{and}\quad\sigma\otimes\sigma = \mathbf 1 \oplus \psi,$$ where $\sigma$ is the so-called Ising anyon (and $\mathbf 1$ is the unit object). These quasi-particles are conjectured to show up in the $\nu = 5/2$ plateau in fractional quantum Hall systems and in $p+ip$ wave superconductors. These fusion rules can be used to construct the ground state Hilbert space, which is given through the spaces of morphisms between simple objects.
Defining $V_{ab}^c = \text{Hom}(a\otimes b,c)$, the Hilbert space for two Ising anyons is $V_2 = V_{\sigma\sigma}^{\mathbf 1}\oplus V_{\sigma\sigma}^{\psi}$, which is two-dimensional. For $2n$ anyons, the ground state is $\text{dim}V_{2n}= \text{dim}V_{2n\sigma}^{\mathbf 1} + \text{dim}V_{2n\sigma}^{\psi} = 2^{n-1}+2^{n-1} = 2^n$ dimensional (this is nicely seen using a graphical notation for morphisms; see references below). Using the fusion rule $\sigma\otimes\sigma = \mathbf 1 \oplus \psi$ one can solve the pentagon and hexagon equations for the $F$ and $R$ symbols, which when combined give rise to a representation of the braid group $B_{2n}$ (more precisely, the mapping class group of the $n$-punctured sphere = braid group + Dehn twists) on the ground state Hilbert space $V_{2n}$. Thus one physical consequence of these direct sums is that the ground state is degenerate and the anyons have highly non-trivial statistics: the ground state wave function transforms under a (higher-dimensional) representation of the braid group when the particles are adiabatically moved around each other. This property of (non-abelian) anyons has given rise to the idea of using them for quantum computation (another property is their non-local nature, which partially saves them from decoherence).

To get a more physical idea of what fusion (or collision, as you call it) of particles means, one can look at the concrete $p+ip$ wave superconductors. In such superconductors zero (Majorana) modes can be bound to the core of Abrikosov vortices, where for $2n$ vortices there will be $2^n = 2^{n-1} + 2^{n-1}$ fermionic states. This means that it takes two Majorana fermions to get one conventional fermion. When the vortices are spatially separated, the state in the core of the vortex cannot be measured by local measurements. In the above notation: $\sigma$ is a vortex, $\psi$ an electron, and $\mathbf 1$ a Cooper pair ("the trivial particle"). With this identification, the fusion rules say that fusing two electrons ($\psi\otimes\psi = \mathbf 1$) gives a Cooper pair, which vanishes in the condensate, while fusing two vortices ($\sigma\otimes\sigma = \mathbf 1\oplus\psi$) gives either nothing or an electron. Thus the physical meaning of these direct sums of simple objects has something to do with the possible outcomes when we measure the state after fusing two particles. In this way (non-abelian) anyons can be used to construct qubits: by braiding them one can do a computation, and in the end one can fuse them and measure the resulting state.

References: You can read appendix B and then chapter four in this thesis to get a more precise description of how braided ribbon categories and anyons are connected. These lecture notes by John Preskill give a more physical insight; in section "9.12 Anyon models generalized" category theory is used to formulate the physics (although category-theory language is not used, and might be annoying if you are a mathematician). For a mathematician a better reference is [link lost in extraction]. Last but not least, the canonical reference for non-abelian anyons is the review paper [link lost in extraction].

First of all, thank you for your small correction: I modified fusion category --> braided fusion category in my post. Now concerning your sentence "Thus the physical meaning of these direct sums of simple objects has something to do with the possible outcomes when we measure the state after fusing two particles", this seems to imply that there is no physical difference between $X\oplus X$ and $X$, as "$X$ or $X$" is really the same thing as "$X$".
Do you agree with that last statement? – André Oct 16 '11 at 12:15

@Andre I think you need to think about it together with fusion. Assume you have the particles $(\mathbf 1, X, Y, Z)$ with fusion rules $Y\otimes Y = \mathbf 1\oplus X$ and $Z\otimes Z = \mathbf 1\oplus X\oplus X$. Then I guess one can rephrase your question to: is fusion of two $Y$ particles physically equivalent to fusing two $Z$ particles? Well, no. It is true that both give the possibilities $\mathbf 1$ or $X$, but there are more ways to obtain $X$ when fusing $Z\otimes Z$ than $Y\otimes Y$. (cont) – Heidar Oct 16 '11 at 14:14

In other words, the Hilbert space associated to $n$ $Y$ particles is different from the one associated with $n$ $Z$ particles. Therefore there is a physical difference between $X$ and $X\oplus X$. – Heidar Oct 16 '11 at 14:17
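The dimension counts appearing throughout this thread (Fibonacci growth for $\phi$'s, $2^n$ for $2n$ Ising $\sigma$'s) can be reproduced mechanically from the fusion rules alone. Here is a small sketch (an added illustration; the matrices are transcribed from the fusion rules quoted in the answers above):

```python
# Fusion-multiplicity bookkeeping: track how X^(tensor n) decomposes into
# simple types by repeatedly applying the fusion matrix of X.
import numpy as np

# Fibonacci: types (1, phi), with phi x phi = 1 + phi.
# Column j of N_phi is the decomposition of (type j) x phi.
N_phi = np.array([[0, 1],
                  [1, 1]])

v = np.array([0, 1])                  # one phi
for n in range(1, 7):
    print("phi^%d = %d*1 + %d*phi, total dim %d" % (n, v[0], v[1], v.sum()))
    v = N_phi @ v                     # fuse in one more phi

# Ising: types (1, psi, sigma), with sigma x sigma = 1 + psi,
# sigma x psi = sigma, psi x psi = 1.
N_sigma = np.array([[0, 0, 1],
                    [0, 0, 1],
                    [1, 1, 0]])

w = np.array([0, 0, 1])               # one sigma
for n in range(1, 7):
    print("sigma^%d decomposition" % n, w, "total dim", w.sum())
    w = N_sigma @ w
```

The totals grow as 1, 2, 3, 5, 8, 13 for Fibonacci, with successive ratios approaching the golden ratio (the "statistical dimension" André mentions), and as 1, 2, 2, 4, 4, 8 for Ising, matching Heidar's $2^n$ count for $2n$ anyons of type $\sigma$.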
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 80, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349043369293213, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/95826-solving-logarithms-difference-base.html
# Thread: Solving logarithms with different bases

1. ## Solving logarithms with different bases

How do I solve for $x$ in a logarithmic equation whose two logarithms have different bases? For example: $\log_5 (x - 4) = \log_7 x$. Can you show me the steps? Thanks.

2. You can use the change-of-base identity $\log_b x = \frac{\log_a x}{\log_a b}$. In your case you can change $\log_7 x$ to base 5 as follows: $\log_7 x = \frac{\log_5 x}{\log_5 7}$.
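After the base change, the equation $\log_5(x-4) = \frac{\log_5 x}{\log_5 7}$ has no elementary closed-form solution, so in practice one finishes numerically. A minimal sketch (an added illustration, not from the thread), using plain bisection:

```python
# Solve log_5(x - 4) = log_7(x) by bisection.
import math

def g(x):                    # g(x) = log_5(x - 4) - log_7(x)
    return math.log(x - 4) / math.log(5) - math.log(x) / math.log(7)

lo, hi = 4.5, 100.0          # g(lo) < 0 < g(hi) brackets a root
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)         # ~ 11.6
```

Since $g$ is strictly increasing for $x > 4$ (its derivative $\frac{1}{(x-4)\ln 5} - \frac{1}{x \ln 7}$ is positive there), the root near $x \approx 11.6$ is the unique solution.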
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8952699899673462, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/rationalhomotopy
## Tagged Questions

0answers 1 view
### von Staudt-Clausen for other special values
The von Staudt-Clausen theorem expresses that the Bernoulli numbers' denominators have a very special form (see the wikipedia page on the theorem for more details). What interests …

1answer 125 views
### Doubt in the proof of Stickelberger's Theorem
I was going through the proof of Stickelberger's Theorem, as given in the book 'Algebraic Number Theory' by Richard A. Mollin, and I am having some problem in understanding the proo …

0answers 6 views
### complex Morse function on a four-manifold
If we have a complex Morse function on a complex four-manifold, $f: X\to \mathbb{C}$, can we tell from the function how the genus of inverse images $f^{-1}(z)$ (for regular values …

1answer 17 views
### Algorithm to find exponential map of differential operators acting on function
I am trying to write a computer program which computes the action of the exponential of a differential operator on a function, for any given differential operator. Examples: $\ex …

1answer 33 views
### Importance of separability vs. second-countability
For me second-countability always felt like the more important and fundamental concept from general topology than separability. I wonder whether there are any points which ca …

0answers 2 views
### Possible diagonal values of a product of matrices with some specific characteristics
Hello all, This is a question that might or might not be related to my previous one. Imagine you have two matrices: Matrix $\mathbf{\Phi}=[\Phi_1,\ldots,\Phi_M]\in\mathbb{R}^{L …

44answers 9k views
### An example of a beautiful proof that would be accessible at the high school level?
The background of my question comes from an observation that what we teach in schools does not always reflect what we practice. Beauty is part of what drives mathematicians, but w …

1answer 30 views
### On finite groups with same complex-valued character table
What are the necessary and sufficient conditions for two finite groups $G$ and $H$ to have the same complex-valued character table? Is there any criterion for which one could know abou …

0answers 18 views
### Who first computed the integral cohomology ring of a weighted projective space (WPS)?
After Jun-Ichi Igusa's talk at ICM 1962, H.J. Tramer computed the ring structure of the integral cohomology of such a space (not yet called WPS). In 1971 M.F. Atiyah called it W …

0answers 14 views
### In cell-decomposed manifolds, how easy is it to arrange for the tubular neighborhood of a diagonal to contract onto the diagonal?
Suppose that you have decomposed a manifold $M$ into cells (I care most, if it matters, about compact oriented smooth manifolds; but if my question can be solved in the PL category …

0answers 31 views
### Connectedness of hyperplane sections (reference request)
Dear colleagues, Could you give me a reference (not a proof:) to the following folklore result. If $X\subset\mathbb P^n$ is a smooth irreducible projective variety of dimension $ …

1answer 80 views
### How to determine the number of a cube within a bigger cube?
Hi all, I have a cube, sized 39 x 13 x 8. I need to find out how many of them can fit in a cube of 100 x 100 x 100. I need to find the highest number possible.
Do you know of a w …

0answers 37 views
### What is the ring structure of the complex topological K-theory of a non-singular complex quadric?
I would like to know the ring structure of $K(Q_n)$ explicitly, where $Q_n \subset \mathbb{P}^{n+1}$ is the non-singular $n$-dimensional complex quadric and $K(Q_n) = K^0(Q_n)$ is …

13answers 1k views
### Is there any proof that you feel you do not "understand"?
Perhaps the "proofs" of the ABC conjecture or the newly released weak version of the twin prime conjecture or alike readily come to your mind. These are not the proofs I am looking for. Indeed …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8965632319450378, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/advice-on-writing-papers/dont-overoptimise/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

# Don’t overoptimise

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. (Donald Knuth, “Literate programming”, paraphrasing Tony Hoare)

After all my other advice on how to write papers, I should add one counterbalancing note: there is a danger in being too perfectionist, and in trying to make every part of a paper as “optimal” as possible. After all the “easy” improvements have been made to a paper, one encounters a law of diminishing returns, in which any further improvements either require large amounts of time and effort, or else require some tradeoffs in other qualities of the paper.

For instance, suppose one has a serviceable lemma that suffices for the task of proving the main theorems of the paper at hand. One can then try to “optimise” this lemma by making the hypotheses weaker and the conclusion stronger, but this can come at the cost of lengthening the proof of the lemma, and obscuring exactly how the lemma fits in with the rest of the paper. In the reverse direction, one could also “optimise” the same lemma by replacing it with a weaker (but easier to prove) statement which still barely suffices to prove the main theorem, but is now unsuitable for use in any later application. Thus one encounters a tradeoff when one tries to improve the lemma in one direction or another. (In this case, one resolution to this tradeoff is to have one formulation of the lemma stated and proved, and then add a remark about the other formulation, i.e. state the strong version and remark that we only use a special case, or state the weak version and remark that stronger versions are possible.)

Carefully optimising results and notations in the hope that this will help future researchers in the field is a little risky; later authors may introduce new insights or new tools which render these painstakingly optimised results obsolete. The only time when this is really profitable is when you already know of a subsequent paper (perhaps a sequel to the one you are already writing) which will indeed rely heavily on these results and notations, or when the current paper is clearly going to be the definitive paper in the subject for a long while. If you haven’t already written a rapid prototype for your paper, then optimising a lemma may in fact be a complete waste of time, because you may find later on in the writing process that the lemma will need to be modified anyway to deal with an unforeseen glitch in the original argument, or to improve the overall organisation of the paper.

I have sometimes seen authors try to optimise the length of the paper at the expense of all other attributes, in the mistaken belief that brevity is equivalent to simplicity. While it can be that shorter papers are simpler than longer ones, this is generally only true if the shortness of the paper was achieved naturally rather than artificially. If brevity was attained by removing all examples, remarks, whitespace, motivation, and discussion, or by striking out “redundant” English phrases and relying purely on mathematical abbreviations (e.g. $\forall$ instead of “For all”, etc.)
and various ungrammatical contractions, then this is generally a poor tradeoff; somewhat ironically, a paper which has been overcompressed may be viewed by readers as being more difficult to read than a longer, gentler, and more leisurely treatment of the same material. (See also “Give appropriate amounts of detail.”) On the other hand, optimising the readability of the paper is always a good thing (except when it is at the expense of rigour or accuracy), and the effort put into doing so is appreciated by readers.

## 5 comments

Radu Grigore (5 October, 2007 at 1:14 am): I think Knuth never wrote something called “Code Complete”.

[Reply:] Thanks for pointing that out! I have corrected the reference.

Anonymous (7 November, 2008 at 2:23 am): I feel that I am reading an article written by Bertrand Russell. Very well written! “After all the ‘easy’ improvements have been made to a paper, one encounters a law of diminishing returns…” I model this as a logistic curve phenomenon. No writing is ever done; merely asymptotic to done. The late Dean of American Science Fiction, Robert A. Heinlein, gave a well-known list of 5 Rules for professional authors. The last rule is, and I paraphrase: after the piece is finished and submitted to the target market(s), and is rejected, don’t assume that you’ve written badly, and start another round of rewrites. If an editor is willing to pay for something if a rewrite is done, then this is a necessary and sufficient reason to rewrite. Instead, keep resubmitting it to other markets, and put your energy into writing new pieces. Never underestimate the value of a good editor, even when the editor seems to say bad things. Of course, Heinlein was mostly talking about commercial fiction and nonfiction sales, and not the strange world of academic publishing in peer reviewed journals.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246708750724792, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/80277-galois-check-solution.html
Thread: galois - check solution

1. can someone check my solutions please

f is the polynomial t^5 - 3 in Q[t], where alpha = 3^(1/5) in R and epsilon = e^(2*pi*i/5)

(a) need to find the zeros of f in terms of alpha and epsilon

my solution to (a): alpha, alpha*epsilon, alpha*epsilon^2, alpha*epsilon^3, alpha*epsilon^4

(b) L = Q(alpha, epsilon). want to show that L is a splitting field of f over Q.

my solution to (b): the zeros of f are given by (t - alpha)(t - alpha*epsilon)(t - alpha*epsilon^2)(t - alpha*epsilon^3)(t - alpha*epsilon^4), so L = Q(alpha, epsilon) is a splitting field

(c) want to state the min polynomial of alpha over Q and of epsilon over Q(alpha), and from this I want to write down the bases of the extensions Q(alpha):Q and L:Q(alpha) and then find the value of [L:Q]

my solution to (c):
min poly of alpha: t^5 - 3
min poly of epsilon: t^4 + t^3 + t^2 + t + 1
a basis for Q(alpha) over Q is {1, alpha, alpha^2, alpha^3, alpha^4}
a basis for Q(alpha, epsilon) over Q(alpha) is {1, epsilon, epsilon^2, epsilon^3}
therefore [L:Q] = 9

thank you

2. (a) is correct. For (b), you need to invoke more than the factorization. Namely, $\mathbb{Q}(\alpha)$ is not the splitting field because there are complex roots of the polynomial and this field is real. The only other possible proper subfield is $\mathbb{Q}(\epsilon)$, but if it is indeed a proper subfield of $L$, then it does not contain $\alpha$ and so cannot split the polynomial. Thus $L$ is the splitting field. For (c), everything but your calculation of $[L:\mathbb{Q}]$ is correct. You seem to have added instead of multiplied. $[L:\mathbb{Q}]=[L:\mathbb{Q}(\alpha)][\mathbb{Q}(\alpha):\mathbb{Q}]$. But $[L:\mathbb{Q}(\alpha)]=4$ since the minimal polynomial for epsilon is still of degree 4, and $[\mathbb{Q}(\alpha):\mathbb{Q}]=5$ similarly. Thus $[L:\mathbb{Q}]=4*5=20$. You multiply the degrees because the basis is basically the product of bases, i.e. $1, \alpha, \ldots, \alpha^4, \epsilon, \epsilon\alpha, \ldots, \epsilon\alpha^4, \ldots, \epsilon^3\alpha^4$.

3. Originally Posted by dopi: [parts (a) and (b) above]

This is good.

Originally Posted by dopi: [part (c) above]

There is a problem here. You need to verify that these are minimal polynomials by showing they are irreducible. It is easy to see that $t^5-3$ is irreducible over $\mathbb{Q}$ by Eisenstein. Now since $[\mathbb{Q}(\alpha):\mathbb{Q}]=5$ and $f(t) =t^4+t^3+t^2+t+1$ is irreducible over $\mathbb{Q}$ with $\gcd(4,5)=1$, it follows that $f(t)$ is irreducible over $\mathbb{Q}(\alpha)$. Therefore, $t^4+t^3+t^2+t+1$ is the minimal polynomial over $\mathbb{Q}(\alpha)$.
We have proven that $\alpha$ has degree $5$ over $\mathbb{Q}$ and $\varepsilon$ has degree $4$ over $\mathbb{Q}(\alpha)$. Therefore, a basis for $L/\mathbb{Q}(\alpha)$ is $\{1,\varepsilon,\varepsilon^2,\varepsilon^3\}$ and a basis for $\mathbb{Q}(\alpha)/\mathbb{Q}$ is $\{1,\alpha,\alpha^2,\alpha^3,\alpha^4\}$. Thus, $L/\mathbb{Q}$ has a basis consisting of all the possible products between these bases, and so there are $4\cdot 5=20$ elements, which means $[L:\mathbb{Q}]=20$. Just in case you are interested, the Galois group is the Frobenius group $F_{20}$.

4. Originally Posted by ThePerfectHacker: [the answer above]

What would be the order of $\Gamma Q(t^5 - 3)$? I thought it might be 5, because the basis of the roots has exactly 5 elements. Is this correct?

5. Originally Posted by dopi: $\Gamma Q(t^5 - 3)$

How is this defined?

6. Originally Posted by ThePerfectHacker: How is this defined?

f = t^5 - 3 in Q[t] (Q is the rationals), and $\Gamma Q(f)$; this is all I was given, apart from all the other parts answered above. Thanks
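The arithmetic in this thread is easy to sanity-check numerically. Below is a small Python sketch (my addition, not from the thread) that checks parts (a)-(c) to floating-point precision; it is a numerical check, not a proof.

```python
# Numerical sanity check for the splitting field of t^5 - 3 over Q.
import cmath
import numpy as np

alpha = 3 ** (1 / 5)                      # real fifth root of 3
eps = cmath.exp(2j * cmath.pi / 5)        # primitive 5th root of unity

roots = [alpha * eps**k for k in range(5)]

# (a) each alpha * eps^k is a zero of t^5 - 3
assert all(abs(r**5 - 3) < 1e-9 for r in roots)

# (b) the product of (t - root) over all five roots recovers t^5 - 3
coeffs = np.poly(roots)                   # monic polynomial with these roots
assert np.allclose(coeffs, [1, 0, 0, 0, 0, -3], atol=1e-9)

# (c) eps satisfies t^4 + t^3 + t^2 + t + 1 = 0
assert abs(eps**4 + eps**3 + eps**2 + eps + 1) < 1e-12

print("all checks pass; [L:Q] = 5 * 4 =", 5 * 4)
```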
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 62, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361003041267395, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/10/14/the-character-of-a-representation/?like=1&source=post_flair&_wpnonce=9916d21231
# The Unapologetic Mathematician

## The Character of a Representation

Now we introduce a very useful tool in the study of group representations: the “character” of a representation. And it’s almost effortless to define: the character $\chi$ of a matrix representation $X$ of a group $G$ is a complex-valued function on $G$ defined by

$\displaystyle\chi(g)=\mathrm{Tr}(X(g))$

That is, the character is “the trace of the representation”. But why this is interesting is almost completely opaque at this point. I’m still not entirely sure why this formula has so many fabulous properties.

First of all, we need to recall something about the trace: it satisfies the “cyclic property”. That is, given an $m\times n$ matrix $A$ and an $n\times m$ matrix $B$, we have

$\displaystyle\mathrm{Tr}(AB)=\mathrm{Tr}(BA)$

Indeed, if we write out the matrices in components we find

$\displaystyle\begin{aligned}(AB)_i^j&=\sum\limits_{k=1}^nA_i^kB_k^j\\(BA)_k^l&=\sum\limits_{i=1}^mB_k^iA_i^l\end{aligned}$

Then since the trace is the sum of the diagonal elements we calculate

$\displaystyle\begin{aligned}\mathrm{Tr}(AB)&=\sum\limits_{i=1}^m\sum\limits_{k=1}^nA_i^kB_k^i\\\mathrm{Tr}(BA)&=\sum\limits_{k=1}^n\sum\limits_{i=1}^mB_k^iA_i^k\end{aligned}$

but these are exactly the same! We have to be careful, though, that we don’t take this to mean that we can arbitrarily reorder matrices inside the trace. If $A$, $B$, and $C$ are all $n\times n$ matrices, we can conclude that

$\displaystyle\begin{aligned}\mathrm{Tr}(ABC)&=\mathrm{Tr}(BCA)=\mathrm{Tr}(CAB)\\\mathrm{Tr}(ACB)&=\mathrm{Tr}(CBA)=\mathrm{Tr}(BAC)\end{aligned}$

but we cannot conclude in general that any of the traces on the upper line are equal to any of the traces on the lower line. We can “cycle” matrices around inside the trace, but not rearrange them arbitrarily.

So, what good is this? Well, if $A$ is an invertible $n\times n$ matrix and $X$ is any matrix, then we find that $\mathrm{Tr}(AXA^{-1})=\mathrm{Tr}(XA^{-1}A)=\mathrm{Tr}(X)$. If $A$ is a change of basis matrix, then this tells us that the trace only depends on the linear transformation $X$ represents, and not on the particular matrix. In particular, if $X$ and $Y$ are two equivalent matrix representations then there is some intertwining matrix $A$ so that $AX(g)=Y(g)A$ for all $g\in G$. The characters of $X$ and $Y$ are therefore equal.

If $V$ is a $G$-module, then picking any basis for $V$ gives a matrix $X(g)$ representing each linear transformation $\rho(g)$. The previous paragraph shows that which particular matrix representation we pick doesn’t matter, since they all give us the same character $\chi(g)$. And so we can define the character of a $G$-module to be the character of any corresponding matrix representation.
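Since the whole post turns on the cyclic property of the trace, here is a quick NumPy illustration (my addition, not part of the original post); it demonstrates, but of course does not prove, the identities above.

```python
# Numerical illustration of the cyclic property of the trace and the
# resulting basis-independence of characters.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# Tr(AB) = Tr(BA), even for non-square factors
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Conjugation by an invertible matrix does not change the trace
X = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))          # a random matrix is generically invertible
assert np.isclose(np.trace(P @ X @ np.linalg.inv(P)), np.trace(X))

# But arbitrary reorderings can change the trace:
C = rng.standard_normal((4, 4))
D = rng.standard_normal((4, 4))
print(np.trace(X @ C @ D), np.trace(X @ D @ C))  # generally different values
```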
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 37, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9118859767913818, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Wavelets
# Wavelet

A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "reverse, shift, multiply and sum" technique called convolution, with portions of a known signal to extract information from the unknown signal.

For example, a wavelet could be created to have a frequency of Middle C and a short duration of roughly a 32nd note. If this wavelet were to be convolved at periodic intervals with a signal created from the recording of a song, then the results of these convolutions would be useful for determining when the Middle C note was being played in the song. Mathematically, the wavelet will resonate if the unknown signal contains information of similar frequency – just as a tuning fork physically resonates with sound waves of its specific tuning frequency. This concept of resonance is at the core of many practical applications of wavelet theory.

As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will deconstruct data without gaps or overlap so that the deconstruction process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet based compression/decompression algorithms where it is desirable to recover the original information with minimal loss.

In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square integrable functions.

## Name

The word wavelet has been used for decades in digital signal processing and exploration geophysics.[1] The equivalent French word ondelette, meaning "small wave", was used by Morlet and Grossmann in the early 1980s.

## Wavelet theory

Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filterbanks, whose filter coefficients are called the wavelet and scaling coefficients in wavelets nomenclature. These filterbanks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters.

The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis and, correspondingly, of sampling theory: given a signal with some event in it, one cannot simultaneously assign an exact time and an exact frequency-response scale to that event. The product of the uncertainties of time and frequency-response scale has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle.

Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based.
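Before the formal definitions that follow, the "reverse, shift, multiply and sum" detection idea from the introduction can be made concrete in a few lines of NumPy. Everything in this sketch (the sample rate, the burst timing, the Gaussian-windowed wavelet) is an illustrative choice of mine, not something specified by the article.

```python
# Toy wavelet-style detection by convolution: find a short Middle-C burst.
import numpy as np

fs = 8000.0                               # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic "song": a 440 Hz tone, plus a short 261.63 Hz (Middle C) burst
signal = np.sin(2 * np.pi * 440 * t)
burst = (t > 0.40) & (t < 0.45)
signal[burst] += np.sin(2 * np.pi * 261.63 * t[burst])

# A crude wavelet: a Gaussian-windowed Middle-C oscillation.
# (sin * even window is odd, so the wavelet has exactly zero mean.)
tw = np.arange(-0.025, 0.025, 1 / fs)
wavelet = np.sin(2 * np.pi * 261.63 * tw) * np.exp(-(tw / 0.01) ** 2)

# Convolution = reverse, shift, multiply and sum
response = np.abs(np.convolve(signal, wavelet, mode="same"))
print("peak response near t =", t[np.argmax(response)], "s")
# expected to land inside the burst window [0.40, 0.45] s
```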
### Continuous wavelet transforms (continuous shift and scale parameters)

In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the Lp function space L2(R)). For instance, the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components.

The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function ψ in L2(R), the mother wavelet. For the example of the scale one frequency band [1, 2] this function is

$\psi(t)=2\,\operatorname{sinc}(2t)-\,\operatorname{sinc}(t)=\frac{\sin(2\pi t)-\sin(\pi t)}{\pi t}$

with the (normalized) sinc function. (The original article shows plots of this wavelet, Meyer's, and two other example mother wavelets here.)

The subspace of scale a or frequency band [1/a, 2/a] is generated by the functions (sometimes called child wavelets)

$\psi_{a,b} (t) = \frac1{\sqrt a }\psi \left( \frac{t - b}{a} \right),$

where a is positive and defines the scale and b is any real number and defines the shift. The pair (a, b) defines a point in the right halfplane R+ × R.

The projection of a function x onto the subspace of scale a then has the form

$x_a(t)=\int_\R WT_\psi\{x\}(a,b)\cdot\psi_{a,b}(t)\,db$

with wavelet coefficients

$WT_\psi\{x\}(a,b)=\langle x,\psi_{a,b}\rangle=\int_\R x(t)\,\overline{\psi_{a,b}(t)}\,dt.$

See a list of some Continuous wavelets. For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal.

### Discrete wavelet transforms (discrete shift and scale parameters)

It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper halfplane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters a > 1, b > 0. The corresponding discrete subset of the halfplane consists of all the points $(a^m, n\,a^m b)$ with m, n in Z. The corresponding baby wavelets are now given as

$\psi_{m,n}(t)=a^{-m/2}\psi(a^{-m}t-nb). \,$

A sufficient condition for the reconstruction of any signal x of finite energy by the formula

$x(t)=\sum_{m\in\Z}\sum_{n\in\Z}\langle x,\,\psi_{m,n}\rangle\cdot\psi_{m,n}(t)$

is that the functions $\{\psi_{m,n}:m,n\in\Z\}$ form a tight frame of L2(R).

### Multiresolution discrete wavelet transforms

[Figure: the D4 wavelet]

In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper halfplane. Still, each coefficient requires the evaluation of an integral. To avoid this numerical complexity, one needs one auxiliary function, the father wavelet φ in L2(R). Further, one has to restrict a to be an integer. A typical choice is a = 2 and b = 1. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet.
From the mother and father wavelets one constructs the subspaces

$V_m=\operatorname{span}(\phi_{m,n}:n\in\Z),\text{ where }\phi_{m,n}(t)=2^{-m/2}\phi(2^{-m}t-n)$

$W_m=\operatorname{span}(\psi_{m,n}:n\in\Z),\text{ where }\psi_{m,n}(t)=2^{-m/2}\psi(2^{-m}t-n).$

From these one requires that the sequence

$\{0\}\subset\dots\subset V_{-1}\subset V_0\subset V_{+1}\subset\dots\subset L^2(\mathbf{R})$

forms a multiresolution analysis of $L^2(\mathbf{R})$ and that the subspaces

$\dots,W_1,W_0,W_{-1},\dots$

are the orthogonal "differences" of the above sequence, that is, Wm is the orthogonal complement of Vm inside the subspace Vm−1. In analogy to the sampling theorem one may conclude that the space Vm with sampling distance $2^m$ more or less covers the frequency baseband from 0 to $2^{-m-1}$. As orthogonal complement, Wm roughly covers the band $[2^{-m-1}, 2^{-m}]$.

From those inclusions and orthogonality relations follows the existence of sequences $h=\{h_n\}_{n\in\Z}$ and $g=\{g_n\}_{n\in\Z}$ that satisfy the identities

$h_n=\langle\phi_{0,0},\,\phi_{-1,n}\rangle$ and $\phi(t)=\sqrt2 \sum_{n\in\Z} h_n\phi(2t-n),$

$g_n=\langle\psi_{0,0},\,\phi_{-1,n}\rangle$ and $\psi(t)=\sqrt2 \sum_{n\in\Z} g_n\phi(2t-n).$

The second identity of the first pair is a refinement equation for the father wavelet φ. Both pairs of identities form the basis for the algorithm of the fast wavelet transform.

Note that not every discrete wavelet orthonormal basis can be associated to a multiresolution analysis; for example, the Journe wavelet set admits no multiresolution analysis.[2]

## Mother wavelet

For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet (functions). However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of the space $L^1(\R)\cap L^2(\R).$ This is the space of measurable functions that are absolutely and square integrable:

$\int_{-\infty}^{\infty} |\psi (t)|\, dt <\infty$ and $\int_{-\infty}^{\infty} |\psi (t)|^2 \, dt <\infty.$

Being in this space ensures that one can formulate the conditions of zero mean and square norm one:

$\int_{-\infty}^{\infty} \psi (t)\, dt = 0$ is the condition for zero mean, and

$\int_{-\infty}^{\infty} |\psi (t)|^2\, dt = 1$ is the condition for square norm one.

For ψ to be a wavelet for the continuous wavelet transform (see there for the exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform.

For the discrete wavelet transform, one needs at least the condition that the wavelet series is a representation of the identity in the space L2(R). Most constructions of discrete WT make use of the multiresolution analysis, which defines the wavelet by a scaling function. This scaling function itself is a solution to a functional equation.

In most situations it is useful to restrict ψ to be a continuous function with a higher number M of vanishing moments, i.e.
for all integer m < M $\int_{-\infty}^{\infty} t^m\,\psi (t)\, dt = 0.$ The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation): $\psi _{a,b} (t) = {1 \over {\sqrt a }}\psi \left( {{{t - b} \over a}} \right).$ For the continuous WT, the pair (a,b) varies over the full half-plane R+ × R; for the discrete WT this pair varies over a discrete subset of it, which is also called affine group. These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. Time-frequency interpretation uses a subtly different formulation (after Delprat). Restriction: (1) $\frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} \varphi_{a1,b1}(t)\varphi(\frac{t-b}{a}) \, dt$ when a1 = a and b1 = b, (2) $\Psi (t)$ has a finite time interval ## Comparisons with Fourier transform (continuous-time) The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. The main difference is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. The Short-time Fourier transform (STFT) is more similar to the wavelet transform, in that it is also time and frequency localized, but there are issues with the frequency/time resolution trade-off. Wavelets often give a better signal representation using Multiresolution analysis, with balanced resolution at any time and frequency. The discrete wavelet transform is also less computationally complex, taking O(N) time as compared to O(N log N) for the fast Fourier transform. This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT(Fast Fourier Transform) which uses the same basis functions as DFT (Discrete Fourier Transform).[3] It is also important to note that this complexity only applies when the filter size has no relation to the signal size. A wavelet without compact support such as the Shannon wavelet would require O(N2). (For instance, a logarithmic Fourier Transform also exists with O(N) complexity, but the original signal must be sampled logarithmically in time, which is only useful for certain types of signals.[4]) ## Definition of a wavelet There are a number of ways of defining a wavelet (or a wavelet family). ### Scaling filter An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets, separate decomposition and reconstruction filters are defined. For analysis with orthogonal wavelets the high pass filter is calculated as the quadrature mirror filter of the low pass, and reconstruction filters are the time reverse of the decomposition filters. Daubechies and Symlet wavelets can be defined by the scaling filter. ### Scaling function Wavelets are defined by the wavelet function ψ(t) (i.e. the mother wavelet) and scaling function φ(t) (also called father wavelet) in the time domain. The wavelet function is in effect a band-pass filter and scaling it for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. 
The scaling function filters the lowest level of the transform and ensures all the spectrum is covered. See [1] for a detailed explanation. For a wavelet with compact support, φ(t) can be considered finite in length and is equivalent to the scaling filter g. Meyer wavelets can be defined by scaling functions.

### Wavelet function

The wavelet only has a time domain representation as the wavelet function ψ(t). For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few Continuous wavelets.

## Applications of discrete wavelet transform

Generally, an approximation to the DWT is used for data compression if the signal is already sampled, and the CWT for signal analysis. Thus, DWT approximation is commonly used in engineering and computer science, and the CWT in scientific research.

Wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, ab initio calculations, astrophysics, density-matrix localisation, seismology, optics, turbulence and quantum mechanics. This change has also occurred in image processing, blood-pressure, heart-rate and ECG analyses, brain rhythms, DNA analysis, protein analysis, climatology, general signal processing, speech recognition, computer graphics and multifractal analysis. In computer vision and image processing, the notion of scale space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation.

One use of wavelet approximation is in data compression. Like some other transforms, wavelet transforms can be used to transform data, then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets. This means that although the frame is overcomplete, it is a tight frame (see types of Frame of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression.

A related use is for smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components, smoothing and/or denoising operations can be performed.

Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a power line communications technology developed by Panasonic), and in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can achieve deeper notches than traditional FFT OFDM, and wavelet OFDM does not require a guard interval (which usually represents significant overhead in FFT OFDM systems).[5]

## History

The development of wavelets can be linked to several separate trains of thought, starting with Haar's work in the early 20th century. Later work by Dennis Gabor yielded Gabor atoms (1946), which are constructed similarly to wavelets, and applied to similar purposes.
Notable contributions to wavelet theory can be attributed to Zweig’s discovery of the continuous wavelet transform in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound),[6] Pierre Goupillaud, Grossmann and Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work on discrete wavelets (1983), Daubechies' orthogonal wavelets with compact support (1988), Mallat's multiresolution framework (1989), Akansu's Binomial QMF (1990), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland's harmonic wavelet transform (1993) and many others since.

### Timeline

• First wavelet (Haar wavelet) by Alfréd Haar (1909)
• Since the 1970s: George Zweig, Jean Morlet, Alex Grossmann
• Since the 1980s: Yves Meyer, Stéphane Mallat, Ingrid Daubechies, Ronald Coifman, Ali Akansu, Victor Wickerhauser

## Wavelet transforms

A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet").

Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.

Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation whereas DWTs use a specific subset of scale and translation values or representation grid.

There are a large number of wavelet transforms, each suitable for different applications. For a full list see the list of wavelet-related transforms, but the common ones are listed below:

### Generalized transforms

There are a number of generalized transforms of which the wavelet transform is a special case. For example, Joseph Segman introduced scale into the Heisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. The CWT is a two-dimensional slice through the resulting 3d time-scale-frequency volume. Another example of a generalized transform is the chirplet transform, in which the CWT is also a two-dimensional slice through the chirplet transform.

An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects.[7] Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructure of all sorts, the range of pattern recognition[8] and strain[9]/metrology[10] applications for intermediate transforms with high frequency resolution (like brushlets[11] and ridgelets[12]) is growing rapidly.
The fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains. This transform is capable of providing the time- and fractional-domain information simultaneously and representing signals in the time-fractional-frequency plane.[13]

## List of wavelets

### Discrete wavelets

• Beylkin (18)
• BNC wavelets
• Coiflet (6, 12, 18, 24, 30)
• Cohen-Daubechies-Feauveau wavelet (sometimes referred to as CDF N/P or Daubechies biorthogonal wavelets)
• Daubechies wavelet (2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
• Binomial-QMF (also referred to as Daubechies wavelet)
• Haar wavelet
• Mathieu wavelet
• Legendre wavelet
• Villasenor wavelet
• Symlet[14]

## Notes

1. Ricker, Norman (1953). "Wavelet contraction, wavelet expansion, and the control of seismic resolution". Geophysics 18 (4). doi:10.1190/1.1437927.
2. Larson, David R. (2007). "Unitary systems and wavelet sets". Wavelet Analysis and Applications. Appl. Numer. Harmon. Anal. Birkhäuser. pp. 143–171.
3. Stefano Galli, O. Logvinov (July 2008). "Recent Developments in the Standardization of Power Line Communications within the IEEE". IEEE Communications Magazine 46 (7): 64–71. doi:10.1109/MCOM.2008.4557044. An overview of the P1901 PHY/MAC proposal.
4. P. Hirsch, A. Howie, R. Nicholson, D. W. Pashley and M. J. Whelan (1965/1977). Electron Microscopy of Thin Crystals (Butterworths, London/Krieger, Malabar FLA). ISBN 0-88275-376-2.
5. P. Fraundorf, J. Wang, E. Mandell and M. Rose (2006). Digital darkfield tableaus. Microscopy and Microanalysis 12:S2, 1010–1011 (cf. arXiv:cond-mat/0403017).
6. M. J. Hÿtch, E. Snoeck and R. Kilaas (1998). Quantitative measurement of displacement and strain fields from HRTEM micrographs. Ultramicroscopy 74:131–146.
7. Martin Rose (2006). Spacing measurements of lattice fringes in HRTEM images using digital darkfield decomposition (M.S. Thesis in Physics, U. Missouri – St. Louis).
8. F. G. Meyer and R. R. Coifman (1997). Applied and Computational Harmonic Analysis 4:147.
9. A. G. Flesia, H. Hel-Or, A. Averbuch, E. J. Candes, R. R. Coifman and D. L. Donoho (2001). Digital implementation of ridgelet packets (Academic Press, New York).
10. Matlab Toolbox – URL: http://matlab.izmiran.ru/help/toolbox/wavelet/ch06_a32.html
11. Erik Hjelmås (1999-01-21). Gabor Wavelets. URL: http://www.ansatt.hig.no/erikh/papers/scia99/node6.html

## References

• Paul S. Addison, The Illustrated Wavelet Transform Handbook, Institute of Physics, 2002, ISBN 0-7503-0692-0
• Ali Akansu and Richard Haddad, Multiresolution Signal Decomposition: Transforms, Subbands, Wavelets, Academic Press, 1992, ISBN 0-12-047140-X
• B. Boashash, editor, Time-Frequency Signal Analysis and Processing – A Comprehensive Reference, Elsevier Science, Oxford, 2003, ISBN 0-08-044335-4
• Tony F. Chan and Jackie (Jianhong) Shen, Image Processing and Analysis – Variational, PDE, Wavelet, and Stochastic Methods, Society of Applied Mathematics, 2005, ISBN 0-89871-589-X
• Ingrid Daubechies, Ten Lectures on Wavelets, Society for Industrial and Applied Mathematics, 1992, ISBN 0-89871-274-2
• Ramazan Gençay, Faruk Selçuk and Brandon Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2001, ISBN 0-12-279670-5
• Haar A., Zur Theorie der orthogonalen Funktionensysteme, Mathematische Annalen, 69, pp 331–371, 1910
• Barbara Burke Hubbard, The World According to Wavelets: The Story of a Mathematical Technique in the Making, AK Peters Ltd, 1998, ISBN 1-56881-072-5, ISBN 978-1-56881-072-0
• Gerald Kaiser, A Friendly Guide to Wavelets, Birkhauser, 1994, ISBN 0-8176-3711-7
• Stéphane Mallat, A Wavelet Tour of Signal Processing, 2nd Edition, Academic Press, 1999, ISBN 0-12-466606-X
• Donald B. Percival and Andrew T. Walden, Wavelet Methods for Time Series Analysis, Cambridge University Press, 2000, ISBN 0-521-68508-7
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 13.10. Wavelet Transforms", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
• P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall, 1993, ISBN 0-13-605718-7
• Mladen Victor Wickerhauser, Adapted Wavelet Analysis From Theory to Software, A K Peters Ltd, 1994, ISBN 1-56881-041-5
• Martin Vetterli and Jelena Kovačević, Wavelets and Subband Coding, Prentice Hall, 1995, ISBN 0-13-097080-8
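As a concrete companion to the multiresolution discussion above: for the Haar wavelet the refinement-equation coefficients are h = (1/√2, 1/√2) and g = (1/√2, −1/√2), and one level of the fast wavelet transform is just filtering followed by downsampling. Below is a hand-rolled NumPy sketch of mine (production code would use a wavelet library such as PyWavelets); it demonstrates the perfect-reconstruction property of the filter bank.

```python
# One level of the Haar fast wavelet transform, written out by hand to make
# the filter-bank structure of the multiresolution analysis concrete.
import numpy as np

def haar_dwt(x):
    """Split x (even length) into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass h, then downsample
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass g, then downsample
    return approx, detail

def haar_idwt(approx, detail):
    """Invert haar_dwt exactly (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
assert np.allclose(haar_idwt(a, d), x)          # reconstruction is exact
print("approx:", a)
print("detail:", d)
```

Applying `haar_dwt` again to the approximation coefficients descends one more level of the multiresolution ladder V0 ⊃ V1 ⊃ V2 ⊃ ..., which is exactly the cascade structure of the fast wavelet transform.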
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8654773235321045, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/16180/complexity-of-verifying-proofs?answertab=oldest
# Complexity of verifying proofs

My question can be read on many levels and so I welcome answers to any reading. The general question is: What is the computational complexity of verifying a proof?

One way of looking at a computational complexity class (for decision problems) is that it is a set of theories (a theory being a set of theorems) whose theorems can be proven with the resources allotted to the class (any computation of a yes/no by a TM can be viewed as a proof that the answer is yes/no). Some complexity classes are defined using provability (at least NP); that is, given a proof (a witness or certificate), it can be checked (verified) by some other (strictly smaller) class's machine. In most proofs of NP-completeness, showing that a problem is actually a member of NP usually is just showing that there is a witness for any instance that can be verified by a P machine (or, better said, that the verification problem is a member of P).

My question comes from my observation that these verification problems (the ones I remember) seem almost always (I can't remember any others) to be very linear (yes, I'm confounding computational complexity with algorithmic running time). Are there some verification algorithms that are (provably) polynomial of a higher degree than 1? Or even super-polynomial for EXP, EXP2, etc.? Or in general, is the complexity of verification, applied to -any- complexity class, always linear (in P or some smaller class)?

(Even though my question is about complexity classes, I am also curious about real life proofs where people may rely on lemmas that involve, for example, proving the existence of a particular intersection of hyperplanes (presumably (!) in some subclass of P) -

- I am a bit confused. How are proofs given to you? Are we talking about formal proofs? A formal proof would be a sequence of sentences, and to verify whether such a sequence is a proof or not is just as hard as verifying whether a sentence is an axiom of the theory in which the proof takes place (and it is decidable iff the theory is computably enumerable). Perhaps you mean something else? – Andres Caicedo Jan 3 '11 at 0:18

- @Andres: Thanks for prompting me to clarify. I am mixing domain vocabularies. Even though I am talking about proofs and verifications, I am considering the mechanics not of a sequence of applications of rules of inference, but rather sequences of TM rules. And I'm assuming that a given TM has only a finite set of rules. Also, the proof is not given in full, but as a certificate/witness that can be inputted to another TM (of presumably lower complexity) to verify/confirm the truth of the answer of the higher complexity problem. – Mitch Jan 3 '11 at 0:34

## 1 Answer

I will answer your questions in order:

1- The complexity of a proof is the length of the proof. Remember that a proof (witness or certificate) is simply a branch of a Turing machine on some given input. This corresponds to a solution to some problem. For example, consider SAT: the input to the machine will be a SAT instance (i.e. a propositional formula), and the proof will be an assignment of 0-1 values to the variables. Some problems can have exponentially long proofs, like PSPACE-complete problems, or proofs of non-existence like in coNP.

2- The second question is not very clear to me. I think you are asking if there can be proofs with super-polynomial lengths? Like I've said, PSPACE-complete and coNP languages have exponentially long proofs.

3- I don't understand what you mean by linear.
You mean that the verifiers are always P machines? Not necessarily; for example, NEXP is also called Exponential-NP. This is because the proofs are verified by deterministic machines that run in exponential time.

In real life, you can consider that all the lines of text that a proof has can be stated in formal logic, and this can be interpreted by a machine. A field in computational complexity that studies formal proofs is proof complexity. For example, you are given a statement written in some theory (e.g. Peano arithmetic). Then, you calculate what resources a machine needs to verify whether the statement is true or not.

- re 1- yes, under certain circumstances, a proof is certainly just the trace through a TM. But your example points to a different circumstance: SAT is an NP-complete problem, but as often presented, to show its membership in NP, one gives a certificate of a valuation, which one then uses to verify that the SAT instance has a true valuation. That verification process, which one might also call a proof, is known to be in $NC^1$ (the Boolean Formula Value problem, calculating the boolean value of a formula with no variables, just boolean constants). – Mitch Jan 25 '11 at 1:17

- continued... but to mix metaphors, that is a linear time algorithm. Anyway, the point is that the verification process produces a proof in the TM (as a sequence of moves), but the verification process, with the certificate as input, has a certain nontrivial complexity to it. A certificate for an NEXP machine would be one that needs to be verified on an EXP machine, whose running time is, well, exponential (deterministic). So something I'd like to call a proof (simply the certificate) still needs to be supplied to a machine with possibly superlinear running time. – Mitch Jan 25 '11 at 1:23

- A verification process is an algorithm; it cannot be in NC1. The language consisting of boolean formulas with no free variables is in NC1. Of course, the circuit that decides this language has polynomial size with logarithmic depth. Also, a proof is not defined as an algorithm; it is another string of bits that tells you if the input is in the language or not. Another example: find a path of length <= l between nodes s and t in some graph. This problem is in P, and a proof will be a sequence of connected nodes with length l. – Marcos Villagra Jan 25 '11 at 1:34

- The complexity of a proof is always defined as the length of an accepting path in the TM. This path is generated by the input. – Marcos Villagra Jan 25 '11 at 1:36

- Am I getting your questions right? – Marcos Villagra Jan 25 '11 at 1:37
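The SAT example in the answer can be made concrete. The following Python sketch is my illustration (the clause encoding is an arbitrary DIMACS-like convention, not something from the thread); it verifies a certificate in time linear in the number of literals, which is exactly the contrast the question probes: checking a witness is cheap even when finding one appears hard.

```python
# Linear-time verification of an NP certificate for CNF-SAT.
# A formula is a list of clauses; each clause is a list of nonzero ints,
# where k means variable k and -k means its negation.

def verify(formula, assignment):
    """Check a 0-1 assignment (dict: var -> bool) against a CNF formula.

    Runs in time linear in the total number of literals: the verifier is
    far cheaper than the apparent cost of searching for a witness.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(verify(formula, {1: True, 2: True, 3: False}))   # True: a valid witness
print(verify(formula, {1: True, 2: False, 3: True}))   # False: not a witness
```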
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446125626564026, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/100635-radical-extension.html
# Thread: Radical extension

1. Prove the splitting field of $x^3+x^2+1\in Z_{2}[x]$ is a radical extension... When it comes to $Z_{2}$, I am a little bit confused, since many theorems are based on $charF=0$.

2. Originally Posted by ynj: [the question above]

The theorem about determining if something is a radical extension (that is, the polynomial is solvable) is about characteristic zero fields. However, the definition of radical extensions is more general than that. Remember, $K/F$ is a radical extension (by definition) iff there exist $a_1,...,a_n\in K$ and $e_1,...,e_n\geq 1$ such that $F(a_1,...,a_n) = K$ and $a_j^{e_j} \in F(a_1,...,a_{j-1})$. So construct the splitting field of this and argue that you can write it in the above form.

3. Yeah, but I simply know nothing about the structure of the splitting field...

4. Notice that $x^3+x^2+1$ is irreducible over $F = \mathbb{Z}_2$. Therefore, as you know, there exists an extension field $K$ which has $\alpha\in K$ that solves this polynomial. Therefore, $\alpha^3 = \alpha^2 + 1$. Now $x^3 + x^2 + 1 = (x+\alpha)(x^2 + (\alpha+1)x+\alpha(\alpha+1))$. You need to ask now whether $x^2+(\alpha+1)x+\alpha(\alpha+1)$ has a zero in $F(\alpha)$. Sadly, it does not; this can be confirmed by checking $a\alpha^2+b\alpha + c$ where $a,b,c\in \{0,1\}$. Thus, $F(\alpha)$ is not the splitting field of $x^3+x^2+1$. However, we know there exists $L/F(\alpha)$ such that there is $\beta \in L$ which solves $x^2+(\alpha+1)x+\alpha(\alpha +1 ) \in F(\alpha)[x]$. The extension field $F(\alpha,\beta)$ will therefore be the splitting field over $F$. Now it remains to argue that this satisfies the conditions of being a radical extension.

5. This is maybe an easier way: let $p(x)=x^3+x^2+1 \in \mathbb{F}_2[x].$ See that if $p(\alpha)=0,$ then $\alpha^7 = 1 \in \mathbb{F}_2.$ Thus the splitting field of $p(x)$ is a radical extension of $\mathbb{F}_2.$

6. Originally Posted by NonCommAlg: [post 5]

But $F(\alpha)$ is not the splitting field. You need to show that $F(\alpha,\beta)$ satisfies $\beta^{n_1} \in F(\alpha)$ and $\alpha^{n_2} \in F$. You have shown that $n_2=7$ works, so it still remains to find $n_1$.

7. Yeah.. $F(\alpha,\beta)$ is the splitting field. But we know $\alpha^7\in F,\beta^7\in F\subset F(\alpha)$. Is that right?

8.
Originally Posted by ynj: [post 7]

Correct!
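The thread's conclusion is easy to check by brute force, since the splitting field here is just $\mathbb{F}_8$. Below is a small Python sketch of mine (all helper names are ad hoc) that represents elements of $\mathbb{Z}_2[x]/(x^3+x^2+1)$ as 3-bit integers, with bit k standing for the coefficient of $x^k$.

```python
# Arithmetic in F_8 = F_2[x]/(x^3 + x^2 + 1), elements as 3-bit ints.
MOD = 0b1101  # x^3 + x^2 + 1

def gf8_mul(a, b):
    """Carry-less multiply, then reduce modulo x^3 + x^2 + 1."""
    prod = 0
    for k in range(3):
        if (b >> k) & 1:
            prod ^= a << k
    for k in range(4, 2, -1):             # clear the degree-4, then degree-3 term
        if (prod >> k) & 1:
            prod ^= MOD << (k - 3)
    return prod

alpha = 0b010                             # the class of x

# alpha is a root: alpha^3 + alpha^2 + 1 = 0 in F_8
a2 = gf8_mul(alpha, alpha)
a3 = gf8_mul(a2, alpha)
assert a3 ^ a2 ^ 1 == 0

# alpha^7 = 1, as in post 5, so the splitting field is a radical extension of F_2
p = 1
for _ in range(7):
    p = gf8_mul(p, alpha)
assert p == 1

# x^3 + x^2 + 1 splits completely over F_8: it has three roots there
roots = [e for e in range(8)
         if gf8_mul(gf8_mul(e, e), e) ^ gf8_mul(e, e) ^ 1 == 0]
print("roots in F_8:", roots)             # the three Frobenius conjugates of alpha
```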
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 60, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422401785850525, "perplexity_flag": "head"}
http://reference.iucr.org/mediawiki/index.php?title=Refinement&oldid=2761
# Refinement

### From Online Dictionary of Crystallography

Affinement (Fr); Affinamento (It); 構造精密化 (Ja).

## Definition

In structure determination, the process of improving the parameters of an approximate (trial) structure until the best fit is achieved between an observed diffraction pattern and that calculated by Fourier transformation from the numerically parameterized trial structure.

## Least-squares refinement

The most common approach in the determination of inorganic or small-molecule structures is to minimize a function

$\sum w (Y_o - Y_c)^2$

where Yo represents the strength of an observed diffraction spot or reflection from a lattice plane of the crystal, Yc is the value calculated from the structural model for the same reflection, and w is an assigned weight reflecting the importance that this reflection makes to the sum. The weights usually represent an estimate of the precision of the measured quantity. The sum is taken over all measured reflections.

### Refinable parameters

The structural model describes a collection of scattering centres (atoms), each located at a fixed position in the crystal lattice, and with some degree of mobility or extension around that locus. In adjusting the structural model to improve the fit between calculated and observed diffraction patterns, the crystallographer may vary these and other parameters. Refinable parameters are those that may be varied in order to improve the fit. Usually they comprise atomic coordinates, atomic displacement parameters, and a scale factor to bring the observed and calculated amplitudes or intensities to the same scale. They may also include extinction parameters, occupancy factors, twin component fractions, and even the assigned space group. Relations between the refinable parameters may be expressed as constraints or restraints that modify the function to be minimized.

### Constraints

A constraint is an exact mathematical relationship that reduces the number of free parameters in a model. For example, the position of an atom on a general position is specified by three coordinates, all of which may be varied independently. However, an atom sitting on a special symmetry position has one or more positional coordinates determined by the symmetry (for example, an atom on an inversion centre in the unit cell has all three coordinates fixed). Constraints are rigid mathematical rules which must be adhered to during the refinement; they reduce the number of refinable parameters. A constrained refinement is one that includes constraints other than those arising from space group symmetry (since these are necessarily always present).

### Restraints

A restraint is an additional condition that the model parameters must meet to satisfy some additional piece of knowledge appropriate to the structure. For example, if the chemical identities of certain atoms within a molecule are known, their interatomic distance may be fit to a target value characteristic of bond lengths in other known chemical species of the same type. Restraints are therefore treated as if they were additional experimental observations, and have the effect of increasing the effective number of observations in the refinement.

### Refinement against F, F² or I?
The function to minimize in least-squares refinement was given above in the general form $\sum w (Y_o - Y_c)^2$ and the quantity $Y$ was referred to as a measure of the strength of a reflection. In practice, $Y$, sometimes known as the structure-factor coefficient, may be either $I$, the intensity of the measured reflection, $|F|$, the magnitude of the structure factor, or $F^2$, the square of the structure factor.

Refinement against $I$, the measured intensities, has the merit of using the raw measurements directly, although it requires the incorporation in the refinement of the correction factors (scale factor, Lorentz–polarization and absorption) that are applied during standard data reduction. There are, however, problems of high statistical correlation when refining absorption parameters against anisotropic displacement parameters.

Refinement against $|F|$ involves mathematical problems with very weak reflections or reflections with negative measured intensities. There are also difficulties in estimating standard uncertainties $\sigma(F)$ from the $\sigma(F^2)$ values for weak or zero measured intensities.

Refinement against $F^2$ avoids these difficulties, and also reduces the probability of the refinement iterations settling into a local minimum. It also simplifies the treatment of twinned and non-centrosymmetric structures. For these reasons, it is probably currently the most frequently used technique, although it does rely heavily on the assignment of reasonable weights to individual reflections.

## Maximum likelihood

The principle of maximum likelihood formalizes the idea that the quality of a model is judged by its consistency with the observations. If a model is consistent with an observation, then, if the model were correct, there would be a high probability of making an observation with that value. For a set of relevant observations, the probability of generating such a set is an excellent measure of the quality of the model. For independent observations, the joint probability of making the set of observations is the product of the probabilities of making each independent observation.

In crystallography, let $P(|F_o|; |F_c|)$ represent the probability of obtaining an observed structure factor $F_o$ given a calculated value $F_c$. The joint probability is the likelihood function $L$:

$$L = \prod_{hkl} P(|F_o|; |F_c|)$$

Since it is more convenient to work with sums than products, one typically works with the negative logarithm of the likelihood function

$$\mathcal{L} = -\sum_{hkl} \log P(|F_o|; |F_c|)$$

The mathematical procedure for determining maximum likelihood then becomes that of minimizing $\mathcal{L}$.

## See also

- Least squares. E. Prince and P. T. Boggs. International Tables for Crystallography (2006). Vol. C, ch. 8.1, pp. 678-688. doi:10.1107/97809553602060000609
- Other refinement methods. E. Prince and D. M. Collins. International Tables for Crystallography (2006). Vol. C, ch. 8.2, pp. 689-692. doi:10.1107/97809553602060000610
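To make the least-squares recipe concrete, here is a minimal numerical sketch (not part of the dictionary entry) of refining two model parameters against synthetic data by minimizing $\sum w (Y_o - Y_c)^2$. The toy model, a scale factor $k$ and a displacement-like parameter $B$, and all numbers are invented purely for illustration.

```python
# Minimal illustration of weighted least-squares refinement:
# minimize sum_i w_i * (Yo_i - Yc_i(p))^2 over model parameters p.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
s = np.linspace(0.1, 1.0, 40)            # stand-in for sin(theta)/lambda
true_k, true_B = 2.5, 1.8
Yo = true_k * np.exp(-true_B * s**2) + rng.normal(0, 0.02, s.size)
w = (1.0 / 0.02**2) * np.ones_like(Yo)   # weights ~ 1/sigma^2

def residuals(p):
    k, B = p
    Yc = k * np.exp(-B * s**2)
    return np.sqrt(w) * (Yo - Yc)        # least_squares minimizes sum r_i^2

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)                             # close to (2.5, 1.8)
```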
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.909339427947998, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/68721/parametrizing-the-realization-space-of-a-polyhedron-by-its-edges/68914
## Parametrizing the realization space of a polyhedron by its edges

I alluded to this here, but at that point I hadn't really done enough work to know what I wanted to ask.

Call a polyhedron "trihedral" if three faces meet at each vertex. Each of the $F$ faces can be varied according to three degrees of freedom; these can be written explicitly by fixing an origin not lying in the surface of the polyhedron, and representing each face $F_j$ by the point $v_j = (x_j,y_j,z_j)$ in its plane which is closest to the origin. However, of these $3F$ degrees of freedom, three correspond to translation and three to rotation; so there are $E = 3F-6$ degrees of freedom which determine the polyhedron's shape, where $E$ is the number of edges. Also, $E$ remains the "true" number of degrees of freedom even for non-trihedral polyhedra, since each degree of freedom is lost by contracting an edge to a point.

Now, each edge is characterized by its length $L_i$ and dihedral angle $\theta_i$, so to write the corresponding degree of freedom explicitly we need a fixed function $f(L,\theta)$. Together with the six values for translation and rotation, the values of $f_i = f(L_i,\theta_i)$ for each edge form a function $\mathbb{R}^{3F} \to \mathbb{R}^{3F}$ on the realization space of possible shapes, and the key criterion for $f$ is that we want this function to be (at least locally) invertible for all shapes in the space.

Since $f_i$ is a function of $L_i$ and $\theta_i$, which in turn can be calculated explicitly from the $(x_j,y_j,z_j)$, we can write the elements of the top $E$ rows of the Jacobian $J$ in the form $\partial_L f\,\partial_x L + \partial_\theta f\,\partial_x \theta$ etc. So we want to solve $|J| \neq 0$ for all $(x_j,y_j,z_j)$ in terms of $\partial_L f$ and $\partial_\theta f$.

But I've already encountered a problem here: I can't find $\partial_x L$ etc. I've tried by using $F_j = \{p : p\cdot v_j = |v_j|^2\}$ to find the coordinates of the vertices, and it's a mess. I think I'm missing something. Any suggestions? Thanks, Robin

- Do you want to characterize all admissible $f$ or do you want just to find one that works for all polytopes? – fedja Jun 24 2011 at 11:59
- The latter, although the former would be nice since I might want to put some extra constraints on it later. – Robin Saunders Jun 24 2011 at 12:57

## 4 Answers

This is a nice problem which is related to a lot of nice mathematics. You are given a 3-dimensional polytope P and you would like to understand the space S(P) of all geometric realizations of P. The problem is of interest also in higher dimensions. (Two related miracles for 3-dimensional polytopes are the Koebe–Andreev–Thurston circle packing theorem and Steinitz's theorem.) Indeed, some proofs of Steinitz's theorem imply that S(P) is a contractible space whose dimension is the number of edges. While the number of edges is indeed the "right" dimension for this space, this is not obvious: for simplicial polytopes one can rely on Cauchy's rigidity theorem. Connelly's flexible sphere demonstrates why the degree-of-freedom argument can fail. (Works on rigidity of polyhedral graphs, by Dehn, Alexandrov, and in more modern times by Connelly, Whiteley, and many others, can be of relevance.) The question asks about an explicit parametrization of S(P). I am not aware of an explicit parametrization and description. The works of Sabitov and his school and collaborators are highly relevant.
Sabitov's "bellows theorem" ( http://www.emis.ams.org/journals/BAG/vol.38/no.1/1.html ), regarding the invariance of the volume for flexes of a simplicial 2-sphere, is related to the way the volume of the polytope can be algebraically described by the edge lengths. Sabitov, his students and partners have various additional results even closer to the question, but I don't remember them right now. (Try also this work by V. Alexander: http://www.springerlink.com/index/J4EXVR63M2QB95PP.pdf )

- Dear Gil: You say it is not obvious, so where do I find a calculation of $e$ (the number of edges) as the "right" dimension of $S(P)$? If I subtract translations and rotations I come up with $e-6$ (as in Richter-Gebert's REALIZATION SPACES OF POLYTOPES, p. 14, if I did understand him correctly). But what does this mean in the case of the tetrahedron with $e=6$? I for myself came up with $e-2$ (+ 7 degrees of freedom for translations, rotations, and scaling), i.e. $e+5$, but that was only a guess. See here mathoverflow.net/questions/119607/… – Hans Stricker Jan 23 at 17:11
- Dear Hans, I believe the count e already took into account translations and rotations. Namely for the tetrahedron you have 3 times 4 degrees of freedom to locate the vertices, and when you subtract 6 for translations and rotations you are left with 6. – Gil Kalai Jan 24 at 17:05

Just a quick comment: this question reminds me of Andreev's theorem (about the possibility of realizing some hyperbolic polyhedra using some conditions on the dihedral angles). Maybe the methods used there can help with your particular question? For some recent accounts, see: "Andreev's theorem on hyperbolic polyhedra", by Roeder, Dunbar, Hubbard, and "A characterization of compact convex polyhedra in hyperbolic 3-space" by Rivin and Hodgson, based on several other papers by Rivin. The original article by Andreev apparently contained a mistake, found by Roeder. A last remark: connected to the variation of dihedral angles, there is also a nice result that goes under the name of "Schläfli's formula" (see on the arXiv an article by Rivin and Schlenker)...

- Wait, I have to mention that one: "Shapes of polyhedra and triangulations of the sphere" by W. Thurston, who covers Andreev's theorem in his Notes (msri.org/publications/books/gt3m ) – Sylvain Bonnot Jun 24 2011 at 14:48
- Thanks for your response. I've seen Thurston's paper before, in fact. It has in common with other literature I've found that it deals specifically with convex triangulated polyhedra. These are "nice" in the sense that they are determined completely by their edge-lengths, and each vertex has three degrees of freedom; but somehow they are the exact opposite of what I am studying. Trihedral polyhedra are dual to triangulated ones. That in itself might not be an issue, but I have no canonical surjection from triangulated to trihedral polyhedra. Also, I don't want to demand convexity. – Robin Saunders Jun 25 2011 at 17:25
- Oh I see, you allow non-convex ones... There is one interesting point happening then: imagine one such surface with a protruding tetrahedron, and suddenly you push this tetrahedron inside (from a bump it switched to a dent); then lengths and dihedral angles are the same. How would such a phenomenon be reflected in the parameter space?
Perhaps the whole parameter space could be thought of as a covering space of the space recording the lengths and dihedral angles, and perhaps this operation of "creating a dent" corresponds to a jump from one sheet of the covering space to another? – Sylvain Bonnot Jun 25 2011 at 20:40

- The inner dihedral angles are not the same: they are complementary. – fedja Jun 25 2011 at 23:16
- The dihedral angles change sign, so as long as f is not locally an even function of $\theta$ everything should be fine. Even if the polyhedron is non-orientable, so that there is no global sign for $\theta$, it is still meaningful to say that it changes sign locally, so that the Jacobian at the transition can distinguish between the two directions. – Robin Saunders Jun 25 2011 at 23:19

In cases like this it might be practical to use the square of the lengths of the edges rather than the lengths. If $e_i$ is the edge between vertices $v_j$ and $v_k$ then the derivative of its square length can be written as $2\langle v_j-v_k, v'_j-v'_k\rangle$, which is quite simple. Your problem is nice but probably difficult. If you consider only the dihedral angles and only convex polyhedra, you might want to look at a recent result by Mazzeo and Montcouquiol, http://arxiv.org/abs/0908.2981 They prove a rigidity result relevant to your question, but their proof uses some real analysis.

Peter Schroeder et al. use mean curvature half-density to navigate the shape space of Riemannian surfaces. They also define a discrete counterpart of this object. The latter might be helpful for the problem because, say, a discrete counterpart of total mean curvature is usually expressed through the edge lengths and dihedral angles.
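Since the question gets stuck on computing $\partial_x L$ symbolically, one pragmatic fallback (not suggested by any of the answers above, and added here only as an illustration) is to evaluate these derivatives numerically: recover each vertex as the intersection of the three planes $F_j=\{p : p\cdot v_j = |v_j|^2\}$, compute the edge length, and difference it with respect to the face coordinates. The face data below are arbitrary values, and `vertex`/`edge_length` are hypothetical helper names.

```python
# Numerical route around the derivative computation the question gets
# stuck on: a vertex is the intersection of three face planes
# {p : p.v_j = |v_j|^2}; an edge length comes from two such vertices;
# dL/dx_j is then estimated by finite differences.
import numpy as np

def vertex(v1, v2, v3):
    # Solve [v1; v2; v3] p = (|v1|^2, |v2|^2, |v3|^2).
    A = np.vstack([v1, v2, v3])
    b = np.array([v1 @ v1, v2 @ v2, v3 @ v3])
    return np.linalg.solve(A, b)

def edge_length(faces):
    # Edge shared by faces 0 and 1; its endpoints also lie on faces 2, 3.
    p = vertex(faces[0], faces[1], faces[2])
    q = vertex(faces[0], faces[1], faces[3])
    return np.linalg.norm(p - q)

faces = np.array([[1.0, 0.1, 0.2],
                  [0.1, 1.0, -0.3],
                  [0.2, -0.1, 1.0],
                  [-1.0, 0.3, 0.1]])

L = edge_length(faces)
eps = 1e-6
grad = np.zeros_like(faces)
for i in range(faces.shape[0]):
    for j in range(3):
        bumped = faces.copy()
        bumped[i, j] += eps
        grad[i, j] = (edge_length(bumped) - L) / eps   # ~ dL/dx_{ij}
print(L, grad, sep="\n")
```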
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367233514785767, "perplexity_flag": "middle"}
http://www.adaptiveagents.org/bayesian_causal_induction
Pedro A. Ortega

# Bayesian Causal Induction

This talk was first presented at the 2011 NIPS Workshop on Philosophy and Machine Learning. The talk slides are here, and the workshop paper here.

Please cite this as: Pedro A. Ortega. Bayesian Causal Induction. 2011 NIPS Workshop in Philosophy and Machine Learning.

Abstract: In this presentation, I will show that taking Shafer's idea to represent causal structures using probability trees and Pearl's notion of causal interventions, we can do causal induction in a simple and elegant Bayesian way. I am not aware of any other work that does induction in this way.

## Motivation

The problem of causal induction, namely the problem of finding out "what causes what", has puzzled many philosophers such as Aristotle, Kant and especially Hume.

- Does the light turn on whenever I snap my fingers?
- Will the headache go away if I take a pain killer?
- Will I have a bad incident if a black cat crosses my path?
- Will my soul be saved if I join this religion?

More precisely, causal induction is defined as the ability to generalize from particular causal instances to abstract causal laws. Let's have a look at another example.

- "I had a bad fall on wet floor." – that's my experience.
- "Therefore, it is dangerous to ride a bike on ice." – that's my conclusion.

I concluded this because I learned, from my experience, that "a slippery floor can cause a fall". There are at least two important aspects to this example. First, we need to infer the causal direction: "Did the fall cause the wet floor or vice versa?". Second, we need to extrapolate this causal knowledge to an unseen situation that shares some similarities. We will not treat this second aspect of the induction problem here because it is essentially the same problem we would encounter in a normal sequence prediction setup. The important point is that, for us, causal induction is a natural ability that we apply constantly throughout our lives – but how can we formalize it mathematically?

## Causal Graphical Model

Let us rephrase the problem we are tackling in the language of causal graphical models. We have two random variables – $X$ and $Y$ – and we are pondering the plausibility of two competing causal hypotheses: either $X$ causes $Y$ or $Y$ causes $X$, which we label as $h$ and $\neg h$ respectively. We assume that both hypotheses model identical joint distributions over $X$ and $Y$.

A lot of progress has been made in causal induction using machine learning methods. For example, there are methods based on conditional independence testing like the PC algorithm, which we cannot apply here because it requires at least 3 random variables; other methods make additional assumptions, for instance about the nature of the noise. But here we are interested in the general case with no additional assumptions.

The first question that we address is: how do we express the causal induction problem using the language of graphical models? Since we do not know the direction of the arrow, we have to treat it as a random variable – controlled by the causal hypothesis. But now we have a problem. The causal hypothesis controls the very structure of the causal graph over the pair $X$ and $Y$, and hence $H$ lives in a meta-level. This is problematic, because we do not know how to do inference when there are meta-levels using the language of graphical models alone. Furthermore, using an analogous argument, it is easy to build a hierarchy of meta-levels. Is there a way to express this situation without resorting to meta-levels?
Important: Before you carry on, ask yourself - do you really understand the problem? If you do not, read the previous paragraph again, then think about it a little, and then carry on reading. I'm saying this because I have discovered, from experience, that only 1 out of 5 people seem to get the point.

## Probability Trees

Shafer proposed to use the simplest representation of a random experiment – actually the first representation we get taught in a probability course – namely probability trees. In this case, nodes represent mechanisms that resolve the value of a random variable given the history. For instance, the random variable $Y$, given that $h$ is the correct hypothesis and we have observed $\neg x$, takes on the value $y$ with probability $\frac{1}{4}$ and the value $\neg y$ with probability $\frac{3}{4}$.

A path corresponds to a realization of the experiment – it tells us a story of how the random variables acquired their values and in what order. Consequently, a tree is a representation of all the potential causal realizations of the experiment. We can even represent alternative causal realizations. For instance, the left branch corresponds to the case where $X$ causally precedes $Y$, and the right branch to the case where $Y$ causally precedes $X$. Of course, a probability tree can also capture the conditional independencies, but it doesn't do it in such an obvious way as in the case of graphical models. Note that all the random variables are first-class citizens now – there are no meta-levels anymore!

## Inferring the Causal Direction

Let us infer the causal direction. Assume that we observe that $X$ takes on $x$, and that $Y$ takes on $y$. What is the probability of the causal hypothesis $H = h$? We use Bayes' rule, placing a uniform probability distribution over the hypotheses. The posterior probability of $h$ given $x$ and $y$ is thus the likelihood – factorized according to the causal dependencies of the left branch in the tree – multiplied by the prior probability, normalized by the probability of the data. Note how the denominator has two different factorizations: one for each of the two sides of the tree. Replacing the numbers, we obtain…

$$P(h|x,y) = \frac{P(y|h,x)P(x|h)P(h)}{P(y|h,x)P(x|h)P(h) + P(x|\neg h,y)P(y|\neg h)P(\neg h)} = \frac{\frac{3}{4} \cdot \frac{1}{2} \cdot \frac{1}{2}}{\frac{3}{4} \cdot \frac{1}{2} \cdot \frac{1}{2} + \frac{3}{4} \cdot \frac{1}{2} \cdot \frac{1}{2}} = \frac{1}{2} = P(h)!$$

…$\frac{1}{2}$! That is, the prior probability of $h$! We haven't learned anything! This makes sense, because the two causal hypotheses differ in the causal order of the random variables, but they have identical likelihoods! Thus, we invoke a fundamental insight of statistical causality: to extract new causal information, we have to supply old causal information, also paraphrased as "no causes in, no causes out" and "to learn what happens if you kick the system, you have to kick the system". More specifically, we can introduce causal information by intervening in the experiment. Let us explore this.

## Interventions in a Probability Tree

Let us quickly revise what we mean when we say that the likelihoods are the same, which I have written under the leaves of the tree. In the tree, the realization $x$, $y$ under hypothesis $h$ has probability $\frac{3}{8}$ – exactly the same as under hypothesis $\neg h$; the realization $x$, $\neg y$ has probability $\frac{1}{8}$ under hypothesis $h$ – again, like in hypothesis $\neg h$, and so forth.
We intervene in the experiment by setting the value of the random variable $X$ to the value $x$. This means that I'm fixing the value of $X$ for all potential realizations of the experiment. In the tree, this amounts to replacing all the nodes that resolve the value of $X$ with a new node that places all its probability mass on the outcome $X = x$. Note that as a result of the intervention, the two hypotheses have different likelihoods now. The intervention has introduced a statistical asymmetry! As a side note, I would like to point out that in this tree I could do all sorts of crazy things, like intervening on the very hypothesis!

## Inferring the Causal Direction – 2nd Attempt

Let us repeat the experiment, but this time setting the value of $X$ and observing the value of $y$. What is the posterior probability of $H = h$? Again, we write down the posterior probability of $h$ and use Bayes' rule, and we use Pearl's hat symbol to denote the fact that we are doing our calculations using an intervened probability tree. Again, we have the likelihood times the prior, where the likelihood has been factorized according to the causal order. Numerically, all the probabilities stay the same as in the un-intervened case, except the probabilities where the intervened variable is in the argument.

$$P(h|\hat{x},y) = \frac{P(y|h,\hat{x}) P(\hat{x}|h) P(h)}{P(y|h,\hat{x}) P(\hat{x}|h)P(h) + P(\hat{x}|\neg h,y)P(y|\neg h)P(\neg h)} = \frac{\frac{3}{4} \cdot 1 \cdot \frac{1}{2}}{\frac{3}{4} \cdot 1 \cdot \frac{1}{2} + 1 \cdot \frac{1}{2} \cdot \frac{1}{2}} = \frac{3}{5} \neq P(h).$$

As we would expect, the resulting posterior probability has changed. In fact, since it has increased, we have gained evidence for the causal hypothesis "$X$ causes $Y$". This concludes our explanation of causal induction. We can apply the same technique to deal with more complex cases, for instance to infer causal dependencies in time series.

## Conclusions

- Causal induction can be done using purely Bayesian techniques plus a description allowing multiple causal explanations of a random experiment.
- Probability trees provide a simple and clean way to encode causal and probabilistic information.
- The purpose of an intervention is to introduce statistical asymmetries, rendering the likelihoods different.
- The causal information that we can acquire is limited by the interventions we can apply to the system.
- Essentially, in this presentation I have shown that taking Shafer's idea of using probability trees and Pearl's idea of interventions is enough to do causal induction. I am not aware of any other work that has solved this problem using one representation.
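The two posterior computations above are easy to check mechanically. The sketch below (added for illustration; not part of the original talk) hard-codes the branch probabilities read off the example tree and reproduces the values $\frac{1}{2}$ and $\frac{3}{5}$.

```python
# Reproducing the two posterior computations from the talk. The branch
# probabilities are the ones given for the example probability tree.
P_x_given_h       = 0.5    # P(x | h)
P_y_given_h_x     = 0.75   # P(y | h, x)
P_y_given_not_h   = 0.5    # P(y | ~h)
P_x_given_not_h_y = 0.75   # P(x | ~h, y)
prior_h = 0.5

# Observation only: identical likelihoods, so the posterior equals the prior.
num = P_y_given_h_x * P_x_given_h * prior_h
den = num + P_x_given_not_h_y * P_y_given_not_h * (1 - prior_h)
print(num / den)           # 0.5

# Intervention do(X = x): P(x-hat | .) becomes 1 on both branches.
num = P_y_given_h_x * 1.0 * prior_h
den = num + 1.0 * P_y_given_not_h * (1 - prior_h)
print(num / den)           # 0.6 -> evidence for "X causes Y"
```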
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275219440460205, "perplexity_flag": "head"}
http://alanrendall.wordpress.com/2011/04/08/conference-on-modelling-the-immune-system-in-dresden-part-3/
# Hydrobates

A mathematician thinks aloud

## Conference on modelling the immune system in Dresden, part 3

In my second ever post on this blog I quoted a celebrated paper of Ho et al. on HIV therapy. One of the other authors of that paper was Avidan Neumann and on Wednesday I had the opportunity to hear him giving a talk. His subjects were HIV, HBV and HCV, with the greatest emphasis on the last of these. He did briefly mention the case of the man who is apparently the only person ever to be cured of HIV. This took place in Berlin in 2006. The man had both HIV and leukemia, and as therapies for both of these he was given radiation treatment and a bone marrow transplant. The transplant was a very special one since the donor was an HIV controller. Since then the patient has not had any treatment against HIV, and despite very thorough tests it has been impossible to find any trace of HIV in his body.

Coming now to HCV, this virus causes hepatitis C, a liver disease which is often chronic. It often has few or no symptoms but the liver is progressively damaged, frequently resulting in cirrhosis or even liver cancer. In the worst case a liver transplant is required, and after the transplant the virus always infects the new liver. This disease affects about 300 million people and no vaccine is available. The standard treatment is to give interferon $\alpha$ and an antiviral drug, ribavirin, over many months, and this can be very hard on patients due to side effects. A new treatment, a protease inhibitor called telaprevir, may soon be approved by the FDA. It is much more effective in getting rid of the virus than the standard treatment. The reasons why it is effective have been understood using mathematical modelling. Listening to this talk gave me a strong impression of how close medicine and mathematics can be.

Arup Chakraborty gave a talk on targets for HIV vaccines which had an essential connection to HIV controllers. He has done statistical analysis of HIV viral genomes looking for a certain type of pattern. He explained the idea by an analogy with the fluctuations of share prices. If the share prices of different companies are examined for positive correlations, then it is discovered that they can be grouped into certain sectors. These are the companies which are strongly related to certain activities, for instance those which have some close connection to car production. The genome of HIV virions can be analysed for correlations in an analogous way. This results in the identification of positively correlated groups which may again be called sectors. It is not a priori clear what these groups really mean. Interestingly, the group with the strongest correlations (Sector 3 if I remember correctly) contains sequences related to HIV controllers. It turns out that these sequences have to do with the activity of building the viral capsid. A problem with vaccines against HIV is that if a vaccine targets a particular peptide, a mutation may change that peptide and destroy the recognition without damaging the virus too much. Thus the virus can escape the immune attack. The special sequences in Sector 3 are such that mutations which affect them are likely to affect the stability of the capsid and hence compromise the reproduction of the virus. An important role is also played by those MHC molecules which can present the special peptides. The MHC molecules which do this optimally, and which occur in controllers, are rare in the general population. The special peptides are, however, presented in a subleading way by more common MHC molecules.
This may be enough to provide one element in the design of a good vaccine. In analysing this problem Chakraborty is using sophisticated mathematics, in particular the theory of random matrices. To sum up my impressions of the conference, it has convinced me that mathematical immunology is an exciting and dynamic field which I want to be a part of.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9676457047462463, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/68213/families-of-curves-for-which-the-belyi-degree-can-be-easily-bounded
## Families of curves for which the Belyi degree can be easily bounded

I know (edit: three) families of smooth projective connected curves over $\bar{\mathbf{Q}}$ for which the Belyi degree is not hard to bound from above.

1. The modular curves $X(n)$. They are constructed by compactifying the quotient $Y(n) = \Gamma(n)\backslash \mathbf{H}$. The natural morphism $X(n) \longrightarrow X(1)$ is Belyi of degree $n^2$ (up to a constant factor). This also bounds the Belyi degree of a modular curve given by a congruence subgroup $\Gamma$. In general, Zograf proves that the Belyi degree of a (classical congruence) modular curve is bounded by $128(g+1)$.

2. The Fermat curves $F(n)$. They are given by the equation $x^n+y^n+z^n =0$ in $\mathbf{P}^2$. The morphism $(x:y:z)\mapsto (x^n:z^n)$ is Belyi of degree $n^2$. It is known that $F(n)$ is not a modular curve for $n$ big enough. So this example is really different from the one above. (Also note that $n^2\leq 10g+10$ by the Plücker formula.)

3. Wolfart curves are curves $X$ over $\overline{\mathbf{Q}}$ with a Galois Belyi morphism $X\to \mathbf{P}^1$; I took this terminology from a preprint by Pete L. Clark. Such curves are also called Galois Belyi covers or Galois three-point covers in the literature. The Belyi degree of a Wolfart curve is bounded by $84(g-1)$. (In particular, the latter implies that there are only finitely many Wolfart curves of given genus.)

The following family of curves is not so easily dealt with: for an elliptic curve $E$ over the rational numbers, the Belyi degree can be bounded in terms of the height of the $j$-invariant of $E$, following Belyi's proof of his theorem. This was written down explicitly by Khadjavi and Scharaschkin.

I'm looking for families of curves for which the Belyi degree is "easy to read off". That is, a collection (finite or infinite) of smooth projective connected curves $X_i$ over $\bar{\mathbf{Q}}$ for which the Belyi degree can be bounded easily. Are there any other nice examples?

- Random thought: you could try non-congruence modular curves. – S. Carnahan♦ Jun 20 2011 at 3:37
- Yes. The Fermat curve F(n) is an example of that. But isn't every smooth projective geometrically connected curve X over Q a non-congruence modular curve? (Choose a Belyi morphism X-->P^1 and identify P^1-{0,1,infty} with Y(2). A topological cover of Y(2) is the quotient of the complex upper half plane by a finite index subgroup of Gamma(2)/{1,-1}.) So this would be a very nice example to "try" indeed. – Ariyan Javanpeykar Jun 20 2011 at 7:09
- Also modular curves (classical or Shimura) for subgroups of other arithmetic triangle groups; the list, obtained by Takeuchi, is finite but somewhat large (almost 100 cases): Commensurability classes of arithmetic triangle groups, J. Fac. Sci. Univ. Tokyo 24 (1977), 201-212. – Noam D. Elkies Jun 20 2011 at 16:52
- For elliptic curves over Q, Khadjavi and Scharaschkin show (Thm. 1a of myweb.lmu.edu/lkhadjavi/belyielliptic.pdf) that the curve $y^2 = x^3 + Ax + B$ has Belyi degree $O(|A|^3 + |B|^2)$. Similarly for a curve with full level-2 torsion and invariant $\lambda$, i.e. $cy^2 = x (x-1) (x-\lambda)$ for some rational $c,\lambda$ with $c \neq 0$ and $\lambda \neq 0, 1$, Belyi's construction gives an explicit Belyi map of degree $O(H(\lambda))$. Here $H$ is the height, $H(m/n) = \max(|m|,|n|)$. This is sharp when $H(\lambda)$ is prime, by a theorem of Beckmann (J. Alg.
125 (1989), 236-255). – Noam D. Elkies Jun 20 2011 at 23:58

## 2 Answers

I don't think this question is going to have a GREAT answer -- your examples 1 and 2 are HANDED to you as Belyi covers of the line, and I'd think any family that doesn't immediately present itself in this way is unlikely to offer an easy upper bound on Belyi degree. But that's not an answer, so here's one more -- any Hurwitz curve parametrizing covers of P^1 branched at four points will have a Belyi map (namely, the map to M_{0,4}) whose degree you can read off quite directly.

- This is already GREAT. Thanks! – Ariyan Javanpeykar Jun 20 2011 at 6:49
- The work of Couveignes and Granboulan, Dessins from a geometric point of view, gives some examples, and notes that from character theory there is a bound (bottom of page 33). math.univ-toulouse.fr/~couveig/publi/CGdes94.pdf They get "families" from dessins that are growing trees (see the last section), and try to compute with them using Puiseux series, though in genus 0 they use other methods. – Junkie Jun 20 2011 at 7:14
- And I should add that, by a handsome theorem of Diaz, Donagi, and Harbater, every curve over Qbar is a Hurwitz curve in this sense (the paper is the one titled "Every curve is a Hurwitz curve"). So what you say about non-congruence modular curves applies to this case too. – JSE Jun 20 2011 at 15:49

Another example, like JSE's, that comes already equipped with a Belyi map but is not as familiar as modular curves and Fermat curves: For any relatively prime integers $m,n$ with $0<m<n$, and any subgroup $G$ of $S_n$, the curve that parametrizes trinomials $x^n + a x^m + b$ up to scaling with Galois group contained in $G$. The Belyi map is the invariant $a^n/b^{n-m}$ of the trinomial, and its degree is $d=[S_n:G]$; it is branched at $0$, $\infty$, and $(-n)^n/(m^m (n-m)^{n-m})$. One may assume $m \leq n/2$ (by symmetry with respect to $x \leftrightarrow 1/x$, $m \leftrightarrow n-m$). Some nontrivial examples with $n=5,7,8$ are given explicitly at http://www.math.harvard.edu/~elkies/trinomial.html; the subsequent paper with N. Bruin on the cases $(m,n) = (1,7)$ and $(1,8)$ with $d = 30$ is: Nils Bruin and Noam D. Elkies, Trinomials $ax^7+bx+c$ and $ax^8+bx+c$ with Galois Groups of Order 168 and $8 \cdot 168$, Lecture Notes in Computer Science 2369 (proceedings of ANTS-5, 2002; C. Fieker and D. R. Kohel, eds.), 172-188. (These examples all have $G$ transitive, but the construction works for all subgroups $G$.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9283444285392761, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/163345-find-antiderivative-here.html
# Thread:

1. ## Find antiderivative here

Is the antiderivative of $(6x^2+1)$ equal to $2x^3+x+C$?

Is the antiderivative of $\frac{1}{4z}$ equal to $\frac{1}{4}\ln 4z+C$?

2. Originally Posted by Critter314

Is the antiderivative of $(6x^2+1)$ equal to $2x^3+x+C$? Yes (Clap)

Is the antiderivative of $\frac{1}{4z}$ equal to $\frac{1}{4}\ln 4z+C$? Yes (Clap)

3. Hello, Critter314!

$\text{The antiderivative of }\,\dfrac{1}{4z}$ $\text{Is it }\,\frac{1}{4}\ln 4z+C\,?$

Well, yes and no . . .

Note that: $\displaystyle \int\frac{dz}{4z} \;=\;\tfrac{1}{4}\int\frac{dz}{z} \;=\; \tfrac{1}{4}\ln z + C$

Your answer is correct, but it can be simplified.

$\frac{1}{4}\ln 4z + C \;=\; \frac{1}{4}\bigg[\ln 4 + \ln z\bigg] + C$
$=\;\underbrace{\tfrac{1}{4}\ln 4}_{\text{constant}} + \frac{1}{4}\ln z + C$
$=\;\tfrac{1}{4}\ln z + C$
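For readers who want to double-check such exercises mechanically, here is a small SymPy sketch (not part of the original thread) verifying both antiderivatives and the constant-absorption step.

```python
# Quick symbolic check of both antiderivatives with SymPy.
import sympy as sp

x, z = sp.symbols('x z', positive=True)

print(sp.integrate(6*x**2 + 1, x))                 # 2*x**3 + x
print(sp.integrate(1/(4*z), z))                    # log(z)/4

# ln(4z)/4 and ln(z)/4 differ only by the constant ln(4)/4:
print(sp.simplify(sp.log(4*z)/4 - sp.log(z)/4))    # log(4)/4
```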
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7898387312889099, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2588/what-does-it-mean-to-be-long-gamma?answertab=oldest
# What does it mean to be long gamma?

> When you are "long gamma", your position will become "longer" as the price of the underlying asset increases and "shorter" as the underlying price decreases.

source: http://www.optiontradingtips.com/greeks/gamma.html

My intuition tells me that if you're long gamma, all that means is that if gamma increases, so does the value of your portfolio. Correct me if I'm wrong, but this seems to conflict with the quoted definition above (it is possible for gamma to decrease while the value of your portfolio goes up). Am I totally wrong? Does being long gamma simply mean your portfolio has a positive gamma, as the quoted definition suggests?

- Long gamma means that the gamma of your portfolio is positive. Your gamma could get shorter (i.e. smaller, but still positive) while you make money and vice versa. – Tal Fishman Dec 14 '11 at 1:19
- I see, that makes perfect sense now. I guess my intuition was wrong (wouldn't be the first time, hehe). – sooprise Dec 14 '11 at 1:26

## 2 Answers

Gamma is the second partial derivative of the option price with respect to the price of the underlying. Said another way, it is the rate of change of delta. If you write down the Black-Scholes pricing formula, you'll see the gamma term:

$$\cdots\frac{1}{2}\frac{\partial^2 C}{\partial S^2}(\Delta S)^2\cdots$$

Notice that the $\Delta S$ (change in stock price) term is squared, meaning that the gamma term is positive when long, regardless of whether $\Delta S$ is positive or negative. (This comes from the derivation of Black-Scholes using Ito's Lemma.) What this means is that if you are long gamma (long a call or put option) then the P/L attributed to your position from gamma will increase regardless of the direction the stock moves. Gamma (convexity) is a gift from God in this regard when the payoff is nonlinear, but remember there is no free lunch. The theta of a long option position is negative and will erode your P/L at the same time, faster than you will accumulate P/L from gamma if you are not careful.

- Very good explanation! – sooprise Dec 14 '11 at 17:04
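A small numeric sketch of the point being made (added for illustration; the parameter values are arbitrary examples): using the standard Black-Scholes gamma $\Gamma = \varphi(d_1)/(S\sigma\sqrt{T})$, the gamma term $\tfrac{1}{2}\Gamma(\Delta S)^2$ is positive for moves in either direction.

```python
# For a long option, the (1/2)*Gamma*(dS)^2 P/L contribution is positive
# whether the stock moves up or down.
from math import exp, log, sqrt, pi

def bs_gamma(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    pdf = exp(-0.5 * d1**2) / sqrt(2 * pi)    # standard normal density
    return pdf / (S * sigma * sqrt(T))

S, K, r, sigma, T = 100.0, 100.0, 0.01, 0.2, 0.5
gamma = bs_gamma(S, K, r, sigma, T)
for dS in (-5.0, -1.0, 1.0, 5.0):
    print(dS, 0.5 * gamma * dS**2)            # positive in every case
```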
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9254058003425598, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/167553/division-and-number-scaling
# Division and number scaling

I'm trying to implement an interactive (secure) protocol which operates only on integers. Here's what I have:

$$f(x) = \sum_i{a_i K_i} + b$$

$$K_i = \dfrac{1}{1 + \gamma \|x - s_i\|^2}$$

where $0 < i \le M$ and

$$\|x - s_i \|^2 = \sum_{j=1}^{N}{(x_j - s_{ij})^2}.$$

Because the server has to work only with encrypted values, it needs to engage in an interactive protocol with the client in order to perform the division. So, basically, the server computes every $1 + \gamma \|x - s\|^2$ value, blinds each of them multiplicatively with a random factor $r_i$ ($0 < r_i < 2^{100}$) and sends the value vector to the client to perform the division. After the client divides 1 by each blinded value, it sends the results back to the server, which multiplies each received value by $r_i$, to remove the blinding, and then it computes $f(x)$.

Now, the above protocol works nicely if the numbers are floating point, but, unfortunately, the server can only work with encrypted integers (it performs homomorphic additions on them). In order to overcome this issue, I need to scale all the values accordingly:

$$K_{ri} = \dfrac{s_{\gamma} s_f^2 s_k s_r}{r_i (s_{\gamma} s_f^2 + \gamma s_{\gamma} \|x s_f - s_i s_f\|^2)}$$

where:

- $s_{\gamma}$ is the scaling applied to $\gamma$
- $s_f$ is the scaling applied to $x$ and $s_i$
- $s_r$ is the scaling needed to compensate for the blinding factor $r_i$
- $s_k$ specifies how many decimals should be preserved after the division, because the value of $K$ itself also needs to be scaled

After the client sends back the $K_{ri}$ values, the server computes $f(x)$:

$$f(x_i) = \sum_i{a_i s_a K_{ri} r_i} + b s_b$$

where:

- $s_a$ is the scaling applied to $a_i$
- $s_b$ is the scaling which needs to be applied to $b$

Since $b$ is usually large enough, it's sufficient to scale it with the scaling that is implied by the other factors. Now, my problem is that I am having a really hard time figuring out how to construct $s_b$ in order to compensate for $s_r$. I think it should be a trivial thing, but I've been staring at the formulas for many hours and I can't figure it out. For starters, it needs to contain $s_a s_k s_{\gamma} s_f^2$, but what do I do about $s_r$, given that $0 < r_i < 2^{100}$? There is no way to communicate the size of each $r_i$ to the client, and the multiplication $K_{ri} r_i$ seems to mess everything up...

## 1 Answer

To avoid confusion, let me use the superscript $\,^*$ to denote the scaled variables. So the unscaled return value

$$K_{ri} = K_i / r_i = \frac{1}{r_i(1 + \gamma\|x - s_i\|^2)}$$

corresponds to the scaled return value

$$K^*_{ri} = \frac{s_{\gamma} s_f^2 s_k s_r}{r_i (s_{\gamma} s_f^2 + \gamma s_{\gamma} \|x s_f - s_i s_f\|^2)} = \frac{s_{\gamma} s_f^2 s_k s_r}{s_{\gamma} s_f^2 r_i (1 + \gamma \|x - s_i\|^2)} = s_k s_r K_{ri}.$$

Now, you want to calculate the scaled version of

$$f(x) = \sum_i{a_i K_i} + b =\sum_i{a_i K_{ri} r_i} + b,$$

namely

$$f^*(x) = \sum_i{a_i s_a K^*_{ri} r_i} + b s_b = \sum_i{a_i s_a s_k s_r K_{ri} r_i} + b s_b.$$

If $b = 0$, we have $f^*(x) = s_a s_k s_r f(x)$. To make this equation apply even when $b \ne 0$, we need to choose

$$s_b = s_a s_k s_r.$$

- Thanks for correcting my sloppy latex and thanks for helping me see the obvious. I mixed some stuff up in my implementation and the formulas stopped making sense. – Mihai Todor Jul 8 '12 at 21:39
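The identity in the answer can be checked with exact rational arithmetic (the real protocol rounds to integers, so in practice the equality holds only up to rounding error). All concrete numbers below are arbitrary examples, and the variable names simply mirror the notation of the question.

```python
# Sanity check: with s_b = s_a * s_k * s_r, the scaled result satisfies
# f*(x) = s_a * s_k * s_r * f(x), exactly, over the rationals.
from fractions import Fraction as Fr

gamma, r = Fr(3, 10), [Fr(7), Fr(11)]       # example gamma, blinding factors
dist2 = [Fr(5), Fr(2)]                      # example ||x - s_i||^2 values
a, b = [Fr(2), Fr(-1)], Fr(4)
s_a, s_k, s_r = Fr(100), Fr(1000), Fr(2**20)  # example scalings

K = [1 / (1 + gamma * d) for d in dist2]
f = sum(ai * Ki for ai, Ki in zip(a, K)) + b

# Client returns K*_ri = s_k * s_r * K_i / r_i; the server multiplies
# back by r_i and by a_i * s_a, then adds b * s_b.
K_star = [s_k * s_r * Ki / ri for Ki, ri in zip(K, r)]
s_b = s_a * s_k * s_r
f_star = sum(ai * s_a * Ks * ri for ai, Ks, ri in zip(a, K_star, r)) + b * s_b

assert f_star == s_a * s_k * s_r * f
print(f, f_star)
```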
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930034339427948, "perplexity_flag": "head"}
http://en.m.wikipedia.org/wiki/Packing_in_a_hypergraph
# Packing in a hypergraph

In mathematics, a packing in a hypergraph is a partition of the set of the hypergraph's edges into a number of disjoint subsets such that no pair of edges in each subset shares any vertex. There are two famous algorithms to achieve asymptotically optimal packing in k-uniform hypergraphs. One of them is a random greedy algorithm which was proposed by Joel Spencer. He used a branching process to formally prove the optimal achievable bound under some side conditions. The other algorithm is called the Rödl nibble and was proposed by Vojtěch Rödl et al. They showed that the packing achievable by the Rödl nibble is in some sense close to that of the random greedy algorithm.

## History

The problem of finding the number of such subsets in a k-uniform hypergraph was originally motivated through a conjecture by Paul Erdős and Haim Hanani in 1963. Vojtěch Rödl proved their conjecture asymptotically under certain conditions in 1985. Pippenger and Joel Spencer generalized Rödl's results using a random greedy algorithm in 1989.

## Definition and terminology

In the following definitions, the hypergraph is denoted by $H=(V,E)$. $H$ is called a k-uniform hypergraph if every edge in $E$ consists of exactly $k$ vertices.

$P$ is a hypergraph packing if it is a subset of edges in $H$ such that there is no pair of distinct edges with a common vertex.

$H$ is a $(D_0,\epsilon)$-good hypergraph if for all $x,y \in V$ and $D\geq D_0$ both of the following conditions hold:

$$D(1-\epsilon)\leq \deg(x)\leq D(1+\epsilon)$$

$$\mathrm{codeg}(x,y)\leq \epsilon D$$

where the degree $\deg$ of a vertex denotes the number of edges that contain that vertex and the codegree $\mathrm{codeg}$ of two distinct vertices denotes the number of edges that contain both vertices.

## Theorem

There exists an asymptotic packing $P$ of size at least $\frac{n}{k+1}(1-o(1))$ for a $(k+1)$-uniform hypergraph under the following two conditions:

1. All vertices have degree $D(1+o(1))$, where $D$ tends to infinity.
2. Every pair of vertices shares only $o(D)$ common edges.

Here $n$ is the total number of vertices. This result was shown by Pippenger and was later proved by Joel Spencer. To address the asymptotic hypergraph packing problem, Joel Spencer proposed a random greedy algorithm. In this algorithm, a branching process is used as the basis, and it was shown that it almost always achieves an asymptotically optimal packing under the above side conditions.

## Asymptotic packing algorithms

There are two famous algorithms for asymptotic packing of k-uniform hypergraphs: the random greedy algorithm via a branching process, and the Rödl nibble.

### Random greedy algorithm via branching process

Every edge $E \in H$ is independently and uniformly assigned a distinct real "birthtime" $t_E \in [0,D]$. The edges are taken one by one in the order of their birthtimes. The edge $E$ is accepted and included in $P$ if it does not overlap any previously accepted edges. Obviously, the subset $P$ is a packing, and it can be shown that its size is $|P|=\frac{n}{k+1}(1-o(1))$ almost surely.

To show that, let us stop the process of adding new edges at time $c$. For an arbitrary $\gamma >0$, pick $c, D_0, \epsilon$ such that for any $(D_0,\epsilon)$-good hypergraph $f_{x,H}(c)<\gamma^2$, where $f_{x,H}(c)$ denotes the probability that vertex $x$ survives (a vertex survives if it is not in any edge of $P$) until time $c$.
Obviously, in such a situation the expected number of vertices surviving at time $c$ is less than $\gamma^2 n$. As a result, with probability higher than $1-\gamma$, the number of surviving vertices is less than $\gamma n$. In other words, $P_c$ must cover at least $(1-\gamma)n$ vertices, which means that $|P|\geq (1-\gamma)\frac{n}{k+1}$.

To complete the proof, it must be shown that $\lim_{c\rightarrow \infty} \lim^* f_{x,H}(c)=0$. For that, the asymptotic behavior of the survival of $x$ is modeled by a continuous branching process. Fix $c>0$ and begin with Eve with the birthdate of $c$. Assume time goes backward, so Eve gives birth in the interval $[0,c)$ with a unit-density Poisson distribution. The probability of Eve having $k$ births is $\frac{e^{-c}c^k}{k!}$. Conditioning on $k$, the birthtimes $x_1,\ldots,x_k$ are independently and uniformly distributed on $[0,c)$. Every birth given by Eve consists of $Q$ offspring, all with the same birth time, say $a$. The process is iterated for each offspring. It can be shown that for all $\epsilon >0$ there exists a $K$ so that with probability higher than $(1-\epsilon)$, Eve has at most $K$ descendants.

A rooted tree with the notions of parent, child, root, birthorder and wombmate shall be called a broodtree. Given a finite broodtree $T$ we say for each vertex that it survives or dies. A childless vertex survives. A vertex dies if and only if it has at least one brood all of whom survive.

Let $f(c)$ denote the probability that Eve survives in the broodtree $T$ given by the above process. The objective is to show $\lim_{c\rightarrow \infty} f(c)=0$; then, for any fixed $c$, it can be shown that $\lim^* f_{x,H}(c)=f(c)$. These two relations complete the argument.

To compute $f(c)$, let $c\geq 0$ and $\Delta c>0$. For $\Delta c$ small,

$$f(c+\Delta c)-f(c) \approx -(\Delta c)f(c)^{Q+1}$$

since, roughly, an Eve starting at time $c+\Delta c$ might have a birth in the time interval $[c, c+\Delta c)$ all of whose children survive, while Eve has no births in $[0, c)$ all of whose children survive. Letting $\Delta c\rightarrow 0$ yields the differential equation $f'(c)=-f(c)^{Q+1}$. The initial value $f(0)=1$ gives the unique solution $f(c)=(1+Qc)^{-1/Q}$. Note that indeed $\lim_{c\rightarrow \infty} f(c)=0$.

To prove $\lim^* f_{x,H}(c)=f(c)$, consider a procedure, called History, which either aborts or produces a broodtree. History contains a set $T$ of vertices, initially $T=\{x\}$. $T$ will have a broodtree structure with $x$ the root. The $y\in T$ are either processed or unprocessed; $x$ is initially unprocessed. To each $y\in T$ is assigned a birthtime $t_y$; we initialize $t_x=c$. History proceeds by taking an unprocessed $y\in T$ and processing it as follows. Consider all $t_E$ with $y\in E$ such that no vertex of $E$ has already been processed. If either some such $E$ has $t_E<t_y$ and $y,z\in E$ with $z\in T$, or some $E, E'$ have $t_E,t_{E'}<t_y$ with $y\in E,E'$ and $|E\cap E'|>1$, then History is aborted. Otherwise, for each $E$ with $t_E<t_y$, add all $z\in E-\{y\}$ to $T$ as wombmates with parent $y$ and common birthdate $t_E$. Now $y$ is considered processed. History halts, if not aborted, when all $y\in T$ are processed. If History does not abort, then the root $x$ survives the broodtree $T$ if and only if $x$ survives at time $c$. For a fixed broodtree, let $f(T,c)$ denote the probability that the branching process yields the broodtree $T$. Then the probability that History does not abort is $f(T,c)$.
By the finiteness of the branching process, $\sum f(T,c)=1$, where the summation is over all broodtrees $T$ for which History does not abort. The $\lim^*$ distribution of its broodtree approaches the branching-process distribution. Thus $\lim^* f_{x,H}(c)=f(c)$.

## Rödl nibble

In 1985, Rödl proved Paul Erdős's conjecture by a method called the Rödl nibble. Rödl's result can be formulated in the form of either a packing or a covering problem. For $2\leq l<k<n$ the covering number, denoted by $M(n,k,l)$, is the minimal size of a family $\kappa$ of k-element subsets of $\{1,\ldots,n\}$ which have the property that every l-element set is contained in at least one $A \in \kappa$. The conjecture of Paul Erdős et al. was

$$\lim_{n\rightarrow \infty} \frac{M(n,k,l)}{{n \choose l}/{k \choose l}}=1,$$

where $2\leq l<k$. This conjecture roughly means that a tactical configuration is asymptotically achievable. One may similarly define the packing number $m(n,k,l)$ as the maximal size of a family $\kappa$ of k-element subsets of $\{1,\ldots,n\}$ having the property that every l-element set is contained in at most one $A \in \kappa$.

## Packing under the stronger condition

In 1997, Noga Alon, Jeong Han Kim, and Joel Spencer also supply a good bound for $\gamma$ under the stronger codegree condition that every distinct pair $v, v'\in V$ has at most one edge in common.

For a $k$-uniform, $D$-regular hypergraph on $n$ vertices, if $k>3$, there exists a packing $P$ covering all vertices but at most $O(nD^{-1/(k-1)})$. If $k=3$ there exists a packing $P$ covering all vertices but at most $O(nD^{-1/2}\ln^{3/2}D)$.

This bound is desirable in various applications, such as Steiner triple systems. A Steiner triple system is a $3$-uniform, simple hypergraph in which every pair of vertices is contained in precisely one edge. Since a Steiner triple system is clearly $d=(n-1)/2$-regular, the above bound supplies the following asymptotic improvement: any Steiner triple system on $n$ vertices contains a packing covering all vertices but at most $O(n^{1/2}\ln^{3/2}n)$.

## References

- Erdős, P.; Hanani, H. (1963), "On a limit theorem in combinatorial analysis", Publicationes Mathematicae Debrecen 10: 10–13.
- Spencer, J. (1995), "Asymptotic packing via a branching process", Random Structures and Algorithms 7 (2): 167–172, doi:10.1002/rsa.3240070206.
- Alon, N.; Spencer, J. (2008), The Probabilistic Method (3rd ed.), Wiley-Interscience, New York, ISBN 978-0-470-17020-5.
- Rödl, V.; Thoma, L. (1996), "Asymptotic packing and the random greedy algorithm", Random Structures and Algorithms 8 (3): 161–177, doi:10.1002/(SICI)1098-2418(199605)8:3<161::AID-RSA1>3.0.CO;2-W.
- Pippenger, N.; Spencer, J. (1989), "Asymptotic behavior of the chromatic index for hypergraphs", Journal of Combinatorial Theory, Series A 51 (1): 24–42, doi:10.1016/0097-3165(89)90074-5.
- Alon, N.; Kim, J.; Spencer, J. (1997), "Nearly perfect matchings in regular simple hypergraphs", Israel Journal of Mathematics 100 (1): 171–187, doi:10.1007/BF02773639.
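The random greedy algorithm described above is simple enough to simulate directly. The sketch below builds a random 3-uniform hypergraph (purely for illustration; it need not satisfy the theorem's degree conditions) and runs the birthtime-ordered greedy packing.

```python
# Toy simulation of the random greedy packing: give every edge an
# independent uniform "birthtime", sweep edges in birth order, and accept
# an edge iff it is disjoint from all previously accepted ones.
import random

random.seed(1)
n, k, num_edges = 300, 3, 6000
edges = [frozenset(random.sample(range(n), k)) for _ in range(num_edges)]

def random_greedy_packing(edges):
    order = sorted(edges, key=lambda e: random.random())  # random birthtimes
    covered, packing = set(), []
    for e in order:
        if covered.isdisjoint(e):
            packing.append(e)
            covered.update(e)
    return packing

P = random_greedy_packing(edges)
print(len(P), "edges in packing;", len(P) * k, "of", n, "vertices covered")
```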
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 145, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.907963752746582, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/129285-conjugacy-classes-two-problems.html
Thread:

1. Conjugacy classes - two problems.

Hi, I was wondering if I could get some help with these questions?

1. Let G be a non-abelian group of order 21. Show that if m is the number of conjugacy classes of size 3 and n is the number of conjugacy classes of size 7, then 3m+7n = 20. Deduce that m = n = 2.

2. Let G be a group of order p^2 where p is a prime number. Show that all conjugacy classes in G have size 1 or p.

I guess that I need to use the Class Equation or the Orbit-Stabilizer Theorem, but I do not know how. Thanks!

2. Originally Posted by leelooana (question quoted above)

For the first question, prove that every non-abelian group of order $pq$ has trivial center. (You may wish to first prove that if $G/Z(G)$ is cyclic then $G$ is abelian.) You can then apply the class equation. For the second result, use the orbit-stabiliser theorem to investigate the action of conjugation. What is an element's orbit? What is the corresponding stabiliser? Is one of these a subgroup? Are you at uni in Scotland?

3. Thanks for your reply! I just proved in a previous problem that if p and q are distinct primes and G is non-abelian then Z(G) is trivial, but I did not notice the relation between these problems. So, a quick proof: by Lagrange's Theorem, |Z(G)| could be pq, p, q or 1. It is easy to prove that the order of Z(G) cannot be pq (because in this case Z(G) = G, therefore G is abelian, which contradicts the initial claim). It also cannot be p, because then the quotient group G/Z(G) would be cyclic of order q, but G/Z(G) cannot be cyclic unless G is abelian (because G/Z(G) cyclic implies G abelian). The same for q. So Z(G) must be trivial. And now I see that if |G| = 21 then |Z(G)| = 1, and from the class equation I get that 3m+7n = 20, so it must be that m = n = 2. But I still cannot do the 2nd question. I know that:

- the size of each orbit divides the group order
- in each conjugacy class all elements have the same order
- in an abelian group every conjugacy class is a set containing one element (if |G| = $p^2$ and p is prime then G is abelian), and I can infer that the conjugacy classes of an abelian group all have size 1.

How do I show that conjugacy classes in G have size p?

Originally Posted by Swlabr: Are you at uni in Scotland? Yes

4. Originally Posted by leelooana (quoted above)
I know that: -the size of each orbit divides the group order -in each conjugacy class all elements have the same order -in all abelian groups every conjugacy class is a set containing one element (if |G| = $p^2$ and p is prime then G is abelian), and I can infer that every conjugacy class of an abelian group has size 1. How do I show that conjugacy classes in G have size p? Well, the conjugacy class of the identity has size 1. Assume your group is non-abelian and take an element which is non-central. What is its conjugacy class? It must have size $p$ or $p^2$. What would happen if its conjugacy class had size $p^2$? What would this "mean"? Originally Posted by leelooana Yes It's the best place to be!
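A quick computational cross-check of problem 1 (my addition, not part of the original thread): the non-abelian group of order 21 can be realised concretely as a semidirect product Z_7 ⋊ Z_3, where the generator of Z_3 acts on Z_7 by c → 2c (2 has multiplicative order 3 mod 7). A brute-force count of conjugacy class sizes in plain Python then confirms m = n = 2:

```python
# Build G = Z_7 x| Z_3 as pairs (a, b): (a, b)(c, d) = ((a + 2^b c) mod 7, (b + d) mod 3).
def mul(g, h):
    (a, b), (c, d) = g, h
    return ((a + pow(2, b, 7) * c) % 7, (b + d) % 3)

G = [(a, b) for a in range(7) for b in range(3)]
# Find inverses by brute force (the group has only 21 elements).
inv = {g: next(h for h in G if mul(g, h) == (0, 0)) for g in G}

sizes, seen = [], set()
for g in G:
    if g in seen:
        continue
    cls = {mul(mul(x, g), inv[x]) for x in G}  # conjugacy class of g
    seen |= cls
    sizes.append(len(cls))

print(sorted(sizes))  # [1, 3, 3, 7, 7]: the trivial class, m = 2 of size 3, n = 2 of size 7
```

Note that 1 + 3·2 + 7·2 = 21, exactly as the class equation requires.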
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9131317734718323, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/206183-how-show-following-equality-holds.html
# Thread: 1. ## any hints to solve the following !! Show that $f'(x)$ is the derivative of $f$ at $x$ if and only if $\lim_{h \to 0} \sup_{|t|\leqslant h} \frac{|f(x+t)-f(x)-tf'(x)|}{h} = 0$
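A numerical illustration of the equivalence (my addition, not from the thread): for a differentiable example such as $f(x)=x^2$ at $x=1$ the sup-ratio decays like $h$, whereas for $f(x)=|x|$ at $x=0$ with the (incorrect) candidate slope $0$ it stays at $1$:

```python
import numpy as np

def sup_ratio(f, slope, x, h, samples=10001):
    # sup over |t| <= h of |f(x+t) - f(x) - t*slope|, divided by h
    t = np.linspace(-h, h, samples)
    return np.max(np.abs(f(x + t) - f(x) - t * slope)) / h

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h,
          sup_ratio(lambda u: u**2, 2.0, 1.0, h),  # equals h exactly, tends to 0
          sup_ratio(np.abs, 0.0, 0.0, h))          # equals 1 for every h, does not tend to 0
```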
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9242593050003052, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/36764-physics-question.html
# Thread: 1. ## Physics Question I just had my last physics lesson before my exam, but I have one question where I disagree with my teacher. A dynamics trolley A of mass 5 kg is placed on a horizontal board. It is connected to block B (a hanging mass) of mass 2 kg by a light, inextensible string over a frictionless pulley. Assuming no friction force on the trolley, calculate the magnitude of the acceleration of the trolley. My teacher said use F = ma where F = 2(9.8) and m = 5 kg, which yields the answer 3.92 m/s². I decided that the acceleration of the trolley and the hanging mass should be the same. So: Trolley: $a = \frac{T}{M_a}$ (1), where $M_a$ is the mass of the trolley and T is the tension. Hanging mass: $a = \frac {F}{M_b}$, where $M_b$ is the mass of the hanging mass, so $a = \frac{M_b g- T}{M_b}$, where g is the acceleration due to gravity, $= g -\frac{T}{M_b}$ (2). Equating equations (1) and (2), I get 2.8 m/s². I'm also aware that using the formula F = ma where m = 7, F = 2g also gives me this answer, but I don't know why. Is my teacher wrong, or am I just overcomplicating things? Can someone clarify this problem for me? 2. Hi there, both methods are correct, just that the way you do it is actually the longer way. What your teacher did was solve the hanging mass first in order to obtain the tension in the string, which then allows the usage of Newton's second law. What you did was solve both the trolley and the hanging mass at the same time. 3. Seems to me you're making a mistake. You have for the trolley: acceleration = tension / mass, and for the hanging mass: acceleration = force / mass, and you rearrange this to be force = mass x gravity. So for the hanging mass you have force = 2 kg x -9.8 m/s^2, but remember that the force acting on the hanging mass is the same as the force of the tension in the rope for the trolley, so then you should get acceleration of trolley = force of hanging mass / mass of trolley. Look over your equations; I'm not sure how you got the one that reads acceleration of trolley = (mass hanging x gravity - tension) / mass hanging. Using the first two equations I don't see how to get to that one; maybe you can show me. 4. If the accelerations are the same then the masses have to be the same. Try the experiment using a 10 kg trolley and a 0.1 kg hanging mass... the hanging mass will always have acceleration of 9.8 m/s^2 down, but do you think the 10 kg trolley will? 5. Originally Posted by finch41 If the accelerations are the same then the masses have to be the same. Try the experiment using a 10 kg trolley and a 0.1 kg hanging mass... the hanging mass will always have acceleration of 9.8 m/s^2 down, but do you think the 10 kg trolley will? Sorry, but you're totally wrong. The acceleration of the hanging mass is definitely NOT g under these conditions. Unless of course the tension in the string is zero ......... 6. Sorry. 7. Originally Posted by mr fantastic Sorry, but you're totally wrong. The acceleration of the hanging mass is definitely NOT g under these conditions. Unless of course the tension in the string is zero ......... What if friction forces were zero? 8. Originally Posted by finch41 What if friction forces were zero? No, not even if friction forces were zero. The fact is that the net force on the hanging mass m is (taking downwards as the positive direction) mg - T. Then: $ma = mg - T \Rightarrow a = \frac{mg - T}{m}$. a = g only if T = 0. If T = 0 then the string is slack ..... I suggest you thoroughly review (learn?)
the application of Newtonian mechanics to this sort of problem ......
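For the record, a two-line numerical check (my addition, not part of the thread): treating the trolley and the hanging mass as a single system pulled along by the weight of the hanging mass gives the poster's 2.8 m/s², and the resulting tension is consistent with both free-body equations. The teacher's 3.92 m/s² comes from ignoring the hanging mass's own inertia.

```python
g = 9.8
M, m = 5.0, 2.0      # trolley mass, hanging mass (kg)

a = m * g / (M + m)  # 2.8 m/s^2 -- the poster's answer (system view)
T = M * a            # 14 N of tension; check: m*g - T = 5.6 N = m*a, consistent

print(a, T)
print(m * g / M)     # 3.92 m/s^2 -- the teacher's value, which omits the
                     # hanging mass from the accelerated system
```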
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374234080314636, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/22648/maximum-distance-in-cycle-interval?answertab=oldest
# Maximum distance in cycle interval Perhaps this is a very basic question, but I can't find a fast solution. I have a cyclic interval [0,n] (distance from n to 0 is 1). I need a function that for a given value x returns the point in the interval that maximizes the distance to all previously returned points f(x-1), f(x-2), ... For instance, if n=100 and f(0) = 0 ````f(1) = 50 f(2) = 25 f(3) = 75 f(4) = 12.5 f(5) = 37.5 f(6) = 62.5 f(7) = 87.5 f(8) = 6.25 f(100) = ?? ```` Note that f(0)=0 is for convenience, but you may start at f(0)=50. In the end, what I need is a uniform distribution for {f(0), f(1), ..., f(n)}. - Couldn't $f(2)$ just as well be 75 instead of 25? It seems your function is not well-defined. Also, what is your question? – Rasmus Feb 18 '11 at 10:30 Yes, f(2) could be 75 instead of 25. I forgot to mention that, sorry. The point is I need a function where any f(n) has the maximum distance to any other f(x) where 0<=x<=n. – Ivan Feb 18 '11 at 10:37 More: the problem is the same as generating the sequence {0,1/2,1/4,2/3,1/8,3/8,5/8,7/8,1/16,3/16... } and then multiplying the i-th value by n. – Ivan Feb 18 '11 at 11:05 the sequence is {0,1/2,1/4,2/3,2/8,3/8,5/8,7/8,1/16,3/16... } Sorry – Ivan Feb 18 '11 at 11:23 ## 1 Answer Suppose $f(0)=0$ and $n=1$. For positions $2^{a} \le i < 2^{a+1}$, the sequence takes on values of the form $k 2^{-(a+1)}$ for odd $k$ less than $2^{a+1}$. For a given $a$, these values can be covered in any order, since all the distances are equal; but concretely we may take $k=2(i-2^{a})+1$, so $$f(i; 0, 1) = \frac{2(i-2^{a})+1}{2^{a+1}} = \frac{i-2^{a}}{2^a} + \frac{1}{2^{a+1}}$$ for all $i>0$, where $a = \lfloor\log_2(i)\rfloor$. More generally, for $f(0)=\alpha n$ and arbitrary $n$, we have $$f(i; \alpha, n) = \left(\frac{2\pi^{(a)}(i - 2^{a}) + 1}{2^{a+1}} + \alpha\right)n \mod n$$ where each $\pi^{(a)}$ is an arbitrary permutation of $0,1,2,...,2^{a}-1$. -
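A direct implementation of the answer's formula (my addition, not part of the original post), using identity permutations and $\alpha = 0$; it reproduces the questioner's example values for n = 100:

```python
def f(i, n):
    # f(0) = 0; for i > 0, with a = floor(log2(i)):
    #   f(i) = ((i - 2^a) / 2^a + 1 / 2^(a+1)) * n
    if i == 0:
        return 0.0
    a = i.bit_length() - 1  # floor(log2(i)) for positive integers
    return ((i - 2**a) / 2**a + 1 / 2**(a + 1)) * n

print([f(i, 100) for i in range(8)])
# -> [0.0, 50.0, 25.0, 75.0, 12.5, 37.5, 62.5, 87.5]
```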
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8976393342018127, "perplexity_flag": "middle"}
http://nrich.maths.org/6603
# How Steep Is the Slope? #### The gradient of a line tells us how far up or down we go when we take one step to the right: On a grid like the one below we can draw lines with different gradients. Check you agree that the black line is one of several that could be drawn with a gradient of 2 and the red line is one of several that could be drawn with a gradient of $-\frac{2}{3}$ Picture some more lines with different gradients. You may want to use this sheet to record your working. #### Arrange them in order of steepness and list the points each line passes through.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415422081947327, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/39828/how-do-you-decide-whether-a-question-in-abstract-algebra-is-worth-studying/40551
## How do you decide whether a question in abstract algebra is worth studying? Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that. So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples: • Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concepts of a normal subgroup and, by extension, of a simple group were kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations. • The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake. • Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own. The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself. Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. You can think of a natural generalisation, which you personally consider interesting. How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept, and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you? Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate wanting to study some strange condition on some obscure groups? Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.
I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. But one broad question behind my specific one is How much would you subscribe to the statement that EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"? Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO. Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true. Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one". - 4 +1 super question! Indeed, this is something which has been rattling around in my head for a very long time. – muad Sep 24 2010 at 7:19 11 I do not agree with "Now, cohomology is part of what people call "commutative algebra". – Martin Brandenburg Sep 24 2010 at 7:45 6 I do not agree with the claim that Kummer and Dedekind invented UFDs and PIDs to solve Diophantine equations. – Franz Lemmermeyer Sep 24 2010 at 7:48 1 Martin, I certainly don't insist on putting cohomology into any particular box. My point was that nowadays, people investigate cohomological questions just because they find them interesting, without any topological/geometric/number theoretic applications in the back of their mind and they don't have to justify this "indulgence". – Alex Bartel Sep 24 2010 at 8:05 3 Dedekind's aim was generalizing Kummer's theory to general number fields. Kummer's main motivation came from Gauss's and Jacobi's theory of cyclotomy in connection with reciprocity laws. The only diophantine equation Kummer ever looked at was x^n + y^n = z^n. – Franz Lemmermeyer Sep 24 2010 at 9:27 ## 18 Answers Dear Alex, It seems to me that the general question in the background of your query on algebra really is the better one to focus on, in that we can forget about irrelevant details. That is, as you've mentioned, one could be asking the question about motivation and decision in any kind of mathematics, or maybe even life in general. In that form, I can't see much useful to write other than the usual cliches: there are safer investments and riskier ones; most people stick to the former generically with occasional dabbling in the latter, and so on. This, I think, is true regardless of your status. Of course, going back to the corny financial analogy that Peter has kindly referred to, just how risky an investment is depends on how much money you have in the bank. We each just make decisions in as informed a manner as we can. Having said this, I rather like the following example: Kac-Moody algebras could be considered 'idle' generalizations of finite-dimensional simple Lie algebras.
One considers the construction of simple Lie algebras by generators and relations starting from a Cartan matrix. When a positive definiteness condition is dropped from the matrix, one arrives at general Kac-Moody algebras. I'm far from knowledgeable on these things, but I have the impression that the initial definition by Kac and Moody in 1968 really was somewhat just for the sake of it. Perhaps indeed, the main (implicit) justification was that the usual Lie algebras were such successful creatures. Other contributors here can describe with far more fluency than I just how dramatically the situation changed afterwards, accelerating especially in the 80's, as a consequence of the interaction with conformal field theory and string theory. But many of the real experts here seem to be rather young and perhaps regard vertex operator algebras and the like as being just so much bread and butter. However, when I started graduate school in the 1980's, this story of Kac-Moody algebras was still something of a marvel. There must be at least a few other cases involving a rise of comparable magnitude. Meanwhile, I do hope some expert will comment on this. I fear somewhat that my knowledge of this story is a bit of the fairy-tale version. Added: In case someone knowledgeable reads this, it would also be nice to get a comment about further generalizations of Kac-Moody algebras. My vague memory is that some naive generalizations have not done so well so far, although I'm not sure what they are. Even if one believes it to be the purview of masters, it's still interesting to ask if there is a pattern to the kind of generalization that ends up being fruitful. Interesting, but probably hopeless. Maybe I will add one more personal comment, in case it sheds some darkness on the question. I switched between several supervisors while working towards my Ph.D. The longest I stayed was with Igor Frenkel, a well-known expert on many structures of the Kac-Moody type. I received several personal tutorials on vertex operator algebras, where Frenkel expressed his strong belief that these were really fundamental structures, 'certainly more so than, say, Jordan algebras.' I stubbornly refused to share his faith, foolishly, as it turns out (so far). Added again: In view of Andrew L.'s question I thought I'd add a few more clarifying remarks. I explained in the comment below what I meant with the story about vertex operator algebras. Meanwhile, I can't genuinely regret the decision not to work on them because I quite like the mathematics I do now, at least in my own small way. So I think what I had in mind was just the platitude that most decisions in mathematics, like those of life in general, are mixed: you might gain some things and lose others. To return briefly to the original question, maybe I do have some practical remarks to add. It's obvious stuff, but no one seems to have written it so far on this page. Of course, I'm not in a position to give anyone advice, and your question didn't really ask for it, so you should read this with the usual reservations. (I feel, however, that what I write is an answer to the original question, in some way.) If you have a strong feeling about a structure or an idea, of course keep thinking about it. But it may take a long time for your ideas to mature, so keep other things going as well, enough to build up a decent publication list. The part of work that belongs to quotidian maintenance is part of the trade, and probably a helpful routine for most people. 
If you go about it sensibly, it's really not that hard either. As for the truly original idea, I suspect it will be of interest to many people at some point, if you keep at it long enough. Maybe the real difference between starting mathematicians and established ones is the length of time they can afford to invest in a strange idea before feeling like they're running out of money. But by keeping a suitably interesting business going on the side, even a young person can afford to dream. Again, I suppose all this is obvious to you and many other people. But it still is easy to forget in the helter-skelter of life. By the way, I object a bit to how several people have described this question of community interest as a two-state affair. Obviously, there are many different degrees of interest, even in the work of very famous people. - Dear Minhyong, that's another excellent example of the sort I was looking for. Thanks! I also find your financial analogy quite helpful. – Alex Bartel Sep 25 2010 at 8:48 4 Why is it foolish to disagree with an expert as long as you have sufficient background knowledge to make an informed evaluation, Minhyong? It would be very disappointing to say the least if Frenkel held your disagreement with him against you. He may very well be right, but that's not the point. We should be able to agree to disagree - that should be part of the process of making the transition from student to professional. – Andrew L Sep 25 2010 at 23:56 1 I'm not quite sure I'm addressing your question, but what I meant was that VOAs look very interesting to me now, and I could have learned quite a bit if I'd paid more attention. As with much of interesting mathematics, it's so much easier to learn it from the inside. Of course, I suppose I learned other things. By the way, Frenkel was really very nice the whole time. I don't think anything was held against me, to the extent that he thought about my disagreement at all. – Minhyong Kim Sep 26 2010 at 1:13 1 I'm not really qualified to comment on all of the generalizations of Kac-Moody algebra that are in the literature, but I think it is generally accepted that even among ordinary Kac-Moody algebras, the affine (i.e., smallest infinite dimensional) case has been far more theoretically fruitful than anything else. I think this is mostly because they have a straightforward tie to geometry, namely loop algebras (and from there, punctured algebraic curves), while the more general Kac-Moody constructions, even the small hyperbolic cases, do not have this interpretation. – S. Carnahan♦ Sep 27 2010 at 2:03 I'm going to interpret your question in the language of Gowers's "two cultures" essay as follows: How does one get good at theory-building? The process of developing a good theory can seem deceptively simple. One takes some definitions, perhaps by generalizing some known definitions, and deduces simple consequences of them. In comparison with the work required to solve a hard problem, this seems easy---perhaps too easy. The catch, of course, is the one you raised: there is a significant risk of spending a lot of time studying something that ultimately has very little mathematical value. Of course there is also the risk of wasted effort when trying to solve a specific problem, but in that case, it's at least clear what you were trying to accomplish.
In the case of theory-building, the signposts are less clear; maybe you succeeded in proving some things, so your efforts weren't entirely fruitless, but at the same time, how do you know that you actually got somewhere when there was no clear endpoint? The number one principle that I keep in mind when trying to build a theory is this: Relentlessly pursue the goal of understanding what's really going on. I'm reminded of a wonderful sentence that Loring Tu wrote in his May 2006 Notices article on "The Life and Works of Raoul Bott." Tu wrote, "I. M. Singer remarked that in their younger days, whenever they had a mathematical discussion, the most common phrase Bott uttered was “I don't understand,” and that a few months later Bott would emerge with a beautiful paper on precisely the subject he had repeatedly not understood." Von Neumann reportedly said that in mathematics, you don't understand things; you just get used to them. This can be valuable advice to a young mathematician who hasn't yet grasped that the reason we're doing research is precisely that we don't really understand what we're doing. However, the key to theory-building is to insist on thorough understanding, especially of things that are widely regarded as being already understood. Often, such subjects are not really as well understood as others would have you believe. If you start asking probing questions---why are things defined this way and not that way? why doesn't this argument actually prove something more (or maybe it does?)?---you will find surprisingly often that what seems like a very basic question has not really been addressed before. How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you? My reply is that the generalization is worth studying if it helps you understand the original concept better. Perhaps the generalization was obtained by weakening an axiom, and you can now see more clearly that certain theorems hold more generally while others don't, so you get some insight into which specific hypotheses of your original object are needed for which conclusions. The heuristic as you've stated it, on the other hand, doesn't sound too convincing to me. I see too much risk of wandering off into a fruitless direction if you're not firmly grounded in trying to understand your original object better. Keeping firmly in mind that your goal is a thorough understanding of some particular subject is also important because your efforts will, at least initially, not be greeted with enthusiasm by others. You will appear to be a complete idiot who doesn't understand even very basic things that other people think are obvious. Even when you start getting some fresh insights, they will seem trivial to others, who will claim that they "already knew that" (which they probably did, implicitly if not explicitly). Constantly adjusting definitions also appears to others to be an unproductive use of time. Even if you get to the point where your approach leads to a new and wonderfully clear presentation of the subject, and raises important new questions that nobody thought to ask before, you may not get credit for original thinking. Thus it is important that your internal compass is pointed firmly in the right direction. To repeat: ask yourself, am I driving towards an understanding of what's really going on in this important piece of mathematics? 
If so, keep at it. If not, then you've lost the thread somewhere along the way. - 3 This reminds me of Lüneburg's statement that "the goal of theory is to understand the examples". – Chris Godsil Aug 1 2011 at 1:22 4 I really enjoyed this answer. Knuth says something similar (on his web page I think). He says he doesn't read email because email is good for people who want to stay on top of things but he wants to get to the bottom of things. I've sometimes found it's good to keep the phrase "get to the bottom of things" in mind. – James Borger Aug 1 2011 at 8:32 I think this is spot on! – Jon Bannon May 23 at 15:22 It may be helpful to say how I got into groupoids. In the 1960s, I was writing a topology text and wanted to do the fundamental group of a cell complex, which required the van Kampen Theorem (I have now been persuaded to call this the Seifert-van Kampen theorem, as on Wikipedia, so I call it SvKT). I was kind of irritated that this did not, as then formulated, give the fundamental group of the circle, so one had to make a detour and do all or a piece of covering space theory. Then I found a paper by Olum on nonabelian cohomology and van Kampen's theorem which I extended to a Mayer-Vietoris type sequence which did give the fundamental group of the circle. Unfortunately, when written out in full, it was rather boring! I then came across a paper of Philip Higgins which included the notion of free product with amalgamation of groupoids. So I decided to put in an exercise using this notion for the fundamental groupoid of a space. Then I wrote out a solution for this, and it was so much nicer than the nonabelian cohomology stuff that I decided to make the account in terms of groupoids. It still needed the key notion of the fundamental groupoid on a set $C$ of base points, written $\pi_1(X,C)$. For the circle, this needed $C$ to have 2 elements. This result appeared in the first 1968 edition, and in subsequent ones, of the book on topology, but in no other topology text in English since then. In 1967 I met George Mackey who told me of his work on ergodic groupoids. This persuaded me that the idea of groupoid was, or might be, more important than met the eye. On writing out the proof of the SvKT for groupoids maybe 5 times, it occurred to me in 1965 that the proof should generalise to higher dimensions if one had the `right' gadget generalising $\pi_1(X,C)$. This was finally found with Philip Higgins in 1974 as the fundamental double groupoid $\rho_2(X,A,C)$ of a space $X$ with subspace $A$ and set $C$ of base points. So we got a SvKT in dimension 2, published in 1978, and had extended this to all dimensions by 1979. Work with Chris Spencer in 1971-2 on double groupoids and crossed modules was essential as a basis for all this. The point I am making is that the initial aim of an improved proof of the fundamental group of the circle was very modest, but based on an aesthetic feeling, and the aim would not have got many marks for a research proposal! But in the end it opened out a new area. One main driving force for the higher dimensional work was the intuitions of subdividing a square into little squares, and getting the inverse to that, i.e. composing the little squares into a big one. Another problem was that of expressing the idea of commutative cubes. Philip Higgins told me of a remark of Philip Hall that one should try to make the algebra model the geometry, and not force it into an already known mold.
I think that is what people were doing in avoiding the groupoid concept, despite its obvious nature. Indeed the idea of `change of base point' for the fundamental group is a bit like giving a railway timetable in terms of return journeys and change of start -- i.e., it is bizarre. Perhaps the moral is that it is good to look for ways of expressing intuitions in a rigorous mathematical form. And if that means building up some maths from scratch, previous to definitions, examples, theorems, proofs, as was needed in the higher dimensional work, then that is a lot of fun! (More fun than doing someone else's problem!) But it may take a long time, need lots of attempts, and searching for related ideas, and as it gets going, hard work, and in our case fruitful collaborations. Research students liked the idea of a big plan (what is or might be `higher dimensional group theory'?) and the attempts to pick from this something that might be doable. I'd better not go on about the opposition! Does that help? - I really like the sentence "One should try to make the algebra model the geometry, and not force it into an already known mold" – Amr Apr 25 at 18:01 "How much would you subscribe to the statement that studying questions one finds interesting is something established mathematicians do, while younger ones are better off studying questions that the rest of the community finds interesting?" Not at all. I don't think anyone, young or old, will find success by working on questions other than those they find interesting. Mathematics is just too difficult for that. Ideally, everyone should work on problems that are interesting to both themselves and the community. Senior mathematicians have the luxury of working on problems whose interest to the community has not been established. - 1 You are rightly pointing out an inaccuracy in my formulation. Thanks! I will edit it slightly. – Alex Bartel Sep 24 2010 at 13:26 Jordan algebras were introduced first by P. Jordan and J. von Neumann in order to give a mathematical context for observables in quantum mechanics (say, a structure that generalizes the space of Hermitian matrices). In the end, the classification was disappointing, and Jordan algebras do not play a role any more in QM, but the topic survived in mathematics until now. - That's a really nice example! I wonder whether the algebraists would have caught on if the initial motivation hadn't been there. I know, these "what would have happened if"-questions are rarely sensible, but this example distills my question in a succinct way. – Alex Bartel Sep 24 2010 at 9:09 2 I am not sure algebraists "caught on", in the sense that I don't think it was exactly the masses among them who studied Jordan algebras... – Mariano Suárez-Alvarez Sep 24 2010 at 9:20 I have just now looked again at this interesting blog and thought to add a few points. 1) Methodology: You could read the comments of Grothendieck on "speculation" http://pages.bangor.ac.uk/~mas010/Grothendieck-speculation.html. I also think that in private one should test an idea `beyond the bounds of human thought': that is, just for fun, take it as far as you think it can possibly go, and see what happens if all went as well as possible. This I call the "ideal scenario". If, under the ideal scenario, the result does not look all that exciting, then you might put it aside. On the other hand, if, under the ideal scenario, the result would be wonderful, then you might say to yourself: "Life is not like that, there must be some obstructions to this working."
So you look for obstructions, small things that you think you might be able to do. If these obstructions turn out to be real, then that would be interesting, and you should modify your scenario. On the other hand, if these obstructions disappear one by one, that would be even more interesting! Either way, this is a win-win research strategy. If some negative person (these abound in mathematics!) says "your idea cannot work because...." then that gives another obstruction to work on. I also like the idea of writing a (draft!) paper on your new idea, in which a key part is the Introduction, which should be as free ranging as possible, following flights of fancy, catching ideas as they occur. These can always be later relegated to another document (the great advantage of mathematical word processing). The process of writing can make these ideas more real. So can talking about them, though you do sometimes get funny looks from superior people! You may write a draft 4 times, ending in failure, then the fifth time the paper writes itself! (It took me 9 years, and many drafts which ran into sand, trying to write a paper on a new homotopy double groupoid, before realising with Philip Higgins in 1974 that it was useful to try a definition for a pair of spaces, rather than a plain space!) 2) The composer Ravel said you should copy. If you have some originality, then this might come out as you copy. If not, then never mind! I feel copying is a way of getting the rusty wheels of the brain slowly turning! The originality may come out later. So I advise trying to write up a known piece of mathematics in as "nice" a way as you can. Nothing can be lost by this. 3) A question for Scott: Is there a (hopefully useful) groupoid version of quandles related to the fundamental groupoid and a `peripheral subgroupoid'? 4) A dictum of the algebraist Philip Hall was that one should try to make the algebra model the geometry rather than force the geometry into an already existing algebraic mold. For me, an example of this "forcing" is to try and get a group, and then bring in the idea of change of base point, when the naturally occurring structure is a groupoid. There are many other examples! - 1 Dear Ronnie, this is very helpful and inspiring advice! Thank you! – Alex Bartel Jul 31 2011 at 12:05 This is very, very nice. – Jon Bannon May 23 at 15:18 Dan Shechtman, winner of the 2011 Nobel Prize in Chemistry for the discovery of quasicrystals, said: "The main lesson that I have learned over time is that a good scientist is a humble and listening scientist and not one that is sure 100 percent in what he reads in the textbooks." My research on groupoids and higher groupoids was started in the 1960s by a dissatisfaction with a van Kampen theorem that did not compute the fundamental group of the circle, a basic example: but groupoids were at the time regarded as "rubbish" by many senior mathematicians, and the idea of higher van Kampen theorems using higher groupoids was described by one such for 10 years as "ridiculous". (He gave in eventually!) My worry is that people may be encouraged to follow higher-ups, rather than to analyse a programme on mathematical grounds, and so to develop their own feeling for mathematical structures. - There are surely no hard and fast rules as to assessing the importance of a generalization of a concept. I once took a look (chap. 9) at debates surrounding the move from groups to groupoids. One important step up for a concept is being deemed essential rather than merely useful.
To achieve this it must find its place in an array of good storylines. - I see what you are saying. I guess my question is then, what should happen first: should potential applications force the concept upon you, or do you first introduce a concept and then let it find its place in an array of good storylines? Does the latter scenario work at all? For example, if people know that the more limited concept has its uses and you introduce the generalisation, they might actually start specifically looking for applications of the new thing. But will they? Or will the burden of proof of concept rest with the one introducing the generalisation? – Alex Bartel Sep 24 2010 at 9:31 There are different kinds of 'should': How should a mathematician act to get on individually, given how things are?; How should a mathematician act in the best interests of mathematics?; How should the mathematical community act in the best interests of mathematics?, etc. Are you wondering how should a young mathematician act strategically to get noticed, or how should the community organize itself to promote new lines of thought? – David Corfield Sep 24 2010 at 11:01 2 Dear David, for the time being, I am more concerned about strategically prioritising different possible projects to advance my career than about the inner workings of the mathematical community. But the question addresses the broader issue of what we, the mathematicians, consider interesting to work on or to learn about. I hope (perhaps somewhat naively) that the two questions are very closely related. At any rate, my question is about the status quo, rather than about how people believe the world should work. – Alex Bartel Sep 24 2010 at 12:38 Not sure I agree with the whole post in detail. Distinguish "pure algebra" from "applied algebra"; and within "pure algebra" distinguish "structural" issues from "combinatorial" ones such as the Burnside problem. Remembering that "abstract algebra" is the modern term for what used to be called "modern algebra", we should probably drop the "abstract" to get a more reasonable view (the scope of "old" or 19th century algebra being that of Chrystal's Algebra say, some would now count as other branches of mathematics, such as numerical methods). So which questions are worth studying? Not just one kind, surely. Algebraic geometry, algebraic topology, algebraic number theory all do ask serious and interesting algebraic questions. See for example the Golod–Shafarevich theorem (http://en.wikipedia.org/wiki/Golod-Shafarevich_theorem) which is pure algebra to start with. Parts of algebra come across as "general" compared to mathematics as a whole, but this is a somewhat subjective criterion these days. There are both general-structural and general-combinatorial parts of algebra. There do need to be some criteria operating in, say, infinite group theory and infinite-dimensional Lie algebra theory. Generality in the sense of category theory is rather 1960s in feel; derived categories are "abstract" but I wonder who these days would argue that they are too "general"? I suppose the general module over the general ring still looks troublesome as a setting for research. Well, I think "follow the masters" is probably the best advice, - 4 "Follow the masters" is probably the best advice for second-rate mathematicians (and I am speaking here as a third-rate mathematician of long standing). "Be a master" is probably the best advice for first-rate mathematicians. The tricky thing is figuring out which applies.
– Gerry Myerson Sep 24 2010 at 13:02 1 Yes, the incentives tell you the wrong thing (aiming slightly too low is not as damaging as aiming slightly too high). But this has to be kept a secret if we want those first-rate guys. But this is dangerous ground, given the academic politics of those who imply "I may be hard to please, but this is the only way to make sure that my judgement of what makes the grade carries weight". Don't start off with self-peer-review: try to do a good job of research. – Charles Matthews Sep 24 2010 at 15:35 Charles, I have to admit that I am having trouble distilling an answer to the question from the post and the above comment. I am not sure how to reconcile your two pieces of advice "follow the masters" and "Don't start off with self-peer-review". If I correctly read the latter as "don't try to predict whether the community will agree with you on your assessment of how interesting a topic is", then it seems to contradict the former. – Alex Bartel Sep 24 2010 at 16:11 I also don't quite understand what parts of your post constitute an answer to my question. E.g., I hope that I have made it clear that I am not talking about "applied algebra" here, so I don't quite see how introducing this distinction would have helped the question. On the other hand, the distinction between "structural" and "combinatorial" issues seems orthogonal to my question. The Burnside problem was proposed when everyone was already convinced that groups are something worth studying in their own right. I'm interested in the period before an algebraic concept reaches that stage. – Alex Bartel Sep 24 2010 at 16:15 1 Well, read good mathematicians before deciding what is interesting, try to do something before assessing on the basis of no experience whether you'll succeed, and get out of your current rut of assuming that "innovation by generalisation" is the basic paradigm in algebra. Mathematics gets done in different ways. – Charles Matthews Sep 24 2010 at 18:55 Hi Alex! About the second question: I think senior mathematicians don't necessarily escape the criterion of general interest, but it can become a self-fulfilling prophecy: The mere fact that a senior mathematician is studying something can raise interest in the object of study among the mathematical community - I guess they more readily grant him that he will see connections or analogies to other areas accepted as interesting. See Minhyong Kim's nice "money in the bank" comparison. About the first: Of course you want to study this concept you are interested in. So to make it interesting for others you could go for some introspection - what is it that you find intriguing about it? Can you pass it on to others (this is surely easier in talks than in papers)? It does not always have to be a big range of examples that apply to it. Maybe you feel it behaves unexpectedly well in spite of weak axioms. Maybe it clarifies that many of the facts about Y depend only on the fact that it is an X and thus improves the understanding of the well-accepted theory of Y. Maybe you have a single application where it showed up and feel that there it greatly helped to separate the algebraic content of the situation (which is strictly more than the structure of a Y) from the rest. These seem all like potential good reasons to work on the theory of X.
But maybe your fascination comes from the feeling that your X shows unusual behaviour for an algebraic structure; then, spelling that out, you could find that this just reflects your prejudices about algebraic structures, which others don't have - this could be a criterion: record this as a learning experience and do something else for publishing... - Alex, don't feel as if the weight of the burden of proof (of concept generalization) has to rest completely on your shoulders. I realize you already agree that curiosity and your own interest can be enough reason to pursue a topic or generalization, but... Isn't it the same as asking a question on mathoverflow about a topic which is interesting to you on its own merits, and finding out about the existence of either a longer history of it based on a parallel set of definitions or other possible applications of it in other branches of mathematics or physics? I had been working on a particular topic, but having approached it from one direction I could only perceive the question from my point of view. Even my attempts to research it found nothing initially because I was using the wrong key-words to look for similar work on my topic. It turned out that there was a long history of work on the topic using different terminology which I had not been aware of. Perhaps giving a short summary on mathoverflow (as a different question) of the generalization which you are working on would provide you with some different points of view from other mathematicians. As to the utility of a generalization or of a particular approach, it is not possible to predict or find all of, many of, or even more than a few of, the possible applications of a mathematical technique on your own because you cannot survey the entirety of it yourself. It's often the intersection of multiple disparate interests that creates the application of a technique onto a problem, and every individual (and every individual mathematician) has a different set of disparate interests. (As long as the number of categories of possible interests is greater than the logarithm in base two of the size of the population under consideration; otherwise the pigeonhole principle requires that there must be at least two individuals with exactly the same interests. :) ) - This is an encouraging story, thanks! I realise that this general question does not replace a specific discussion about the particular project I have in mind. But I feel that the issue is likely to repeat itself and applies to more people than just me. That's why I decided to phrase it in this generality. – Alex Bartel Sep 24 2010 at 12:11 If it interests you. - This paper has a very nice introduction (it is on "pointless topology"). So apparently, one may come up with very random definitions for their own sake and hope someone "applies" them to more "concrete" problems. http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.bams/1183550014 - This is really an add-on to David Corfield's answer. Since David mentions groups and groupoids, I will mention that Ronnie Brown (http://www.bangor.ac.uk/~mas010/hdaweb2.htm) considers some of the possible criteria as follows: Tests for a theory which is successful in a mathematical and scientific rather than sociological sense could be the following. He wanted to evaluate some new concepts, and proposed that a successful theory would be expected to yield:
• a range of new algebraic structures, with new applications and new results in traditional areas; • new viewpoints on classical material; • better understanding, from a higher dimensional viewpoint, of some phenomena in group theory; • new computations with these objects, and hence also in the areas in which they apply; • new algebraic understanding of the structure of certain geometric situations; • a stimulus to new ideas in related areas; • a range of unexplored ideas and potential applications; • the solution of some classical famous problems. I would suggest that this list (albeit incomplete as Ronnie suggests) applies to algebraic situations as well as his higher dimensional group theory context and that, suitably interpreted for other contexts, they can provide some very partial answer to the question. The second question is perhaps best answered by saying that 'established' mathematicians are expected to have some sort of 'gut' feeling about the importance of a question or area. Sometimes they just have blind prejudice however. One task of a research supervisor 'should' be to train a PG student towards getting that intuition, but not to hand on the prejudices. At a pragmatic level a debutant mathematician needs to get work published and noticed and that is easier in established areas (or near established areas). - @muad Thanks for fixing that. I could not see what was wrong in the formatting. – Tim Porter Sep 24 2010 at 12:04 Thanks! Such a systematic list of criteria is one of the things I was hoping for when asking the question. – Alex Bartel Sep 24 2010 at 14:21 1 @Alex Might I suggest that you and any others who think this is a good thing add to that list and 'plonk' it somewhere useful (n-Cat Café or one of the other similar blogs ... or here for that matter). Ronnie and I wrote an article that has appeared many times :-) in various places and languages, including Lithuanian. You can find a web version of it at popmath.org.uk/centre/pagescpm/methmat.html It was in a somewhat similar mode and might be of interest. – Tim Porter Sep 24 2010 at 16:39 Thanks Tim, I hadn't been aware of this article and I really enjoyed reading it! – Alex Bartel Sep 25 2010 at 1:59 In some sense, mathematical structure is simply analogy at a very high level. One tries to fill in details in a way that is likely to pay off. (E.g. looking for a natural way to make a semigroup you are looking at into a group may just pay off, simply because groups are ubiquitous and useful.) This may be the reason why an eye toward mathematical structure is a good thing to cultivate. This is usually a decent way to meet algebraic problems that need attention, when a "picture" needs to be filled in. Ultimately, this "picture" should provide some unification or better understanding of diverse phenomena, or the solution of a reticent problem. Looking for or working on mathematical (or simply algebraic) structure is just another strategy for building a better conceptual picture of the mathematical landscape. - 1 You might be amused by the articles: Brown, R. and T. Porter: 2006, Category Theory: an abstract setting for analogy and comparison, In: What is Category Theory?, Advanced Studies in Mathematics and Logic, Polimetrica Publisher, Italy, (2006) 257-274. and maths.bangor.ac.uk/research/ftp/rpam/06_08.pdf in which we tried to go into this in a bit more detail. – Tim Porter Sep 26 2010 at 8:32 1 I should have given a link to the first of these. It can be found in a preprint version on this page. 
bangor.ac.uk/~mas010/brownpr.html – Tim Porter Sep 26 2010 at 8:35 These are great, Tim. Thanks! – Jon Bannon May 23 at 15:16 I am thinking of specific examples. In much the same way, David Corfield mentioned groupoids. I am personally not a big fan of the general theory of loops. In part, my own disinterest is because I have not found an application. On the other hand, I have seen enough to believe that Moufang loops are interesting even if I personally don't know a lot about them. Still I like the idea of algebraists thinking about the structures of loops because they find them interesting. Closer to my own interest is the idea of quandles. These were introduced essentially in the 1940s, then again in the late 1970s and early 1980s, rediscovered, and have only found some greater applicability because quandle cohomology gives interesting topological invariants. The idea, apparently, was natural: it was discovered, forgotten, rediscovered, forgotten, and found to be applicable. Nevertheless, some of you might find it to be a fringe notion. Even knot theorists might believe that there is not much in the quandle concept because the information in the quandle is present in the fundamental group and a peripheral subgroup. I think Tim's articulation of Ronnie's list should include that the algebraic concept yields a more concise language in which ideas can be expressed. - @Scott I am sort of suggesting that someone and somewhere there should be an updated version of the list, to help mathematicians think about these problems. A discussion of methodology might be a useful feature to add to mathematics degree courses. (Amusingly enough, in typing the above I initially made a typo and typed 'mathodology' and perhaps that is a good neologism to use. :-)) – Tim Porter Sep 26 2010 at 9:48 I think something is worth studying if it helps one of: • solving a problem I know about, • giving a new perspective on something I know, or • raising interesting questions, some of which are easy to solve and some of which aren't. Especially, I study it if it gives me some degree of gratification. Here are a couple of examples of things that I hope to pursue after my current interests wane: Recursive clone theory: A class of functions on a set which is closed under composition and contains the projections is called a clone; the notion is a part of basic general algebra. Something that should be mentioned in basic recursion theory classes but is not is that various definitions are specializations of clones: primitive recursive functions, partial recursive functions, total recursive functions. I think it would be useful to blend the ongoing research in clone theory with a computational component that can answer how complex a class can be. Transforming Shelah's classification theory: In determining how many inequivalent models of cardinality kappa exist for a first order theory, Saharon Shelah came up with conditions on the theory which (loosely and inaccurately speaking) sometimes dealt with whether a theory could encode a particular order or a certain simpler theory. I think the ideas can be moved into the domain of computation over finite structures. In particular, languages that are members of some complexity class (oh, say, NP) could be shown to satisfy properties analogous to what Shelah developed for first order theories. I think that this would be a promising route to find a language in NP - P.
Granted, these are not generalizations so much as taking tools, trying them on a new kind of widget, and then retooling the tool to work on the widget. The justifications for working on them should be the same and (I think) apply to your questions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9675243496894836, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/30691/hierarchy-is-no-problem-and-susy-is-a-mathematical-tool-for-data-fitting
# Hierarchy is no problem and susy is a mathematical tool for data fitting

I saw this paper on the arXiv: http://arxiv.org/abs/1206.4711. The author seems well-known and has many very well cited papers. But what he says sounds very strange. What I get from this paper are the claims that there's no naturalness or fine-tuning problem in the Standard Model, and that Susy is mathematics that is presented as physics, so we should deal with it as a tool and not expect to see real sparticles. My questions are: 1) What does it mean that "perturbation theory is not designed to handle problems with widely separated time or energy scales", and how accurate is this statement? 2) How valid is the conclusion that the naturalness and fine-tuning concerns over energy hierarchies do not exist? 3) How valid are the statements on Susy? 4) What I really want to know from our experts here: does this paper have any merit at all? - 2 In case it isn't clear from my answer, I agree with Lubos. But I like the little calculations in it, although they don't demonstrate the conclusion. The final thing about susy as external parameter tuning doesn't make any sense. SUSY is a symmetry, not a tuning. – Ron Maimon Jun 24 '12 at 7:41 1 I guess it shouldn't be surprising that all these arguments are surfacing now, but nevertheless I've been surprised by how many people I hear them from. Sometimes they're right that ways that people have stated the hierarchy problem are a bit off the point, and it opens up room for criticism. For instance, there's often confusion about whether one is worried about the Higgs mass changing, or the Higgs vev, and whether one fixes one when calculating the other, and so on. But it's just clear that if the Higgs vev is way below fundamental scales, the potential had to be delicately adjusted. – Matt Reece Jun 25 '12 at 2:33 I deleted a second comment here -- I think if I'm going to make it I should write a full answer. Not sure if I'll get around to it. – Matt Reece Jun 25 '12 at 3:50 @MattReece I hope you consider adding an answer for the benefit of all. Especially about the mass-vev confusion. – stupidity Jun 28 '12 at 1:30 ## 1 Answer This paper has some interesting observations and the Lanczos trick is very nice, but the conclusion is no good. I will paraphrase the argument in the paper as follows: the naturalness and hierarchy problems are caused by perturbation-theory diseases; if you have a hierarchy put in by hand, it doesn't matter if you add a high-energy sector, you can keep the hierarchy stable even if perturbation theory says it is unstable. The reason the author says this is because he estimates how perturbation theory shifts eigenvalues in an exactly diagonalizable matrix, where he knows the answer, and he shows that the shift is not reliably given by perturbation theory. This is absolutely correct, and it shows that the perturbative argument for the shift in the Higgs mass away from zero is not persuasive by itself. If this were the real argument for the hierarchy problem, this paper would show you that there is no argument. But this is not the real argument. The real argument is Wilsonian.

### The Real Hierarchy Problem

Consider any regulator for the standard model, like a lattice. The chiral fields (the fermions and quarks) are naturally massless, and have an infinite Compton wavelength compared to the lattice, so they are always automatically tuned to their critical point.
This is actually not so simple to understand, because it requires some crazy manipulations to get chiral fermions to naturally emerge with a lattice cutoff. But in this case, the lattice cutoff is just a stand-in for some sort of string-scale regularization, where chiral fermions emerge naturally. The gauge fields are naturally massless, so they are also self-organized to their critical point, except that the SU(3) is confining, which picks out a confinement scale whose logarithm, relative to the lattice scale, goes like the inverse lattice-scale coupling, so the hierarchy is enormous even when the coupling is not terribly small. Anyway, everything is critical, and we would see a gravity-free version of something like our universe at long distances in this model. Except there is no Higgs. Now if you introduce a lattice Higgs sector, if it is a fundamental scalar and not a composite of fermions, you get into trouble. The Higgs is not massless, and your lattice standard model will have a Higgs field correlation function which falls off within order one lattice spacings. It won't help to set the lattice mass term of the Higgs to zero. The quartic term means that the lattice mass has to be some specific negative value to reach the massless-Higgs point, since the phase transition point is not at 0 when you have a quartic self-coupling. The reason is that the Higgs itself is like the Ising model--- it has a tunable $m^2$ (which can be negative or positive, despite the name), and you need to make $m^2$ negative by order 1 units on the lattice to reach the critical point in the Ising model. In the standard model, the critical point will be somewhere else, since it depends on the details of the fields you have in the theory, but it will not be at zero; it will be at some crazy value that depends both on the field content and on the regulator. Yet we see that the Higgs is critical in terms of the regulator. This is the ridiculous fine-tuning which is called the hierarchy problem--- why is the Higgs $m^2$ term fine-tuned on the lattice scale to give a nearly massless Higgs? Why does the Ising-like Higgs sector look like an Ising model at the critical point, rather than an Ising model at a generic point, where the SU(2) and U(1) fields would acquire lattice-scale masses from the Higgs VEV (or else remain unbroken, if the Higgs is stable at zero field)? This order 1 shift at the lattice scale is the nonperturbative counterpart of the quadratically divergent perturbative shift in the Higgs mass. An order 1 shift at the lattice scale is an order $\Lambda^2$ shift when you are looking at the thing in units appropriate for the low-energy theory. This shift is absolutely real--- you can see it by lattice simulation, and it is not fixed in the standard model. There is always an order 1 shift. In SUSY models, the Higgs mass is partnered with a chiral fermion mass. So keeping one SUSY unbroken means that you only get as much Higgs mass as there is SUSY breaking. This has nothing to do with perturbation theory. It's the location of the phase transition point in the Wilsonian point of view. If you say "The Higgs is just fine-tuned to the phase transition point on the lattice", you are being absurd. Such a fine-tuning to a critical point screams for a dynamical mechanism.
You might say "it's anthropic", but even after you tune the Higgs, as far as we can see, you need anthropics to tune the cosmological constant, and it is a little ridiculous to claim two parameters are tuned anthropically to be so much smaller than Planckian, especially when there are coincidences that need to be explained:

• The Higgs scale is only 3 orders of magnitude bigger than the QCD scale, but 14 orders of magnitude less than the Planck/GUT scale. Why is there a QCD–Higgs near-coincidence? This would be explained by technicolor, if the couplings start off the same for some other gauge group, and the confinement just happens a little sooner because of quicker running.
• The cosmological constant scale is as far below the Higgs scale as the Higgs scale is below the Planck scale. This could be a coincidence, but it feels like a clue.

It is not reasonable to claim that this kind of structure is a random anthropic coincidence. There is more information there than in a random anthropic tuning. But the paper is right that the arguments for SUSY are extremely weak. Stabilizing the Higgs can be done in a variety of ways, and SUSY is only preferred because it is the one that is suggested by string theory. The fact that strings were out when SUSY was formulated meant that people made up other reasons to like SUSY, but really, the only thing that suggests SUSY is strings. But even though we know that the fundamental theory is SUSY, that doesn't mean that the low-energy theory has to be SUSY, but it does allow the low-energy theory to be SUSY, and this does stabilize the hierarchy problem, which is a real problem. - 1 I know I asked this long ago, but what I understood from you and Lubos and Matt is that there is a real hierarchy problem in the SM. But what about those who say that, if there's nothing but the SM, then there's no problem at all since the theory is renormalizable? – stupidity Sep 13 '12 at 15:08 @stupidity: There's gravity at least, and the continuum standard model has a Landau pole somewhere smaller than that, so I don't know what that means. The hierarchy problem is real and obvious--- it's just the statement that the Higgs mass parameter in the standard model is tuned very close to the Higgs critical point, the phase transition from infinitely Higgsed to no-Higgs SM with infinitely massive Higgs, and why is that? That's all. – Ron Maimon Sep 13 '12 at 16:13 @RonMaimon IMO, you are mixing up two different problems, namely the regularization of UV divergences in RQFT and the fine-tuning problem, because you unjustifiably assume that the lattice (spacing) is somehow physical. Can you argue the same without using a particular regulator? – drake Sep 13 '12 at 20:51 1 @drake: I don't think the lattice spacing is physical, but the large shift in critical point is from the contribution of low energy modes, and it is real in any regularization. It formally goes away in dim-reg because of this regularization-agnostic trick: you break up $\int {dk\over k^2+m^2}$ into $\int {dk \over k^2} - m^2 \int {dk\over k^2(k^2+m^2)}$ and only the second, log-divergent part is $m$-dependent. An analytic scheme amounts to differentiating with respect to $m^2$ and reintegrating, so the constant part goes poof. This doesn't affect RG, but it affects critical point location. – Ron Maimon Sep 14 '12 at 5:02 1 ...
this trick is not a real fix for the critical point shift; if you wanted that, you would have to argue that the regulator nature chooses cancels out the low-energy half of the $\int {dk\over k^2}$ part (the part that's thrown away by the formal structure of dim-reg) with the high-energy half, and there is no real nonperturbative regulator which does this. It doesn't happen in string theory for sure; the effective terms in the low-energy theory are natural, modulo smallness from effective geometry. There is no miraculous fine-tuning in string theory, which I think is the physical regulator. – Ron Maimon Sep 14 '12 at 5:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447735548019409, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/47656/reference-request-is-mathematics-discovered-or-created?answertab=oldest
# Reference request: is mathematics discovered or created?

I have to write a short monograph as an assignment for a course on the philosophy of science. Being a math student, of course I want to opt for something math-related. After some initial ideas which would have needed way too much research, I imagined I could narrow it down to a question which I have always wondered about: is mathematics discovered or created? I'm thus asking for references to books/papers/quotes/anything which addresses this question. I hope it is not too soft for a math.SE question; I apologize if it is. In particular, I remember a quote saying something like "Natural numbers were created by God. All else is the work of men"; I'd like to know its exact statement and author. Anything, even if tangentially related, may come in handy. Thank you. - 5 "God made the natural numbers; all else is the work of man." -- Leopold Kronecker – InterestedGuest Jun 25 '11 at 21:49 8 As a Math student, I think it is best if you don't do a project on Math. – jspecter Jun 25 '11 at 22:04 5 @Bruno: No, I mean research in the sense of "library research" not "mathematical research" or "original research". The fact that the course is required and that you don't seem to be thrilled about taking it is not really an adequate excuse for asking the internet mathematical community to help you write it. I am voting to close. – Pete L. Clark Jun 26 '11 at 0:37 5 I really don't understand the votes to close this question as off-topic - this is definitely a question relating to the history and development of mathematics. I therefore vote against closing following this suggestion here. The next user who wants to cast a vote to close should leave a comment cancelling my vote instead of voting. (please vote this comment up so that it appears above the "fold") – t.b. Jun 26 '11 at 9:55 ## 7 Answers Original answer by trutheality: Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk. -Leopold Kronecker Translated to English: God made the integers; all else is the work of man. It also often appears as "natural numbers". A quick search online suggests that "ganzen Zahlen" means integers in German. But I don't speak German, so any input from someone who does is appreciated. Added: (Theo Buehler) Kronecker's quote is from a talk he gave at the "Berliner Naturforscher-Versammlung" in 1886. I'm not aware of a transcript of this talk. The quote is most often cited in the form in which it appears in the very interesting obituary by H. Weber: the obituary can be found in the Jahresbericht der Deutschen Mathematiker-Vereinigung Vol. 2 (1891/92); the quote is on page 19. Here's an attempt at a translation (rather loose): Concerning the rigor of notions, [Kronecker] imposes the highest requirements and tries to squeeze everything that should have a right of citizenship in Mathematics into the crystal-clear and edgy form of number theory. Many among you will remember the dictum he made during a talk at the 1886 reunion of natural scientists in Berlin ("Berliner Naturforscher-Versammlung"): "God made the integers; all else is the work of man." - 5 I don't understand the down-vote. It answered what the OP asked, and it's a great answer in its own right, so I'm up-voting this answer. – Mike Jones Jun 25 '11 at 21:54 1 By "Die ganzen Zahlen" one indeed means $\mathbb{Z}$, while $\mathbb{N}$ is most often called "die Menge der natürlichen Zahlen".
The word "ganz" is synonym for "komplett", "unbeschädigt" which you could translate as "integer". (You never say "die unbeschädigten Zahlen", though.) – leftaroundabout Jun 25 '11 at 23:19 @leftaroundabout Thanks. Feel free to add that by editing the answer if you'd like. – trutheality Jun 26 '11 at 0:54 @leftaroundabout: What you're saying is true in nowadays usage, but I'm not sure that this applies to the present discussion (lacking context). – t.b. Jun 26 '11 at 10:41 1 @Bruno: I added a loose translation of the passage I displayed. I attempted to retain the colorful language, I hope you can follow the idea. – t.b. Jun 26 '11 at 20:46 show 2 more comments I would like to recommend 'The Two Cultures of Mathematics' by W. T. Gowers http://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf In the setting of this article, personally, I prefer to say, Theory is created, while a solution to a math problem is discovered. - I fixed your link. I know of this article and have read it before (and liked it a lot). It is of course related. Thank you. – Bruno Stonek Jun 25 '11 at 22:06 The operative word is research. It is a search of the truth about things that already exist. About theories - they require proofs that must be acceptable by fellow mathematicians. The sufficiency of a proof is subjective and varies with the person and time. This itself makes theories more of a hypothesis. I came across a quote (by someone - I do not remember) - We call the theories we believe axioms and the facts we disbelieve theories. - As a physicist who has recently switched to a Mathematics career, I can give you only my opinion based on my experience and knowledge of the Laws of Nature. I do believe mathematics is completely real and is discovered not invented. A similar opinion was held by physicist Richard Feynman, in particular I recommend you watch his old lectures on the Character of the Physical Law, concretely lecture no. 2 about "The Relation of Mathematics and Physics" to appreciate that mathematics seems to be the proper setting to talk about the structures we find in Nature. If you want to deepen about the mathematical universe hypothesis concerning the (for many crazy) idea that everything is mathematical, see the preprint by Max Tegmark and his other articles in his website. (This answer contained an excessively long digression about those ideas but I have removed it in order not to contribute to endless debates; only the previous references remain as useful). - 1 I can't claim to have read your entire post in detail, but I would like to respond to the claim "Anything we will ever be able to say about Nature will be mathematical in the end." This obviously cannot be known - while it has been our experience so far that natural phenomena can be described mathematically, there is no reason why the universe should be able to be described by math, or indeed by anything. As Einstein said, "The most incomprehensible thing about the world is that it is at all comprehensible", and we can never rule out that the universe is not completely comprehensible. – Zev Chonoles♦ Jun 26 '11 at 5:13 1 Whereas there are lots of reasons to "believe" (scientifically by induction) that Nature is indeed mathematical (name every single structure and mathematical law I mentioned and all the rest), there is NO single reason to believe the contrary except the logical possibility of it. All the evidence supports the claim so far. 
We cannot be sure, of course, but THAT IS SCIENCE: just as we cannot be sure that a relativistic black hole won't pass through the solar system so that there is no rising sun tomorrow, all the evidence says it is quite improbable... – Javier Álvarez Jun 26 '11 at 5:20 Besides, there is a school of mathematical thought which says that natural languages can be reduced to mathematics in the end. Any information-theoretic construction that we make, be it in mathematics or in any other language, is in the end some kind of structure within the correlations of the information in our brains and the degrees of freedom perceived from the outside world. Since everything that can be said about Nature must be said in some language, everything that an inside observer will ever say will be an information-structure, and thus reducible to formal systems and mathematics. – Javier Álvarez Jun 26 '11 at 5:23 @Zev Chonoles: as I said in the post, it is my opinion based on my experience and knowledge, and I hoped not to open a debate. To rebut some of your claims: most mathematicians blind themselves to the scientific method because they want complete truth, but the example of physics gives you the answer that INDEED Nature is described by Math to the level of precision we have today. The amount of progress made by that knowledge PROVES it is described at some level by math. – Javier Álvarez Jun 26 '11 at 5:33 @Zev Chonoles (cont'd): We can never rule out anything? Are you sure? I thought science made advances because of falsifiable models of Nature. As Feynman says in the lectures I linked to, "we can only be sure about what is false". I challenge anybody to defy gravity and jump off a cliff... since you could not be sure it starts repelling at that precise moment. Some things are true and those are here to stay. One of them is the usefulness of mathematics in the natural sciences... of that you can be sure. – Javier Álvarez Jun 26 '11 at 5:38 In his autobiography Un mathématicien aux prises avec le siècle, L. Schwartz discusses the question and says that it is somewhat complicated. I don't have the book, so I can't cite it properly, but the reasoning was something like this. Consider, for example, complex numbers. They can be regarded as a human invention. But all their properties then are discoveries. - An excellent discussion of these issues is given by Reuben Hersh in his book What is Mathematics, Really?. The general message is that mathematics is philosophically "humanist" - it has a socially created reality. This doesn't give much of an idea of what the book is about, but it's about the best account of these sorts of issues that I've seen. - Doug Hofstadter's book Fluid Concepts and Creative Analogies responds to this question. He adopts the metaphor of the mathematician as a person feeling around in a dark cave. He feels that mathematicians use their creativity to discover natural truths. (So, I guess his answer might be "Both"?) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950800895690918, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/08/24/an-example-part-2/?like=1&source=post_flair&_wpnonce=39170bc4ce
# The Unapologetic Mathematician

## An Example (part 2)

We follow yesterday’s example of an interesting differential form with a (much simpler) example of some $1$-chains. Specifically, we’ll talk about circles! More specifically, we consider the circle of radius $a$ around the origin in the “punctured” plane. I used this term yesterday, but I should define it now: a “punctured” space is a topological space with a point removed. There are also “twice-punctured” or “$n$-times punctured” spaces, and as long as the space is a nice connected manifold it doesn’t really matter much which point is removed. But since we’re talking about the plane $\mathbb{R}^2$ it comes with an identified point — the origin — and so it makes sense to “puncture” the plane there. Now the circle of radius $a$ will be a singular $1$-cube. That is, it’s a curve in the plane that never touches the origin. Specifically, we’ll parameterize it by: $\displaystyle c_a(t)=(a\cos(2\pi t),a\sin(2\pi t))$ so as $t$ ranges from $0$ to $1$ we traverse the whole circle. There are two $0$-dimensional “faces”, which we get by setting $t=0$ and $t=1$: $\displaystyle\begin{aligned}c_a(0)&=(a,0)\\c_a(1)&=(a,0)\end{aligned}$ When we calculate the boundary of $c_a$, these get different signs: $\displaystyle\begin{aligned}\partial c_a&=(-1)^{1+0}c_a(0)+(-1)^{1+1}c_a(1)\\&=-(a,0)+(a,0)=0\end{aligned}$ We must be very careful here; these are not vectors and the addition is not vector addition. These are merely points in the plane — $0$-cubes — and the addition is purely formal. Still, the same point shows up once with a positive and once with a negative sign, so it cancels out to give zero. Thus the boundary of $c_a$ is empty. On the other hand, we will see that this circle cannot be the boundary of any $2$-chain. The obvious thing it might be the boundary of is the disk of radius $a$, but this cannot work because there is a hole at the origin, and the disk cannot cross that hole. However this does not constitute a proof; maybe there is some weird chain that manages to have the circle as its boundary without crossing the origin. But the proof will have to wait.

Posted by John Armstrong | Differential Topology, Topology
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317635297775269, "perplexity_flag": "head"}
http://mathoverflow.net/questions/19649/physical-construction-of-nonconstant-meromorphic-functions-on-compact-riemann-s
## “Physical” construction of nonconstant meromorphic functions on compact Riemann surfaces?

Miranda's book on Riemann surfaces ignores the analytical details of proving that compact Riemann surfaces admit nonconstant meromorphic functions, preferring instead to work out the algebraic consequences of (a stronger version of) that assumption. Shafarevich's book on algebraic geometry has this to say: A harmonic function on a Riemann surface can be conceived as a description of a stationary state of some physical system: a distribution of temperatures, for instance, in case the Riemann surface is a homogeneous heat conductor. Klein (following Riemann) had a very concrete picture in his mind: "This is easily done by covering the Riemann surface with tin foil... Suppose the poles of a galvanic battery of a given voltage are placed at the points $A_1$ and $A_2$. A current arises whose potential $u$ is single-valued, continuous, and satisfies the equation $\Delta u = 0$ across the entire surface, except for the points $A_1$ and $A_2$, which are discontinuity points of the function." Does anyone know of a good reference on Riemann surfaces where a complete proof along these physical lines (Shafarevich mentions the theory of elliptic PDEs) is written down? How hard is it to make this appealing physical picture rigorous? (The proof given in Weyl seems too computational and a little old-fashioned. Presumably there are now slick conceptual approaches.) - ## 5 Answers For this and most other things about Riemann surfaces, I recommend Donaldson's Notes on Riemann surfaces, which are based on a graduate course I was once lucky enough to see, and which may eventually make it into book format. In his account, the "main theorem for compact Riemann surfaces" says that one can solve $\Delta f = \rho$ for any 2-form $\rho$ with integral zero. He describes this as the equation for a steady-state temperature distribution. A full proof is given, but I wouldn't describe it as slick: this is still a substantial result in analysis. - 2 Ha! I was half way through posting the exact same link, when you posted! I would add that he explains how to use this one equation to deduce the standard results of Riemann surface theory: Hodge decomposition, Riemann-Roch, uniformisation etc. Incidentally, my understanding is that he hopes to finish the book soon. – Joel Fine Mar 28 2010 at 20:00 Thanks! Donaldson's proof looks much more user-friendly than Weyl's. – Qiaochu Yuan Mar 28 2010 at 20:12 See the book "Introduction to Riemann Surfaces" by G. Springer and references in it. - The first introductory chapter really is all that is needed for someone who wants to see physical constructions. It uses hydrodynamic constructions largely and starts with a very complex-analysis based approach. I wish I'd encountered it in college. – Justin Curry Mar 29 2010 at 19:55 It is a very good book, a bit old-fashioned, but it is ok. I heard that V.I. Arnold learned the subject using it. – Petya Mar 29 2010 at 20:30 The answers to this previous question are relevant. To fill in some gaps, that question is about building functions on a disc with specified Laplacian.
As Tim Perutz says, you want to generalize that to building functions on a Riemann surface whose Laplacian is a specified $2$-form of integral zero. - For a discrete analogue of the connection between electrical networks and harmonic functions, I suggest taking a look at chapter 9 of the book on Markov chains and mixing times by Levin, Peres, and Wilmer. - The end of the first chapter in McKean and Moll's book on elliptic curves elaborates quite a bit on Klein's picture (there described in terms of hydrodynamics, rather than electrodynamics). Most of the details are left to the reader, though. Edit: I guess the bit in the first chapter is more about uniformization than what you're asking. 2.18 discusses differentials of the first kind (those without poles) for compact Riemann surfaces in Klein's hydrodynamical picture. I couldn't find anything about constructing meromorphic functions using this picture, so this may not be so helpful to you. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9389864802360535, "perplexity_flag": "head"}
http://en.wikibooks.org/wiki/Econometric_Theory/Asymptotic_Convergence
# Econometric Theory/Asymptotic Convergence

Wikipedia has related information at Convergence of random variables

## Asymptotic Convergence

### Modes of Convergence

#### Convergence in Probability

Convergence in probability is going to be a very useful tool for deriving asymptotic distributions later on in this book. Alongside convergence in distribution it will be the most commonly seen mode of convergence.

##### Definition

A sequence of random variables $\{ X_n ; n=1,2, \cdots \}$ converges in probability to $X$ if: $\forall \epsilon, \delta >0, \; \exists N \; \operatorname{s.t.} \; \forall n \geq N, \; \Pr \{ |X_n - X| > \delta \}< \epsilon.$ An equivalent statement is: $\forall \delta >0, \; \lim_{n \to \infty} \Pr \{ |X_n - X| > \delta \}=0.$ This will be written as either $X_n \xrightarrow{\;p\;} X$ or $\operatorname{plim} X_n = X$.

##### Example

$X_n = \begin{cases} \eta & \text{with probability } 1- \frac{1}{n} \\ \theta & \text{with probability } \frac{1}{n} \end{cases}$

We'll make an intelligent guess that this sequence converges in probability to the degenerate random variable $\eta$. So we have that: $\forall \delta >0,\; \Pr \{ |X_n - \eta| > \delta \} \leq \Pr \{ |X_n - \eta| > 0 \}= \Pr \{ X_n= \theta \}= \frac{1}{n}.$ Therefore our definition for convergence in probability in this case is: $\forall \epsilon , \delta >0, \; \exists N \; \operatorname{s.t.} \; \forall n \geq N, \; \Pr \{ |X_n - \eta | > \delta \} \leq \Pr \{ |X_n - \eta | > 0 \}=\Pr \{ X_n= \theta \}= \frac{1}{n} < \epsilon.$ So for any positive value of $\epsilon \in \mathbb{R}$ we can always find an $N \in \mathbb{N}$ large enough so that our definition is satisfied. Therefore we have proved that $X_n \xrightarrow{\;p\;} \eta$.

#### Convergence Almost Sure

Almost-sure convergence has a marked similarity to convergence in probability, however the conditions for this mode of convergence are stronger; as we will see later, convergence almost surely actually implies that the sequence also converges in probability.

##### Definition

A sequence of random variables $\{ X_n ; n=1,2, \cdots \}$ converges almost surely to the random variable $X$ if: $\forall \delta >0, \; \lim_{n \to \infty} \Pr \left( \bigcup_{m \geq n} \{ |X_m - X| > \delta \} \right)=0,$ equivalently $\Pr \{ \lim_{n \to \infty} X_n = X \}=1.$ Under these conditions we use the notation $X_n \xrightarrow{\;a.s.\;} X$ or $\lim_{n \to \infty} X_n = X \; \operatorname{a.s.}$

##### Example

Let's see if our example from the convergence in probability section also converges almost surely. Defining: $X_n = \begin{cases} \eta & \text{with probability } 1- \frac{1}{n} \\ \theta & \text{with probability } \frac{1}{n} \end{cases}$ we again guess that the convergence is to $\eta$. Inspecting the resulting expression we see that: $\Pr \{ \lim_{n \to \infty} X_n = \eta \}=1- \Pr \{ \lim_{n \to \infty} X_n \ne \eta \}=1- \Pr \{ \lim_{n \to \infty} X_n= \theta \} \geq 1-\lim_{n \to \infty} \frac{1}{n} =1,$ thereby (it was claimed) satisfying our definition of almost-sure convergence. (A Wikibookian disputes the factual accuracy of this step, and with reason: the final inequality does not follow from the marginal probabilities $\Pr\{X_n=\theta\}=\frac{1}{n}$ alone. If, for instance, the $X_n$ are independent, then $\sum_n \frac{1}{n}=\infty$ and the second Borel–Cantelli lemma gives $X_n=\theta$ infinitely often with probability one, so the sequence does not converge almost surely; whether it does depends on the joint distribution of the sequence.)
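As a concrete check of the two-point example used above, here is a minimal Monte Carlo sketch (my own addition, not part of the original page; the values of $\eta$, $\theta$ and $\delta$ are arbitrary illustrative choices). It estimates $\Pr \{ |X_n - \eta| > \delta \}$, which the calculation above shows to be exactly $\frac{1}{n}$:

```python
# Empirically estimate Pr{|X_n - eta| > delta} for the two-point sequence
# X_n = eta with probability 1 - 1/n, theta with probability 1/n.
import random

eta, theta, delta = 0.0, 1.0, 0.5   # arbitrary illustrative values
trials = 100_000

for n in (10, 100, 1000, 10000):
    hits = sum(abs((theta if random.random() < 1 / n else eta) - eta) > delta
               for _ in range(trials))
    print(n, hits / trials)          # roughly 1/n, tending to 0
```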
#### Convergence in Distribution

Convergence in distribution will appear very frequently in our econometric models through the use of the Central Limit Theorem. So let's define this type of convergence.

##### Definition

A sequence of random variables $\{ X_n ; n=1,2, \cdots \}$ asymptotically converges in distribution to the random variable $X$ if $F_{X_n}(\zeta ) \rightarrow F_{X}(\zeta )$ at all continuity points, where $F_{X_n}(\zeta )$ and $F_{X}(\zeta )$ are the cumulative distribution functions of $X_n$ and $X$ respectively. It is the distribution of the random variable that we are concerned with here. Think of a Student's t-distribution: as the degrees of freedom, $n$, increase, the distribution becomes closer and closer to that of a Gaussian distribution. Therefore the random variable $Y_n \sim t(n)$ converges in distribution to the random variable $Y \sim N(0,1)$. (N.b. we say that the random variable $Y_n \xrightarrow{\;d\;} Y$ as a notational crutch; what we really should write is $F_{Y_n} (\zeta ) \rightarrow F_Y(\zeta )$.)

##### Example

Let's consider the random variable $X_n$ whose sample space consists of two points, $1/n$ and $1$, with equal probability ($1/2$). Let $X$ have the Bernoulli distribution with $p = 1/2$. Then $X_n$ converges in distribution to $X$. The proof is simple: we ignore $0$ and $1$ (where the distribution of $X$ is discontinuous) and prove that, for all other points $a$, $\lim F_{X_n}(a) = F_X(a)$. Since for $a < 0$ all the $F$s are $0$, and for $a > 1$ all the $F$s are $1$, it remains to prove the convergence for $0 < a < 1$. But $F_{X_n}(a) = \frac{1}{2} \left([a \ge \tfrac{1}{n}] + [a \ge 1]\right)$ (using Iverson brackets), so for any $a$ choose $N > 1/a$, and for $n > N$ we have: $n > 1/a \rightarrow a > 1/n \rightarrow [a \ge \tfrac{1}{n}] = 1 \land [a \ge 1] = 0 \rightarrow F_{X_n}(a) = \tfrac{1}{2}.$ So the sequence $F_{X_n}(a)$ converges to $F_X(a)$ at all points where $F_X$ is continuous.

#### Convergence in R-mean Square

Convergence in r-th mean is not going to be used in this book, however for completeness the definition is provided below.

##### Definition

A sequence of random variables $\{ X_n ; n=1,2, \cdots \}$ asymptotically converges in $r$-th mean (or in the $L^r$ norm) to the random variable $X$ if, for a real number $r \geq 1$ with $E(|X_n|^r) < \infty$ for all $n$, $\lim_{n\to \infty }E\left( \left\vert X_n-X\right\vert ^r\right) =0.$

#### Cramer-Wold Device

The Cramer-Wold device will allow us to extend our convergence techniques for random variables from scalars to vectors.

##### Definition

A random vector $\mathbf{X}_n \xrightarrow{\;d\;} \mathbf{X} \; \iff \; {\mathbf{\lambda}}^{\operatorname{T}}\mathbf{X}_n \xrightarrow{\;d\;} {\mathbf{\lambda}}^{\operatorname{T}}\mathbf{X} \quad \forall \, \lVert \mathbf{\lambda} \rVert \ne 0.$

### Central Limit Theorem

Let $X_1, X_2, X_3, \ldots$ be a sequence of random variables which are defined on the same probability space, share the same probability distribution $D$ and are independent. Assume that both the expected value $\mu$ and the standard deviation $\sigma$ of $D$ exist and are finite. Consider the sum $S_n = X_1 + \cdots + X_n$. Then the expected value of $S_n$ is $n\mu$ and its standard deviation is $\sigma \sqrt{n}$. Furthermore, informally speaking, the distribution of $S_n$ approaches the normal distribution $N(n\mu,\sigma^2 n)$ as $n$ approaches $\infty$.
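To make the informal CLT statement above concrete, here is a small simulation sketch (my own addition, not part of the original page): the standardized sums $(S_n - n\mu)/(\sigma\sqrt{n})$ of i.i.d. uniform draws have upper-tail probabilities approaching the standard normal value $\Pr\{Z > 1\} \approx 0.1587$ as $n$ grows.

```python
# Standardize S_n = X_1 + ... + X_n for X_i ~ Uniform(0, 1) and compare a
# tail probability with the standard normal value P(Z > 1) ~ 0.1587.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, np.sqrt(1 / 12)     # mean and sd of Uniform(0, 1)

for n in (1, 5, 50, 500):
    sums = rng.uniform(0, 1, size=(100_000, n)).sum(axis=1)
    z = (sums - n * mu) / (sigma * np.sqrt(n))
    print(n, (z > 1.0).mean())       # tends to ~0.1587
```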
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 56, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.873003363609314, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/12/24/
# The Unapologetic Mathematician

## Permutations and Polytabloids

We’ve defined a bunch of objects related to polytabloids. Let’s see how they relate to permutations. First of all, I say that $\displaystyle R_{\pi t}=\pi R_t\pi^{-1}$ Indeed, what does it mean to say that $\sigma\in R_{\pi t}$? It means that $\sigma$ preserves the rows of the tableau $\pi t$. And therefore it acts trivially on the tabloid $\{\pi t\}$. That is: $\sigma\{\pi t\}=\{\pi t\}$. But of course we know that $\{\pi t\}=\pi\{t\}$, and thus we rewrite $\sigma\pi\{t\}=\pi\{t\}$, or equivalently $\pi^{-1}\sigma\pi\{t\}=\{t\}$. This means that $\pi^{-1}\sigma\pi\in R_t$, and thus $\sigma\in\pi R_t\pi^{-1}$, as asserted. Similarly, we can show that $C_{\pi t}=\pi C_t\pi^{-1}$. This is slightly more complicated, since the action of the column-stabilizer on a Young tabloid isn’t as straightforward as the action of the row-stabilizer. But for the moment we can imagine a column-oriented analogue of Young tabloids that lets the same proof go through. From here it should be clear that $\kappa_{\pi t}=\pi\kappa_t\pi^{-1}$. Finally, I say that the polytabloid $e_{\pi t}$ is the same as the polytabloid $\pi e_t$. Indeed, we compute $\displaystyle e_{\pi t}=\kappa_{\pi t}\{\pi t\}=\pi\kappa_t\pi^{-1}\pi\{t\}=\pi\kappa_t\{t\}=\pi e_t$
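The first identity lends itself to a finite check. The following sketch is my own addition (not part of the original post): it verifies $R_{\pi t}=\pi R_t\pi^{-1}$ for the shape-$(2,1)$ tabloid with rows $\{1,2\}$ and $\{3\}$ and a particular $\pi$, representing permutations of $\{1,2,3\}$ as dictionaries.

```python
# Check R_{pi t} = pi R_t pi^{-1} for the tabloid with rows {1,2} and {3}.
from itertools import permutations

def act(p, rows):                        # apply permutation p to each row
    return tuple(frozenset(p[i] for i in row) for row in rows)

def key(p):                              # hashable form of a permutation
    return tuple(sorted(p.items()))

t = (frozenset({1, 2}), frozenset({3}))
perms = [dict(zip((1, 2, 3), q)) for q in permutations((1, 2, 3))]

R_t = [p for p in perms if act(p, t) == t]          # row stabilizer of {t}
pi = {1: 2, 2: 3, 3: 1}
inv = {val: k for k, val in pi.items()}

lhs = {key(p) for p in perms if act(p, act(pi, t)) == act(pi, t)}  # R_{pi t}
rhs = {key({k: pi[p[inv[k]]] for k in (1, 2, 3)}) for p in R_t}    # pi R_t pi^{-1}
print(lhs == rhs)                        # True
```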
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9060404896736145, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/98343/list
## Return to Question

3 edited tags

2 edited body

A long time ago I saw a paper considering, given $\ell$ fixed, estimates for $$\sum_{n \leq x, (n, \ell) = 1} 1$$ Of course, this is easy to estimate with a trivial error term of $O(\varphi(\ell))$. However, in the paper I am looking for, the authors attempted to obtain better bounds, using some Fourier analysis (in particular the Fourier series for the fractional part of $x$). I think bounds on the sum $$\sum_{n \leq x} (n, \ell)$$ are essentially an equivalent variation of the problem, so references on this problem are welcome as well. The reason why I am interested in this problem is ... pure curiosity. I am curious to see how the Fourier methods meshed in, and what kind of bounds they gave, even though of course we cannot really expect anything too fantastic in this problem.

1

# Number of integers coprime to l

A long time ago I saw a paper considering, given $\ell$ fixed, estimates for $$\sum_{n \leq x, (n, \ell) = 1} 1$$ Of course, this is easy to estimate with a trivial error term of $O(\varphi(q))$. However, in the paper I am looking for, the authors attempted to obtain better bounds, using some Fourier analysis (in particular the Fourier series for the fractional part of $x$). I think bounds on the sum $$\sum_{n \leq x} (n, \ell)$$ are essentially an equivalent variation of the problem, so references on this problem are welcome as well. The reason why I am interested in this problem is ... pure curiosity. I am curious to see how the Fourier methods meshed in, and what kind of bounds they gave, even though of course we cannot really expect anything too fantastic in this problem.
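For what it's worth, the "trivial" estimate is easy to experiment with numerically. The sketch below is my own addition (not from the paper being sought): by inclusion–exclusion over the squarefree divisors of $\ell$, $\sum_{n \leq x,\,(n,\ell)=1} 1 = \sum_{d \mid \ell} \mu(d) \lfloor x/d \rfloor$, whose main term is $\frac{\varphi(\ell)}{\ell} x$, with error bounded by the number of squarefree divisors of $\ell$.

```python
# Count n <= x with gcd(n, l) = 1 via Moebius inclusion-exclusion and compare
# with the main term (phi(l)/l) * x.
from sympy import divisors
from sympy.ntheory import mobius, totient

def coprime_count(x, l):
    return sum(mobius(d) * (x // d) for d in divisors(l))

x, l = 10**6, 210                        # l = 2 * 3 * 5 * 7
exact = coprime_count(x, l)
main = totient(l) * x / l
print(exact, float(main), exact - main)  # error well below phi(l) = 48
```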
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9257740378379822, "perplexity_flag": "head"}
http://mathoverflow.net/questions/103613/is-there-a-software-to-prove-or-deduce-symbolic-inequalities
## Is there a software to prove or deduce symbolic inequalities?

I have a bunch of inequalities, and I'm trying to see if another inequality can be deduced from the first bunch. For example, assuming that $a \leq b$ and $c \leq d$, we can deduce that $a + c \leq b + d$, but we cannot deduce that $a + d \leq b + c$. Is there some software that allows me to take some inequalities which are assumed to be true as input, and to check if another input inequality can be deduced or proved from these earlier inequalities? I am most familiar with MATLAB and it doesn't seem to do this. I realized that I could hack it and say for example $b = a + p$, $d = c + q$ with $p, q \geq 0$, then just writing $b + d - (a + c)$ and asking MATLAB to simplify the expression, and noticing that all the $p, q$ terms are nonnegative. But is there a more natural way to do this? - 1 There is the Coq software. See coq.inria.fr/a-short-introduction-to-coq – Damian Rössler Jul 31 at 14:29 Use the Reduce function in Mathematica... maybe that'll help. – S. Sra Jul 31 at 17:42 If you're talking about linear inequalities (as in your examples), this can be written as a feasibility problem of linear programming. The nonlinear case is much more difficult. Still, numerical optimization software may be helpful. – Robert Israel Jul 31 at 18:41 ## 2 Answers As @Robert Israel points out, the linear case is MUCH easier than the general case, and the really general case (arbitrary inequalities of the form $f(\mathbf{x}) \geq 0$) is clearly undecidable, but if the inequalities are polynomial, this is the problem of "quantifier elimination over real closed fields", which you can google. The first algorithm was "Tarski's algorithm"; there have been many improvements since, most of them used by computer algebra systems (Mathematica, Maple, etc). - 2 In particular, Maple 16 has new features for solving polynomial systems, including polynomial equations, inequations and inequalities. See, for instance, maplesoft.com/support/help/Maple/… – J W Jul 31 at 20:24 Thank you for your answer. My case is linear, so I will be solving a feasibility problem for linear programming to find if the inequality is implied. But this is definitely useful information for me to keep in mind. – unknown (google) Aug 2 at 17:15 According to Wolfram, Mathematica uses a large number of original algorithms to provide automatic systemwide support for inequalities and inequality constraints. Whereas equations can often be solved in terms of numbers, even representing solution sets for inequalities is only made possible by Mathematica's symbolic capabilities. -
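To illustrate Robert Israel's remark about the linear case, here is a small sketch (my own addition, not from the thread): the inequality $f \cdot x \leq 0$ is implied by a system $Ax \leq 0$ exactly when the maximum of $f \cdot x$ over the system is $\leq 0$, with an unbounded maximum meaning "not implied". It uses `scipy.optimize.linprog`, which minimizes, so the objective is negated.

```python
# Decide whether a linear inequality follows from others via LP feasibility.
import numpy as np
from scipy.optimize import linprog

# Variables x = (a, b, c, d); assumptions a <= b and c <= d, i.e. A x <= 0.
A = np.array([[1, -1, 0, 0],     # a - b <= 0
              [0, 0, 1, -1]])    # c - d <= 0
bounds = [(None, None)] * 4      # all variables free

def implied(f):
    """Is f . x <= 0 a consequence of A x <= 0?"""
    res = linprog(-np.array(f), A_ub=A, b_ub=np.zeros(2), bounds=bounds)
    return res.status == 0 and -res.fun <= 1e-9    # bounded maximum <= 0

print(implied([1, -1, 1, -1]))   # a + c <= b + d : True
print(implied([1, -1, -1, 1]))   # a + d <= b + c : False (max is unbounded)
```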
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945386528968811, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/1534-rational-algebraic-expressions.html
# Thread:

1. ## Rational algebraic expressions

A book states that ${\sqrt 2}$ is a rational algebraic expression, and that ${\sqrt 2gh}$ and $x+{\sqrt x}$ are not rational algebraic expressions. How can ${\sqrt 2}$ be a rational algebraic expression? Why isn't it irrational? Why are ${\sqrt 2gh}$ and $x+{\sqrt x}$ irrational?

2. Originally Posted by DenMac21 A book states that ${\sqrt 2}$ is a rational algebraic expression, and that ${\sqrt 2gh}$ and $x+{\sqrt x}$ are not rational algebraic expressions. How can ${\sqrt 2}$ be a rational algebraic expression? Why isn't it irrational? Why are ${\sqrt 2gh}$ and $x+{\sqrt x}$ irrational? A rational algebraic expression is an expression of the form $\frac{P(x)}{Q(x)}$, where $P(x)$ and $Q(x)$ are polynomials in $x$. Note this does not require that the coefficients in either polynomial be rational. With this definition ${\sqrt 2}$ is a rational algebraic expression (even though ${\sqrt 2}$ is not a rational number). Clearly with this definition $x+{\sqrt x}$ is not a rational algebraic expression. Assuming the remaining expression should be $\sqrt {2gh}$, and that $g$ and $h$ are polynomials, then this is not a rational algebraic expression. RonL
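As a side note (my own addition, not part of the thread), computer algebra systems implement exactly this definition; for instance SymPy's `is_rational_function` tests whether an expression is a ratio of polynomials in the given symbol:

```python
# sqrt(2) is constant in x, hence a ratio of polynomials (with Q = 1);
# sqrt(x) is not polynomial in x, so x + sqrt(x) fails the test.
import sympy as sp

x = sp.symbols('x')
print(sp.sqrt(2).is_rational_function(x))        # True
print((x + sp.sqrt(x)).is_rational_function(x))  # False
```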
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327417016029358, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/1852-constants-composition-functions.html
# Thread:

1. ## Constants for Composition of functions

Let f(x) = ax+b and g(x) = cx+d where a, b, c, d are constants. Determine for which constants a, b, c, d it is true that f o g = g o f. I am not very sure how to approach this problem. I simplified f o g and g o f, but after that I am not sure how to go from there. Here is what I did first:

f(g(x)) = g(f(x))
a(cx+d) + b = c(ax+b) + d
acx + ad + b = cax + cb + d

then after that I got stuck. I tried switching around the variables on either side and tried to get them to equal each other; however, that didn't work. Am I going in the right direction or am I totally off? Thank you for your help!!!

Cancelling acx from both sides:
ad + b = cb + d
ad - d = cb - b
d(a-1) = b(c-1)

that is where I got stuck.

2. Originally Posted by hotmail590 Let f(x) = ax+b and g(x) = cx+d where a, b, c, d are constants. Determine for which constants a, b, c, d it is true that f o g = g o f. [...] You are almost there, but may be expecting something cleverer than is actually the case. First $f \circ g = g \circ f$ gives, as you observe: $acx+ad+b = cax + cb +d$, which rearranges to: $(ac-ca)x + ad+b-cb-d=0$, which holds when: $ad+b-cb-d=0$. Which is all there is to it, other than rearranging this last equation to as neat a form as possible. I like: $\frac {a-1}{b}=\frac{c-1}{d}$ RonL

3. Would it be safe to conclude that the constants a, b, c, d may be any real numbers, provided that a equals c and b equals d?

4. Originally Posted by hotmail590 Would it be safe to conclude that the constants a, b, c, d may be any real numbers, provided that a equals c and b equals d? Yes. When a=c and b=d, the equation becomes an identity.

f(x) = ax + b
g(x) = cx + d

For f o g = g o f,
a(cx + d) + b = c(ax + b) + d
acx + ad + b = acx + bc + d

The acx cancels out,
ad + b = bc + d
ad - d = bc - b
d(a-1) = b(c-1) ---------***

So if a=c and b=d, b(c-1) = b(c-1). Or, d(a-1) = d(a-1).

5. Originally Posted by hotmail590 Would it be safe to conclude that the constants a, b, c, d may be any real numbers, provided that a equals c and b equals d? It is true that $a, b, c,$ and $d$ may be any real numbers which satisfy: $\frac {a-1}{b}=\frac{c-1}{d}$ So if $a=2$ and $c=4$ then any $b$ and $d$ will do as long as $d=3b$. RonL
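A quick symbolic check of the thread's conclusion (my own addition, not part of the thread): expanding $f(g(x)) - g(f(x))$ kills the $x$-terms and leaves $ad + b - cb - d$, which vanishes exactly when $d(a-1) = b(c-1)$.

```python
# Verify that f(g(x)) - g(f(x)) reduces to ad + b - cb - d = d(a-1) - b(c-1).
import sympy as sp

a, b, c, d, x = sp.symbols('a b c d x')
f = lambda t: a*t + b
g = lambda t: c*t + d

diff = sp.expand(f(g(x)) - g(f(x)))
print(diff)                                          # a*d + b - b*c - d
print(sp.simplify(diff - (d*(a - 1) - b*(c - 1))))   # 0
```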
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940890908241272, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/37593/easy-special-cases-of-the-decomposition-theorem
## Easy special cases of the decomposition theorem?

The decomposition theorem states, roughly, that the pushforward of an IC complex along a proper map decomposes into a direct sum of shifted IC complexes. Are there special cases of the decomposition theorem with "easy" proofs? Are there heuristics for why the decomposition theorem should hold? - ## 1 Answer Well, it depends on what you mean by "easy". A special case, which I find very instructive, is a theorem of Deligne from the late 1960's. Theorem. $\mathbb{R} f_*\mathbb{Q}\cong \bigoplus_i R^if_*\mathbb{Q}[-i]$, when $f:X\to Y$ is a smooth projective morphism of varieties over $\mathbb{C}$. (This holds more generally with $\mathbb{Q}_\ell$-coefficients.) Corollary. The Leray spectral sequence degenerates. The result was deduced from the hard Lefschetz theorem. An outline of a proof (of the corollary) can be found in Griffiths and Harris. It is tricky but essentially elementary. A much less elementary, but more conceptual, argument uses weights. Say $Y$ is smooth and projective; then $E_2^{pq}=H^p(Y, R^qf_*\mathbb{Q})$ should be pure of weight $p+q$ (in the sense of Hodge theory or $\ell$-adic cohomology). Since $$d_2: E_2^{pq}\to E_2^{p+2,q-1}$$ maps a structure of one weight to another, it must vanish. Similarly for higher differentials. If $f$ is proper but not smooth, the decomposition theorem shows that $\mathbb{R} f_*\mathbb{Q}$ decomposes into a sum of translates of intersection cohomology complexes. This follows from more sophisticated purity arguments (either in the $\ell$-adic setting as in BBD, or the Hodge-theoretic setting in Saito's work). There is also a newer proof due to de Cataldo and Migliorini which seems a bit more geometric. I have been working through some of this stuff slowly. So I may have more to say in a few months' time. Rather than updating this post, it may be more efficient for the people interested to check here periodically. - Thanks for the answer, I will accept it. Although if you can add something more in a few months do it :) – Jan Weidner Sep 7 2010 at 6:26 Should one of the $p$'s in $E_2^{pq}=H^p(Y,R^p f_*\mathbb{Q})$ be a $q$, above? – Richard Montgomery Sep 10 2010 at 3:16 Yes, thanks. Fixed. – Donu Arapura Sep 10 2010 at 11:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237728714942932, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/71534-solved-second-fundamental-form.html
# Thread:

1. ## [SOLVED] Second Fundamental Form

Hi there. I'm trying to understand the Second Fundamental Form but I'm stuck on something. Let

$\textbf{X}=(x(u^1,u^2),y(u^1,u^2),z(u^1,u^2))$ be a parameterised form of a surface, and

$\textbf{X}_i=\dfrac{\partial \textbf{X}}{\partial u^i}$

$\textbf{X}_{i j}=\dfrac{\partial^2 \textbf{X}}{\partial u^i \partial u^j}$

It's written $\textbf{X}_{i j}=\Gamma_{ij}^{r}\textbf{X}_r +L_{ij}\textbf{U}$ where $\textbf{U}$ is the normal vector to the surface. I don't understand what $L$ and $\Gamma$ are.

2. Originally Posted by fobos3
[quoted above]
This is just the decomposition of $\textbf{X}_{ij}$ in a basis. Indeed, $(\textbf{X}_1,\textbf{X}_2)$ is a basis of the tangent plane, while $\textbf{U}$ is orthogonal to that plane, so that $(\textbf{X}_1,\textbf{X}_2,\textbf{U})$ is a basis of $\mathbb{R}^3$ (which depends on $(u^1,u^2)$). Then, for instance, $\Gamma_{ij}^1$ is the $\textbf{X}_1$-component of $\textbf{X}_{i j}$.

3. Originally Posted by Laurent
[quoted above]
Never mind. It's explained 3 chapters ahead. It's the Christoffel symbol.

4. Originally Posted by fobos3
Never mind.
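For anyone landing here later: the coefficients can be extracted explicitly by taking dot products with the basis (standard surface theory, assuming $\textbf{U}$ is a unit normal; this step is not spelled out in the thread). Writing $g_{ij}=\textbf{X}_i\cdot\textbf{X}_j$ for the first fundamental form and $g^{rk}$ for its inverse matrix,

$$L_{ij}=\textbf{X}_{ij}\cdot\textbf{U},\qquad \Gamma^{r}_{ij}=g^{rk}\left(\textbf{X}_{ij}\cdot\textbf{X}_k\right),$$

since $\textbf{U}$ is orthogonal to each $\textbf{X}_k$. The $L_{ij}$ are the coefficients of the second fundamental form, and the $\Gamma^r_{ij}$ are the Christoffel symbols mentioned in post 3.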
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9348069429397583, "perplexity_flag": "head"}
http://cust-serv@ams.org/bookstore-getitem/item=coll-52-r
$$J$$-holomorphic Curves and Symplectic Topology: Second Edition

Dusa McDuff, Barnard College, Columbia University, New York, NY, and Dietmar Salamon, ETH, Zurich, Switzerland

Colloquium Publications 2012; 726 pp; hardcover
Volume: 52
ISBN-10: 0-8218-8746-7
ISBN-13: 978-0-8218-8746-2
List Price: US$109
Member Price: US$87.20
Order Code: COLL/52.R

See also: Frobenius Manifolds, Quantum Cohomology, and Moduli Spaces - Yuri I Manin

The theory of $$J$$-holomorphic curves has been of great importance since its introduction by Gromov in 1985. In mathematics, its applications include many key results in symplectic topology. It was also one of the main inspirations for the creation of Floer homology. In mathematical physics, it provides a natural context in which to define Gromov-Witten invariants and quantum cohomology, two important ingredients of the mirror symmetry conjecture.

The main goal of this book is to establish the fundamental theorems of the subject in full and rigorous detail. In particular, the book contains complete proofs of Gromov's compactness theorem for spheres, of the gluing theorem for spheres, and of the associativity of quantum multiplication in the semipositive case. The book can also serve as an introduction to current work in symplectic topology: there are two long chapters on applications, one concentrating on classical results in symplectic topology and the other concerned with quantum cohomology. The last chapter sketches some recent developments in Floer theory. The five appendices of the book provide necessary background related to the classical theory of linear elliptic operators, Fredholm theory, Sobolev spaces, as well as a discussion of the moduli space of genus zero stable curves and a proof of the positivity of intersections of $$J$$-holomorphic curves in four-dimensional manifolds.

The second edition clarifies various arguments, corrects several mistakes in the first edition, includes some additional results in Chapter 10 and Appendices C and D, and updates the references to recent developments.

Readership: Graduate students and research mathematicians interested in symplectic topology and geometry.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8613994121551514, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/10612/explain-how-or-if-a-box-full-of-photons-would-weigh-more-due-to-massless-photo?answertab=oldest
# Explain how (or if) a box full of photons would weigh more due to massless photons

I understand that mass-energy equivalence is often misinterpreted as saying that mass can be converted into energy and vice versa. The reality is that energy is always manifested as mass in some form, but I struggle with some cases:

Understood Nuclear Decay Example

In the case of a simple nuclear reaction, for instance, the total system mass remains the same since the mass deficit (in rest masses) is accounted for in the greater relativistic masses of the products per $E=\Delta m c^2$. When a neutron decays, you are left with a fast proton and a relativistic electron. If you could weigh those two without slowing them down, you would find they weighed as much as the original neutron.

Light in a Box

This becomes more difficult for me when moving to massless particles like photons. Photons can transmit energy from one heavy particle to another. When a photon is absorbed, the relativistic mass (not the rest mass) of the (previously stationary) particle that absorbs it increases. But if my understanding is correct, the energy must still be manifested as mass somehow while the photon is in flight, in spite of the fact that the photon does not have mass.

So let's consider a box with the interior entirely lined with perfect mirrors. I have the tare weight of the box with no photons in it. When photons are present the box has an additional quantifiable amount of energy (quantified below) due to the in-flight photons. Say there are $N$ photons... obviously assume $N$ is large.

$$\Delta m = \frac{ E }{ c^2 } = \frac{ N h }{ \lambda c}$$

Interactions are limited to reflections off the walls, which manifest as a constant pressure on the walls. If I hold this box in a constant gravitational field (like the surface of Earth) then there will be a gradient in the pressure that pushes down slightly. Is this correct? Wouldn't there still technically be mass while the photons are in flight, which would cause its own gravitational field just as all matter does? How is this all consistent with the assertion that photons are massless? Is it really correct to say that photons don't have mass? It seems to be a big stretch.

Please offer a more complete and physically accurate account of this mirror-box.

-

## 3 Answers

The statement that photons are massless means that photons do not have rest mass. In particular, this means that, in units where $c=1$, the magnitude of the photon 3-momentum must be equal to the total energy of the photons, rather than the standard relationship where $m^{2} = E^{2}-p^{2}$.

But, you can create multi-photon systems where the net momentum is zero, since momentum adds as a vector. When you do this, however, since the energy of a non-bound state is always non-negative, the energies just add. So, this system looks just like the rest frame of a massive particle, which has energy associated with its mass and nothing else.

The statement about gravity is a little bit more subtle, but all photon states will interact with the gravitational field, thanks to the positive results of the light-bending observations that have been made over the past century. So you don't even need a construction like this to get photons "falling" in a gravitational field.

- Would it then be correct to say that the relativistic mass, not the rest mass, of a photon is $m=\frac{h}{\lambda c}$? Then all statements I would make about the mirror box could be done using this.
I think that $(m_0 c^2)^2 = E^2 - (p c)^2$ (this might be what you had in mind) still holds, however, since $m_0=0$. – AlanSE May 31 '11 at 14:46

1 @Zassoundtsukushi: You can do that, but it's somewhat conceptually complicated to use the term "relativistic mass" eventually--the "relativistic mass" is really just the energy, and has properties closer to an energy. – Jerry Schirmer May 31 '11 at 15:00

Yes, to the extent that your unit system allows it (like $\frac{MeV}{c^2}$ for mass) I agree that the energy of the photons can be looked at as "just the energy". My question is sufficiently answered by noting that this photon relativistic mass or energy (which one you use is semantics) exhibits all of the properties expected of that quantity of mass. This means that it is affected by gravitational fields and warps space-time itself. This is more than I previously felt comfortable claiming, but the conceptual picture here seems to be consistent. – AlanSE May 31 '11 at 15:45

@Zassoundsukushi: the reason why it won't work out is technical--radiation gravitates, but not in the same way as matter, and rest mass and relativistic energies don't add together in the same way when you're creating systems out of many particles. At a first glance, you're ok using them semi-interchangeably, but realize that it's complication-prone if you plan on going deeper into this stuff. – Jerry Schirmer May 31 '11 at 16:28

@Jerry "not in the same way as matter" -> how so? Gravitation couples to the stress-energy tensor, which doesn't really differ all that much between massless and massive systems... – Marek May 31 '11 at 19:04

show 3 more comments

I think that there is some confusion in your understanding of relativistic physics in the statement here:

In the case of a simple nuclear reaction, for instance, the total system mass remains the same since the mass deficit (in rest masses) is accounted for in the greater relativistic masses of the products per $E=\Delta m c^2$. When a neutron decays, you are left with a fast proton and a relativistic electron. If you could weigh those two without slowing them down, you would find they weighed as much as the original neutron.

The correct statement is that the summed four-vector of all the decay products would have the effective mass of a neutron. Masses are not conserved in relativistic physics, in an analogous way that lengths are not conserved when adding vectors in three dimensions. What is conserved is energy and momentum, a four-vector whose measure is $mc$, where m is the effective mass of the system, similar to the length of a three-vector after the addition of three-vectors.

When a pi0 goes into two photons, it is true that the available energy for each gamma in the centre of mass system of the pi0 is half the mass of the pi0, and the measure of the invariant mass of those two gammas will be the mass of the pi0. When more particles are involved, let's say two pi0's, then the invariant mass of the four gammas' four-momenta is not additive to two pi0 masses. It is better to forget about convoluted arguments with masses and think of four-momenta when in the relativistic regime.

Now a box of photons will have a four-momentum sum whose measure satisfies $E^2/c^2 - |p|^2 = m^2 c^2$ (please see the wiki link for clear terminology). If the three-vector momentum sums up to zero, the effective mass of the photons in the box will be $E/c^2$. Small but there to be weighed.
- There is no confusion, these statements are unambiguously 100% correct, and are the reason physicists used relativistic mass as a concept. – Ron Maimon Jul 4 '12 at 5:50

There are no confusions in your understanding; everything you said is correct, and it is the nontrivial content of Einstein's E=mc^2 paper. These systems are the reason that "relativistic mass", as introduced by Tolman, is pedagogically useful. The concept that we call "mass" in our day-to-day life is the energy of a system (in mass units), and when you only use the word mass to mean "rest mass", the intuitive concept is changed somewhat.

For the atomic fission, the fast moving fragments have energy which is equal to the initial bomb energy. If you weigh them without slowing them down (for example, if they are charged and you capture them by making them do circles in a magnetic field), the weight you would find on a scale once they are captured would increase by the relativistic mass (the energy over c^2).

The photons in a spherical mirror box weigh the box down by exactly the relativistic mass of the photons inside. The pull of the Earth on these photons is on the relativistic mass. If you replaced the photons with a particle gas at the same pressure, and removed a little mass from the walls to make the total mass come out the same, the gravitational field outside will be the same; this isn't true if you replace the photons with a pressureless block with a weight equal to their relativistic mass, only because the pressure contributes to the gravitational field too.

-
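To put a number on the question's $\Delta m = Nh/(\lambda c)$, here is a minimal back-of-the-envelope script (the wavelength and photon count below are arbitrary illustrative choices, not from the thread):

```python
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength = 500e-9  # 500 nm green light (illustrative assumption)
N = 1e20             # number of photons in the mirror box (illustrative assumption)

E = N * h * c / wavelength  # total photon energy in joules
dm = E / c**2               # equivalent mass in kg, same as N*h/(wavelength*c)

print(f"total energy  : {E:.3e} J")    # ~3.97e+01 J
print(f"mass increase : {dm:.3e} kg")  # ~4.42e-16 kg
```

Even $10^{20}$ visible-light photons add less than a picogram, which is why the effect is real but far below what an ordinary scale can register.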
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534589648246765, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/211892-rationalize-fraction-fractional-exponent.html
# Thread:

1. ## Rationalize a fraction with fractional exponent

My question is: how are they getting from step 2 to step 3? Where is the $3^{4/5}$ coming from?! I'm not sure I understand how to rationalize the denominator of $(1/3)^{1/5}$.

Thanks for the help.

2. ## Re: Rationalize a fraction with fractional exponent

You have $\left( \frac{1}{3} \right)^\frac{1}{5} = \frac{1^\frac{1}{5}}{3^\frac{1}{5}} = \frac{1}{3^\frac{1}{5}}$ and you want to get the fractional exponent out of the denominator. You can multiply top and bottom by whatever you like (except 0). Of course, you choose $3^{\frac{4}{5}}$ so that the denominator will be just 3. That's where $3^{\frac{4}{5}}$ comes from. And so you get

$\frac{1}{3^\frac{1}{5}} = \frac{3^{\frac{4}{5}}}{3^\frac{1}{5}3^{\frac{4}{5}}} = \frac{3^\frac{4}{5}}{3} = \frac{1}{3} \, 3^\frac{4}{5}$.

- Hollywood
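A quick numerical sanity check of the identity above (plain floating point, so expect agreement only up to rounding error):

```python
lhs = (1/3) ** (1/5)           # the original expression
rhs = 3 ** (4/5) / 3           # the rationalized form, (1/3) * 3^(4/5)

print(lhs, rhs)                # both ~0.80274156...
print(abs(lhs - rhs) < 1e-12)  # True
```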
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8658012747764587, "perplexity_flag": "middle"}
http://pediaview.com/openpedia/RLC_circuit
RLC circuit

A series RLC circuit: a resistor, inductor, and a capacitor

An RLC circuit (or LCR circuit or CRL circuit or RCL circuit) is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance and capacitance respectively. The circuit forms a harmonic oscillator for current and will resonate in a similar way as an LC circuit will. The main difference that the presence of the resistor makes is that any oscillation induced in the circuit will die away over time if it is not kept going by a source. This effect of the resistor is called damping. The presence of the resistance also reduces the peak resonant frequency somewhat. Some resistance is unavoidable in real circuits, even if a resistor is not specifically included as a component. A pure LC circuit is an ideal which really only exists in theory.

There are many applications for this circuit. They are used in many different types of oscillator circuits. Another important application is for tuning, such as in radio receivers or television sets, where they are used to select a narrow range of frequencies from the ambient radio waves. In this role the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis.

The three circuit elements can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some with practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance. Inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit.

Basic concepts

Resonance

An important property of this circuit is its ability to resonate at a specific frequency, the resonance frequency, $f_0$. Frequencies are measured in units of hertz. In this article, however, angular frequency, $\omega_0$, is used, which is more mathematically convenient. This is measured in radians per second. They are related to each other by a simple proportion,

$\omega_0 = 2 \pi f_0$

Resonance occurs because energy is stored in two different ways: in an electric field as the capacitor is charged and in a magnetic field as current flows through the inductor. Energy can be transferred from one to the other within the circuit and this can be oscillatory. A mechanical analogy is a weight suspended on a spring which will oscillate up and down when released.
This is no passing metaphor; a weight on a spring is described by exactly the same second order differential equation as an RLC circuit, and for all the properties of the one system there will be found an analogous property of the other. The mechanical property answering to the resistor in the circuit is friction in the spring/weight system. Friction will slowly bring any oscillation to a halt if there is no external force driving it. Likewise, the resistance in an RLC circuit will "damp" the oscillation, diminishing it with time if there is no driving AC power source in the circuit.

The resonance frequency is defined as the frequency at which the impedance of the circuit is at a minimum. Equivalently, it can be defined as the frequency at which the impedance is purely real (that is, purely resistive). This occurs because the impedance of the inductor and capacitor at resonance are equal but of opposite sign and cancel out. Circuits where L and C are in parallel rather than series actually have a maximum impedance rather than a minimum impedance. For this reason they are often described as antiresonators; it is still usual, however, to name the frequency at which this occurs as the resonance frequency.

Natural frequency

The resonance frequency is defined in terms of the impedance presented to a driving source. It is still possible for the circuit to carry on oscillating (for a time) after the driving source has been removed or it is subjected to a step in voltage (including a step down to zero). This is similar to the way that a tuning fork will carry on ringing after it has been struck, and the effect is often called ringing. This ringing occurs at the circuit's peak natural resonance frequency, which in general is not exactly the same as the driven resonance frequency, although the two will usually be quite close to each other. Various terms are used by different authors to distinguish the two, but resonance frequency unqualified usually means the driven resonance frequency. The driven frequency may be called the undamped resonance frequency or undamped natural frequency, and the peak frequency may be called the damped resonance frequency or the damped natural frequency. The reason for this terminology is that the driven resonance frequency in a series or parallel resonant circuit has the value[1]

$\omega_0 = \frac {1}{\sqrt {LC}}$

This is exactly the same as the resonance frequency of an LC circuit, that is, one with no resistor present; it is the same as a circuit in which there is no damping, hence undamped resonance frequency. The peak resonance frequency, on the other hand, depends on the value of the resistor and is described as the damped resonance frequency. A highly damped circuit will fail to resonate at all when not driven. A circuit with a value of resistor that causes it to be just on the edge of ringing is called critically damped. Either side of critically damped are described as underdamped (ringing happens) and overdamped (ringing is suppressed).

Circuits with topologies more complex than straightforward series or parallel (some examples described later in the article) have a driven resonance frequency that deviates from $\omega_0 = \frac {1}{\sqrt {LC}}$, and for those the undamped resonance frequency, damped resonance frequency and driven resonance frequency can all be different.

Damping

Damping is caused by the resistance in the circuit. It determines whether or not the circuit will resonate naturally (that is, without a driving source).
Circuits which will resonate in this way are described as underdamped and those that will not are overdamped. Damping attenuation (symbol α) is measured in nepers per second. However, the unitless damping factor (symbol ζ, zeta) is often a more useful measure, which is related to α by

$\zeta = \frac {\alpha}{\omega_0}$

The special case of ζ = 1 is called critical damping and represents the case of a circuit that is just on the border of oscillation. It is the minimum damping that can be applied without causing oscillation.

Bandwidth

The resonance effect can be used for filtering; the rapid change in impedance near resonance can be used to pass or block signals close to the resonance frequency. Both band-pass and band-stop filters can be constructed, and some filter circuits are shown later in the article. A key parameter in filter design is bandwidth. The bandwidth is measured between the 3 dB points, that is, the frequencies at which the power passed through the circuit has fallen to half the value passed at resonance. There are two of these half-power frequencies, one above, and one below the resonance frequency

$\Delta \omega = \omega_2 - \omega_1$

where $\Delta \omega$ is the bandwidth, $\omega_1$ is the lower half-power frequency and $\omega_2$ is the upper half-power frequency. The bandwidth is related to attenuation by

$\Delta \omega = 2 \alpha$

when the units are radians per second and nepers per second respectively. Other units may require a conversion factor. A more general measure of bandwidth is the fractional bandwidth, which expresses the bandwidth as a fraction of the resonance frequency and is given by

$F_\mathrm b = \frac {\Delta \omega}{\omega_0}$

The fractional bandwidth is also often stated as a percentage. The damping of filter circuits is adjusted to result in the required bandwidth. A narrow band filter, such as a notch filter, requires low damping. A wide band filter requires high damping.

Q factor

The Q factor is a widespread measure used to characterise resonators. It is defined as the peak energy stored in the circuit divided by the average energy dissipated in it per cycle at resonance. Low Q circuits are therefore damped and lossy and high Q circuits are underdamped. Q is related to bandwidth; low Q circuits are wide band and high Q circuits are narrow band. In fact, it happens that Q is the inverse of fractional bandwidth

$Q = {1 \over F_\mathrm b} = \frac {\omega_0}{\Delta \omega}$

Q factor is directly proportional to selectivity, as Q factor depends inversely on bandwidth.

Scaled parameters

The parameters ζ, Fb, and Q are all scaled to ω0. This means that circuits which have similar parameters share similar characteristics regardless of whether or not they are operating in the same frequency band.

The article next gives the analysis for the series RLC circuit in detail. Other configurations are not described in such detail, but the key differences from the series case are given. The general form of the differential equations given in the series circuit section is applicable to all second order circuits and can be used to describe the voltage or current in any element of each circuit.

Series RLC circuit

Figure 1: RLC series circuit. V is the voltage of the power source, I the current in the circuit, R the resistance of the resistor, L the inductance of the inductor, and C the capacitance of the capacitor.

In this circuit, the three components are all in series with the voltage source.
The governing differential equation can be found by substituting into Kirchhoff's voltage law (KVL) the constitutive equation for each of the three elements. From KVL,

$v_R+v_L+v_C=v(t)$

where $v_R, v_L, v_C$ are the voltages across R, L and C respectively and $v(t)$ is the time varying voltage from the source. Substituting in the constitutive equations,

$Ri(t) + L { {di} \over {dt}} + {1 \over C} \int_{-\infty}^{t} i(\tau)\, d\tau = v(t)$

For the case where the source is an unchanging voltage, differentiating and dividing by L leads to the second order differential equation:

${{d^2 i(t)} \over {dt^2}} +{R \over L} {{di(t)} \over {dt}} + {1 \over {LC}} i(t) = 0$

This can usefully be expressed in a more generally applicable form:

${{d^2 i(t)} \over {dt^2}} + 2 \alpha {{di(t)} \over {dt}} + {\omega_0}^2 i(t) = 0$

$\alpha$ and $\omega_0$ are both in units of angular frequency. $\alpha$ is called the neper frequency, or attenuation, and is a measure of how fast the transient response of the circuit will die away after the stimulus has been removed. Neper occurs in the name because the units can also be considered to be nepers per second, neper being a unit of attenuation. $\omega_0$ is the angular resonance frequency.[2]

For the case of the series RLC circuit these two parameters are given by:[3]

$\alpha = {R \over 2L}$ and $\omega_0 = { 1 \over \sqrt{LC}}$

A useful parameter is the damping factor, $\zeta$, which is defined as the ratio of these two,

$\zeta = \frac {\alpha}{\omega_0}$

In the case of the series RLC circuit, the damping factor is given by

$\zeta = {R \over 2} \sqrt{C\over L}$

The value of the damping factor determines the type of transient that the circuit will exhibit.[4] Some authors do not use $\zeta$ and call $\alpha$ the damping factor.[5]

Transient response

Plot showing underdamped and overdamped responses of a series RLC circuit; the critical damping plot is the bold red curve. The plots are normalised for L = 1, C = 1 and $\omega_0 = 1$.

The differential equation for the circuit solves in three different ways depending on the value of $\zeta$. These are underdamped ($\zeta < 1$), overdamped ($\zeta > 1$) and critically damped ($\zeta = 1$). The differential equation has the characteristic equation,[6]

$s^2 + 2 \alpha s + {\omega_0}^2 = 0$

The roots of the equation in s are,[6]

$s_1 = -\alpha +\sqrt {\alpha^2 - {\omega_0}^2}$

$s_2 = -\alpha -\sqrt {\alpha^2 - {\omega_0}^2}$

The general solution of the differential equation is an exponential in either root or a linear superposition of both,

$i(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t}$

The coefficients A1 and A2 are determined by the boundary conditions of the specific problem being analysed.
That is, they are set by the values of the currents and voltages in the circuit at the onset of the transient and the presumed value they will settle to after infinite time.[7] (A short numerical sketch of the three damping regimes is given at the end of this article.)

Overdamped response

The overdamped response ($\zeta > 1$) is,[8]

$i(t) = A_1 e^{-\omega_0 \left ( \zeta + \sqrt {\zeta^2 - 1} \right ) t} + A_2 e^{-\omega_0 \left ( \zeta - \sqrt {\zeta^2 - 1} \right ) t}$

The overdamped response is a decay of the transient current without oscillation.[9]

Underdamped response

The underdamped response ($\zeta < 1$) is,[10]

$i(t) = B_1 e^{-\alpha t} \cos (\omega_d t) + B_2 e^{-\alpha t} \sin (\omega_d t)$

By applying standard trigonometric identities the two trigonometric functions may be expressed as a single sinusoid with phase shift,[11]

$i(t) = B_3 e^{-\alpha t} \sin (\omega_d t + \varphi)$

The underdamped response is a decaying oscillation at frequency $\omega_d$. The oscillation decays at a rate determined by the attenuation $\alpha$. The exponential in $\alpha$ describes the envelope of the oscillation. B1 and B2 (or B3 and the phase shift $\varphi$ in the second form) are arbitrary constants determined by boundary conditions. The frequency $\omega_d$ is given by,[10]

$\omega_d = \sqrt { {\omega_0}^2 - \alpha^2 } = \omega_0 \sqrt {1 - \zeta^2}$

This is called the damped resonance frequency or the damped natural frequency. It is the frequency the circuit will naturally oscillate at if not driven by an external source. The resonance frequency, $\omega_0$, which is the frequency at which the circuit will resonate when driven by an external oscillation, may often be referred to as the undamped resonance frequency to distinguish it.[12]

Critically damped response

The critically damped response ($\zeta = 1$) is,[13]

$i(t) = D_1 t e^{-\alpha t} + D_2 e^{-\alpha t}$

The critically damped response represents the circuit response that decays in the fastest possible time without going into oscillation. This consideration is important in control systems where it is required to reach the desired state as quickly as possible without overshooting. D1 and D2 are arbitrary constants determined by boundary conditions.[14]

Laplace domain

The series RLC can be analyzed for both transient and steady AC state behavior using the Laplace transform.[15] If the voltage source above produces a waveform with Laplace-transformed V(s) (where s is the complex frequency $s = \sigma + i \omega$), KVL can be applied in the Laplace domain:

$V(s) = I(s) \left ( R + Ls + \frac{1}{Cs} \right )$

where I(s) is the Laplace-transformed current through all components. Solving for I(s):

$I(s) = \frac{1}{ R + Ls + \frac{1}{Cs} } V(s)$

And rearranging, we have that

$I(s) = \frac{s}{ L \left ( s^2 + {R \over L}s + \frac{1}{LC} \right ) } V(s)$

Laplace admittance

Solving for the Laplace admittance Y(s):

$Y(s) = { I(s) \over V(s) } = \frac{s}{ L \left ( s^2 + {R \over L}s + \frac{1}{LC} \right ) }$

Simplifying using parameters α and ω0 defined in the previous section, we have

$Y(s) = { I(s) \over V(s) } = \frac{s}{ L \left ( s^2 + 2 \alpha s + {\omega_0}^2 \right ) }$

Poles and zeros

The zeros of Y(s) are those values of s such that $Y(s) = 0$:

$s = 0$ and $|s| \rightarrow \infty$

The poles of Y(s) are those values of s such that $Y(s) \rightarrow \infty$.
By the quadratic formula, we find

$s = - \alpha \pm \sqrt{\alpha^2 - {\omega_0}^2}$

The poles of Y(s) are identical to the roots $s_1$ and $s_2$ of the characteristic polynomial of the differential equation in the section above.

General solution

For an arbitrary E(t), the solution obtained by inverse transform of I(s) is:

$I(t) = \frac{1}{L}\int_{0}^{t} E(t-\tau) e^{-\alpha\tau} \left ( \cos \omega_d\tau - { \alpha \over \omega_d } \sin \omega_d\tau \right ) d\tau$ in the underdamped case ($\omega_0 > \alpha$)

$I(t) = \frac{1}{L}\int_{0}^{t} E(t-\tau) e^{-\alpha\tau} ( 1 - \alpha \tau ) d\tau$ in the critically damped case ($\omega_0 = \alpha$)

$I(t) = \frac{1}{L}\int_{0}^{t} E(t-\tau) e^{-\alpha\tau} \left ( \cosh \omega_r\tau - { \alpha \over \omega_r } \sinh \omega_r\tau \right ) d\tau$ in the overdamped case ($\omega_0 < \alpha$)

where $\omega_r = \sqrt { \alpha^2 - {\omega_0}^2 }$, and cosh and sinh are the usual hyperbolic functions.

Sinusoidal steady state

Sinusoidal steady state is represented by letting $s = i \omega$. Taking the magnitude of the above equation with this substitution:

$| Y(s=i \omega) | = \frac{1}{\sqrt{ R^2 + \left ( \omega L - \frac{1}{\omega C} \right )^2 }}$

and the current as a function of ω can be found from

$| I( i \omega ) | = | Y(i \omega) | | V(i \omega) |$

There is a peak value of $|I (i \omega)|$. The value of ω at this peak is, in this particular case, equal to the undamped natural resonance frequency:[16]

$\omega_0 = \frac{1}{\sqrt{L C}}$

Parallel RLC circuit

Figure 5: RLC parallel circuit. V is the voltage of the power source, I the current in the circuit, R the resistance of the resistor, L the inductance of the inductor, and C the capacitance of the capacitor.

The properties of the parallel RLC circuit can be obtained from the duality relationship of electrical circuits and considering that the parallel RLC is the dual impedance of a series RLC. Considering this, it becomes clear that the differential equations describing this circuit are identical to the general form of those describing a series RLC.

For the parallel circuit, the attenuation α is given by[17]

$\alpha = {1 \over 2RC }$

and the damping factor is consequently

$\zeta = {1 \over 2R}\sqrt{L\over C}$

This is the inverse of the expression for ζ in the series circuit. Likewise, the other scaled parameters, fractional bandwidth and Q, are also the inverse of each other. This means that a wide band, low Q circuit in one topology will become a narrow band, high Q circuit in the other topology when constructed from components with identical values. The Q and fractional bandwidth of the parallel circuit are given by

$Q = R \sqrt{C\over L}$ and $F_\mathrm b = {1 \over R}\sqrt{L\over C}$

Frequency domain

Figure 6: Sinusoidal steady-state analysis normalised to R = 1 ohm, C = 1 farad, L = 1 henry, and V = 1.0 volt.

The complex admittance of this circuit is given by adding up the admittances of the components:

${1\over Z}={1\over Z_L}+{1\over Z_C}+{1\over Z_R}={1\over{j\omega L}}+{j\omega C}+{1\over R}$

The change from a series arrangement to a parallel arrangement results in the circuit having a peak in impedance at resonance rather than a minimum, so the circuit is an antiresonator. The graph opposite shows that there is a minimum in the frequency response of the current at the resonance frequency $\omega_0={1\over\sqrt{LC}}$ when the circuit is driven by a constant voltage.
On the other hand, if driven by a constant current, there would be a maximum in the voltage which would follow the same curve as the current in the series circuit.

Other configurations

Fig. 7: RLC parallel circuit with resistance in series with the inductor.
Fig. 8: RLC series circuit with resistance in parallel with the capacitor.

A series resistor with the inductor in a parallel LC circuit, as shown in figure 7, is a topology commonly encountered where there is a need to take into account the resistance of the coil winding. Parallel LC circuits are frequently used for bandpass filtering and the Q is largely governed by this resistance. The resonant frequency of this circuit is,[18]

$\omega_0 = \sqrt {\frac{1}{LC} - \left ( \frac{R}{L} \right )^2}$

This is the resonant frequency of the circuit defined as the frequency at which the admittance has zero imaginary part. The frequency that appears in the generalised form of the characteristic equation (which is the same for this circuit as previously),

$s^2 + 2 \alpha s + {\omega'_0}^2 = 0$

is not the same frequency. In this case it is the natural undamped resonant frequency[19]

$\omega'_0 = \sqrt \frac {1}{LC}$

The frequency $\omega_m$ at which the impedance magnitude is maximum is given by,[20]

$\omega_m =\omega'_0\sqrt{\frac{-1}{Q^2_L}+\sqrt{1+\frac{2}{Q^2_L}}}$

where $Q_L=\frac{\omega'_0 L}{R}$ is the quality factor of the coil. This can be well approximated by,[20]

$\omega_m \approx \omega'_0 \sqrt{1-\frac{1}{2Q^4_L}}$

Furthermore, the exact maximum impedance magnitude is given by,[20]

$|Z|_{max}=RQ^2_L \sqrt{\frac{1}{2Q_L\sqrt{Q^2_L+2}-2Q^2_L-1}}$

For values of $Q_L$ greater than unity, this can be well approximated by,[20]

$|Z|_{max} \approx R\sqrt{Q^2_L (Q_L^2+1)}$

In the same vein, a resistor in parallel with the capacitor in a series LC circuit can be used to represent a capacitor with a lossy dielectric. This configuration is shown in figure 8. The resonant frequency (the frequency at which the admittance has zero imaginary part) in this case is given by,[21]

$\omega_0 = \sqrt {\frac{1}{LC}-\frac{1}{(RC)^2}}$

while the frequency $\omega_m$ at which the impedance magnitude is maximum is given by

$\omega_m =\omega'_0\sqrt{\frac{-1}{Q^2_C}+\sqrt{1+\frac{2}{Q^2_C}}}$

where $Q_C=\omega'_0 R C$.

History

The first evidence that a capacitor could produce electrical oscillations was discovered in 1826 by French scientist Felix Savary.[22][23] He found that when a Leyden jar was discharged through a wire wound around an iron needle, sometimes the needle was left magnetized in one direction and sometimes in the opposite direction. He correctly deduced that this was caused by a damped oscillating discharge current in the wire, which reversed the magnetization of the needle back and forth until it was too small to have an effect, leaving the needle magnetized in a random direction.
American physicist Joseph Henry repeated Savary's experiment in 1842 and came to the same conclusion, apparently independently.[24][25] British scientist William Thomson (Lord Kelvin) in 1853 showed mathematically that the discharge of a Leyden jar through an inductance should be oscillatory, and derived its resonant frequency.[24][25][22]

British radio researcher Oliver Lodge, by discharging a large battery of Leyden jars through a long wire, created a tuned circuit with its resonant frequency in the audio range, which produced a musical tone from the spark when it was discharged.[24] In 1857 German physicist Berend Wilhelm Feddersen photographed the spark produced by a resonant Leyden jar circuit in a rotating mirror, providing visible evidence of the oscillations.[24][25][22] In 1868 Scottish physicist James Clerk Maxwell calculated the effect of applying an alternating current to a circuit with inductance and capacitance, showing that the response is maximum at the resonant frequency.[22] The first example of an electrical resonance curve was published in 1887 by German physicist Heinrich Hertz in his pioneering paper on the discovery of radio waves, showing the length of spark obtainable from his spark-gap LC resonator detectors as a function of frequency.[22]

One of the first demonstrations of resonance between tuned circuits was Lodge's "syntonic jars" experiment around 1889.[24][22] He placed two resonant circuits next to each other, each consisting of a Leyden jar connected to an adjustable one-turn coil with a spark gap. When a high voltage from an induction coil was applied to one tuned circuit, creating sparks and thus oscillating currents, sparks were excited in the other tuned circuit only when the inductors were adjusted to resonance. Lodge and some English scientists preferred the term "syntony" for this effect, but the term "resonance" eventually stuck.[22]

The first practical use for RLC circuits was in the 1890s in spark-gap radio transmitters to allow the receiver to be tuned to the transmitter. The first patent for a radio system that allowed tuning was filed by Lodge in 1897, although the first practical systems were invented in 1900 by Anglo-Italian radio pioneer Guglielmo Marconi.[22] Marconi's tuning patents were subsequently overturned in favor of those of Nikola Tesla by the US Supreme Court in December 1943, shortly after Tesla's death.

Applications

Variable tuned circuits

A very frequent use of these circuits is in the tuning circuits of analogue radios. Adjustable tuning is commonly achieved with a parallel plate variable capacitor, which allows the value of C to be changed to tune to stations on different frequencies. For the IF stage in the radio where the tuning is preset in the factory, the more usual solution is an adjustable core in the inductor to adjust L. In this design the core (made of a high permeability material that has the effect of increasing inductance) is threaded so that it can be screwed further in, or screwed further out of, the inductor winding as required.

Filters

Fig. 9: RLC circuit as a low-pass filter.
Fig. 10: RLC circuit as a high-pass filter.
Fig. 11: RLC circuit as a series band-pass filter in series with the line.
Fig. 12: RLC circuit as a parallel band-pass filter in shunt across the line.
Fig. 13: RLC circuit as a series band-stop filter in shunt across the line.
Fig. 14:
RLC circuit as a parallel band-stop filter in series with the line.

In the filtering application, the resistor R becomes the load that the filter is working into. The value of the damping factor is chosen based on the desired bandwidth of the filter. For a wider bandwidth, a larger value of the damping factor is required (and vice versa). The three components give the designer three degrees of freedom. Two of these are required to set the bandwidth and resonant frequency. The designer is still left with one, which can be used to scale R, L and C to convenient practical values. Alternatively, R may be predetermined by the external circuitry, which will use the last degree of freedom.

Low-pass filter

An RLC circuit can be used as a low-pass filter. The circuit configuration is shown in figure 9. The corner frequency, that is, the frequency of the 3 dB point, is given by

$\omega_\mathrm c = \frac{1}{\sqrt {LC}}$

This is also the bandwidth of the filter. The damping factor is given by[26]

$\zeta = \frac {1}{2R_\mathrm L} \sqrt {\frac{L}{C}}$

High-pass filter

A high-pass filter is shown in figure 10. The corner frequency is the same as the low-pass filter

$\omega_\mathrm c = \frac{1}{\sqrt {LC}}$

The filter has a stop-band of this width.[27]

Band-pass filter

A band-pass filter can be formed with an RLC circuit by either placing a series LC circuit in series with the load resistor or else by placing a parallel LC circuit in parallel with the load resistor. These arrangements are shown in figures 11 and 12 respectively. The centre frequency is given by

$\omega_\mathrm c = \frac{1}{\sqrt {LC}}$

and the bandwidth for the series circuit is[28]

$\Delta \omega = \frac {R_\mathrm L}{L}$

The shunt version of the circuit is intended to be driven by a high impedance source, that is, a constant current source. Under those conditions the bandwidth is[28]

$\Delta \omega = \frac {1}{C R_\mathrm L}$

Band-stop filter

Figure 13 shows a band-stop filter formed by a series LC circuit in shunt across the load. Figure 14 is a band-stop filter formed by a parallel LC circuit in series with the load. The first case requires a high impedance source so that the current is diverted into the resonator when it becomes low impedance at resonance. The second case requires a low impedance source so that the voltage is dropped across the antiresonator when it becomes high impedance at resonance.[29]

Oscillators

For applications in oscillator circuits, it is generally desirable to make the attenuation (or equivalently, the damping factor) as small as possible. In practice, this objective requires making the circuit's resistance R as small as physically possible for a series circuit, or alternatively increasing R to as much as possible for a parallel circuit. In either case, the RLC circuit becomes a good approximation to an ideal LC circuit. However, for very low attenuation (high Q-factor) circuits, issues such as dielectric losses of coils and capacitors can become important.

In an oscillator circuit $\alpha \ll \omega_0$, or equivalently $\zeta \ll 1$. As a result $\omega_d \approx \omega_0$.

Voltage multiplier

In a series RLC circuit at resonance, the current is limited only by the resistance of the circuit

$I = \frac{V}{R}$

If R is small, consisting only of the inductor winding resistance say, then this current will be large. It will drop a voltage across the inductor of

$V_\mathrm L = \frac{V}{R} \omega_0 L$

An equal magnitude voltage will also be seen across the capacitor but in antiphase to the inductor.
If R can be made sufficiently small, these voltages can be several times the input voltage. The voltage ratio is, in fact, the Q of the circuit,

$\frac{V_\mathrm L}{V} = Q$

A similar effect is observed with currents in the parallel circuit. Even though the circuit appears as high impedance to the external source, there is a large current circulating in the internal loop of the parallel inductor and capacitor.

Pulse discharge circuit

An overdamped series RLC circuit can be used as a pulse discharge circuit. Often it is useful to know the values of components that could be used to produce a waveform that is described by the form:

$I(t) = I_0(e^{-\alpha t}-e^{-\beta t})$

Such a circuit could consist of an energy storage capacitor, a load in the form of a resistance, some circuit inductance and a switch, all in series. The initial conditions are that the capacitor is at voltage $V_0$ and there is no current flowing in the inductor. If the inductance $L$ is known, then the remaining parameters are given by the following.

Capacitance:

$C = \frac{1}{L \alpha \beta}$

Resistance (total of circuit and load):

$R = L(\alpha + \beta)$

Initial terminal voltage of capacitor:

$V_0 = -I_0 L \alpha \beta \left(\frac{1}{\beta}-\frac{1}{\alpha}\right)$

Rearranging for the case where R is known.

Capacitance:

$C = \frac{(\alpha + \beta)}{R \alpha \beta}$

Inductance (total of circuit and load):

$L = \frac{R}{(\alpha + \beta)}$

Initial terminal voltage of capacitor:

$V_0 = \frac{-I_0 R \alpha \beta}{(\alpha + \beta)} \left(\frac{1}{\beta}-\frac{1}{\alpha}\right)$

References

1. Kaiser, pp. 7.71-7.72.
2. Nilsson and Riedel, p. 308.
3. Agarwal and Lang, p. 641.
4. Irwin, pp. 217-220.
5. Agarwal and Lang, p. 646.
6. Agarwal and Lang, p. 656.
7. Nilsson and Riedel, pp. 287-288.
8. Irwin, p. 532.
9. Agarwal and Lang, p. 648.
10. Nilsson and Riedel, p. 295.
11. Humar, pp. 223-224.
12. Agarwal and Lang, p. 692.
13. Nilsson and Riedel, p. 303.
14. Irwin, p. 220.
15. This section is based on Example 4.2.13 from Lokenath Debnath, Dambaru Bhatta, Integral transforms and their applications, 2nd ed. Chapman & Hall/CRC, 2007, ISBN 1-58488-575-0, pp. 198-202 (some notations have been changed to fit the rest of this article).
16. Kumar and Kumar, Electric Circuits & Networks, p. 464.
17. Nilsson and Riedel, p. 286.
18. Kaiser, pp. 5.26-5.27.
19. Agarwal and Lang, p. 805.
20. Cartwright, K. V.; Joseph, E. and Kaminsky, E. J. (2010). "Finding the exact maximum impedance resonant frequency of a practical parallel resonant circuit without calculus". The Technology Interface International Journal 11 (1): 26-34.
21. Kaiser, pp. 5.25-5.26.
22. Blanchard, Julian (October 1941). "The History of Electrical Resonance". Bell System Technical Journal (USA: American Telephone & Telegraph Co.) 20 (4): 415-. Retrieved 2013-02-25.
23. Savary, Felix (1827). "Memoirs sur l'Aimentation". Annales de Chimie et de Physique (Paris: Masson) 34: 5-37.
24. Huurdeman, Anton A. (2003). The worldwide history of telecommunications. USA: Wiley-IEEE. pp. 199-200. ISBN 0-471-20505-2.
25. Kaiser, pp. 7.14-7.16.
26. Kaiser, p. 7.21.
27. Kaiser, pp. 7.21-7.27.
28. Kaiser, pp. 7.30-7.34.

Bibliography

• Anant Agarwal, Jeffrey H. Lang, Foundations of analog and digital electronic circuits, Morgan Kaufmann, 2005, ISBN 1-55860-735-8.
• J. L. Humar, Dynamics of structures, Taylor & Francis, 2002, ISBN 90-5809-245-3.
• J.
David Irwin, Basic engineering circuit analysis, Wiley, 2006, ISBN 7-302-13021-3.
• Kenneth L. Kaiser, Electromagnetic compatibility handbook, CRC Press, 2004, ISBN 0-8493-2087-9.
• James William Nilsson, Susan A. Riedel, Electric circuits, Prentice Hall, 2008, ISBN 0-13-198925-1.

Source: licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "RLC circuit", which is available in its original form here: http://en.wikipedia.org/w/index.php?title=RLC_circuit
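As a complement to the closed-form transient-response formulas above, here is the numerical sketch promised earlier: it integrates the series RLC step response in the three damping regimes (the component values are arbitrary illustrative choices, and the integrator is a simple semi-implicit Euler step, not a production solver):

```python
import numpy as np

def simulate_series_rlc(R, L, C, V=1.0, t_end=4e-3, n=40000):
    """Step response of L*q'' + R*q' + q/C = V via semi-implicit Euler.
    Returns the time axis and the current i = dq/dt."""
    dt = t_end / n
    q, i = 0.0, 0.0                      # initial charge and current
    t = np.linspace(0.0, t_end, n)
    current = np.empty(n)
    for k in range(n):
        di = (V - R * i - q / C) / L     # Kirchhoff's voltage law
        i += di * dt
        q += i * dt                      # advance charge with the updated current
        current[k] = i
    return t, current

L_, C_ = 1e-3, 1e-6                      # 1 mH and 1 uF (illustrative)
w0 = 1.0 / np.sqrt(L_ * C_)              # undamped resonance, rad/s
R_crit = 2.0 * np.sqrt(L_ / C_)          # zeta = 1 exactly at this resistance

for R_, label in [(0.1 * R_crit, "underdamped (zeta=0.1)"),
                  (R_crit, "critically damped"),
                  (10 * R_crit, "overdamped (zeta=10)")]:
    t, i = simulate_series_rlc(R_, L_, C_)
    print(f"{label:24s} R = {R_:8.2f} ohm, peak current = {i.max():.4f} A")
```

The underdamped run rings at roughly $\omega_d = \omega_0\sqrt{1-\zeta^2}$ before settling, while the critically damped and overdamped runs rise to a single peak and decay without oscillation, matching the closed-form solutions in the transient response section.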
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 121, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9213791489601135, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/electroweak+renormalization
# Tagged Questions

### CP-violation in weak and strong sectors

There is a possible CP-violating term in the strong sector of the standard model proportional to $\theta_\text{QCD}$. In the absence of this term, the strong interactions are CP-invariant. In the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.86570143699646, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/2483/minkowski-sum-of-small-connected-sets
## Minkowski sum of small connected sets

Suppose that the convex hull of the Minkowski sum of several compact connected sets in $\mathbb R^d$ contains the unit ball centered at the origin and the diameter of each set is less than $\delta$. If $\delta$ is very small (this smallness may depend on $d$ but on nothing else), does it follow that the sum itself contains the origin?

- WRONG TAGS --- it should be geometry or/and convex-geometry... – Anton Petrunin Nov 23 2009 at 1:40
This was asked long before such tags existed and before I would be able to create them. I'm not really sure what are the best tags for this now (I chose these two because the question arose in a purely analytic setting and it seemed topological in nature) but I do not mind in the slightest if somebody more comfortable with the vast forest of the current tags will retag it. I am currently completely lost in the aforementioned forest, so I'll take no action myself. – fedja Nov 23 2009 at 2:59

## 4 Answers

I finally figured it out. My solution is here. I would repost it on mathoverflow but until LaTeX is enabled, it is quite hard for me to communicate such things here...

-

Argh! I tried to add this as a comment but when submitting it, I was told that there was a 600 character limit and all my typing just disappeared without trace. Anyway, my point was that you can get "straight paths". The reason is that if $a=\sum_i x_i$ and $b=\sum_i y_i$ are in the sum, then the vectors $v_i=y_i-x_i-(b-a)/n$ are small and add up to $0$. There is a cute result that then you can rearrange them in such an order that all partial sums are small (just a constant times larger than the vectors themselves). If you switch from $x_i$ to $y_i$ along the corresponding compact set in this order, you get a curve that travels from $a$ to $b$ within a small neighborhood of the segment $[a,b]$.

Another observation that may be useful is that the sum is $d\delta$-dense in its convex hull. Indeed, if $a=\sum_i v_i$ and the $v_i$ are in the convex hull of $K_i$, then we can start moving the $v_i$ until their representations as convex combinations of points in $K_i$ get shorter, and we can do it as long as there are at least $d+1$ vectors $v_i$ that do not belong to $K_i$ themselves (any $d+1$ vectors in $\mathbb R^d$ are linearly dependent). Thus, in the representation of every point in the convex hull as a sum, we need only $d$ vectors from convex hulls and the rest may be taken in $K_i$.

I feel that these two observations put together should be enough and I just do not see how to add 2 and 2 here.

-

I don't have a proof yet, but here are some ideas that might lead to one. I'll stick to two dimensions. Suppose that your sets are $X_1, X_2, \dots, X_m$ and in each $X_i - X_i$ you take a vector $x_i$. Now let's suppose that $x_1+\dots+x_r=x$ and $x_{r+1}+\dots+x_m=y$, and that $x$ and $y$ are quite a lot bigger than the $x_i$ and point in very different directions. Then the Minkowski sum of the $X_i$ contains a path that goes from some $z$ to $z+x$ and from there to $z+x+y$, using bits out of different sets for the two parts of the path. Now we could get from $z$ to $z+x+y$ using the same bits in a different order. For example we could go first to $z+y$ and from there to $z+x+y$.
And I'm fairly sure that a topological argument will show that the region bounded by those four bits of path will all be in the Minkowski sum. So it would be enough to show that we can get paths that are reasonably straight. Why should the bits in between be in the Minkowski sum? It's enough to prove that in the case $r=1$ and $m=2$. (After that one just keeps swapping xs from the first half with xs from the second half -- I hope all this is making sense, but I haven't thought about it carefully enough to be sure it isn't stupid in some way.) If $X_1$ contains a path $P_1$ from $0$ to $x$ (WLOG) and $X_2$ contains a path $x+P_2$ from $x$ to $y$, and if the four paths $P_1$, $x+P_2$, $P_2$, $y+P_1$ do not cross and bound some region, then the sets $P_1+u$ with $u$ in $P_2$ trace out that region, I think. There are plenty of details to check there and even if they all work one still needs to prove that in the Minkowski sum you can get from any point to any other using a "reasonably straight path", whatever that means. So these are just my preliminary thoughts. A very nice problem, though! -

In dimension 1, why are the sets {-100}, {80}, {120} not a counterexample? (In particular, they are compact connected sets with 'diameter' 0; their Minkowski sum, going by the wikipedia definition, is {-20,20,200}, which does not contain 0, but whose convex hull [-20,200] contains a large ball around 0) This example of course has nothing to do with dimension, and you can easily fatten the points into tiny balls, if you like. I suspect that this is based on a misreading of the question. - 2 The Minkowski sum of those sets is the set {100}. The Wikipedia article discusses the Minkowski sum of two sets. But this is a commutative and associative operation, so we can discuss the Minkowski sum of any finite number of sets. – David Speyer Oct 27 2009 at 16:12 1 As David Speyer points out, the Minkowski sum is a singleton in this case. My only reason for making this comment is to say that I came up with exactly the same "counterexample" myself at one point, and even started writing an answer based on it. But then I realized my mistake. – gowers Oct 29 2009 at 11:53
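The arithmetic in these comments is easy to check by brute force: the Minkowski sum takes one summand from each set, so the three singletons sum to the single point {100}. A minimal Python sketch (the helper name `minkowski_sum` is my own, not from the thread):

```python
from itertools import product

def minkowski_sum(*sets):
    """Minkowski sum of finitely many finite subsets of R:
    the set of all sums taking one element from each set."""
    return {sum(combo) for combo in product(*sets)}

# The singletons from the comment sum to a single point, {100},
# not to {-20, 20, 200} (that would be a different grouping).
print(minkowski_sum({-100}, {80}, {120}))          # {100}

# With genuinely spread-out sets the sum fills in, e.g. two short
# "intervals" sampled at integer points:
print(sorted(minkowski_sum({0, 1, 2}, {10, 11})))  # [10, 11, 12, 13]
```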
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9600485563278198, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/203830-stuck-transforming-base-function-completing-table-values-graphing.html
# Thread:

1. ## Stuck on transforming a base function, completing a table of values, and graphing it

The question is as follows: Complete the table of values and plot the transformed points to obtain the graph of y = -2(-1/3(x+2))^2 - 4

| Y=f(x) | Y=f(-1/3x) | Y=-2f(-1/3x) | Y=-2f(-1/3(x+2))-4 |
|--------|------------|--------------|--------------------|
| (0,0)  |            |              |                    |
| (1,1)  |            |              |                    |
| (2,4)  |            |              |                    |
| (3,9)  |            |              |                    |

I am supposed to find out the transformed x and y values for the function f(x) for Y=f(-1/3x), etc., as per the table above. I just want to mention that I am NOT plugging the (0,0), (1,1) etc. values into the equations given, but that they correspond to the transformation of the basic equation y=f(x). I have no clue at all how to tackle this, and any help with the answer, as well as some insight into what is going on here, would be greatly appreciated. Thanks. Oh, also, the previous questions in this part of my homework were as follows (I am only jotting them down here because the above question is part c of a three-part question and I'm worried the other parts may be relevant).

a) State the base function that corresponds to the transformed function y = -2(-1/3(x+2))^2 - 4

a) f(x) = x^2

b) State the parameters and describe the corresponding transformations.

b) a = -2 | stretched vertically by a factor of 2 and reflected in the x axis
k = -1/3 | reflected in the y axis and horizontally stretched by a factor of 3
d = -2 | shifted 2 units left
c = -4 | shifted 4 units down

2. ## Re: Stuck on transforming a base function, completing a table of values, and graphing

Completing the table is very easy. What you need to do is just apply the transformations one at a time to the original points of $y=f(x)$. I am filling the table for you as you read this! Edit: Here you go, hope this helps you. What I did for row #2 was that I took the parent function points and multiplied their x values by -3 because, as you should have learned in class, (-1/3x) (horizontal stretch) is actually making the x values get bigger and reflecting them through the y-axis. What I did for row #3 was that I took the points from row #2 and multiplied their y values by -2, which doubles them (vertical stretch) and reflects them through the x-axis. What I did for row #4 was that I took the points from row #3 and subtracted 2 from their x values, because (x+2) means a shift of two to the left, and subtracted 4 from their y values, shifting them 4 units down. I hope this helps a bit!

3. ## Re: Stuck on transforming a base function, completing a table of values, and graphing

hey! Yeah, the last function is written correctly. I guess I'm stuck on how to actually apply the transformations to the original points. The -1/3 is negative one third, I'm not sure how to write it differently than that at this juncture.
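A quick way to verify the table-filling procedure described in reply #2 is to script the three steps. A minimal Python sketch, assuming the usual convention y = a*f(k*(x - d)) + c with a = -2, k = -1/3, d = -2, c = -4, so that a point (x, y) on y = f(x) maps to (x/k + d, a*y + c):

```python
# Point-by-point mapping from reply #2 (conventions assumed as above).
base = [(0, 0), (1, 1), (2, 4), (3, 9)]           # points on y = x^2

step1 = [(-3 * x, y) for (x, y) in base]          # y = f(-x/3): x -> -3x
step2 = [(x, -2 * y) for (x, y) in step1]         # y = -2 f(-x/3): y -> -2y
step3 = [(x - 2, y - 4) for (x, y) in step2]      # shift 2 left and 4 down

for rows in zip(base, step1, step2, step3):
    print(rows)
# e.g. (1, 1) -> (-3, 1) -> (-3, -2) -> (-5, -6)
```

As a spot check, the final point (-5, -6) does satisfy the target function: -2(-1/3(-5+2))^2 - 4 = -2(1)^2 - 4 = -6.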
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391741752624512, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=91808
Physics Forums

## Significance of isospin and hypercharge

This may sound like a dumb question but I want to figure out the 'point' of them. I know that isospin was an attempt to describe the proton and neutron as an isospin doublet and that hypercharge seems to me to be a nifty little relation between the electric charge and the isospin - but what is the point of them now that we know that we cannot describe the nucleon as an isospin doublet and that it is in fact constructed of quarks. Is it that: a) Before we knew about quarks it seemed like a good idea. or b) We can indeed build a model based on isospin. I am learning about GUTS at the moment and specifically $$SU(2)_L \times SU(1)_Y$$ What is the significance of the hypercharge Y subscript in the $$SU(1)_Y$$? It seems (to my admittedly ignorant mind) that hypercharge is nothing more than a nifty relation to charge so why does it get elevated to the subscript level $$SU(1)_Y$$ in GUTS???

Ok - I know now so I don't want to waste anyone's time!! :)

Quote by robousy Ok - I know now so I don't want to waste anyone's time!! :) well tell me what you have understood... I'll be glad to read your interpretation of it.....

Admin

## Significance of isospin and hypercharge

A collection of comments on hypercharge and isospin. Conservation of strangeness is not in fact an independent conservation law, but can be viewed as a combination of the conservation of charge, isospin, and baryon number. It is often expressed in terms of hypercharge Y, defined by: Y = S + B = 2(Q-I), where S = Strangeness B = Baryon number Q = charge I = isospin Isospin and either hypercharge or strangeness are the quantum numbers often used to draw particle diagrams for the hadrons. adapted from http://hyperphysics.phy-astr.gsu.edu.../quark.html#c4 However - it seems it can be more complicated - http://en.wikipedia.org/wiki/Hypercharge http://en.wikipedia.org/wiki/Isospin As for strangeness - A property of hadrons which is conserved in particle reactions caused by the strong force but which need not be conserved in a weak interaction. Eric Weisstein's Scienceworld at Wolfram.com - http://scienceworld.wolfram.com/phys...rangeness.html http://en.wikipedia.org/wiki/Strangeness - See the definition here, which then states The reason for this unintuitive definition is that the concept of strangeness was defined before the existence of quarks was discovered, and for consistency with the original definition the strange quark must have strangeness -1, and the anti-strange quark must have strangeness +1. Particle physics and supporting theories do appear strange. Sometimes one is faced with the need for a theory (understanding) without having all the necessary information. Remember back to the 1800's before quantum physics - the physicists had glimpses of atomic structure, but just didn't have all the information. A neat bit of trivia - Unlike his brother Maurice, who was primarily an experimental physicist, Louis de Broglie had the mind of a theoretician rather than that of an experimenter or engineer. His 1924 doctoral thesis, Recherches sur la théorie des quanta (Research on Quantum Theory), introduced his theory of electron waves. - from http://en.wikipedia.org/wiki/Louis_de_Broglie and http://en.wikipedia.org/wiki/Murray_Gell-Mann

Quote by preet0283 well tell me what you have understood... I'll be glad to read your interpretation of it.....
Well, due to the almost identical masses of the proton and neutron we hypothesized the existence of a nucleon isospin doublet. Clearly the p and n have differing electric charges so a formula was created by Gell-Mann and Nishijima linking the isospin with the electric charge: Q = I_3 + Y/2, where Y is the hypercharge and I_3 the third component of isospin. It turns out that hypercharge is actually composed more fundamentally of B-L where B is baryon and L lepton number. That's the OLD interpretation. I am starting to realize that there is a new interpretation of isospin which is a bit more complicated and the name is just a carry-over from the old days. Something to do with the generators of the groups of the standard model and the Cartan subalgebra eigenvalues being used to construct the charge generator...(maybe) If you can expose more of the 'new' isospin idea that would be good. Thanks!

Recognitions: Science Advisor

I am learning about GUTS at the moment and specifically $$SU(2)_L \times SU(1)_Y$$ What is SU(1)_Y? Look, do you mean the electroweak gauge group SU(2)XU(1) or chiral symmetry group SU(2)XSU(2)? THE 1ST GROUP HAS SUBSCRIPTS (W) OR (L) ON SU(2) TO MEAN WEAK OR LEFTHANDED FIELDS, AND Y ON U(1) TO MEAN THE WEAK HYPERCHARGE (THE GENERATOR OF U(1)). THE CHIRAL GROUP SU(2)XSU(2) IS OFTEN WRITTEN AS SU(2)_LXSU(2)_R. FROM NOW ON, BEFORE I ANSWER ANY OF YOUR QUESTIONS, I NEED TO KNOW WHETHER OR NOT YOU ARE LEARNING SOMETHING FROM US. TELL ME; ARE WE WASTING OUR TIME?

Quote by samalkhaiat What is SU(1)_Y? Look, do you mean the electroweak gauge group SU(2)XU(1) or chiral symmetry group SU(2)XSU(2)? THE 1ST GROUP HAS SUBSCRIPTS (W) OR (L) ON SU(2) TO MEAN WEAK OR LEFTHANDED FIELDS, AND Y ON U(1) TO MEAN THE WEAK HYPERCHARGE (THE GENERATOR OF U(1)). THE CHIRAL GROUP SU(2)XSU(2) IS OFTEN WRITTEN AS SU(2)_LXSU(2)_R. FROM NOW ON, BEFORE I ANSWER ANY OF YOUR QUESTIONS, I NEED TO KNOW WHETHER OR NOT YOU ARE LEARNING SOMETHING FROM US. TELL ME; ARE WE WASTING OUR TIME?

$$SU(3) \times SU(2) \times U(1)_Y$$ Yes, the U(1)_Y weak hypercharge generator. Yes, I am learning - some aspects faster than others, but please, if you think you are wasting your time then please do not answer. I am sure I would not be offended if no one left an answer.

Recognitions: Homework Help Science Advisor

Quote by robousy If you can expose more of the 'new' isospin idea that would be good.

Perhaps it would be good to remember that the fundamental particles are not actually little indivisible things that cannot be divided. A better description of them is to think of the wave functions one dealt with in quantum mechanics. With wave functions, you can take the sum of two wave functions and, because the equations are linear, the result is another wave function with properties somehow sort of midway between the other two wave functions. For example: $$\psi = \psi_A + \psi_B$$ where the two wave functions on the right satisfy an operator equation like: $$\mathcal{O} \psi_A = A \psi_A$$ and the same for B. The sum of the A and B wave functions is a valid wave function but is not likely to be an eigenfunction for the operator like A and B were. So if you define "particle" as the things that are eigenfunctions for that operator, then the sum is not a "particle". In the case of the elementary particles, we consider charge, Q, to be one of the operators that define what are the elementary particles.
For example, if "e" is the electron, and \psi_e is an electron wave function, we have the operator equation: $$\mathcal{Q} \psi_e = -e \psi_e$$ since the charge of the electron is -e. Similarly, $$\mathcal{Q} \psi_\nu = 0$$ since the charge of the neutrino is zero. So we classify the elementary particles in a way that makes them eigenfunctions of (electric) charge and mass (and parity or whatever). But just because we classify elementary particles in this way does not mean that we cannot reclasify them in the same we can reclassify wave functions by taking linear combinations of them. And some of the alternative methods of classifying them would make more sense in certain circumstances. An example is the Cabibbo angle. The up and down quarks (of the "electron family") are eigenstates of the electric charge operator and are eigenstates of mass, but they are not eigenstates of the "weak charge" (when I was in grad school it was called "neutral charge") operator in the sense that when you change an up quark into a down quark by emitting a W-, you don't actually get a pure down quark. Instead, you have to mix in some of the other families of quarks, the "muon family" and "tau family". W+ and W- interactions for quarks involve changing from a +2/3 to a -1/3 (or -2/3 to a +1/3), so you can fix the weak charges by either mixing the (u,c,t), or by mixing the (d,s,b). Nowadays, it is done by mixing the (down,strange,bottom) and it is called the "CKM" matrix. As an alternative, when you have a pure down quark and arrange for it to emit a W-, it doesn't become a pure up quark but instead ends up with a mixture of top and charm. Thus you could instead define a CKM type matrix as mixing the (u,t,c). With the leptons, there is also a mixing between the neutral leptons (neutrinos) as compared to the charged leptons. As with the quarks, the mixing appears when you define the particles according to their masses (and therefore into the families or "flavors"). The electron, muon and tau are the mass eigenstates of the charged leptons. When one of these particles emits a W-, the charged lepton changes to a mixture of neutrinos. Now with the quarks, the effect of the emission of a W+ or W- is a relatively small probability of a change in the family. Therefore we collect the quarks into pairs, (up,down), etc. But with the neutrinos, the mixtures are more democratic so it's hard to say which of the neutrino eigenstates corresponds to the electron and which to the muon and tau. So the neutrinos are generally defined according to what you get when you take a W+/- out of the corresponding lepton mass eigenstate. That is, we talk about an electron neutrino, a muon neutrino and a tau neutrino. This means that the lepton analog of the quark CKM matrix is defined in sort of reverse, an extra opportunity for confusion. It is my belief that the quark mixing is more pure than the lepton mixing arises naturally from the way they are produced from subparticles but that's another story; in the standard model, these are all fairly arbitrary parameters. If you want to always define "particle" as things that are eigenstates of electric charge and mass, then you will have the weak interactions mixing particle types. But you could instead define "particle" as things that are eigenstates of weak charge and mass, and that would leave you with electric interactions that mixed particle types. It is also possible to define particles as the things that are eigenstates of both electric and weak interactions. 
If you do this, then you will have particles that are of mixed mass eigenstates. I suspect that things will be simplest in this basis. In any of these three cases, it is important to note that the mixing is over corresponding particles in the different families, and that the families differ only in their masses. Now what does this have to do with weak isospin and weak hypercharge? Weak isospin and hypercharge, together, give electric charge and weak charge. Weak isospin is the SU(2)-type symmetry between the two objects that the weak force W+/- converts between. Once you've defined weak isospin and electric charge, the difference between these is also defined, and (twice the difference) is called weak hypercharge. When the electric and weak interactions are combined into an electroweak interaction, the appropriate currents (i.e. moving charges) are weak isospin (which is a vector) and weak hypercharge (a scalar). These two are mixed by the Weinberg angle so that instead of interacting directly with gauge bosons according to weak isospin and weak hypercharge, particles instead use weak charge and electric charge. Thus the photon is a mixture of a weak isospin and weak hypercharge interaction. It's late. I hope I haven't made too many serious mistakes. Carl

CarlB, Just wanted to let you know that I really like your last post. You have the ability to explain (conceptually) difficult stuff in easy language. I like your style, man. I wanna ask you to contribute to the "elementary particles presented thread" if you want. For example, you could make a reference to this post in that thread. Or, if you feel that certain aspects/topics of theoretical physics need to be explained, I invite you to post them there and thus expand our elementary-particles library. I would be very grateful. regards marlon

Recognitions: Homework Help Science Advisor

Quote by marlon CarlB, Just wanted to let you know that I really like your last post. You have the ability to explain (conceptually) difficult stuff in easy language. I like your style, man. I wanna ask you to contribute to the "elementary particles presented thread" if you want. For example, you could make a reference to this post in that thread. Or, if you feel that certain aspects/topics of theoretical physics need to be explained, I invite you to post them there and thus expand our elementary-particles library. I would be very grateful. regards marlon

Whoa! Before you conclude that I know what I'm talking about, you should be aware that (a) I am not associated with any physics (or math) departments and in fact make a living by doing stuff like driving forklifts, using a nail gun, and soldering; (b) I started grad school back when Carter was president and never finished a PhD; (c) I don't believe in Einstein's relativity; (d) I believe that using symmetry to define elementary particles is fundamentally misguided; (e) I don't believe in the quantum mechanical vacuum (along with Schwinger as it turns out); (f) I believe that the symmetry breaking seen in the standard model is actually a part of spacetime itself rather than an attribute of the particles per se; (g) I believe that standard matter is condensed from tachyons; and (h) I have a nasty habit of going to physics conferences and being ignored for supporting points (c-g). In short, I really can't think of anyone you'd want less to be telling you about the standard model.
On the other hand, I do believe that what I wrote above is a fairly accurate portrayal of what both I and the standard model believe about the elementary particles, and you're certainly welcome to copy it where you wish. The mixing matrix for the weak force as applied to the quarks is called the "CKM" matrix. The same thing applied to the leptons is called the "MNS" matrix. Naturally I'm busily attempting to unify all this with my own version of particle theory, but if you value your sanity (or your standing in the standard physics community), I advise you to stay far away. Carl

CarlB, at what university did you study? Besides, you can have whatever personal opinion that you want, I do not care. I read your "personal characterization" very thoroughly and I admit I would be the exact opposite of you. However, this does not take away the fact that lots of your posts are of high quality and that is my honest opinion. So, YES, I would like to have you on board when it comes to explaining difficult concepts to laymen, which is one of the primary intentions of this great forum. For the rest, you are forgiven :) marlon ps : why is it that only the really good and talented people are so very honest and direct ? :)

CarlB. I've been meaning to thank you for ages now since you kindly posted a long and no doubt lucid explanation to my question. TBH I haven't had a chance to go through it yet - but I plan to next week (when a 'long lost friend' I am entertaining returns home) - but thanks again!
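As a sanity check on the hypercharge relation quoted earlier in the thread, Y = S + B = 2(Q - I3), one can plug in the standard quantum numbers of the light quarks. A small Python sketch (the numerical values are the textbook quark assignments; the script itself is my own illustration, not part of the thread):

```python
from fractions import Fraction as F

# Standard quantum numbers for the light quarks: (Q, I3, B, S).
quarks = {
    "u": (F(2, 3),  F(1, 2),  F(1, 3),  0),
    "d": (F(-1, 3), F(-1, 2), F(1, 3),  0),
    "s": (F(-1, 3), 0,        F(1, 3), -1),
}

for name, (Q, I3, B, S) in quarks.items():
    Y = B + S                      # (strong) hypercharge
    assert Y == 2 * (Q - I3)       # Gell-Mann-Nishijima: Q = I3 + Y/2
    print(name, "Y =", Y)
```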
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9403814077377319, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/266684/evaluate-int-2x-over-x2-1dx/266689
# Evaluate $\int {2x\over x^2-1}dx$

My friend evaluated this before he went to bed: $$\int {2x\over x^2-1}dx$$ The answer was $\log(x^2-1)$. I just can't figure out how that works. I know that $\int \frac1x dx = \log|x|$, so what just happened to $2x$? -

## 7 Answers

Let $\quad\quad u = \;x^2 - 1.\quad$ (Use "$\;u$-substitution.") Then $\;\;\; du = 2x \;\;dx$. $($Recall, we need to replace $\;2x\;dx\;$ with $\;du\;$ to integrate in terms of $u.)$ This gives us... $$\int \frac{2x \;dx}{x^2 - 1} \quad=\quad \int \frac{du}{u} \quad= \quad\int \frac{1}{u} du \;\;= \;\;\;?$$ From what you stated in your question, I think you can go from here? - Ah thanks! You edited the ans. the way i kept thinking lol – Kishan Thobhani Dec 28 '12 at 20:41 You're welcome, Kishan! – amWhy Dec 28 '12 at 20:42

$$\int\frac{2x}{x^2-1}dx=\int\frac{(x+1)+(x-1)}{x^2-1^2}dx=$$ $$=\int\frac{(x+1)+(x-1)}{(x+1)(x-1)}dx=\int\left(\frac{x+1}{(x+1)(x-1)}+\frac{x-1}{(x+1)(x-1)}\right)dx=$$ $$=\int\left(\frac1{x-1}+\frac1{x+1}\right)dx=\int\left(\frac1{x-1}\right)dx+\int\left(\frac1{x+1}\right)dx=$$ $$=\ln(x-1)+\ln(x+1)+C=\ln\big((x-1)(x+1)\big)+C=\ln(x^2-1)+C$$ - +1 for showing an alternate way. – half-integer fan Dec 29 '12 at 2:07

$\dfrac{2x}{x^2-1}dx=\dfrac{d(x^2-1)}{x^2-1}$ -

$\frac{2x}{x^2-1}=\frac{(x+1)+(x-1)}{x^2-1}=\frac1{x-1}+\frac1{x+1}$ Alternatively, using Partial Fraction Decomposition, let $\frac{2x}{x^2-1}=\frac{(x+1)+(x-1)}{x^2-1}=\frac A{x+1}+\frac B{x-1}$ where $A,B$ are arbitrary constants. So, $2x=(A+B)x+B-A$ Comparing the constant terms on either side of the identity, $B-A=0\implies A=B$ Comparing the coefficients of $x$, $A+B=2\implies A=B=1$ So, $\frac{2x}{x^2-1}=\frac{(x+1)+(x-1)}{x^2-1}=\frac 1{x+1}+\frac 1{x-1}$ - +1 just for showing there are multiple ways to solve problems, but I think you should have factored $x^2 - 1$ and completed the re-combining of the logs to be explicit. – half-integer fan Dec 29 '12 at 2:06 @half-integerfan, could you please have a look into the edited answer? – lab bhattacharjee Dec 29 '12 at 5:32 Well, by writing $(x+1) + (x-1)$ for $2x$ you've given away the answer. If you just start with A and B then that shows that the solution can be arrived at even if you can't do that trick "by inspection". I would have also explicitly shown the $\frac{A(x-1) + B(x+1)}{x^2-1}$ step leading to your equation between $2x$ and $A$ and $B$. Sorry if I am being pedantic but I think if the question is relatively easy then you should not assume the questioner can follow multiple transformations per step. – half-integer fan Dec 29 '12 at 14:53

Substitute $u=x^2-1$, $\mathrm du=2x\,\mathrm dx$ Then $\int \frac{2x}{x^2-1}\,\mathrm dx=\int \frac{1}{u} \,\mathrm du=\log(u)=\log(x^2-1)$ -

$\int {2x\over x^2-1}dx = -\int {2x\over 1-x^2}dx = -\int 2x\, dx \sum_{n=0}^{\infty} x^{2n} = -2\int dx \sum_{n=0}^{\infty} x^{2n+1} = -2\sum_{n=0}^{\infty}\int x^{2n+1}dx = -2\sum_{n=0}^{\infty} {x^{2n+2} \over 2n+2} = -\sum_{n=0}^{\infty} {(x^2)^{n+1} \over n+1} = -\sum_{n=1}^{\infty} {(x^2)^{n} \over n} = \ln(1-x^2)$. Check: $(\ln(1-x^2))' = {-2x \over 1-x^2} = {2x \over x^2-1}$. Just playing with power series Eulerishly to see what would happen.
- In general, $$\int\frac{f'(x)}{f(x)}\,dx=\log(|f(x)|)+C$$ (As seen in $\int\frac1x \,dx = \log|x|+C$) Similarly, $$\frac{d}{dx}(x^2-1)=2x$$ $$\therefore\int {2x\over x^2-1}dx=\int\frac{\frac{d}{dx}(x^2-1)}{x^2-1}\,dx$$ $$=\log(|x^2-1|)+C$$ We apply the modulus in the logarithm to take only the positive value of $x^2-1$, because if $|x|<1$ then $x^2-1<0$ and logarithms of negative values do not exist. -
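Each derivation above can also be checked symbolically. A quick SymPy sketch (a verification aid only, not part of any answer):

```python
import sympy as sp

x = sp.symbols('x')

# SymPy may return the equivalent form log(x - 1) + log(x + 1),
# which differs from log(x**2 - 1) only by a constant on each branch.
print(sp.integrate(2*x/(x**2 - 1), x))

# More robust: differentiate the claimed antiderivative and compare.
claimed = sp.log(x**2 - 1)
print(sp.simplify(sp.diff(claimed, x) - 2*x/(x**2 - 1)))   # 0
```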
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9264459609985352, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/47855/radicals-of-binomial-ideals
## Radicals of binomial ideals

Let $R=k[x_1,x_2,...,x_n]$ be the polynomial ring in $n$ indeterminates over a field $k$. An ideal (that can be) generated by monomials is called a monomial ideal. For the monomial ideal $M=(m_1,m_2,...,m_t)$, the radical of $M$ is itself monomial and can be written as $Rad(M)=(\sigma(m_1),\sigma(m_2),...,\sigma(m_t))$ where $\sigma(x_1^{a_1}x_2^{a_2}...x_n^{a_n})$ is the product of the indeterminates $x_i$ s.t. $a_i\geq 1$. A binomial ideal in $R$ is generated by binomials. I was wondering if we have similar theorems for the case of binomial ideals where we can write down a generating set for the radical by just knowing a generating set of the ideal. Eisenbud and Sturmfels, in their monumental paper on binomial ideals, showed that the radical itself is binomial. I am especially interested in finding generators for the radical of binomial ideals in the case where char$(k)=0$ (or even when $k=\mathbb{C}$) and what kind of binomials generate radical binomial ideals. Becker, Grobe and Niermann discuss the case of zero-dimensional binomial ideals. Ojeda and Sanchez prove some results for radicals of lattice (binomial) ideals. I have also seen some results in positive characteristic, but they are not relevant to my research. - Did you check this out: front.math.ucdavis.edu/1009.2823? – Hailong Dao Dec 1 2010 at 4:22 @Hailong: Thanks for the link. I hadn't seen this before, though curiously, the word "radical" does not appear even once in the article. I'll see if the primary decomposition methods are of any help. – Timothy Wagner Dec 1 2010 at 6:32 Have you seen this: arxiv.org/abs/alg-geom/9401001? – J.C. Ottem Dec 1 2010 at 9:24 @Timothy: the radical is the intersection of all minimal primes, so in some sense you only need to know the minimal primes. – Hailong Dao Dec 1 2010 at 13:59 @Hailong: Yes I understand that. But I am looking for a more concrete description of the ideals in terms of generators rather than as an intersection of several prime ideals. – Timothy Wagner Dec 1 2010 at 21:34 show 1 more comment

## 1 Answer

The minimal primes (and sometimes also their intersection) can be computed relatively quickly (compared to primary decomposition) using Algorithm 4 of http://arxiv.org/pdf/0906.4873v3. I've looked at binomial ideals for some time and I doubt that there is an easy way to see the generators of the radical. - @Thomas: Thanks. I had already looked over your paper earlier and also used your package "binomials" in Macaulay2. It has been extremely useful, though I was curious to know if there is any abstract description of radicals of binomial ideals. I am not optimistic about as general a result as in the case of monomial ideals, but I would definitely be interested in seeing some results under additional hypotheses (like the ones I mention in the last paragraph). – Timothy Wagner Dec 1 2010 at 21:39 @Timothy, I think the most opaque part of this is why the intersection of the minimal primes comes out binomial in the first place. For me it has always been useful to think about the cellular case: An ideal is cellular if in the quotient every monomial is nilpotent or regular. Cellular decompositions into binomial ideals exist in polynomial rings over every field, and in characteristic zero a radical cellular ideal is just a lattice ideal + variables. (Note that in characteristic zero lattice ideals are themselves radical.) – Thomas Kahle Dec 2 2010 at 7:47
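The squarefree-part map $\sigma$ from the monomial case in the question is trivial to implement. A small Python sketch with monomials represented as exponent vectors (my own illustration of the stated formula, not from the thread):

```python
# sigma sends x1^a1 * ... * xn^an to the product of the xi with ai >= 1,
# i.e. it clamps every positive exponent to 1 (the squarefree part).

def sigma(exponents):
    return tuple(1 if a >= 1 else 0 for a in exponents)

# Example in k[x, y, z]: Rad((x^2*y, y^3*z^5)) = (x*y, y*z).
M = [(2, 1, 0), (0, 3, 5)]
print([sigma(m) for m in M])   # [(1, 1, 0), (0, 1, 1)]
```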
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211057424545288, "perplexity_flag": "head"}
http://torus.math.uiuc.edu/cal/math/cal?year=2012&month=04&day=03&interval=day
Seminar Calendar for events the day of Tuesday, April 3, 2012. Questions regarding events or the calendar should be directed to Tori Corkery.

Tuesday, April 3, 2012

Ergodic Theory
11:00 am in 347 Altgeld Hall, Tuesday, April 3, 2012
Submitted by jathreya.
Joshua Bowman (Stony Brook) Basins of infinity for polynomial maps of $\mathbb{C}^2$
Abstract: Basins of attraction in holomorphic dynamics are well understood in dimension 1, much less so in higher dimensions. We will consider regular polynomial maps of $\mathbb{C}^2$ (maps which extend to endomorphisms of $\mathbb{P}^2$) and describe some tools for studying their basins of infinity. We show that there exist endomorphisms of $\mathbb{P}^2$ whose basins of infinity have infinitely generated second homology.

Number Theory Seminar
11:00 am in 241 Altgeld Hall, Tuesday, April 3, 2012
Submitted by ford.
Benjamin Smith (INRIA Saclay and Ecole Polytechnique Paris) Point counting on genus 2 curves with real multiplication
Abstract: Point counting -- that is, computing zeta functions of curves over finite fields -- is a fundamental problem in algorithmic number theory and cryptography. In this talk, we present an accelerated Schoof-type point-counting algorithm for curves of genus 2 equipped with an efficiently computable real multiplication endomorphism. Using our new algorithm, we can compute the zeta function of an explicit RM genus 2 curve over $\mathbb{F}_q$ in $O(\log^5 q)$ bit operations (vs. $O(\log^8 q)$ for the classical algorithm). This, together with a number of other practical improvements, yields a dramatic speedup for cryptographic-sized Jacobians over prime fields, as well as some record-breaking computations. (Joint work with D. Kohel and P. Gaudry)

Topology Seminar
11:00 am Tuesday, April 3, 2012
Submitted by franklan.
No seminar today
Abstract: We resume next week.

Logic Seminar
1:00 pm in 345 Altgeld Hall, Tuesday, April 3, 2012
Submitted by phierony.
Robert E. Jamison (Clemson/UIUC) A Dependency Calculus for Finitary Closures
Abstract: A closure system consists of a ground set $X$ together with a family $\mathscr{C}$ of closed subsets of $X$. The only requirements are that $\mathscr{C}$ is closed under arbitrary intersections and contains $X$. Thus each subset $S$ of $X$ lies in a smallest closed set $\mathscr{C}(S)$. The map $S \to \mathscr{C}(S)$ is the closure operator. The closure operator is finitary provided whenever $p \in \mathscr{C}(S)$, there is a finite subset $E$ of $S$ with $p \in \mathscr{C}(E)$. In this talk a first order logic for finitary closure operators will be presented. This first order logic can be used to describe and systematize the study of the most important properties of finitary systems. In particular, I will describe a classification scheme for many of the important classes of finitary closures (matroids, antimatroids, partial order convexity, etc). Moreover, I will describe several metatheorems concerning classical convexity invariants such as the Helly and Radon numbers.
Algebra, Geometry and Combinatorics
2:00 pm in 345 Altgeld Hall, Tuesday, April 3, 2012
Submitted by darayon2.
Bridget Tenner (DePaul University) Repetitions and patterns
Abstract: A permutation $w$ can be written as a product of adjacent transpositions, and such a product of shortest length, $\ell(w)$, is called a reduced decomposition of $w$. The difference between $\ell(w)$ and the number of distinct letters appearing in a (any) reduced decomposition of $w$ is $\text{rep}(w)$; that is, this statistic describes the amount of repetition in a reduced decomposition of $w$. In this talk, we will explore this statistic $\text{rep}(w)$, and find that it is always bounded above by the number of 321- and 3412-patterns in $w$. Additionally, these two quantities are equal if and only if $w$ avoids the ten patterns 4321, 34512, 45123, 35412, 43512, 45132, 45213, 53412, 45312, and 45231.

Probability Seminar
2:00 pm in Altgeld Hall 347, Tuesday, April 3, 2012
Submitted by kkirkpat.
Jun Yin (U Wisconsin-Madison) Eigenvalue and eigenvector distributions of random matrices
Abstract: In the current study of random matrix theory, many long-standing open problems have been solved in the past three years. Right now, the study of the distribution of individual eigenvalues, and even eigenvectors, has become possible. In some works, we even obtained brand-new formulas which were not predicted before. And our methods have been successfully applied to many different matrix ensembles, like the (generalized) Wigner matrix, covariance matrix, band matrix, Erdős-Rényi graph, correlation matrix, etc. In this talk, besides the recent progress on random matrix theory, we will also introduce the main open questions in this field.

Algebraic Geometry Seminar
3:00 pm in 243 Altgeld Hall, Tuesday, April 3, 2012
Submitted by katz.
Sheldon Katz (Department of Mathematics, University of Illinois at Urbana-Champaign) Quantum Cohomology of Toric Varieties
Abstract: The structure of the quantum cohomology ring of a smooth projective toric variety was described by Batyrev and proven by Givental as a consequence of his work on mirror symmetry. This talk is in part expository since some details were never written down by Givental. I conclude with some open questions related to the quantum cohomology ring and the quantum product. An extension of these questions plays a foundational role in the development of quantum sheaf cohomology which has been undertaken jointly with Donagi, Guffin, and Sharpe. Given a smooth projective variety X and a vector bundle E with $c_i(E)=c_i(X)$ for i=1,2, the quantum sheaf cohomology ring of string theory is supposed to be a deformation of the algebra $H^*(X,\Lambda^*E^*)$. If E=TX, quantum sheaf cohomology is the same as ordinary quantum cohomology.

Graph Theory and Combinatorics
3:00 pm in 241 Altgeld Hall, Tuesday, April 3, 2012
Submitted by west.
Thomas Mahoney (UIUC Math) Extending graph choosability results to paintability
Abstract: Introduced independently by Schauz and by Zhu, the Marker-Remover game is an on-line version of list coloring. The resulting graph parameter, paintability, is at least the list chromatic number (also known as "choosability"). We strengthen several choosability results to paintability. We study paintability of joins with complete or empty graphs. We determine upper and lower bounds on the paintability of complete bipartite graphs.
We characterize 3-paint-critical graphs and show that claw-free perfect graphs with $\omega(G)\le3$ have paintability equal to chromatic number. Finally, we introduce and study sum-paintability, the analogue of sum-choosability.

Mathematics in Science and Society (MSS)
4:00 pm in 245 Altgeld Hall, Tuesday, April 3, 2012
Submitted by kapovich.
Igor Rivin (Temple University) Conformal matching of proteins
Abstract: The question of whether two proteins can bind, and how, is one of the canonical problems in molecular biology, where it is sometimes known as the protein docking problem. This question has, so far, been studied primarily by ad hoc methods (such as Monte Carlo simulation). In this talk I will discuss some ideas and work in progress (some joint with Joel Hass of UC Davis) on using discrete (and not so discrete) conformal geometry to attack the problem, and the interesting (to the speaker, anyhow) mathematical questions which arise.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916832685470581, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/26778/how-do-we-know-dark-matter-is-non-baryonic
# How do we know Dark Matter is non-baryonic?

It seems widely stated, but not thoroughly explained, that Dark Matter is not normal matter as we understand it. Wikipedia states "Consistency with other observations indicates that the vast majority of dark matter in the universe cannot be baryons, and is thus not formed out of atoms." How can we presume to know this? Our best evidence for such dark matter is the rotational speeds of galaxies. It sounds like we can measure/approximate the gas density and stellar masses somehow, yet I don't understand how we can account for things like planets, asteroids, black holes without accretion disks, and other things that have mass but don't glow. How is it we dismiss these explanations for it, and jump right to WIMPs and other exotic explanations? - You want to search on "Micro-lensing" and "MACHOs" in conjunction. The density of compact, cold baryonic objects in middle masses is well measured for the Milky Way. It is far below that needed to account for the rotation curve. – dmckee♦ Apr 5 '12 at 19:45 – Qmechanic♦ Feb 2 at 11:57

## 1 Answer

Definitely see the comments on your question. But a very brief outline of the data:

Rotation-curve and galaxy-cluster mass measurements show the detailed distribution of matter in those objects; the amount of mass far exceeds the observed mass ---> most mass is non-observed.

Gravitational-lensing searches show that the "dark-matter" constituents must be composed of objects less than about $10^{-7} \textrm{ M}_\odot \sim 0.03 \textrm{ M}_\oplus$, i.e. they must be asteroid size or smaller. Asteroid-sized objects can't really form stably (in such large amounts), and would be rapidly accreted by larger-mass objects --> dark-matter constituents must be small.

Baryonic matter which is massive and small is constrained to gas and dust. Both of these things, when hot, are easily observable (especially in hot galaxy clusters)... yet the premise is that we can't see them --> dark matter is not baryonic.

There is lots more evidence; this is just the most basic outline. The biggest additional piece overall is from cosmology: anisotropies in the cosmic microwave background tell you a lot about the initial universe and the seeds of structure formation -- comparing that with what we see in the current universe tells us about the evolution of structure in the universe, which ends up requiring that the dominant component of mass in the universe has no pressure, which again rules out baryonic material. There's still more evidence.... but I'm not expert enough to try to explain it. - Can you explain for a layman, does that mean that the masses of typical dark matter chunks are less than that of asteroids, or that the chunks of dark matter occupy about that much volume, so that there are asteroid-sized blobs of dark matter floating around everywhere? – Rei Miyasaka Jan 2 at 15:36 That's a good - and actually quite complicated - question. To my knowledge, most of the constraints on dark matter are actually most accurately expressed in terms of density in some region (i.e. the universe as a whole, or the average over a galaxy cluster, or the average density at a certain distance from a galaxy center). You can make statements about the mass of particles using statistical smoothness arguments, but volume isn't really constrained---I don't think. Even the concept of volume for non-baryonic matter is non-trivial to define / constrain.
– zhermes Jan 2 at 20:43 Note that the effects of DM depend on density (and sometimes mass-per-particle), but volume doesn't really matter---as long as it's vaguely comparable to or less than the separation between particles... – zhermes Jan 2 at 20:44 Ahh, got it -- more or less. Thanks! – Rei Miyasaka Jan 2 at 22:36
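The microlensing bound quoted in the answer, $10^{-7} \textrm{ M}_\odot \sim 0.03 \textrm{ M}_\oplus$, is a one-line unit conversion to verify. A Python sketch using standard reference masses:

```python
# Convert the quoted microlensing bound of 1e-7 solar masses
# into Earth masses (standard reference values assumed).
M_sun   = 1.989e30   # kg
M_earth = 5.972e24   # kg

print(1e-7 * M_sun / M_earth)   # ~0.033, i.e. about 0.03 Earth masses
```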
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399133324623108, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/123773-triangular-matrices.html
# Thread:

1. ## Triangular Matrices

Hi! Could you help me with the following problem: Find an invertible matrix P such that $P^{-1}AP$ is upper triangular, where A is the matrix: $\begin{pmatrix}3&2&1\\\!\!\!-1&2&1\\1&0&1\end{pmatrix}$ Thanks a lot for suggestions/help! (general algorithms welcome) Best Wishes, M

2. There is an invertible matrix P for which $P^{-1}AP$ is a diagonal matrix. If P exists then A is diagonalizable. Do you know how to find P? Have you learnt about diagonalizability? Your 3x3 matrix A is diagonalizable if A has 3 linearly independent eigenvectors. So, as the first step you have to find eigenvectors of A, say $P_1, P_2, P_3$. Then form the matrix $P=[P_1,P_2,P_3]$. The matrix $P^{-1}AP$ will be diagonal and upper triangular and will have the eigenvalues corresponding to $P_1, P_2, P_3$, respectively, as its successive diagonal entries. Let's see how you go.

3. Originally Posted by Roam There is an invertible matrix P for which $P^{-1}AP$ is a diagonal matrix. This may be far from true: not every matrix is diagonalizable. Tonio If P exists then A is diagonalizable. Do you know how to find P? Have you learnt about diagonalizability? Your 3x3 matrix A is diagonalizable if A has 3 linearly independent eigenvectors. So, as the first step you have to find eigenvectors of A, say $P_1, P_2, P_3$. Then form the matrix $P=[P_1,P_2,P_3]$. The matrix $P^{-1}AP$ will be diagonal and upper triangular and will have the eigenvalues corresponding to $P_1, P_2, P_3$, respectively, as its successive diagonal entries. Let's see how you go. .

4. Originally Posted by Mimi89 Hi! Could you help me with the following problem: Find an invertible matrix P such that $P^{-1}AP$ is upper triangular, where A is the matrix: $\begin{pmatrix}3&2&1\\\!\!\!-1&2&1\\1&0&1\end{pmatrix}$ Thanks a lot for suggestions/help! (general algorithms welcome) Best Wishes, M Read http://www.millersville.edu/~rumble/...larization.pdf There's a worked example on page 4. Tonio

5. This may be far from true: not every matrix is diagonalizable. Tonio I see your point but I disagree. What if we found that A happens to have 3 linearly independent eigenvectors? It would satisfy the condition for diagonalizability. By the way, an nxn matrix which has n distinct real eigenvalues is diagonalizable, because we can make a set of n linearly independent eigenvectors by choosing one eigenvector from each eigenspace.

6. Originally Posted by Roam I see your point but I disagree. What if we found that A happens to have 3 linearly independent eigenvectors? It would satisfy the condition for diagonalizability. By the way, an nxn matrix which has n distinct real eigenvalues is diagonalizable, because we can make a set of n linearly independent eigenvectors by choosing one eigenvector from each eigenspace. Well, if A has 3 lin. ind. eigenvectors THEN, and only then, it is diagonalizable...but it could perfectly well be that it has no 3 lin. ind. eigenvectors, and STILL it'd be triangularizable! The whole point of Schur's Triangularization Theorem is that ANY complex matrix is similar to a triangular matrix, even if it is not diagonalizable, and this is what the OP, imo, is trying to achieve. Tonio Ps. An nxn matrix doesn't have to have n different eigenvalues to be diagonalizable: this is a sufficient condition but not a necessary one. An nxn matrix over a field F is diagonalizable iff it has n lin. ind. eigenvectors iff its minimal pol.
in F[x] splits into different linear factors (again, NOT necessarily n different linear factors...just different linear factors)

7. Gosh, I just might be forced to agree with you! Triangularization is probably what needs to be done, but the OP may not have yet done triangularization as it's often taught after diagonalization. Hmm, yes, the converse of what I said is false; it's possible for an nxn matrix to be diagonalizable without having n distinct eigenvalues. But if it has, you know it is diagonalizable. I guess the real key to diagonalizability is with the dimensions of the eigenspaces.

8. Originally Posted by Roam There is an invertible matrix P for which $P^{-1}AP$ is a diagonal matrix. If P exists then A is diagonalizable. Do you know how to find P? Have you learnt about diagonalizability? Your 3x3 matrix A is diagonalizable if A has 3 linearly independent eigenvectors. So, as the first step you have to find eigenvectors of A, say $P_1, P_2, P_3$. Then form the matrix $P=[P_1,P_2,P_3]$. The matrix $P^{-1}AP$ will be diagonal and upper triangular and will have the eigenvalues corresponding to $P_1, P_2, P_3$, respectively, as its successive diagonal entries. Let's see how you go. Thanks for the quick reply. The characteristic polynomial is $C(x)= (x-2)^3$, which has only one eigenvalue: 2. Its eigenspace is spanned by (1,-1,1). Hence, it isn't diagonalisable (for that don't we have to have the dimension of the eigenspace = dimension of the vector space?). Originally Posted by tonio Read http://www.millersville.edu/~rumble/Math.422/schur-triangularization.pdf There's a worked example on page 4. Tonio Thank you, I will do that now. Thanks for finding something with an example - those usually help me quite a lot. Best, M.
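For a purely numerical cross-check of the thread's conclusion (not the exact hand construction asked for), SciPy's real Schur decomposition gives an orthogonal Z with A = Z T Z^T and T upper triangular whenever all eigenvalues are real, which is the case here since the characteristic polynomial is (x-2)^3; taking P = Z makes $P^{-1}AP = T$ upper triangular. A minimal sketch:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[ 3., 2., 1.],
              [-1., 2., 1.],
              [ 1., 0., 1.]])

# Real Schur form: A = Z T Z^T with Z orthogonal; T is upper triangular
# because all eigenvalues of A are real (2, with multiplicity three).
T, Z = schur(A)
print(np.round(T, 6))                # upper triangular, 2's on the diagonal
print(np.allclose(Z @ T @ Z.T, A))   # True
```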
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168674349784851, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/8801/is-it-possible-to-know-the-exact-values-of-momentum-and-velocity-of-a-particle-s
# Is it possible to know the exact values of momentum and velocity of a particle simultaneously?

I know by Heisenberg's Uncertainty Principle that it is not possible to know the exact values of position and momentum of a particle simultaneously, but can we know the exact values of momentum and velocity of a particle simultaneously? I would think the answer would be no because even if we were 100% certain of the particle's position, we would be completely unsure of the particle's momentum, thus making us also completely unsure of the particle's velocity. Does anyone have any insight into this? -

## 3 Answers

It is quite common to discuss the two extremes of the uncertainty principle, sinusoid and delta function. One has a perfectly defined wavelength but no position, the other has a perfectly defined position but no wavelength. However, neither one of these shapes is terribly physical for a particle's position wavefunction. A true sinusoidal wavefunction would extend through all space, which is absurd for several reasons (including the presence of other matter). A true delta function would be equally likely to have any momentum, which would probably violate conservation of energy. So, these two extreme limits are mathematically interesting, but not physically relevant. Given the question "Does the uncertainty principle put some bound on momentum and velocity being simultaneously well-defined?", the answer is no. Given the question "Does the uncertainty principle forbid me from measuring any single variable with infinite precision?", the answer is no. Given the question "Does anything forbid me from measuring with infinite precision?", the answer is yes. So, your question mentions 'exact values', which is a very interesting, thorny subject. (Is it ever possible to measure an exact value? How would we tell the difference?) Are you really curious about 'exact values'? Are you more curious about where the Heisenberg uncertainty principle does and does not apply? Or are you curious if there are other bounds on our ability to measure, in addition to the uncertainty principle? - I was only asking because it was asked on a test and I was curious to know the answer after I took the test. I know the Uncertainty principle deals with energy and time, and then it also deals with position and momentum. So I thought if we hypothetically measured position with exact certainty, then we would be completely uncertain about its momentum, thus completely uncertain about its velocity. All I wanted to know was if certainty about position ensures uncertainty about velocity – Greg Harrington Apr 21 '11 at 17:05 1 If we ignore relativistic effects, then velocity and momentum are directly proportional to each other with the particle's rest mass as the constant of proportionality, so if you know one exactly, you get the other one for free. – Lagerbaer Dec 3 '11 at 16:38

If in your theory the momentum operator and velocity operator are proportional to each other, then yes. Knowing one's eigenvalue means knowing the other's. It is always the case with any function of a "known" operator. - I'm in basic Physics 3 at Georgia Tech taking it as an elective, so I haven't gotten that far. I'll be sure to look into that though – Greg Harrington Apr 17 '11 at 21:41

The velocity eigenvalues of the Dirac equation are $\pm c$.
This is well known since the equation was found; see Dirac's book, "The Principles of Quantum Mechanics", 4th Ed., Oxford University Press, Oxford 1958, Chapter XI "Relativistic Theory of the Electron", Section 69, "The motion of a free electron", page 262. It used to be a commonly taught fact of quantum mechanics, but I understand the downvotes: it's now possible to get a PhD in physics without knowing the slightest thing about the following quite elementary calculation. Partly since this isn't taught much anymore, the derivation has been reappearing recently in the literature; for example see: Eur.Phys.J.C50:673-678,2007, Chiral oscillations in terms of the zitterbewegung effect / hep-th/0701091, around equation (11). We begin by noting that velocity is the time rate of change of position, and that you can define the time rate of change of position by using the commutator: $$\hat{v}_x = \dot{x} = -(i/\hbar)[\hat{x},H]$$ If the above appears to be magic to you, read the Wikipedia entry on Ehrenfest's theorem, which states the principle and gives the identical situation for non-relativistic quantum mechanics: $$\frac{d}{dt}\langle x\rangle = -(i/\hbar)\langle [\hat{x},H]\rangle = \langle p_x\rangle /m$$ and so $\;m v_x = m\dot{x} = p_x$ (for the non-relativistic case). Thus, for the non-relativistic electron model, it is possible to simultaneously measure velocity and momentum; their proportionality constant is the mass. But with relativity the proportionality does not hold, so the situation is different. For a state to be an eigenstate of velocity requires that: $$\hat{v}_x\;\psi(x) = -(i/\hbar)[\hat{x},H]\;\psi(x) = \lambda\psi(x)$$ Dirac defined the free-particle Hamiltonian as $H=c\vec{\alpha}\cdot \vec{p} + \beta mc^2$. In modern notation, $\beta=\gamma^0$ and $\alpha^k = \gamma^0\gamma^k$, while $p$ is the usual momentum operator. Note that the only thing that doesn't commute with $\hat{x}$ is the x-component of the momentum operator, which gives $[\hat{x},\hat{p}_x]=i\hbar$. Thus the above reduces to: $$-(i/\hbar)[\hat{x},c\gamma^0\gamma^1p_x]\psi(x) = \lambda\psi(x)$$ $$-(ic/\hbar)\gamma^0\gamma^1[\hat{x},p_x]\psi(x) = \lambda\psi(x)$$ $$-(ic/\hbar)(i\hbar)\gamma^0\gamma^1\psi(x) = \lambda\psi(x)$$ $$c\gamma^0\gamma^1\psi(x) = \lambda\psi(x)$$ Using Wikipedia's choice of gamma matrix representation, we have: $$c\gamma^0\gamma^1 = c\left(\begin{array}{cccc} 1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{array}\right) \left(\begin{array}{cccc} 0&0&0&1\\0&0&1&0\\0&-1&0&0\\-1&0&0&0\end{array}\right) =c\left(\begin{array}{cccc} 0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{array}\right)$$ The eigenvalues are obtained by solving the characteristic polynomial. That is, compute the matrix determinant and set it to zero: $$\det\left[\begin{array}{cccc} -\lambda&0&0&c\\0&-\lambda&c&0\\0&c&-\lambda&0\\c&0&0&-\lambda\end{array}\right] = \lambda^4-2\lambda^2c^2 + c^4=0$$ I leave it as an exercise for the reader to show that there are two real roots, $\pm c$, each with order two. The four solutions to the velocity eigenvalue problem for the Dirac equation correspond to the right- and left-handed electron and positron. That is, the velocity eigenstates of the Dirac equation are precisely the left- and right-handed states used to represent fermions in the standard model. -
First, the Dirac Hamiltonian is in a discredited single-particle picture of the Dirac equation, where x is an operator describing the position of the electron. In the proper field theory picture, near Fock states have a momentum which is p and a velocity which is p/E in a wavepacket, and the two quantities can have simultaneous values (sort of, because particles are nonlocal). The other problem is that the equation you give for the speed eigenvalues has four solutions, (c,-c,ic,-ic). – Ron Maimon Dec 3 '11 at 6:01

– Carl Brannen Dec 3 '11 at 8:03

Okay, I'm fixing the eigenvalue calculation; I blew the determinant. – Carl Brannen Dec 3 '11 at 8:19

I don't think it's completely discredited, it just needs a discussion--- the zbw is a property of positron states mixing with electron states in the single particle picture, it's the electron zigging back and forth in time in the Feynman description. It's physical, but only in the Feynman form of particle dynamics, not so much in the field theory form. I am sure that this is the reason that a lot of people automatically downvote single particle discussions of Dirac eqn. I don't think it is nonsense, it contains a lot of physics, but it requires a careful discussion. – Ron Maimon Dec 3 '11 at 9:08
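A quick numerical sanity check of the eigenvalue claim above, as a short Python sketch (units chosen so that $c = 1$, same Dirac representation of the gamma matrices as in the answer):

```python
import numpy as np

c = 1.0  # work in units where c = 1, so the eigenvalues should come out as +/- 1

gamma0 = np.diag([1.0, 1.0, -1.0, -1.0])
gamma1 = np.array([[ 0,  0, 0, 1],
                   [ 0,  0, 1, 0],
                   [ 0, -1, 0, 0],
                   [-1,  0, 0, 0]], dtype=float)

v_x = c * gamma0 @ gamma1        # the velocity operator c * alpha_x
print(np.linalg.eigvals(v_x))    # two eigenvalues at +c and two at -c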
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430011510848999, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/124225/factorial-divisors
# Factorial Divisors

Q. For what maximum value of $n$ will the expression $\frac{10200!}{504^n}$ be an integer?

I have the solution to this question and I would like you to please go through the solution below. My doubt follows the solution :)

The solution can be found by writing $504 = 2^3 \cdot 3^2 \cdot 7$ and then finding the number of $2^3$s, $3^2$s and $7$s in the numerator, which can be obtained by

Number of $2$s = $\left\lfloor\frac{10200}{2}\right\rfloor + \left\lfloor\frac{10200}{2^2}\right\rfloor + \left\lfloor\frac{10200}{2^3}\right\rfloor + \dots + \left\lfloor\frac{10200}{2^{13}}\right\rfloor= 10192$

where $\left\lfloor\dots\right\rfloor$ is the floor function. Therefore, the number of $2^3\textrm{s} = \left\lfloor\frac{10192}{3}\right\rfloor = 3397$

$\begin{align}\textrm{Similarly, the number of }3^2\textrm{s} &= 2547\\ \textrm{and the number of }7\textrm{s} & = 1698\end{align}$

The number of factors of $2^3 \cdot 3^2 \cdot 7$ is clearly constrained by the number of $7$s, therefore $n = 1698$.

My question is, whether there is any way I can simply look at the prime factors of the divisor and know which prime factor is going to be the constraining factor? (as $7$ was, in this particular example) -

## 1 Answer

In the section Counting Prime Factors of n!, it is shown that the number of factors of $p$ in $n!$ is $$\frac{n-\sigma_p(n)}{p-1}\tag{1}$$ where $\sigma_p(n)$ is the sum of the base-$p$ digits of $n$. Factor $$504=2^3\cdot3^2\cdot7\tag{2}$$ Write $10200$ in base-$2$, base-$3$, and base-$7$: $$\begin{array}{}10011111011000_2&111222210_3&41511_7\end{array}\tag{3}$$

The number of factors of $2$ in $10200!$ is $\frac{10200-8}{2-1}=10192$. The number of factors of $3$ in $10200!$ is $\frac{10200-12}{3-1}=5094$. The number of factors of $7$ in $10200!$ is $\frac{10200-12}{7-1}=1698$.

Since $\left\lfloor\frac{10192}{3}\right\rfloor=3397$, $\left\lfloor\frac{5094}{2}\right\rfloor=2547$, and $\left\lfloor\frac{1698}{1}\right\rfloor=1698$, the maximum value of $n$ so that $\frac{10200!}{504^n}$ is an integer is $n=1698$.

To answer the question asked: For large $n$, the sum of the digits of $n$ is small compared to $n$, so suppose $$d=p_1^{e_1}p_2^{e_2}\dots p_m^{e_m}\tag{4}$$ Following the computations above, the greatest power of $d$ that divides $n!$ is $$\min_k \left\lfloor\frac{n-\sigma_{p_k}(n)}{e_k(p_k-1)}\right\rfloor\tag{5}$$ Ignoring $\sigma_{p_k}(n)$ as negligible, the greatest of $e_k(p_k-1)$ is a strong indicator of which $p_k$ is the constraining factor. In the current case, the greatest of $3(2-1)=3$, $2(3-1)=4$, and $1(7-1)=6$ hints strongly that $7$ is the constraining factor. -

Thanks for the answer Rob!! The last statement should probably have been "In the current case, the greatest of 3(2−1)=3, 2(3−1)=4, and $1(7−1)=6$ hints strongly that 7 is the constraining factor." Well thanks again! Much appreciated! – BumbleBee Mar 25 '12 at 14:32
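The whole computation is easy to mechanize. Here is a small Python sketch (my own illustration, not from the thread) that applies Legendre's formula, which is equivalent to formula (1), to each prime of the divisor and takes the minimum as in formula (5):

```python
def legendre(n, p):
    """Number of factors of the prime p in n!  (Legendre's formula,
    equivalent to (n - sigma_p(n)) / (p - 1) from the answer)."""
    count, q = 0, p
    while q <= n:
        count += n // q
        q *= p
    return count

def max_power(n, d_factored):
    """Largest m such that d**m divides n!, where d_factored maps
    each prime of d to its exponent."""
    return min(legendre(n, p) // e for p, e in d_factored.items())

# 504 = 2^3 * 3^2 * 7
print(max_power(10200, {2: 3, 3: 2, 7: 1}))  # -> 1698
```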
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950000524520874, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/165405-give-equation-plane-parallel-plane-print.html
# give an equation for the plane that is parallel to the plane....

• December 5th 2010, 04:08 PM break

give an equation for the plane that is parallel to the plane....

give an equation for the plane that is parallel to the plane 5x-4y+z=1 and that passes through the point (2,-1,-2). i used the formula f(a,b) + fx(a,b)(x-a) + fy(a,b)(y-b) + fz(a,b)(z-c) and get (-5x+4y+1) + (-5)(x-2) + (4)(y+1) + (1)(z+2) then plug in the points for the rest of the variables -13 + (-5x+10) + (4y+4) + (z+2) -5x+4y+z+3

i'm pretty sure this is wrong and i'm not sure if i used the right formula for this. can anyone explain to me how to do this? is there a simpler way? thanks!

• December 5th 2010, 04:21 PM Plato

You are making it far too hard. $5x-4y+z=5(2)-4(-1)+1(-2)$. DONE!

• December 5th 2010, 04:57 PM SammyS

give an equation for the plane that is parallel to the plane....

Quote: Originally Posted by break Give an equation for the plane that is parallel to the plane 5x-4y+z=1 and that passes through the point (2,-1,-2). i used the formula f(a,b) + fx(a,b)(x-a) + fy(a,b)(y-b) + fz(a,b)(z-c) and get (-5x+4y+1) + (-5)(x-2) + (4)(y+1) + (1)(z+2) then plug in the points for the rest of the variables -13 + (-5x+10) + (4y+4) + (z+2) -5x+4y+z+3 I'm pretty sure this is wrong and i'm not sure if i used the right formula for this. Can anyone explain to me how to do this? is there a simpler way? thanks!

Any plane parallel to the plane, $5x-4y+z=1$ will be of the form: $5x-4y+z=D$, where $D$ is a constant. To find what that constant is, plug in the coordinates of any point in the plane. That's basically what Plato did in one step.

• December 5th 2010, 05:46 PM break

wow... thanks for clearing this up!!
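Making the arithmetic in Plato's one-line answer explicit (not spelled out in the thread): the point $(2,-1,-2)$ fixes the constant as

$$D = 5(2) - 4(-1) + 1(-2) = 10 + 4 - 2 = 12,$$

so the required plane is $5x - 4y + z = 12$.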
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9007269740104675, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Coanalytic_set
# Coanalytic set

In the mathematical discipline of descriptive set theory, a coanalytic set is a set (typically a set of real numbers or more generally a subset of a Polish space) that is the complement of an analytic set (Kechris 1994:87). Coanalytic sets are also referred to as $\scriptstyle\boldsymbol{\Pi}^1_1$ sets (see projective hierarchy).

## References

• Kechris, Alexander S. (1994), Classical Descriptive Set Theory, Springer-Verlag, ISBN 0-387-94374-9
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8553028702735901, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-topics/15941-august-2004-help.html
# Thread:

1. ## August 2004 Help

Hello everyone. My name is Norman and although I'm asking specifically for help on one question now, I may have others later, so please don't get too angry at me if I start asking millions of questions. For now though, I have absolutely no idea in the name of heaven's sake how to do number 15 and would hope that someone on this nice little message board could direct their attention to me at some point before tomorrow at 11:30 a.m. EST. Thanks and see you all around.

2. Originally Posted by Norman Smith Hello everyone. My name is Norman and although I'm asking specifically for help on one question now, I may have others later, so please don't get too pissed at me if I start asking millions of questions. For now though, I have absolutely no idea in the name of heaven's sake how to do number 15 and would hope that someone on this nice little message board could direct their attention to me at some point before tomorrow at 11:30 a.m. EST. Thanks and see you all around. number 15 of what? you haven't posted the question

3. Originally Posted by Jhevon number 15 of what? you haven't posted the question My deepest apologies Jhevon. I should've known better than to come to conclusions, and shall now post the question. http://www.nysedregents.org/testing/...mathbaug04.pdf The answer lies in http://www.nysedregents.org/testing/...tbkeyaug04.pdf And it is number 15. Thanks for any help you or anyone else can offer.

4. Originally Posted by Norman Smith My deepest apologies Jhevon. I should've known better than to come to conclusions, and shall now post the question. http://www.nysedregents.org/testing/...mathbaug04.pdf The answer lies in http://www.nysedregents.org/testing/...tbkeyaug04.pdf And it is number 15. Thanks for any help you or anyone else can offer. $\frac { \left( b^{2n + 1} \right)^3 }{b^n \cdot b^{4n + 3}}$ $= \frac {b^{6n + 3}}{b^n \cdot b^{4n + 3}}$ .......since when we raise a number with a power to a power, we multiply the powers $= \frac {b^{6n + 3}}{b^{5n + 3}}$ ........since when we multiply numbers of the same base, we add the powers $= b^{6n + 3 - 5n - 3}$ ...........since when we divide numbers of the same base, we subtract the power of the base in the denominator from the power of the base in the numerator $= b^{n}$ which is choice 2 So to summarize everything you need to know for this problem: $\left( x^a \right)^b = x^{ab}$ $x^a \cdot x^b = x^{a + b}$ $\frac {x^a}{x^b} = x^{a - b}$

5. ## January 07 I have a question about the January 07 Regents, number 5 the question: http://www.nysedregents.org/testing/mathre/b-107.pdf answer: http://www.nysedregents.org/testing/mathre/bkey-107.pdf i dont understand how i need to find the power of i when its higher than 4

6. i also have a question with number 14 and for number 20 i got the right answer but i dont know how, i partially guessed. i found the other angle to be about 46 degrees so i knew that the other angle couldnt be a right angle. how would i find out the other angle for it? by using ambiguous case? if so how do you do that?

7.
Originally Posted by Jhevon $\frac { \left( b^{2n + 1} \right)^3 }{b^n \cdot b^{4n + 3}}$ $= \frac {b^{6n + 3}}{b^n \cdot b^{4n + 3}}$ .......since when we raise a number with a power to a power, we multiply the powers $= \frac {b^{6n + 3}}{b^{5n + 3}}$ ........since when we multiply numbers of the same base, we add the powers $= b^{6n + 3 - 5n - 3}$ ...........since when we divide numbers of the same base, we subtract the power of the base in the denominator from the power of the base in the numerator $= b^{n}$ which is choice 2

I've never seen a question like that before, but I really thank you for it and see it somewhat more clearly now. I appreciate the help, and if you have any more time, is it possible if you could also help me with number 24 of August 2004. I found both of the answers, but don't see how it is an inequality. I also don't get numbers 26, 31 and 32 and would greatly appreciate help you could offer for those two questions as well.

8. Originally Posted by shanegoeswapow I have a question about the January 07 Regents, number 5 the question: http://www.nysedregents.org/testing/mathre/b-107.pdf answer: http://www.nysedregents.org/testing/mathre/bkey-107.pdf i dont understand how i need to find the power of i when its higher than 4 recall that, by definition: $i = \sqrt {-1}$ and $i^2 = -1$ so for 5. $i^{25} = i^{24} \cdot i = \left( i^2 \right)^{12} \cdot i = (-1)^{12} \cdot i = 1 \cdot i = i$

9. Originally Posted by shanegoeswapow and for number 20 i got the right answer but i dont know how, i partially guessed. i found the other angle to be about 46 degrees so i knew that the other angle couldnt be a right angle. how would i find out the other angle for it? by using ambiguous case? if so how do you do that?

For 14 recall, if $a$ and $b$ are the roots of a quadratic equation, then $(x - a)(x - b) = 0$ (Can you tell me why?) so we are told the roots are $3 + i$ and $3 - i$ so, $(x - (3 + i))(x - (3 - i)) = 0$ .......expand within the brackets $\Rightarrow (x - 3 - i)(x - 3 + i) = 0$ ....now expand in general $\Rightarrow x^2 - 3x + xi - 3x + 9 - 3i - xi + 3i - i^2 = 0$ .....remember, $i^2 = -1$ $\Rightarrow x^2 - 6x + 10 = 0$ ........answer

10. Originally Posted by shanegoeswapow i also have a question with number 14 and for number 20 i got the right answer but i dont know how, i partially guessed. i found the other angle to be about 46 degrees so i knew that the other angle couldnt be a right angle. how would i find out the other angle for it? by using ambiguous case? if so how do you do that? 20 is a strange question. i disagree with the answer given. maybe i'm making an error that someone will soon point out to me. but by the sine rule, angle B is 45.585... which is an acute angle. so i'd say choice (1) is correct. however, they have choice (4) as the right answer. now i'm confused, lol. we'll get back to this later

11. Originally Posted by Norman Smith I've never seen a question like that before, but I really thank you for it and see it somewhat more clearly now. I appreciate the help, and if you have any more time, is it possible if you could also help me with number 24 of August 2004. I found both of the answers, but don't see how it is an inequality. I also don't get numbers 26, 31 and 32 and would greatly appreciate help you could offer for those two questions as well.
For 24: we make a profit when $P(x)>0$ (that's where the inequality comes from--do you see why it is an inequality?), so we simply must solve for that. So, $-x^2 + 120x - 2000 > 0$ $\Rightarrow (-x + 100)(x - 20) > 0$

A product of two factors is positive exactly when both factors have the same sign. Both negative is impossible here (it would require $x > 100$ and $x < 20$ at the same time), so both must be positive: $-x + 100 > 0$ and $x - 20 > 0$, that is, $x < 100$ and $x > 20$. You can confirm this by checking numbers in the regions separated by 20 and 100, say 19, 25 and 101: the inequality holds only for the middle one. Combining the two inequalities, we get $20 < x < 100$ which is the answer

12. Originally Posted by Norman Smith I've never seen a question like that before, but I really thank you for it and see it somewhat more clearly now. I appreciate the help, and if you have any more time, is it possible if you could also help me with number 24 of August 2004. I found both of the answers, but don't see how it is an inequality. I also don't get numbers 26, 31 and 32 and would greatly appreciate help you could offer for those two questions as well. 26 is just asking for the length of the arc HK recall the formula for length of arc: $s = \frac { \theta }{360}2 \pi r$ where $s$ is the length of the arc, $\theta$ is the angle that subtends the arc in degrees, and $r$ is the radius. So, $HK = \frac {9}{360}2 \pi (3954) \approx 621.1$

13. again for my regents i have questions on both 26 and 31. binomial expansion im still a little bit fuzzy on. im not sure when to include 0 when im putting the n term or if that is just for the "middle term" (MATH B REGENTS IN LESS THAN 12 HOURS!!!)

14. I just remembered that Soroban and I did some problems from Aug 2004 before, including 31. see here for the solution to 31 and others Originally Posted by Norman Smith I've never seen a question like that before, but I really thank you for it and see it somewhat more clearly now. I appreciate the help, and if you have any more time, is it possible if you could also help me with number 24 of August 2004. I found both of the answers, but don't see how it is an inequality. I also don't get numbers 26, 31 and 32 and would greatly appreciate help you could offer for those two questions as well. Here's 32 (By the way, you guys really should attempt the problems on your own before viewing my solutions--it makes a whole lot of difference when you do that, since it gets your mind used to thinking about the problems and forming ideas) $\frac { \sin^2 \theta}{ 1 + \cos \theta} = 1$ ....multiply both sides by $1 + \cos \theta$ $\Rightarrow \sin^2 \theta = 1 + \cos \theta$ $\Rightarrow - \sin^2 \theta + \cos \theta + 1 = 0$ .......recall $\sin^2 \theta = 1 - \cos^2 \theta$ $\Rightarrow - \left( 1 - \cos^2 \theta \right) + \cos \theta + 1 = 0$ $\Rightarrow \cos^2 \theta + \cos \theta = 0$ $\Rightarrow \cos \theta ( \cos \theta + 1 ) = 0$ $\Rightarrow \cos \theta = 0 \mbox { or } \cos \theta + 1 = 0$ $\Rightarrow \theta = 90^{ \circ}, 270^{ \circ} \mbox { or } \theta = 180^{ \circ}$ for $0^{ \circ} \leq \theta \leq 360^{ \circ}$ However, $180^{ \circ}$ is extraneous (we see that it doesn't work when we plug it into the original equation), so the answer is just $90^{ \circ} \mbox { and } 270^{ \circ}$

15. Originally Posted by shanegoeswapow again for my regents i have questions on both 26 and 31. binomial expansion im still a little bit fuzzy on.
im not sure when to include 0 when im putting the n term or if that is just for the "middle term" $(a + b)^n = \sum_{k=0}^{n} {n \choose k} a^{n-k} b^{k}$ since k starts from 0, the fourth term is when k is 3 so the fourth term is given by: ${5 \choose 3} (2x)^2 (-y)^3 = -40x^2 y^3$ (MATH B REGENTS IN LESS THAN 12 HOURS!!!) Calm down! besides, shouldn't you be sleeping now? i told you to get a lot of rest tonight didn't i?
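For anyone who wants to double-check the worked answers in this thread, here is a short SymPy script (my own addition; it assumes, as the computation in post 15 implies, that the binomial question is the fourth term of $(2x-y)^5$):

```python
from sympy import symbols, I, expand, solve, binomial

x = symbols('x')

# Post 8: powers of i cycle with period 4, so i^25 = i
print(I**25)                                  # -> I

# Post 9: the quadratic with roots 3 + i and 3 - i
print(expand((x - (3 + I)) * (x - (3 - I))))  # -> x**2 - 6*x + 10

# Post 11: where is P(x) = -x^2 + 120x - 2000 positive?
print(solve(-x**2 + 120*x - 2000 > 0, x))     # -> (20 < x) & (x < 100)

# Post 15: fourth term of (2x - y)^5 has coefficient C(5,3) * 2^2 * (-1)^3
print(binomial(5, 3) * 2**2 * (-1)**3)        # -> -40
```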
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 52, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525942802429199, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/02/12/
# The Unapologetic Mathematician

## Intertwinors from Semistandard Tableaux Span, part 2

We continue our proof that the intertwinors $\bar{\theta}_T:S^\lambda\to M^\mu$ that come from semistandard tableaux span the space of all such intertwinors. This time I assert that if $\theta\in\hom(S^\lambda,M^\mu)$ is not the zero map, then there is some semistandard $T$ with $c_T\neq0$.

Obviously there are some nonzero coefficients; if $\theta(e_t)=0$, then

$\displaystyle\theta(e_{\pi t})=\theta(\pi e_t)=\pi\theta(e_t)=0$

which would make $\theta$ the zero map. So among the nonzero $c_T$, there are some with $[T]$ maximal in the column dominance order. I say that we can find a semistandard $T$ among them.

By the results yesterday we know that the entries in the columns of these $T$ are all distinct, so in the column tabloids we can arrange them to be strictly increasing down the columns. What we must show is that we can find one with the rows weakly increasing.

Well, let's pick a maximal $T$ and suppose that it does have a row descent, which would keep it from being semistandard. Just like the last time we saw row descents, we get a chain of distinct elements running up the two columns:

$\displaystyle\begin{array}{ccc}a_1&\hphantom{X}&b_1\\&&\wedge\\a_2&&b_2\\&&\wedge\\\vdots&&\vdots\\&&\wedge\\a_i&>&b_i\\\wedge&&\\\vdots&&\vdots\\\wedge&&b_q\\a_p&&\end{array}$

We choose the sets $A$ and $B$ and the Garnir element $g_{A,B}$ just like before. We find

$\displaystyle g_{A,B}\left(\sum\limits_Tc_TT\right)=g_{A,B}\left(\theta(e_t)\right)=\theta\left(g_{A,B}(e_t)\right)=\theta(0)=0$

The generalized tableau $T$ must appear in $g_{A,B}(T)$ with unit coefficient, so to cancel it off there must be some other generalized tableau $T'\neq T$ with $T'=\pi T$ for some $\pi$ that shows up in $g_{A,B}$. But since this $\pi$ just interchanges some $a$ and $b$ entries, we can see that $[T']\triangleright[T]$, which contradicts the maximality of our choice of $T$. Thus there can be no row descents in $T$, and $T$ is in fact semistandard.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230414032936096, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/04/28/measurable-subspaces-i/?like=1&source=post_flair&_wpnonce=39170bc4ce
# The Unapologetic Mathematician

## Measurable Subspaces I

WordPress seems to have cleaned up its mess for now, so I'll try to catch up.

When we're considering the category of measurable spaces it's a natural question to ask whether a subset $X_0\subseteq X$ of a measurable space $(X,\mathcal{S})$ is itself a measurable space in a natural way, and if this constitutes a subobject in the category. Unfortunately, unlike we saw with topological spaces, it's not always possible to do this with measurable spaces. But let's see what we can say.

Every subset comes with an inclusion function $\iota:X_0\hookrightarrow X$. If this is a measurable function, then it's clearly a monomorphism; our question comes down to whether the inclusion is measurable in the first place. And so — as we did with topological spaces — we consider the preimage $\iota^{-1}(M)$ of a measurable subset $M\subseteq X$. That is, what points $x\in X_0$ satisfy $\iota(x)\in M$? Clearly, these are the points in the intersection $X_0\cap M$. And so for $\iota$ to be measurable, we must have $X_0\cap M$ be measurable as a subset of $X_0$.

An easy way for this to happen is for $X_0$ itself to be measurable as a subset of $X$. That is, if $X_0\in\mathcal{S}$, then for any measurable $M\in\mathcal{S}$, we have $X_0\cap M\in\mathcal{S}$. And so we can define $\mathcal{S}_0$ to be the collection of all measurable subsets of $X$ that happen to fall within $X_0$. That is, $M\in\mathcal{S}_0$ if and only if $M\in\mathcal{S}$ and $M\subseteq X_0$.

If $X$ is a measure space, with measure $\mu$, then we can define a measure $\mu_0$ on $\mathcal{S}_0$ by setting $\mu_0(M)=\mu(M)$. This clearly satisfies the definition of a measure.

Conversely, if $(X_0,\mathcal{S}_0,\mu_0)$ is a measure space and $X_0\subseteq X$, we can make $X$ into a measure space $(X,\mathcal{S},\mu)$! A subset $M\subseteq X$ is in $\mathcal{S}$ if and only if $M\cap X_0\in\mathcal{S}_0$, and we define $\mu(M)=\mu_0(M\cap X_0)$ for such a subset $M$.

As a variation, if we already have a measurable space $(X,\mathcal{S})$ we can restrict it to the measurable subspace $(X_0,\mathcal{S}_0)$. If we then define a measure $\mu_0$ on $(X_0,\mathcal{S}_0)$, we can extend this measure to a measure $\mu$ on $(X,\mathcal{S})$ by the same definition: $\mu(M)=\mu_0(M\cap X_0)$, even though this $\mathcal{S}$ is not the same one as in the previous paragraph.

Posted by John Armstrong | Analysis, Measure Theory
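Spelling out the claim that the extended set function in the converse direction is indeed a measure (a small check, not in the original post): for pairwise disjoint $M_i\in\mathcal{S}$,

$$\mu\left(\biguplus_{i=1}^\infty M_i\right)=\mu_0\left(\left(\biguplus_{i=1}^\infty M_i\right)\cap X_0\right)=\mu_0\left(\biguplus_{i=1}^\infty (M_i\cap X_0)\right)=\sum_{i=1}^\infty\mu_0(M_i\cap X_0)=\sum_{i=1}^\infty\mu(M_i),$$

since the sets $M_i\cap X_0$ are pairwise disjoint and lie in $\mathcal{S}_0$ by the definition of $\mathcal{S}$.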
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 44, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271495938301086, "perplexity_flag": "head"}
http://mathoverflow.net/questions/6704/how-to-think-about-cm-rings/6724
## How to think about CM rings?

There are a few questions about CM rings and depth.

1. Why would one consider the concept of depth? Is there any geometric meaning associated to that? The consideration of regular sequence is okay to me. (currently I'm regarding it as a generalization of not-a-zero-divisor that's needed to carry out induction arguments, e.g. as in $\dim \frac{M}{(a_1,\cdots,a_n)M} = \dim M - n$ for an $M$-regular sequence $a_1,\cdots,a_n$; correct me if I'm wrong!) But I don't understand why the length of a maximal regular sequence is of interest. Is it merely due to some technical consideration in cohomology that we want many $Ext$ groups to vanish?

2. What do CM rings mean geometrically? As I read from Eisenbud's book, there doesn't seem to be an exact geometric concept that corresponds to it. Nonetheless I would still like to know about any geometric intuition of CM rings. I know that it should be locally equidimensional. Some examples of CM rings come from complete intersections (I read this from wiki). But what else?

3. Why do we care about CM rings? If I understand it correctly, CM rings <=> unmixedness theorem holds for every ideal for a noetherian ring, which should mean every closed subscheme has equidimensional irreducible components (and there are no embedded components). This looks quite restrictive.

Thanks! -

## 5 Answers

"Life is really worth living in a Noetherian ring R when all the local rings have the property that every s.o.p. is an R-sequence. Such a ring is called Cohen-Macaulay (C-M for short).": Hochster, 1978

Section 3 of that paper is devoted to explaining what it "really means" to be Cohen-Macaulay. It begins with a long subsection on invariant theory, but then gets to some algebraic geometry that will interest you. In particular, he points out that if $R$ is a standard graded algebra over a field, then it is a module-finite algebra over a polynomial subring $S$, and that $R$ is Cohen-Macaulay if and only if it is free as an $S$-module. Equivalently, the scheme-theoretic fibers of the finite morphism $\mathrm{Spec}\ R \to \mathrm{Spec}\ S$ all have the same length.

At the end of section 3, Hochster explains that the CM condition is exactly what is required to make intersection multiplicity "work correctly": If $X$ and $Y$ are CM, then you can compute the intersection multiplicity of $X$ and $Y$ without all those higher $\mathrm{Tor}$s that Serre had to add to the definition. He gives lots of examples and explains "where Cohen-Macaulayness comes from" (or doesn't) in each one. The whole thing is eminently readable and highly recommended. -

If I remember correctly, CM is equivalent to asking the dualizing complex (the generalization of the canonical line bundle) to be a sheaf, rather than a more general complex (while Gorenstein is asking for it to be in fact a line bundle). In other words, we're classifying singularities according to how reasonable a theory of volume forms they admit. -

Geometrically, depth is measuring "dimension" via hypersurfaces. The set of zero-divisors is the union of the associated primes, so to say that an element x is a non-zero-divisor is to say that it is not contained in any associated prime.
Thus the hypersurface $V(x)$ does not intersect $M$ in a component. The other condition on a regular sequence is that $xM\neq M$, which amounts to saying that the hypersurface $V(x)$ must intersect $M$ somewhere. So basically, you're cutting down $M$ by hypersurfaces. Since they can't intersect in a component, they actually do cut it down by some amount, and since they must intersect somewhere they aren't throwing everything away at once. This is a very loose description, but I don't have time right now to make it more precise.

If you look at quotients of polynomial rings you can actually see this at work. Here you can compute depth by drawing pictures: take an ideal $I$ in $k[x,y,z]$ say, and look at $V(I)$. Find some hypersurface (a plane in this case) that intersects $V(I)$ but not in a component. Then repeat on this intersection. Using this you should be able to find the classic example of a regular sequence that does not stay regular under permutation. You can also convince yourself that in a local ring, all permutations are regular.

In this interpretation, CM rings are exactly those for which the dimension can be measured by using hypersurfaces in this way. I'm somewhat rushing to catch a flight (bad time to look at mathoverflow!), so there may be some mistakes but the idea is sound. -

I guess I made some silly mistakes somewhere but can someone help me out? Using Justin's idea I tried to work with $Spec\ k[X,Y,Z]/(X,Y)\cap(Z)$. This shouldn't be CM at the origin because it is not locally equidimensional. Its Krull dimension should be 2. But when I compute the depth, I try to first use Y+Z = 0 to cut it, so that only the X-axis remains, and then use X+Z = 0 to cut it, so that the origin remains. This should then give me a regular sequence of length 2. But then this local ring can't be CM... – Ho Chung Siu Nov 25 2009 at 4:11

5 That ring is isomorphic to $k[X,Y,Z]/(XZ,YZ)$. Setting $Y+Z=0$ gives the quotient ring $k[X,Z]/(XZ,Z^2)$, in which $X+Z$ is not a nonzerodivisor. The big difference here is that it's not enough that the hypersurface "intersects $V(I)$ but not in a component". That corresponds to avoiding the minimal primes of the coordinate ring. What you need for a nonzerodivisor is to avoid the *associated* primes of the ring. For the cut-down ring $k[X,Z]/(XZ,Z^2)$, there is a unique minimal prime $(Z)$, but $(X,Z)$ is also an associated prime. – Graham Leuschke Nov 25 2009 at 14:19

Oh yes I forgot about the embedded primes.. thanks! – Ho Chung Siu Nov 25 2009 at 14:27

One should care about CM rings and schemes for example because they have good duality properties; see for example Serre's duality theorem in Hartshorne III.7. There are Serre's $S_n$ properties generalizing CM. $S_1$ means "no embedded components" (if X is reduced, this is automatic of course), and $S_2$ means "$S_1$ and X is saturated in codimension 2". Both of these properties have a clear geometric meaning. Now you can consider CM to be "this good, and even better". -

Any links for Serre's S_n properties? Does CM mean to have property S_i for every i? – Gil Kalai Nov 28 2009 at 18:58

1 $depth(R_p) \geq \min\{\dim R_p, n\}$ for any $p \in Spec(R)$. Being CM does mean S_i for any i. – Hailong Dao Nov 29 2009 at 4:12

One way I think about Cohen-Macaulayness (probably not in largest generality but at least in context relevant to combinatorics) is as follows. Think first of the ring of symmetric polynomials in n variables.
A remarkable fact from first-year algebra is that this ring is a polynomial ring in some other variables, the elementary symmetric polynomials. Being a polynomial ring is rare (but this is a sort of role model). Being Cohen-Macaulay comes close. A Cohen-Macaulay ring $M$ can be described as a direct sum $M = \bigoplus_i \eta_i R$, where $R$ is a polynomial ring (whose variables are the elements of a system of parameters) and the $\eta_i$ are elements. Being a direct sum is important here. For graded rings such a description has remarkable combinatorial consequences. -

2 What do you mean by the direct sum here? – Martin Brandenburg Feb 9 2011 at 20:26
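To make this direct-sum description (and Hochster's freeness criterion quoted in the first answer) concrete, here is the smallest standard example, added purely as an illustration: take $R' = k[x,y]$ over its subring of symmetric polynomials $S = k[e_1,e_2] = k[x+y,\,xy]$. Since $x^2 = e_1 x - e_2$, every polynomial reduces to one that is at most linear in $x$ over $S$, and in fact

$$k[x,y] \;=\; k[e_1,e_2] \,\oplus\, x\cdot k[e_1,e_2],$$

a free $S$-module on the basis $\{1, x\}$. In the notation above, $\eta_1 = 1$, $\eta_2 = x$, and $e_1, e_2$ form a system of parameters.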
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9405179619789124, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/169742-factoring-cubic.html
# Thread: 1. ## Factoring a cubic I tried factoring by grouping and everything else I could think of but I can't seem to end up with a solution. $r^3 - 6r^2 + 11r - 6 = 0$ Would there be a systematic way of factoring this or is it just guesswork? The final factored form is: $(r-1)(r-2)(r-3)$ Any help is appreciated! 2. You can use the rational root theorem to find the first root since it must be a factor of -6. Since 1 is always easy to check start there: $f(1) = 1 - 6+11-6 = 0$. Hence r = 1 is a root. By the factor theorem, if 1 is a root then (r-1) must be a factor of the cubic $r^3-6r^2+11r-6 = (r-1)(Ar^2+Br+C)$ where A, B and C are constants to be found either by comparing coefficients or by long division. Once you've found the quadratic remember to check if it factors (it obviously does judging by the answer)
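To make "systematic way" concrete, here is a small Python sketch (my own illustration, not from the thread; it assumes integer coefficients with a nonzero constant term) that enumerates the candidates given by the rational root theorem and tests each one by Horner evaluation:

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of a polynomial with integer coefficients
    (leading coefficient first). By the rational root theorem, any
    root p/q has p dividing the constant term and q dividing the
    leading coefficient."""
    lead, const = coeffs[0], coeffs[-1]
    candidates = {sign * Fraction(p, q)
                  for p in divisors(const)
                  for q in divisors(lead)
                  for sign in (1, -1)}
    def value(r):                # Horner evaluation at r
        v = Fraction(0)
        for c in coeffs:
            v = v * r + c
        return v
    return sorted(r for r in candidates if value(r) == 0)

print(rational_roots([1, -6, 11, -6]))  # -> roots 1, 2, 3
```

Once one root $r_0$ is found this way, dividing by $(r - r_0)$ as described above reduces the cubic to a quadratic, so at most one round of "guesswork" over a finite candidate list is ever needed.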
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547083377838135, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/4139/unsystematic-and-systematic-risk-of-a-portfolio?answertab=active
# Unsystematic and systematic risk of a portfolio I have 8 country stock indexes and 1 world stock index. I do not actually have time series data but I'm given the following data: • $\mu$, the vector of expected future returns for all 8 country indexes and world index (9 indexes). • $\Omega$, the variance covariance matrix of all 9 indexes. I'm forming a MV efficient and Michaud resampling portfolio over the 8 country indexes - the world index is not considered an investable asset class. I want to compare the two portfolios by looking at the systematic risk and unsystematic risk of both portfolios w.r.t. the world market index. So we have the two weights vectors produced by the two methodologies: • $_1w$ (MV) • $_2w$ (REF, Resampled Efficient Frontier). We can calculate the betas of both portfolios by going $_j\beta_p = \sum_{i=1}^8 (_jw_i )\frac{\sigma_{i,world}}{\sigma^2_{world}}$ for $j = 1,2$. Being able to sum the coefficients like this follows from OLS. How do I get from here to the unsystematic and systematic risk of the portfolios? I can't get the error from the specification that generates the betas so it seems I'm stuck? - ## 1 Answer Assuming those are arithmetic returns and covariances at the horizon, calculate a $9\times1$ vector containing the betas with respect to the world index using the covariance matrix, call it $\beta$. The covariance resulting from the world index can be described as $\beta\sigma_{world}^{2}\beta'$. The matrix $\Sigma_{residual}\equiv\Omega-\beta\sigma_{world}^{2}\beta'$ will then reflect the residual covariance. Note that this residual covariance matrix is not necessarily a diagonal matrix, as some CAPM-like models would require. To get a measure of the residual risk of the portfolio, you would then calculate $w'\Sigma_{residual}w$. - Are you aware of any papers that try an approach similar to what I've suggested/you've detailed? – user2921 Sep 19 '12 at 0:17 – John Sep 19 '12 at 3:14
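A NumPy sketch of the recipe in the answer (the function name and the convention that the world index is the last of the 9 series are my own assumptions):

```python
import numpy as np

def beta_decomposition(omega, w, world=8):
    """Split portfolio risk with respect to a market index.
    omega : (9, 9) covariance matrix of the 8 countries + world index
    w     : (9,) portfolio weights, with 0 in the world slot
    Returns (portfolio beta, systematic variance, residual variance)."""
    var_world = omega[world, world]
    betas = omega[:, world] / var_world           # beta_i = cov(i, world) / var(world)
    sigma_resid = omega - np.outer(betas, betas) * var_world
    beta_p = w @ betas                            # weighted sum of betas, as in the question
    return beta_p, beta_p**2 * var_world, w @ sigma_resid @ w
```

The two variance terms add back up to the total portfolio variance $w'\Omega w$, which is a convenient consistency check when comparing the MV and REF weight vectors.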
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8915687799453735, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2841/how-to-obtain-true-probabilities-from-black-scholes?answertab=votes
# How to obtain true probabilities from Black-Scholes?

How to obtain true probabilities from Black-Scholes option pricing equation? Suppose that we know the risk-adjusted discount rate for the underlying asset (the drift term in the physical measure) and the risk-free rate. The task is to find a real (not risk neutral) expected payoff for a call option. -

## 3 Answers

The true probabilities underlying the B-S equation are actually postulated. The pricing process is assumed to follow the stochastic process $d S_t =\mu S_t d t + \sigma S_t dW_t$, where $W_t$ is the Wiener process. It means that (for simplicity, let's talk about a European call) $\ln S_T$ is distributed as $N(\ln(S_0)+(\mu-\frac{1}{2}\sigma^2)T, \sigma^2 T)$

Correct me if I'm wrong, you'd like to find $E_P(C) = e^{-rT} E_P[\max(S_T-K,0)]$, where P is a "physical" probability measure. Just to make sure, this expected value won't represent the fair price of the option. If my calculations are correct, this expected value is equal to $S_0 N(d_1(\mu)) e^{(\mu-r)T} - K N(d_2(\mu))e^{-rT}$ where the terms $d_1$, $d_2$ are from the B-S formula, with the adjustment to replace the risk-free rate $r$ there with the "risky" $\mu$.

Now, I write down some derivation steps, please check them. Let's rewrite the expectation as follows, $E_P[...]=E_P[\textbf{I}(S_T\geq K)(S_T-K)]$, where $\textbf{I}(.)$ is the indicator function. Notice that the inequality $S_T\geq K$ is equivalent to $\ln S_T \geq \ln K$. Then, $... = E_P[S_T \textbf{I}(\ln S_T \geq \ln K)]-E_P[K \textbf{I}(\ln S_T \geq \ln K)]$ $= E_P[e^{\ln S_T} \textbf{I}(\ln S_T \geq \ln K)]- K N(d_2(\mu))$

To calculate the first term, use the following lemma: if $X$ is distributed as $N(a,s^2)$ then $E(e^X\textbf{I}(l<X))=e^{a+\frac{1}{2}s^2} N\left(\frac{a+s^2-l}{s}\right)$

Take $\ln S_T$ as $X$ and $l$ as $\ln K$, obtain $E_P[S_T \textbf{I}(\ln S_T \geq \ln K)]=e^{\ln S_0 + (\mu - \frac{1}{2}\sigma^2)T + \frac{1}{2}\sigma^2 T}N(\frac{\ln S_0 + (\mu-\frac{1}{2}\sigma^2)T + \sigma^2 T - \ln K}{\sigma\sqrt T}) = S_0 e^{\mu T} N(d_1(\mu))$

Finally, discount it with the risk-free rate $r$ and we get the result. -

You cannot get "true probabilities" (empirical distribution) from the BS model. Option price is the required initial investment, which is a risk-neutral expectation of the payout. "True probabilities" are irrelevant in Black-Scholes. However, you can estimate the risk-neutral probability distribution (i.e. implied risk-neutral density) of the stock returns through the Breeden-Litzenberger formula. -

You cannot deduce the real-world probabilities from the option prices. It may seem strange, but here is a simple example which might help you to understand. Suppose that everyone in the market agrees on the real-world probabilities, and that they are not changing for any external reason. Then suppose that the investment board of a large pension fund decides that they need to increase the amount of options they have bought because they get a feeling that they would like to hold more protection against an adverse move (and since most pension funds are net long equities, this is likely to mean that they want to buy out-of-the-money equity put options to protect against a sell off in the equity market). The pension fund will come to the dealers (investment banks probably) and will buy a whole load of put options, say. Naturally the price in the market will go up (simple law of supply/demand, and demand has increased), which implies that the implied vols will go up.
In summary: no change in the real-world probabilities, but a big change in the implied volatilities which will in turn lead to a change in the implied underlying probability distribution. -
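A small Python sketch of the first answer's closed form, with a Monte Carlo cross-check (the parameter values are arbitrary illustrations, not from the question):

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def discounted_real_world_call(S0, K, T, r, mu, sigma):
    """e^{-rT} * E_P[max(S_T - K, 0)] under GBM with physical drift mu;
    reduces to the Black-Scholes price when mu == r."""
    d1 = (log(S0 / K) + (mu + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * exp((mu - r) * T) * N(d1) - K * exp(-r * T) * N(d2)

# Monte Carlo cross-check with illustrative parameters
S0, K, T, r, mu, sigma = 100.0, 105.0, 1.0, 0.02, 0.08, 0.25
z = np.random.default_rng(0).standard_normal(1_000_000)
ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
print(exp(-r * T) * np.maximum(ST - K, 0.0).mean())        # simulation
print(discounted_real_world_call(S0, K, T, r, mu, sigma))  # closed form
```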
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492180347442627, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/155753-help-question.html
# Thread:

1. ## Help with question~~

Hi everyone, I need help with answering a question:

Question: A pseudo-random number generator generates uniformly distributed random variables Xi on the interval [0,1]. This random number generator is to be used to produce random variables Yi, which have a Pareto distribution, described by

Probability density function: p(y) = 0 where y < k p(y) = ak^a/y^(a+1) where y >= k

Cumulative distribution function: P(Y<=y) = 0 where y < k P(Y<=y) = 1 - (k/y)^a where y >= k

with parameters a = 1.5 and k = 1.0

What value of Yi would correspond to an Xi value of 0.821? Explain your derivations.

==end of question==

Any help with this would be much appreciated. Thanks

2. Originally Posted by freakmoister Hi everyone, I need help with answering a question: Question: A pseudo-random number generator generates uniformly distributed random variables Xi on the interval [0,1]. This random number generator is to be used to produce random variables Yi, which have a Pareto distribution, described by Probability density function: p(y) = 0 where y < k p(y) = ak^a/y^(a+1) where y >= k Cumulative distribution function: P(Y<=y) = 0 where y < k P(Y<=y) = 1 - (k/y)^a where y >= k with parameters a = 1.5 and k = 1.0 What value of Yi would correspond to an Xi value of 0.821? Explain your derivations. ==end of question== Any help with this would be much appreciated. Thanks

Use the probability integral transform theorem: Suppose that Y is a continuous random variable with pdf f(y) and continuous cdf F(y). Suppose that X is a continuous standard uniform random variable. Then $V = F^{-1}(X)$ is a random variable with the same probability distribution as Y.

If you need more help, please show what you've tried and say where you get stuck.

3. Hi, thanks for the tip, but I'm not sure how to apply it in this case. Normally the cdfs/pdf would be y=f(x) but in this case the function has the variable y in it, so I'm not sure how to change the variables and do the inverse. I tried just changing y to x such that I get the following P(Y<=y) = 1 - (k/y)^a ---> P(X<=x) = 1 - (k/x)^a = 1 - (1/x)^1.5 (subbing in values) so if y needs to be bigger than k to use the function then x must be similar therefore when Xi = 0.821 then the corresponding Yi = 0??? Doesn't look right because it seems too simple and I've not even used a lot of the given values. When I sub in x=0.821 I get -0.344 which is not right.

I'm not really sure how to do the transformation. When you say V=F^(-1)(x) (i guess it refers to the cdf) does it mean I just replace all x within the function with y and replace y with x? Say for example Pr[Y<y] = F(x) = 2x-3 then F^(-1)(x) becomes ---> Pr[X<x] = F(y) = 2y-3 Then rearrange the function to be y = (x+3)/2 is this how it works??
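For reference, the inverse the thread is circling around can be written down directly: set x = F(y) and solve for y. A small Python sketch of the resulting sampler (my own addition, using the given a = 1.5 and k = 1.0):

```python
def pareto_from_uniform(x, a=1.5, k=1.0):
    """Inverse-transform sampling: solve x = F(y) = 1 - (k / y)**a
    for y, giving y = k / (1 - x)**(1 / a)."""
    return k / (1.0 - x) ** (1.0 / a)

print(pareto_from_uniform(0.821))  # -> 3.149... (about 3.15)
```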
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8588814735412598, "perplexity_flag": "middle"}
http://openwetware.org/index.php?title=Biomod/2011/Caltech/DeoxyriboNucleicAwesome/Simulation&oldid=521965
# Biomod/2011/Caltech/DeoxyriboNucleicAwesome/Simulation

# Simulations

## Overview

Our proposed sorting mechanism depends very heavily on a particular random-walking mechanism that has not been demonstrated in the literature before. The verification of this mechanism is thus a vital step in our research. Verification of the random walk in one dimension is fairly straightforward: as discussed in <LINK TO THE EXPERIMENTAL DESIGN SECTION>, a one-dimensional track is easy to construct, and will behave like a standard 1D random walk, showing an average translation on the order of $n^{\frac{1}{2}}$ after n steps. Thus, we should expect the time it takes to get to some specific level of fluorescence to be proportional to the square of the number of steps we start the walker from the irreversible substrate. If we can, in an experiment, record the fluorescence over time when the walker is planted at different starting points and show that that fluorescence varies by this relationship, we'll have fairly certainly verified one-dimensional random walking.

Our particular case of 2D random walking, however, is not as easily understood, especially considering the mobility restrictions (ability to move to only 4 of 6 surrounding locations at any particular time) of our particular walker. As a control for the verification of 2D random walking, though, we still need to get an idea of how long the random walk should take, and how that time will change as we start the walker at different points on the origami. We opt to do this by simulating the system with a set of movement rules derived from our design. We also use the same basic simulation (with a few alterations and extra features) to simulate our entire sorting system in a one-cargo, one-goal scenario, to give us some rudimentary numbers on how long sorting should take, with one vs multiple walkers.

Basic parameters and assumptions:

• The unit of time is the step, which is the time it takes a walker to take a step given four good opposite track locations (good locations to step to) around it.
• The walkable tracks are given coordinates like a grid (which shifts the even columns up by 0.5). The bottom-left is <1, 1>, the top-left <1, 8>, and the bottom-right <16, 1>.
• Movement rules are based on column:
• In even columns, a walker can move in directions <0, 1>, <0, -1>, <1, 0>, <-1, -1>.
• In odd columns, a walker can move in directions <0, 1>, <0, -1>, <-1, 0>, <1, 1>.

An illustration of the grid and motion rules used in the simulation. The bottom-left is the origin (<1,1> because MATLAB indexes by 1). The 2D platform, including track A (red), track B (blue), the marker (tan), cargo (gold), and goal (green), is shown on the left. The grid on the right -- the grid corresponding to our numbering system and representing viable track for a random walk -- is created by shifting even columns up by 0.5. This arrangement (which is, in essence, a visualization tool) reveals through the vertical symmetry of the arrangement that movement rules are going to vary by column only. The valid moves in even and odd columns shown on the left are mapped onto the grid on the right to derive the moveset listed above.

## MATLAB Code

At the core of the simulation is a function which runs one random walk on an origami of specified size.
It can run in both a cargo-bearing (one-cargo one-goal) and a purely random-walk mode. The former has cargo positions corresponding to our particular origami pre-programmed and starting with multiple (specified by user) walkers at random locations on the origami, and terminates when all of the cargos have been "sorted" to the goal location (the x axis). The latter runs one walker starting at a specified location, and terminates when that walker reaches the specified irreversible track location. The function returns a log of all walkers positions over time, a log reporting when cargos were picked up and dropped off, and a count of the number of steps the simulation took. This function is utilized by separate cargo-bearing and random-walk data collection programs that call the function many times over a range of parameters. The function code (saved as randomWalkFunction.m): ```function [log, cargoLog, steps] = randomWalkFunction(x, y, length, ... numWalkers, startPos, endPos, cargoBearing, restricted, error) %x: Width y: Height %length: max # of steps to run simulation %numWalkers = number of walkers to simulate in cargoBearing state %startPos = starting position for walker in randomwalk state %endPos = irreversible track location in randomwalk state %cargoBearing = running cargoBearing (1) vs randomWalking (0) %restricted = whether we're paying attention to borders %error = the chance of the failure of any single track %Random walking cargo pickup/dropoff simulation %for origami tile, x (horizontal) by y (vertical) dim. %Locations index by 1. x+ = right, y+ = up % Gregory Izatt & Caltech BIOMOD 2011 % 20110615: Initial revision % 20110615: Continuing development % Added simulation for cargo pickup/dropoff % Adding support for multiple walkers % 20110616: Debugging motion rules, making display better % 20110616: Modified to be a random walk function, to be % called in a data accumulator program % 20110628-30: Modified to take into account omitted positions % , new probe layout, and automatic halting when % starting on impossible positions. % 20110706: Fixed walker collision. It detects collisions properly % now. % 20110707: Adding support for errors -- cycles through and % omits each track position at an input error rate %Initialize some things: %Cargo positions: cargoPos = [[3, 5]; [9, 5]; [15, 5]; [7, 7]; [11, 7]]; filledGoals = []; omitPos = [[3, 6]; [7, 8]; [8, 5]; [11, 8]; [15, 6]]; steps = 0; hasCargo = zeros(numWalkers); sorted = 0; trackAPoss = [0, 1; 0, -1; 1, 0; -1, -1];  %Movement rules, even column trackBPoss = [0, 1; 0, -1; -1, 0; 1, 1]; %'', odd column log = zeros(length, 2*numWalkers + 1); cargoLog = []; collisionLog = []; %Walkers: % Set position randomly if we're doing cargo bearing simulation, % or set to supplied startPos if not. if cargoBearing currentPos = zeros(numWalkers, 2); for i=1:numWalkers done = 0; while done ~= 1 currentPos(i, :) = [randi(x, 1), randi(y, 1)]; done = checkPossible(numWalkers, currentPos, omitPos, ... cargoPos); end end else numWalkers = 1; %Want to make sure this is one for this case currentPos = startPos; if checkPossible(numWalkers, currentPos, omitPos, ... 
            cargoPos) ~= 1
        disp('Invalid start position.'); %Report the problem (a bare quoted
                                         %string with a semicolon printed nothing)
        cargoLog = [];
        steps = -1;
        return
    end
end

%Error: If there's a valid error rate, go omit some positions:
if error > 0
    for xPos=1:x
        for yPos=1:y
            %Only omit if it's not already blocked by something
            if checkPossible(0, [xPos, yPos], omitPos, cargoPos)
                if rand <= error
                    omitPos = [omitPos; [xPos, yPos]];
                end
            end
        end
    end
end

%Convenience:
numOmitPos = size(omitPos, 1);
numCargoPos = size(cargoPos, 1);

%Main loop:
for i=1:length
    for walker=1:numWalkers
        %Add current pos to log
        log(steps + 1, 2*walker-1:2*walker) = currentPos(walker, :);
        %Update pos to randomly chosen neighbor -- remember,
        %these are the only valid neighbors:
        % (0, +1), (0, -1)
        % IF x%2 = 0: (+1, 0), (-1, -1)
        % ELSE:       (-1, 0), (+1, +1)
        temp = randi(4, 1);
        if (mod(currentPos(walker, 1),2) == 0)
            newPos = currentPos(walker, :) + trackAPoss(temp, :);
        else
            newPos = currentPos(walker, :) + trackBPoss(temp, :);
        end
        %If we tried to move onto the bottom two spots (in terms of y)
        %on an even column (i.e. a goal), we drop off cargo if we had it
        %and there wasn't one there already.
        %% Specific: 8th column has no goals! It has track instead.
        if newPos(2) <= 2 && mod(newPos(1),2) == 0 && newPos(1) ~= 8
            if cargoBearing && hasCargo(walker) == 1
                %Drop cargo, increment cargo-dropped-count, but
                %only if there isn't already a cargo here
                temp = size(filledGoals);
                match = 0;
                for k=1:temp(1)
                    if filledGoals(k, :) == newPos
                        match = 1;
                        break
                    end
                end
                if match ~= 1
                    hasCargo(walker) = 0;
                    cargoLog = [cargoLog; steps, walker];
                    sorted = sorted + 1;
                    filledGoals = [filledGoals; newPos];
                end
            end
            %Don't move
            newPos = currentPos(walker, :);
        end
        %General out-of-bounds case without cargo drop:
        if restricted && ((newPos(1) > x || newPos(1) < 1 || ...
                newPos(2) > y || newPos(2) < 1))
            %Don't go anywhere
            newPos = currentPos(walker, :);
        end
        %Hitting cargos case:
        for k=1:numCargoPos
            if cargoPos(k, :) == newPos
                %Remove the cargo from the list of cargos and "pick up"
                %if you don't already have a cargo
                if hasCargo(walker) == 0 && cargoBearing
                    cargoPos(k, :) = [-50, -50];
                    hasCargo(walker) = 1;
                    cargoLog = [cargoLog; steps, walker];
                end
                %Anyway, don't move there
                newPos = currentPos(walker, :);
            end
        end
        % Already on irrev. cargo case:
        if (currentPos(walker, :) == endPos)
            return
        end
        %Hitting other walkers case:
        if numWalkers > 1
            for k = 1:numWalkers
                if all(newPos == currentPos(k, :)) && k ~= walker
                    newPos = currentPos(walker, :);
                    collisionLog = [collisionLog; newPos, walker, k];
                end
            end
        end
        %Hitting the omitted positions case:
        %If we have any position matches with the "omitted" list,
        %just don't go there.
        match = 0;
        for k=1:numOmitPos
            if omitPos(k, :) == newPos
                match = 1;
            end
        end
        if match == 1
            newPos = currentPos(walker, :);
        end
        %Finally actually update the position
        currentPos(walker, :) = newPos;
    end
    % Step forward, update log
    steps = steps + 1;
    log(steps, 2*numWalkers + 1) = steps - 1;
    if (sorted == 5)
        log(steps+1:end, :) = [];
        break
    end
end
return

%%Checks if a position is a possible place for a walker to be:
function [possible] = checkPossible(numWalkers, currentPos, ...
    omitPos, cargoPos)
% If we're starting on an omitted position, or a goal, a cargo,
% or another walker, just give up immediately, and return 0:
numOmitPos = size(omitPos, 1);
numCargoPos = size(cargoPos, 1);
possible = 1;
for walker = 1:numWalkers
    thisWalkerPos = currentPos(walker, :);
    % Only run this for this walker if it's placed somewhere
    % valid (i.e. not waiting to be placed, x,y = 0,0)
    if all(thisWalkerPos)
        %Omitted positions:
        for k=1:numOmitPos
            if omitPos(k, :) == thisWalkerPos
                possible = 0;
                return
            end
        end
        %Cargo positions:
        for k=1:numCargoPos
            if cargoPos(k, :) == thisWalkerPos
                possible = 0;
                return
            end
        end
        %Other walkers:
        for k=1:numWalkers
            if (all(currentPos(k, :) == thisWalkerPos)) && ...
                    (k ~= walker)
                possible = 0;
                return
            end
        end
        %Goal positions:
        if mod(thisWalkerPos(1), 2)==0 && thisWalkerPos(1) ~= 8 ...
                && thisWalkerPos(2) <= 2
            possible = 0;
            return
        end
    end
end
return
```

### Examining Errors in Origami

This code can be used to generate diagrams like those below, visualizing the mobility of the walker. One immediate question thus far unanswered is the vulnerability of this layout to errors in the laying of track. We investigate this by introducing, when generating the track layout at the beginning of randomWalkFunction, a small (input-specified) percent chance that any single track will be omitted. Error rates around 10% are bearable; error rates greater than that, however, are catastrophic, causing walkers to become permanently trapped in small sections of the track field.

Node graphs showing walker mobility on the origami. Each junction represents a track, and each edge represents a step a walker can take. The left diagram shows no error, whereas the other two show increasing error rates. We observe that 10% error rates decrease walker mobility, but tend not to trap the walker in any particular location; 20% error rates or greater, over several tests, tend to cause catastrophic loss of mobility, making the sorting task impossible.

## Random-Walk Simulation

The data we need from this simulator is a rough projection of the fluorescence response from our test of 2D random walking, which should change based on the starting location of the walker. Because this fluorescence is changed by a fluorophore-quencher interaction upon a walker reaching its irreversible track, in the case where we plant all of the walkers on the same starting track, the time it takes (initial fluorescence − current fluorescence) in the sample to reach some standard value should be proportional to the average time it takes the walkers to reach the irreversible substrate. As this "total steps elapsed" value is one of the outputs of our simulation function, we can generate a map of these average walk durations by running a large number of simulations at each point on the origami and averaging the results:

```
%%% Random walk bulk simulation that
%% runs a battery of tests and plots the results
%% to see how long random walks take on average to complete
%% based on distance from goal / platform size
% Gregory Izatt & Caltech BIOMOD 2011
% 20110616: Initial revision
% 20110624: Updating some documentation
% 20110701: Updating to use new, updated randomWalkFunction
% 20110707: Updated to use new error-allowing randomWalkFunction
%% Dependency: makes calls to randomWalkFunction.m

iterations = 2500; %Test each case of random walk this # times
xMax = 16;         %Scale of platform for test
yMax = 8;
stopPos = [15, 7];            %Stop position
averages = zeros(xMax, yMax); %Init'ing this
stdDev = zeros(xMax, yMax);   %Preallocate (this grew dynamically before)
trash = [];                   %Trash storing variable
%Cycle over whole area, starting the walker at each position
%and seeing how long it takes it to get to the stop position
matlabpool(4)
for x=1:xMax
    for y=1:yMax
        temp = zeros(iterations, 1);
        parfor i=1:iterations
            [trash, trash2, temp(i)] = randomWalkFunction(xMax, yMax, ...
                10000, 1, [x, y], stopPos, 0, 1, 0.0);
        end
        stdDev(x, y) = std(temp);
        averages(x, y) = mean(temp); %Semicolon added to suppress printing
    end
end
matlabpool close
```

A plot of the number of steps (averaged over 500 iterations) it takes a walker to random walk from any point on the origami to the irreversible track at <15, 7>. The holes are due to omitted, cargo, or goal strands blocking the walker's starting location.

### Results

Results of the bulk data collection at right show that the average random-walk duration, and thus the time for (initial fluorescence − current fluorescence) to reach some standard level, increases with distance, though it changes less significantly the farther out one gets. Also important to note is that the "effective distance" (in terms of steps) along the short axis of our platform is significantly less than the same physical distance along the long axis. This difference is due to our arrangement of tracks A and B: as can be seen in the left half of the diagram at the end of the Overview section, alternating tracks A and B create a straight vertical highway for the walker to follow. Horizontal movement, in contrast, cannot be accomplished by purely straight-line movement -- it requires a back-and-forth weave that makes motion in that direction slower. The disparity in "effective distances" between the vertical and horizontal dimensions is something, in particular, that we should test for; however, a simple series of tests running random walks at a variety of points across the surface, and a comparison of the resulting fluorescence data to the control provided by this simulation, should be sufficient to prove that our walker can, indeed, perform a 2D random walk. A short post-processing sketch for quantifying this axis disparity follows.
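To put a number on that vertical/horizontal disparity, the hitting-time map can be post-processed directly. The following is a minimal sketch, not part of the original scripts: it assumes the `averages` matrix and `stopPos` from the bulk simulation above are still in the workspace, and it fits mean duration against squared offset, which is only a rough model on a small bounded platform.

```
% Minimal post-processing sketch (assumes averages and stopPos exist).
% Compare hitting times along the row and column through the stop position,
% fitting mean steps against squared offset (the expected random-walk scaling).
horizTimes   = averages(:, stopPos(2));                 %vary x, fix y
horizOffsets = abs((1:size(averages, 1))' - stopPos(1));
vertTimes    = averages(stopPos(1), :)';                %vary y, fix x
vertOffsets  = abs((1:size(averages, 2))' - stopPos(2));
%Mask out blocked starting positions (the holes in the map):
hOK = horizTimes > 0;
vOK = vertTimes > 0;
pH = polyfit(horizOffsets(hOK).^2, horizTimes(hOK), 1);
pV = polyfit(vertOffsets(vOK).^2, vertTimes(vOK), 1);
fprintf('Fitted steps per squared-unit offset: horizontal %.1f, vertical %.1f\n', ...
    pH(1), pV(1));
```

A noticeably larger horizontal coefficient than vertical one would confirm, quantitatively, the weave-induced slowdown described above.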
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.909304678440094, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4200118
Physics Forums

## induction, magnetism and conductivity

Relative motion between a permanent magnet and a conductor wire produces electric current in the wire. Would the induced electric current be greater in a wire made of a "magnetic" material like iron, or a "non-magnetic" material like copper, or does it not matter? In other words, is there a relation between a material's magnetic properties and its inductivity? And a similar but, I think, different question: is there a relation between a material's magnetic properties and its conductivity? How come the magnetic field of a permanent magnet can interact with the magnetic fields inside a wire that is overall magnetically neutral? Does the same thing happen with electric fields, so that if instead of a permanent magnet we had some electrically charged object, could we also induce electric current in a conductor wire by their relative motion?

Recognitions: Homework Help Science Advisor

Quote by MarkoniF
Relative motion between a permanent magnet and a conductor wire produces electric current in the wire. Would the induced electric current be greater in a wire made of a "magnetic" material like iron, or a "non-magnetic" material like copper, or does it not matter? In other words, is there a relation between a material's magnetic properties and its inductivity? And a similar but, I think, different question: is there a relation between a material's magnetic properties and its conductivity?

I am not sure what you mean by "inductivity". The induced emf in a conducting loop depends on the time rate of change of the magnetic field through the area enclosed by the conducting loop (Faraday's law). So induced emf has nothing to do with the type of conductor. The current that flows in the conducting loop in response to that induced emf does depend on the type of conductor. In order to maximize the induced current you would want to minimize the factors that impede current flow in the conductor. How would you do that?

AM

Quote by Andrew Mason
I am not sure what you mean by "inductivity".

I was referring to Faraday's law of induction. The question is whether the amount of induced current would be greater in a wire made of iron or copper, as measured in volts and amperes.

The induced emf in a conducting loop depends on the time rate of change of the magnetic field through the area enclosed by the conducting loop (Faraday's law). So induced emf has nothing to do with the type of conductor.

Does that mean induced EMF would be the same for copper and plastic wire? Electromotive force doesn't seem to be what I'm asking about. I'd like to know the difference we would measure in different types of conductors in units of volts and amperes.

The current that flows in the conducting loop in response to that induced emf does depend on the type of conductor. In order to maximize the induced current you would want to minimize the factors that impede current flow in the conductor. How would you do that?

That's the question I'm asking, but it doesn't seem to be simply proportional to material conductivity or resistance. Both conductivity and magnetic properties, such as diamagnetic or paramagnetic index, are related to unpaired electrons or "free electrons".
So to rephrase the question: for two wires made of materials with the same conductivity, would the amount of induced voltage and/or current (volts/amperes) be greater in the material with the greater paramagnetic index? The other, perhaps the same, question is whether two materials with different paramagnetic indices can have the same conductivity?

Recognitions: Homework Help Science Advisor

## induction, magnetism and conductivity

Quote by MarkoniF
Does that mean induced EMF would be the same for copper and plastic wire? Electromotive force doesn't seem to be what I'm asking about. I'd like to know the difference we would measure in different types of conductors in units of volts and amperes.

Welcome to PF, by the way! Yes. That means the induced emf around the wire loop is determined not by the type of wire but by the rate of change of the magnetic field and the area enclosed by the wire.

That's the question I'm asking, but it doesn't seem to be simply proportional to material conductivity or resistance.

The emf does not depend on the type of wire. The current produced by the wire does. If the wire is plastic, the conductivity is 0, so the induced emf produces no current.

Both conductivity and magnetic properties, such as diamagnetic or paramagnetic index, are related to unpaired electrons or "free electrons". So to rephrase the question: for two wires made of materials with the same conductivity, would the amount of induced voltage and/or current (volts/amperes) be greater in the material with the greater paramagnetic index? The other, perhaps the same, question is whether two materials with different paramagnetic indices can have the same conductivity?

I am not sure what you mean by diamagnetic or paramagnetic index. If you are asking if the induced emf depends on the wire, the answer, as I have said, is no. The current that flows in the wire due to that emf does. The current can be affected by the magnetic properties of the conductor. In order to maximize current flow, the magnetic field inside the conductor must be constant (that is just from Faraday's law). There are quantum effects that actually eliminate the magnetic field inside the conductor in order to achieve superconductivity.

AM

Quote by Andrew Mason
I am not sure what you mean by diamagnetic or paramagnetic index.

It's the susceptibility of a material to respond to an external magnetic field; more precisely, I think what I'm talking about is called "permeability". For example, copper has a lower paramagnetic index than iron, which means iron would be more attracted to an external magnetic field than copper.

If you are asking if the induced emf depends on the wire, the answer, as I have said, is no. The current that flows in the wire due to that emf does. The current can be affected by the magnetic properties of the conductor. In order to maximize current flow, the magnetic field inside the conductor must be constant (that is just from Faraday's law).

Let me put it this way. We have two equally long wires, one made of copper and the other made of iron, but the iron wire is thicker, so that both of them have the same conductivity. We attach a voltmeter and ammeter to each wire and then we move a permanent magnet next to one and then next to the other in exactly the same way. You say the reading of volts and amperes would be the same for both wires?
It seems to me there are more factors that would come into play than just conductivity, like the number and strength of magnetic fields per volume of substance that would be available (unpaired) to interact with this external magnetic field. But also not all of those magnetic fields would interact in the same way. Some would be due to electron spin and some would be due to electron orbits, some would repel while others would attract, so depending on how many of them there are, how "free" they are to move, and depending in what direction they prefer to move, on average and relative to that external magnetic field, it seems it would result in greater or less induced electric current. But then what I just described could be the same thing what defines conductivity as well, and if so it would all boil down to be the same after all. In any case the situation is very complex, especially considering the temperature and heating of the substance would be a factor too, so I'm afraid the answer I'm looking for is specific and experimental rather than general and theoretical. Recognitions: Homework Help Science Advisor Quote by MarkoniF Let me put it this way. We have two equally long wires, one made of copper and the other made of iron, but the iron wire is thicker so that both of them have the same conductivity. We attach voltmeter and ammeter to each wire and then we move permanent magnet next to one and then next the other in exactly the same way. You say the reading of volts and amperes would be the same for both wires? In your example there would appear to be no current because there is no circuit. Let's use two same sized loops of each kind of wire. A magnet passes through the loops. Are you suggesting that since the iron wire has a higher permeability, the magnitude of the magnetic field changes inside the iron wire will be somewhat greater than in the copper wire? Just applying Faraday's law, one can see that this would tend to increase the induced emf in the iron wire, and, therefore the current. But the increase would be in proportion to the area of the wire loop itself (the diameter x length of the wire) compared to the area enclosed by the wire loop. If you are making the diameter of the loop much larger than the diameter of the wire, it should not be significant. AM Quote by Andrew Mason I am not sure what you mean by "inductivity". The induced emf in a conducting loop depends on the time rate of change of the magnetic field through the area enclosed by the conducting loop (Faraday's law). So induced emf has nothing to do with the type of conductor. The current that flows in the conducting loop in response to that induce emf does depend on the type of conductor. In order to maximize the induced current you would want to minimize the factors that impede current flow in the conductor. How would you do that? AM But if the loop resistance is very small its own magnetic field affects the loop emf. The flux in the loop is a combination of the external field plus the internal mag field inevitable when current exists. If the loop R value is small in comparison with the self inductance reactance of said loop, then the emf around the loop decreases. But if R >> XL, then R has negligible influence on the emf around the loop. Claude Quote by Andrew Mason In your example there would appear to be no current because there is no circuit. Let's use two same sized loops of each kind of wire. A magnet passes through the loops. 
The circuit would be closed by connecting voltmeter at the ends, but loop is fine too. Are you suggesting that since the iron wire has a higher permeability, the magnitude of the magnetic field changes inside the iron wire will be somewhat greater than in the copper wire? I think yes, if by that you mean what I mean. Just applying Faraday's law, one can see that this would tend to increase the induced emf in the iron wire, and, therefore the current. But the increase would be in proportion to the area of the wire loop itself (the diameter x length of the wire) compared to the area enclosed by the wire loop. If you are making the diameter of the loop much larger than the diameter of the wire, it should not be significant. I'm not sure what you just said. Let's make it a bit more clear and take both iron and copper wires to have the same length and the same thickness, but if permeability plays any role, could that then compensate for the lesser conductivity of the iron wire so that we get about the same amount of induced current in both wires? Recognitions: Homework Help Science Advisor Quote by MarkoniF The circuit would be closed by connecting voltmeter at the ends, but loop is fine too. You cannot measure induced emf in a conductor with a voltmeter. You would be measuring the emf generated around the entire loop made by the conductor and the leads of the voltmeter, not the voltage across the ends of the conductor. Faraday's law has some very subtle aspects that cause all sorts of confusion. Professor Lewin at M.I.T. has a very good lecture on this that can be found here. . It can be found on Youtube here (it is easier to navigate in the Youtube version). I'm not sure what you just said. Let's make it a bit more clear and take both iron and copper wires to have the same length and the same thickness, but if permeability plays any role, could that then compensate for the lesser conductivity of the iron wire so that we get about the same amount of induced current in both wires? The magnetic fields inside the iron wire will have greater variation than in the copper. This will affect the emf in the wire and, hence, current. If the wire diameter/thickness is much smaller than the diameter of the loop, the effect will be rather small. To quantify the effect you would have to give all the dimensions and conductivities etc. and painstakingly apply Faraday's law. AM Quote by Andrew Mason You cannot measure induced emf in a conductor with a voltmeter. You would be measuring the emf generated around the entire loop made by the conductor and the leads of the voltmeter, not the voltage across the ends of the conductor. It wouldn't matter to measure the difference, but if you don't want leads of the voltmeter to play any part then you simply extend the ends of the wire further away from the loop(s), or make the leads parallel to the direction of magnet motion. The magnetic fields inside the iron wire will have greater variation than in the copper. What equation are you talking about? This will affect the emf in the wire and, hence, current. So having two equally thick wires with the same conductivity, would greater permeability of one of the wires lead to greater induced current in that wire? To quantify the effect you would have to give all the dimensions and conductivities etc. and painstakingly apply Faraday's law. Where do you see connection between Faraday's law and permeability? 
Recognitions: Homework Help Science Advisor Quote by MarkoniF It wouldn't matter to measure the difference, but if you don't want leads of the voltmeter to play any part then you simply extend the ends of the wire further away from the loop(s), or make the leads parallel to the direction of magnet motion. It does matter! That point was very well demonstrated by Prof. Lewin. The voltage measured by the voltmeter depends very much on how the leads are configured. The measured voltage depends on how much flux the circuit - which includes the voltmeter leads - encloses! If you extend the leads farther away from the loops the induced voltage will actually increase because the flux enclosed by the circuit increases. If you make the leads parallel to the direction of magnetic motion so that the circuit encloses no flux, the measured voltage will be 0! No one said Faraday's law was simple or intuitive. It is very counter-intuitive. What equation are you talking about? I wasn't talking about an equation. I was talking about the way iron responds to an applied magnetic field. Since the iron atoms are strong magnetic dipoles, they will align with the applied field so that the magnetic field inside the iron conductor will be greater than the applied magnetic field. Consequently, any changes in the applied magnetic field will result in greater changes of the magnetic field inside the iron conductor. So having two equally thick wires with the same conductivity, would greater permeability of one of the wires lead to greater induced current in that wire? I seems to me that it would result in a slightly greater magnetic flux enclosed by the iron wire. This in itself would result in a greater induced voltage (slightly) but there are other effects of a magnetic field inside a conductor that tends to decrease conductivity, so I think you will have to do some experiments to determine whether the induced current is greater. Where do you see connection between Faraday's law and permeability? Faraday's law says that the induced voltage around a closed path is equal to the time rate of change of the flux enclosed by the path. The strength of the magnetic field depends on the permeability of the space. That is why you have an iron core in a solenoid or transformer - to create a strong magnetic field. AM Quote by Andrew Mason It does matter! That point was very well demonstrated by Prof. Lewin. The voltage measured by the voltmeter depends very much on how the leads are configured. It does not matter for the difference of induced current between copper and iron wire. If anything about leads changed the measurement, then that impact would be the same for both wires and so it would not matter. Whether we measure voltage or amperes we would take the same measurement with both iron and copper wire, and the point in that lecture is about two DIFFERENT measurements. It's about two different measurements of the potential difference related to the direction of the current and two different resistors, which is indeed peculiar, so perhaps it's best to just measure amperes instead of voltage, although since we have no uneven distribution of resistance, like they did, we would not need to worry about anything like that. The measured voltage depends on how much flux the circuit - which includes the voltmeter leads - encloses! If you extend the leads farther away from the loops the induced voltage will actually increase because the flux enclosed by the circuit increases. 
If you make the leads parallel to the direction of magnetic motion so that the circuit encloses no flux, the measured voltage will be 0! No, voltmeter leads are not part of the loop, so at best they could contribute a little bit to the induced current, and if they are parallel to the direction of magnet motion they would be completely irrelevant. Also, nothing about voltmeter leads could ever make the voltage be zero if there is some induced current present in the loop, unless you disconnect them. Nothing like that was even addressed in that lecture. I wasn't talking about an equation. I was talking about the way iron responds to an applied magnetic field. Since the iron atoms are strong magnetic dipoles, they will align with the applied field so that the magnetic field inside the iron conductor will be greater than the applied magnetic field. Consequently, any changes in the applied magnetic field will result in greater changes of the magnetic field inside the iron conductor. Perhaps, however we are not interested in the changes of the magnetic fields in the conductor, we are interested only in the amount of induced current, and those two could be related, that's what I think too, but assumptions, either yours or mine, are not the answer I'm happy with. I seems to me that it would result in a slightly greater magnetic flux enclosed by the iron wire. This in itself would result in a greater induced voltage (slightly) but there are other effects of a magnetic field inside a conductor that tends to decrease conductivity, so I think you will have to do some experiments to determine whether the induced current is greater. That's what I think, and I could do experiment myself if I only had iron wire and more sensitive instruments. Faraday's law says that the induced voltage around a closed path is equal to the time rate of change of the flux enclosed by the path. The strength of the magnetic field depends on the permeability of the space. That is why you have an iron core in a solenoid or transformer - to create a strong magnetic field. That's not really what we are talking about, but I do agree it could be related. After all I have quite similar opinion, if not the same, which is why I ask the question in the first place. Recognitions: Homework Help Science Advisor Quote by MarkoniF It does not matter for the difference of induced current between copper and iron wire. If anything about leads changed the measurement, then that impact would be the same for both wires and so it would not matter. But the point is that you are not measuring the voltage between the two points that the voltmeter leads are connected to - ie across the ends of your two wires. You are measuring the integral of $E \cdot dl$ around the loop that the voltmeter and leads form with the conductor. This is a side issue to the issue you raise about the effect of the permeability of the conductor on the induced current, but it is important to appreciate it. Measuring voltages where there is a time dependent magnetic field present is very different than measuring voltages in a circuit containing a battery and resistance. No, voltmeter leads are not part of the loop, so at best they could contribute a little bit to the induced current, and if they are parallel to the direction of magnet motion they would be completely irrelevant. Also, nothing about voltmeter leads could ever make the voltage be zero if there is some induced current present in the loop, unless you disconnect them. 
Nothing like that was even addressed in that lecture. Perhaps, however we are not interested in the changes of the magnetic fields in the conductor, we are interested only in the amount of induced current, and those two could be related, that's what I think too, but assumptions, either yours or mine, are not the answer I'm happy with. If the material did not have any effect on the magnetic field inside the wire then there would be no difference in the voltage that is induced in any same sized loop, whether it is made of copper, iron, aluminum or plastic. The currents will differ depending on the resistance/conductivity of the material, but that is all. As you study Faraday's law it will be easier to see why this is. But iron does have an effect on the magnetic field, which is the basis of your original question. To determine how it would affect current is very complicated and involves quantum effects as well as Faraday's law. Physics is not intended to make you happy. That's not really what we are talking about, but I do agree it could be related. After all I have quite similar opinion, if not the same, which is why I ask the question in the first place. ?? It is exactly what we are talking about!! The magnitude of changes in a magnetic field, as well as the rate of change, determines the magnitude of the induced voltage in a closed path in that changing magnetic field. Anything that affects the magnitude of the magnetic field (such as the permeability of the region of space enclosed by and included in that path) will affect the induced voltage. That is just a natural consequence of Faraday's law. AM Quote by Andrew Mason But the point is that you are not measuring the voltage between the two points that the voltmeter leads are connected to - ie across the ends of your two wires. You are measuring the integral of $E \cdot dl$ around the loop that the voltmeter and leads form with the conductor. Leads don't form anything and have nothing to do with induction loop(s) where induction takes place. This is a side issue to the issue you raise about the effect of the permeability of the conductor on the induced current, but it is important to appreciate it. Measuring voltages where there is a time dependent magnetic field present is very different than measuring voltages in a circuit containing a battery and resistance. You are reading too much into that lecture making general conclusions from specific examples, thus applying them where they are not relevant at all. The difference pointed in that lecture is only due to having two different resistors and because voltage was measured at two points between them. We don't have any resistors and we do not measure anything directly on the loop. Our setup is like on that picture above, so given the same change in magnetic field we would measure the same amount of voltage regardless of how we connect the leads, only sign could be different. If the material did not have any effect on the magnetic field inside the wire then there would be no difference in the voltage that is induced in any same sized loop, whether it is made of copper, iron, aluminum or plastic. The currents will differ depending on the resistance/conductivity of the material, but that is all. As you study Faraday's law it will be easier to see why this is. But iron does have an effect on the magnetic field, which is the basis of your original question. To determine how it would affect current is very complicated and involves quantum effects as well as Faraday's law. 
You are mixing inductor coil with induction loop. First one is to be substitute for moving magnet, you don't induce electric current in inductor coil but supply it and induce magnetic field. We are talking about the second one, induction loop, where the current is not supplied but induced with moving permanent magnet or inductor coil, and the fact that it then creates magnetic field as well is irrelevant for the question which is only about electric current induced as compared between induction loop made of copper and iron. This is not addressed by Faraday's law, and if it is addressed anywhere at all it would be classical electrodynamics, not quantum mechanics. Physics is not intended to make you happy. Answers, not physics. Physics always makes me happy, but answers can make me sad. ?? It is exactly what we are talking about!! The magnitude of changes in a magnetic field, as well as the rate of change, determines the magnitude of the induced voltage in a closed path in that changing magnetic field. Anything that affects the magnitude of the magnetic field (such as the permeability of the region of space enclosed by and included in that path) will affect the induced voltage. That is just a natural consequence of Faraday's law. You are talking about inductor coil while the question is about induction loop. Faraday's law has nothing to do with magnetic permeability of induction loop, but only with external magnetic field or inductor coil which then induces current in induction loop. Forget inductor coil and iron cores, we don't need any of that as we have moving permanent magnet. Recognitions: Homework Help Science Advisor Quote by MarkoniF Leads don't form anything and have nothing to do with induction loop(s) where induction takes place. My comments about a voltmeter were with respect to your original suggestion that you measure the induced voltage in a section of a conductor. You had posted: "The circuit would be closed by connecting voltmeter at the ends, but loop is fine too." I simply observed that measuring the induced voltage in a section of conductor like that will not give you induced voltage between the two ends of the conductor. It measures the emf around the whole loop comprised of the conductor and the voltmeter leads. That is why I suggested you use loops. In the above example, the magnetic field enclosed by the voltmeter leads is not significant so a voltmeter will measure the induced emf across the ends of the coil, which is a function of the rate of change of the flux through the coil loops. If you were replace the coil with a single straight conducting wire and pass a magnet in a direction perpendicular to the direction of the wire (replace the coil with a straight wire and have the magnet moving perpendicular to the page) as you were suggesting, a galvanometer connected as shown in your diagram would not measure the induced voltage in the conductor. That was my point. You are mixing inductor coil with induction loop. First one is to be substitute for moving magnet, you don't induce electric current in inductor coil but supply it and induce magnetic field. We are talking about the second one, induction loop, where the current is not supplied but induced with moving permanent magnet or inductor coil, and the fact that it then creates magnetic field as well is irrelevant for the question which is only about electric current induced as compared between induction loop made of copper and iron. 
This is not addressed by Faraday's law, and if it is addressed anywhere at all it would be classical electrodynamics, not quantum mechanics. Answers, not physics. Physics always makes me happy, but answers can make me sad. You are talking about inductor coil while the question is about induction loop. Faraday's law has nothing to do with magnetic permeability of induction loop, but only with external magnetic field or inductor coil which then induces current in induction loop. Forget inductor coil and iron cores, we don't need any of that as we have moving permanent magnet. Faraday's law is a fundamental part of classical electromagnetic theory. To progress any further you will have to study Faraday's law and induction. Your comments show that you are not inclined to do that. You seem to want answers that fit with how you already view things, so I am afraid I cannot help you. AM Quote by Andrew Mason My comments about a voltmeter were with respect to your original suggestion that you measure the induced voltage in a section of a conductor. You had posted: "The circuit would be closed by connecting voltmeter at the ends, but loop is fine too." I simply observed that measuring the induced voltage in a section of conductor like that will not give you induced voltage between the two ends of the conductor. It measures the emf around the whole loop comprised of the conductor and the voltmeter leads. That is why I suggested you use loops. I does not matter, just the same. Take that same image, it could be no loops up there, just a wire bent in 'U' shape, and leads of the whatever meter could be little horizontal wires at the bottom, far away from the magnet. If you were replace the coil with a single straight conducting wire and pass a magnet in a direction perpendicular to the direction of the wire (replace the coil with a straight wire and have the magnet moving perpendicular to the page) as you were suggesting, a galvanometer connected as shown in your diagram would not measure the induced voltage in the conductor. That was my point. I didn't say wire is straight, or that it has to be. But even if it was straight, there would again be induced current and galvanometer would measure. Try it. Faraday's law is a fundamental part of classical electromagnetic theory. To progress any further you will have to study Faraday's law and induction. Your comments show that you are not inclined to do that. You seem to want answers that fit with how you already view things, so I am afraid I cannot help you. As I said, you were mixing two different types of induction loops. I's exactly how I told you, check it out. Recognitions: Homework Help Science Advisor Quote by MarkoniF I does not matter, just the same. Take that same image, it could be no loops up there, just a wire bent in 'U' shape, and leads of the whatever meter could be little horizontal wires at the bottom, far away from the magnet. I didn't say wire is straight, or that it has to be. But even if it was straight, there would again be induced current and galvanometer would measure. Try it. There would be no induced current in a straight wire. You need a circuit. That is why I suggested a loop. If you think it does not matter then apply Faraday's law and tell us what the induced voltage is in a straight conductor of length L as a function of dB/dt. As I said, you were mixing two different types of induction loops. I's exactly how I told you, check it out. You appear to be looking for someone to confirm your understanding of induction. 
In such circumstances, the only help anyone can give you is to suggest that you thoroughly study Faraday's law and then see if you still have the same questions.

AM
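As a closing note on the physics at issue (this summary is added for reference and is not part of the thread; the numbers below are purely illustrative): Faraday's law fixes the emf of a loop independently of its material, while the material enters only through the loop's resistance.

$$\mathcal{E} = -\frac{d\Phi_B}{dt}, \qquad \Phi_B = \int_S \mathbf{B}\cdot d\mathbf{A}, \qquad I = \frac{\mathcal{E}}{R}, \qquad R = \frac{\rho L}{a}.$$

For a loop enclosing area $A = 10\ \mathrm{cm^2}$ in a uniform field changing at $dB/dt = 0.1\ \mathrm{T/s}$, $|\mathcal{E}| = A\,|dB/dt| = 0.1\ \mathrm{mV}$ for copper, iron, or plastic alike; with a copper loop of resistance $1\ \mathrm{m\Omega}$ this drives $I = 0.1\ \mathrm{A}$, while an iron loop of identical dimensions (resistivity roughly six times copper's) carries about $0.017\ \mathrm{A}$, before any small correction for the extra flux threading the iron wire itself.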
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9641951322555542, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/174751-find-simple-expression-f28.html
# Thread:

1. ## Find simple expression for f28

For the functions $f_1(x) = \frac{2x-1}{x+1}$, $f_{n+1}(x) = f_1(f_n(x))$ for $n \ge 1$, it can be shown that $f_{35} = f_5$. Find the simple expression for $f_{28}$.

2. Originally Posted by chris86
For the functions $f_1(x) = \frac{2x-1}{x+1}$, $f_{n+1}(x) = f_1(f_n(x))$ for $n \ge 1$, it can be shown that $f_{35} = f_5$. Find the simple expression for $f_{28}$.

If $f_{35} = f_5$ then $f_1$ composed with itself 35 times is the same as $f_1$ composed with itself 5 times. So $f_1$ composed with itself 30 times must be the identity function. Therefore $f_{28} = f_{-2}$. So find the inverse function of $f_1$ and take its composition with itself. That will give you $f_{-2}.$
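Carrying the hint through explicitly (a worked completion added here; the algebra is easy to check): solving $y = \frac{2x-1}{x+1}$ for $x$ gives $x = \frac{y+1}{2-y}$, so

$$f_1^{-1}(x) = \frac{x+1}{2-x}, \qquad f_{-2}(x) = f_1^{-1}\big(f_1^{-1}(x)\big) = \frac{\frac{x+1}{2-x}+1}{2-\frac{x+1}{2-x}} = \frac{\frac{3}{2-x}}{\frac{3-3x}{2-x}} = \frac{1}{1-x}.$$

Hence $f_{28}(x) = \frac{1}{1-x}$. As a consistency check, composing $f_1$ with itself directly gives $f_2(x) = \frac{x-1}{x}$, $f_4(x) = \frac{1}{1-x}$, and $f_6 = \mathrm{id}$, so the sequence is periodic with period 6 and $f_{28} = f_{28 \bmod 6} = f_4$, in agreement.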
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8431844115257263, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/75785/prime-number-density-vs-connectedness-threshold-coincidence
## Prime number density vs. connectedness threshold: coincidence?

(1) $\pi(n)$, the number of primes at most $n$, is asymptotic to $n / \ln n$.

(2) In the Erdős-Rényi random graph model, $p = \ln n / n$ is a sharp threshold for the connectedness of the graph $G(n,p)$ on $n$ vertices with edge-probability $p$.

Is there any connection between these two, or is the ratio $n / \ln n$ natural enough to arise in several unrelated circumstances by happenstance? (I ask as neither an expert in random graphs nor in number theory.)

-

2 It'd be pretty natural; I have no reason to expect a connection. – Charles Sep 18 2011 at 22:31

@Charles: I thought perhaps that was a pun :-). You are likely correct, but it would be more interesting if there were a connection. However, desiring will not make it so. – Joseph O'Rourke Sep 18 2011 at 23:21

1 I imagine the relationship is much like the one offered in this answer mathoverflow.net/questions/53122/… . Gerhard "Ask Me About Symbolic Relationships" Paseman, 2011.09.18 – Gerhard Paseman Sep 18 2011 at 23:29

## 1 Answer

I'd lean towards "coincidence", for a number of reasons:

1. $\pi(n)$ is a cardinality, whereas $p$ is a density; one is comparing apples and oranges. The density of the primes is $1/\log n$, which is quite different from $\log n/n$. (It is true that the average number of divisors $\tau(n)$ of a natural number $n$ is $\log n$, which at first glance seems to match the average degree of an Erdos-Renyi graph of density $\log n/n$, but $\tau(n)$ is very irregularly distributed (its variance is comparable to $\log^3 n$, for instance), in contrast to the Erdos-Renyi degree, which obeys a central limit theorem, so this does not seem to be a good match.)

2. For Erdos-Renyi graphs there is a second threshold at $1/n$, which is where the giant component begins to emerge. There doesn't seem to be anything analogous for primes.

3. In an Erdos-Renyi graph, $n$ is fixed, and all vertices are given equal weight. For the primes, it is much more natural to work on all the natural numbers at once, and give each natural number $n$ a different weight ($1/n^s$ being a particularly good choice). This pulls the numerology of the two settings even further apart.

Note that there certainly are useful probabilistic models of the primes, such as the Cramer model. However, there appears to be little relation between the Cramer model and the Erdos-Renyi model, other than that they are both random models with a density parameter that involves a logarithm in either the numerator or denominator.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467499256134033, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/11760/landaus-ambiguous-statement-about-the-existence-of-inertial-frames?answertab=votes
Landau's ambiguous statement about the existence of inertial frames

Landau writes "It is found, however, that a frame of reference can always be chosen in which space is homogeneous and isotropic and time is homogeneous." Does he mean that we can prove the existence of an inertial frame, or does he want to say that it is assumed on the basis of a sufficient number of experiments? Can we start with some axioms and definitions of the properties of space and time and then deduce the existence of such a frame in which space is homogeneous and isotropic and time is homogeneous?

-

4 Can you give an exact citation? Is it in Classical Mechanics? Where exactly? Context would help a lot in evaluating this statement. – Ted Bunn Jun 30 '11 at 18:58

It is in "Mechanics", on page five of the edition I looked at, in the part about Galileo's principle of relativity. – MBN Jun 30 '11 at 19:17

2 Answers

I believe this is just a restatement of Newton's first law.

-

2 I think that's right, and I think Landau means the statement as an empirical fact (derived from experiments), rather than a mathematical theorem. It's not a mathematical theorem in any sense I can think of, and I think Landau's too smart to claim it is. – Ted Bunn Jun 30 '11 at 21:58

2 Perhaps the equivalence of this statement and the first law can be stated as a theorem – MBN Jun 30 '11 at 22:50

Hi, thanks for your replies. But, see, I didn't post this question to collect opinions on whether it can be shown or is assumed experimentally. I personally believe that it is taken as a granted fact, and we assume (in classical mechanics) that the frame fixed to this universe as a whole is inertial. As Landau is just a starting book for Classical Mechanics and this statement does appear to be ambiguous, I want to know whether, using some advanced physics (string theory?), we can theoretically show the existence of such a frame in which space and time have the required properties. – Lakshya Bhardwaj Jul 1 '11 at 3:20

2 No. Those are postulated. Newtonian mechanics can be constructed on any Galilean manifold. In particular, Newtonian mechanics can be constructed with a spatially inhomogeneous geometry (take any Riemannian manifold for space and cross it against $\mathbb{R}$ for time). Even if you consider relativistic mechanics, the situation is not better. In general relativity already homogeneity and isotropy of space-time is abandoned (except in cosmological models); string theory won't make it better. – Willie Wong Jul 1 '11 at 13:25

1 @Willie Great to hear from you again. I was wondering if this statement is basically about the symmetry group - the Poincare group, let's say. You can always be sitting in a reference frame where the symmetry group is not the full Poincare group - if you are looking at the world by sitting on a merry-go-round - but you can always do a Lorentz transformation to a frame where the symmetry group is the full Poincare group. Is that what is being said? – user6818 Jul 1 '11 at 18:09

Mechanics was first published in 1960 and was written earlier than that. Landau died in 1968, aged 60. Just 4 years before, in 1964, the CMB was discovered, and its properties were unknown for years after. It appears that Landau's statement was vindicated by a reference frame with the properties of the CMB. The Earth (we, and the labs) is moving in relation to the CMB, and the universe appears to us to be non-isotropic. From the perspective of the CMB, the reference frame of light, i.e.
the frame in which light propagates equally in all directions, the universe is isotropic. In addition, every observer in the universe can use (share) a common length and time base, using the CMB properties for calibration purposes. This special reference frame is not attached to any observer, as the Einstein ones are.

-

So, did Landau sense the existence of something like the CMB 4 years before its discovery? And can we arrive at all the properties of the CMB with something more fundamentally theoretical? :) – Lakshya Bhardwaj Jul 2 '11 at 17:07

@Helder I have little clue as to what you are trying to say. Just to point out that the existence of the CMB does not provide a notion of any fixed/special reference frame (..in the Newtonian sense..) One still has the foundational ideas of Einstein that physical parameters like length, mass, time intervals and concepts like simultaneity of events are dependent on the observer's frame of reference. The CMB frame is as good or as bad as any other frame. – user6818 Jul 3 '11 at 7:17

1 – Helder Velez Jul 3 '11 at 12:58

@Helder I simply don't get what you are trying to say. Many things are fuzzy about your statement. Can you precisely define what you would call the CMB reference? Why is that an inertial frame? (..how are you defining it?..definitions can differ depending on what approximation you are making about gravity..) Why are you picking that out? Can you give a technical reference to what you are saying? Maybe a paper or a book reference? – user6818 Jul 3 '11 at 19:25

@Anirbit I think Helder has given this great idea on which we can think of finding some kind of universal inertial frame. See, at the start of the universe, no direction had any kind of reservation, as the Big Bang should not have favoured one direction over others. So it seems pretty reasonable to believe that as a whole the universe is isotropic in nature, although we still can't say it is homogeneous. Moreover, the CMB is the purest form of the remains of the early universe, free from fluctuations, and it should (according to the aesthetic appeal of the CMB) form the basis of the next theory which involves inertial & non-inertial frames. – Lakshya Bhardwaj Jul 4 '11 at 10:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9551905989646912, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/207452-equation-line-help.html
# Thread:

1. ## equation of a line - help

Hi, I'm stuck on a problem and I need help... I'm supposed to find the general equation of a line (Ax+By+C=0) from two points. I did a couple of these problems and everything went well, but not for this one... so here goes:

points: (-sqrt(2)/2, sqrt(2)/2), (1/2, sqrt(3)/2)

We are given the answer but I can't figure out how to get there... Here is the answer:

(sqrt(3) - sqrt(2))x - (1 + sqrt(2))y + (sqrt(2) + sqrt(6)) / 2 = 0

Can anyone help me with the steps to get to this answer? This is my first post, so go easy on me if there is a better way to format equations (just tell me how to). Thanks!

2. ## Re: equation of a line - help

Hi

You have these points: $\left(-\frac{\sqrt{2}}{2},\;\frac{\sqrt{2}}{2}\right); \left(\frac{1}{2},\;\frac{\sqrt{3}}{2}\right)$

The slope of the line is: $\mathrm{m}=\dfrac{\frac{\sqrt{3}}{2}-\frac{\sqrt{2}}{2}}{\frac{1}{2}+\frac{\sqrt{2}}{2}}$

Now consider a general point $\mathrm{(x,\;y)}$ of this line, which must have the same slope: $\mathrm{m=\dfrac{\frac{\sqrt{3}}{2}-\frac{\sqrt{2}}{2}}{\frac{1}{2}+\frac{\sqrt{2}}{2}}=\dfrac{y-\frac{\sqrt{2}}{2}}{x+\frac{\sqrt{2}}{2}}}$

Now you got it

Greetings

3. ## Re: equation of a line - help

Hi, and thanks for your answer. I'm pretty new to maths, so what might seem obvious for you is still a mystery for me. I did understand the slope formula, but I can't see why the y difference (sqrt(3) - sqrt(2)) transfers to the A variable in Ax+By+C=0 (same thing for the x difference that becomes B...) Also, how you get the C variable from these differences is beyond me. Sorry if this is too much explanation to have me understand something simple, but I just don't get it. Thanks

4. ## Re: equation of a line - help

$\mathrm{m}$ is the slope:

$\mathrm{m=\dfrac{\frac{\sqrt{3}}{2}-\frac{\sqrt{2}}{2}}{\frac{1}{2}+\frac{\sqrt{2}}{2}}=\dfrac{y-\frac{\sqrt{2}}{2}}{x+\frac{\sqrt{2}}{2}}}$

or

$\mathrm{m=\dfrac{\frac{\sqrt{3}}{2}-\frac{\sqrt{2}}{2}}{\frac{1}{2}+\frac{\sqrt{2}}{2}}=\dfrac{y-\frac{\sqrt{3}}{2}}{x-\frac{1}{2}}}$

Now cancel the common factors of 2 and cross-multiply:

$\dfrac{\sqrt{3}-\sqrt{2}}{1+\sqrt{2}}=\dfrac{y-\frac{\sqrt{3}}{2}}{x-\frac{1}{2}}$

$\mathrm{x(\sqrt{3}-\sqrt{2})-\dfrac{\sqrt{3}}{2}+\dfrac{\sqrt{2}}{2}=y(1+\sqrt{2})-\dfrac{\sqrt{3}}{2}-\dfrac{\sqrt{6}}{2}}$

$\mathrm{x(\sqrt{3}-\sqrt{2})-y(1+\sqrt{2})+\dfrac{\sqrt{2}+\sqrt{6}}{2}=0}$

$\mathrm{A=\sqrt{3}-\sqrt{2}}$

$\mathrm{B=-(1+\sqrt{2})}$

$\mathrm{C=\dfrac{\sqrt{2}+\sqrt{6}}{2}}$

5. ## Re: equation of a line - help

You can also write the equation as Ax+ By= C. Do you realize those numbers are not "unique"? You could multiply the entire equation by any constant. You could, for example, divide the entire equation by C to get (A/C)x+ (B/C)y= 1. Rewriting that as A'x+ B'y= 1 (A'= A/C and B'= B/C), putting $x= -\frac{\sqrt{2}}{2}$, $y= \frac{\sqrt{2}}{2}$ we have $-\frac{\sqrt{2}}{2}A'+ \frac{\sqrt{2}}{2}B'= 1$ and, multiplying both sides by $\frac{2}{\sqrt{2}}= \sqrt{2}$, $-A'+ B'= \sqrt{2}$. Now put x= 1/2, $y= \frac{\sqrt{3}}{2}$ in the equation: $A'/2+ \sqrt{3}B'/2= 1$ so that $A'+ \sqrt{3}B'= 2$. Now, adding the previous equation to that, the A' is eliminated: $(1+\sqrt{3})B'= 2+\sqrt{2}$. Solve that for B', then put that into either of the previous equations.

6. ## Re: equation of a line - help

Thank you very much guys, that was very helpful (a lot more than my exercise book).
Looks like I still have some basics to grasp, but with some help like this, I'll get there eventually. Cheers!
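For anyone who wants to double-check the final answer, here is a quick symbolic verification in Python with SymPy. This is my addition, not part of the original thread, and SymPy is an assumption (Maple or any other CAS would do the same job):

```python
import sympy as sp

x, y = sp.symbols('x y')
x1, y1 = -sp.sqrt(2)/2, sp.sqrt(2)/2      # first point
x2, y2 = sp.Rational(1, 2), sp.sqrt(3)/2  # second point

# Two-point form with denominators cleared:
# (y2 - y1)*(x - x1) - (x2 - x1)*(y - y1) = 0
line = sp.expand((y2 - y1)*(x - x1) - (x2 - x1)*(y - y1))

# The answer claimed in the thread:
claimed = (sp.sqrt(3) - sp.sqrt(2))*x - (1 + sp.sqrt(2))*y \
          + (sp.sqrt(2) + sp.sqrt(6))/2

print(sp.simplify(2*line - claimed))              # 0: same line, up to a factor of 2
print(sp.simplify(claimed.subs({x: x1, y: y1})))  # 0: first point lies on the line
print(sp.simplify(claimed.subs({x: x2, y: y2})))  # 0: second point lies on the line
```

The two expressions differ only by an overall constant factor, which is exactly the non-uniqueness of (A, B, C) pointed out in post 5.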
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493470191955566, "perplexity_flag": "middle"}
http://freelance-quantum-gravity.blogspot.com/2010/06/heterotic-phenomenology.html
# DiY quantum gravity An independent viewpoint about quantum gravity. ## Tuesday, June 01, 2010 ### Heterotic phenomenology I have talked quite often in this blog about F-theory. This is partially due to "historical" reasons, that is, the F-theory GUT revolution happened recently, while this blog grew. The influence of a friend of mine, who got me to learn algebraic geometry, was also a plus, because F-theory relies heavily on that area of maths. Of course another reason is that it is a very well-developed framework. But that doesn't mean that there is no development in other areas of string theory. In particular, from the eighties on, heterotic strings were the best candidate for a phenomenological model. Even today most books on string theory (such as the Becker-Becker-Schwarz one) use the heterotic string to teach the mathematics of compactification. Today an interesting paper on heterotic phenomenology has appeared, so I will say a few things about the subject. Heterotic string/M-theory models are mainly built by the compactification mechanism. In that respect they differ from many advances in string theory phenomenology. The fact that a stack of N parallel D-branes automatically gives a U(N) gauge theory was a starting point for building local models in which gravitational degrees of freedom can be ignored for many purposes. That led to a lot of development of D-brane models in type IIA and type IIB string theory and, later, in the non-perturbative counterpart of type IIB, F-theory, where in addition to D-branes one has (p,q)-branes. In fact something has been done about non-local F-theory models, and some work exists for M-theory, coming from the duality between certain F-theory and M-theory setups. But here I am going to talk about the most conventional approach, full compactifications. I am not sure about it, but I think the reason there has not been much development of local M-theory models is that not much is known for certain (despite the Bagger-Lambert mini-revolution of two years ago) about the M-theory branes, although possibly that applies better to type IIA M-theory than to heterotic M-theory. Now for some references. Today's article is rooted in the authors' model of the year 2006: The Exact MSSM Spectrum from String Theory. I quote the abstract of the paper: We show the existence of realistic vacua in string theory whose observable sector has exactly the matter content of the MSSM. This is achieved by compactifying the E_8 x E_8 heterotic superstring on a smooth Calabi-Yau threefold with an SU(4) gauge instanton and a Z_3 x Z_3 Wilson line. Specifically, the observable sector is N=1 supersymmetric with gauge group SU(3)_C x SU(2)_L x U(1)_Y x U(1)_{B-L}, three families of quarks and leptons, each family with a right-handed neutrino, and one Higgs-Higgs conjugate pair. Importantly, there are no extra vector-like pairs and no exotic matter in the zero mode spectrum. There are, in addition, 6 geometric moduli and 13 gauge instanton moduli in the observable sector. The holomorphic SU(4) vector bundle of the observable sector is slope-stable. The observable sector of the theory has an SU(3)_C x SU(2)_L x U(1)_Y x U(1)_{B-L} gauge group. The additional U(1)_{B-L} factor is beyond the MSSM, but that is not as bad as it seems, as they discuss in today's paper. 
Additionally they have the following matter spectrum: – 3 families of quarks and leptons, each with a right-handed neutrino – 1 Higgs–Higgs conjugate pair – no exotic matter fields – no vector-like pairs (apart from the one Higgs pair) – 3 complex structure, 3 Kähler, and 13 vector bundle moduli. This sector is obtained in two steps. First, a Spin(10) group arises from the spontaneous breaking of the observable-sector E8 group by an SU(4) gauge instanton on an internal Calabi-Yau threefold. The Spin(10) group is then broken by discrete Wilson lines to a gauge group containing SU(3)_C x SU(2)_L x U(1)_Y as a factor. The structure of the hidden sector depends on the choice of a stable, holomorphic vector bundle V'. The topology of V', that is, its second Chern class, is constrained by two conditions: first, the anomaly cancellation equation: $$c_2(V') = c_2(TX) - c_2(V) - [W]$$ Here $c_2$ denotes the second Chern class of the corresponding vector bundle and $[W]$ is a possible effective five-brane class. OK, I'll stop writing the details, which can be read in the paper. The important part is that they don't work out the hidden sector (the sector of the other E8 group of the E8 x E8 heterotic string) in detail. They simply assume its existence. Since 2006 that model has been further developed and has led to this paper today: The Mass Spectra, Hierarchy and Cosmology of B-L MSSM Heterotic Compactifications. The two papers even share one co-author, Burt A. Ovrut. The abstract reads: The matter spectrum of the MSSM, including three right-handed neutrino supermultiplets and one pair of Higgs-Higgs conjugate superfields, can be obtained by compactifying the E_{8} x E_{8} heterotic string and M-theory on Calabi-Yau manifolds with specific SU(4) vector bundles. These theories have the standard model gauge group augmented by an additional gauged U(1)_{B-L}. Their minimal content requires that the B-L gauge symmetry be spontaneously broken by a vacuum expectation value of at least one right-handed sneutrino. In previous papers, we presented the results of a quasi-analytic renormalization group analysis showing that B-L gauge symmetry is indeed radiatively broken with an appropriate B-L/electroweak hierarchy. In this paper, we extend these results by 1) enlarging the initial parameter space and 2) explicitly calculating all renormalization group equations numerically, without approximation. The regions of the initial parameter space leading to realistic vacua are presented and the B-L/electroweak hierarchy computed over these regimes. At representative points, the mass spectrum for all sparticles and Higgs fields is calculated and shown to be consistent with present experimental bounds. Some fundamental phenomenological signatures of a non-zero right-handed sneutrino expectation value are discussed, particularly the cosmology and proton lifetime arising from induced lepton and baryon number violating interactions. Since the 2006 paper the mathematical sophistication has grown, and along the way they have used things such as monads, spectral covers and cohomological methods to calculate the texture of the Yukawa couplings and other parameters. The key ingredient is still a Calabi-Yau manifold with Z_3 x Z_3 fundamental group and a vector bundle with SU(4) structure group. The observable matter spectrum is basically the same as in the previous paper. 
As I said before, they state that: The existence of the extra U(1)_{B-L} gauge factor, far from being extraneous or problematical, is precisely what is required to make a heterotic vacuum with SU(4) structure group phenomenologically viable. The reason is the following. As is well known, four-dimensional N = 1 supersymmetric theories generically contain two lepton number violating and one baryon number violating dimension four operators in the superpotential. The former, if too large, can create serious cosmological difficulties, such as in baryogenesis and primordial nucleosynthesis, as well as coming into conflict with direct measurements of lepton violating decays. Well, the details can be read in the paper. The important thing is that the model is mature enough to allow an explicit and accurate renormalization group analysis of the effective field theory and to make precise predictions about some aspects. This is not the only line of investigation in heterotic string theory. As I have stated, this gives mainly the MSSM, with no unification group scheme. But there are constructions of that kind. As early as 2005 there was a paper doing such a thing from heterotic M-theory: An SU(5) Heterotic Standard Model. The authors are Vincent Bouchard and Ron Donagi, and the abstract says: We introduce a new heterotic Standard Model which has precisely the spectrum of the Minimal Supersymmetric Standard Model (MSSM), with no exotic matter. The observable sector has gauge group SU(3) x SU(2) x U(1). Our model is obtained from a compactification of heterotic strings on a Calabi-Yau threefold with Z_2 fundamental group, coupled with an invariant SU(5) bundle. Depending on the region of moduli space in which the model lies, we obtain a spectrum consisting of the three generations of the Standard Model, augmented by 0, 1 or 2 Higgs doublet conjugate pairs. In particular, we get the first compactification involving a heterotic string vacuum (i.e. a {\it stable} bundle) yielding precisely the MSSM with a single pair of Higgs. If one reads the paper, one can see that it cites the heterotic string papers that are the basis of the other models. This can look a bit surprising, since one article is about the heterotic string (which has 10 dimensions) and the other about heterotic M-theory (which has 11 dimensions). But they actually both work with compactifications on a Calabi-Yau threefold. That is because the eleventh dimension of heterotic M-theory has a special character and no compactification of it is done. By contrast, type IIA M-theory has a more conventional eleventh dimension, and it requires compactification on G2 holonomy manifolds, which are harder to work with. Well, I am far from being an expert in heterotic phenomenology, but I thought that today's paper was a good occasion to say some things about it. Besides this paper, there have been many other interesting papers today. Fortunately Lubos has written an entry with brief comments on them, and I prefer to link you to that entry for the info: A generous hep-th Tuesday Update: In the Lubos entry (where this post is very gently linked, thanks Lubos ;)) the papers about heterotic phenomenology have been discussed, and someone posted two papers about G2 heterotic compactifications, concretely: http://arxiv.org/abs/0810.3285 and http://arxiv.org/abs/0905.1968. There has also been discussion of an issue with the prediction of the paper considered here that the mass of the Higgs boson is around 101 - 106 GeV, 
while LEP had excluded, at the 95% confidence level, a Higgs mass below 114 GeV. Lubos argues that 95% is not enough to exclude the value if there are good theoretical reasons to do so. I would add that Jester (Resonaances) wrote a post stating that the LEP exclusion only worked for a conventional Higgs. I don't remember the details, so I can't say whether that is relevant, but I recommend that readers of this blog do a search at Resonaances. Posted by Javier. Labels: heterotic, supercuerdas, superstrings
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9225373864173889, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-software/153088-how-do-i-solve-equation-b1-b2-b3-using-maple-mathematica.html
# Thread: 1. ## How do I solve this equation for b1, b2, b3 using Maple or Mathematica? $(tH'')^2 = [H'(2n + t + \lambda) - H]^2 - 4(tH' - H + \delta)[(H')^2 + \lambda H']$ where $\dfrac{H(t)}{n^2\lambda} = \sum_{k=1}^{\infty}\dfrac{b_k}{t^k}$ I hope someone can help me... I desperately need to know how to do this!! Thanks 2. Well, in Mathematica you'd type the following command (I'm assuming this is a differential equation, right?): DSolve[(t H''[t])^2==(H'[t](2n+t+lambda)-H[t])^2-4(t H'[t]-H[t]+delta)((H'[t])^2+lambda H'[t]),H[t],t] You should double-check that notation, however, and make sure it's correct against your DE. In particular, I'm not sure whether you meant H'[t](2n+t+lambda) or H'(2n+t+lambda). I don't think Mathematica is going to be able to handle that. At least, my copy of Mathematica 4 can't handle it. Maybe more recent versions have more powerful solvers. I'm talking here about exact, closed-form solutions. Mathematica can probably get you a numerical solution the same as MATLAB or any other solver routine. Do you need a numerical solution? 3. Thanks I meant H'[t](2n+t+lambda). Also, I need solutions in terms of n and lambda (except for the first value, which my lecturer told me will be b1=1). I tried entering eval and then the above expression in Maple, but it just kept giving me the same expression again and again. 4. Hang on a second. Let me rephrase the question. You're asked to solve $(t H''(t,n,\lambda))^{2}=[H'(t,n,\lambda)(2n+t+\lambda)-H(t,n,\lambda)]^{2}$ $-4(t H'(t,n,\lambda)-H(t,n,\lambda)+\delta)[(H'(t,n,\lambda))^{2}+\lambda H'(t,n,\lambda)],$ where $\displaystyle{H(t,n,\lambda)=\lambda n^{2}\sum_{k=1}^{\infty}\frac{b_{k}}{t^{k}},}$ and differentiation is with respect to $t$. Is that correct? If so, I think I would differentiate your expression for $H$ as required, and convert the DE into a difference equation. Do you have any initial conditions? So, the first step is differentiating: $\displaystyle{H'(t,n,\lambda)=\lambda n^{2}\sum_{k=1}^{\infty}\frac{(-k) b_{k}}{t^{k+1}},}$ and $\displaystyle{H''(t,n,\lambda)=\lambda n^{2}\sum_{k=1}^{\infty}\frac{(-k)(-k-1) b_{k}}{t^{k+2}}.}$ Then you could plug that into your DE. You'd get some messy series multiplication going on there, but it might be doable. You'd get a recurrence relation for the $b_{k}$'s. 5. That's right. That's exactly what I have to solve, but my lecturer wants me to do it using Maple and not by hand, and since I've never used it before, I can't figure out what commands to enter. 6. I haven't a clue how to use Maple to solve that. With Mathematica, I'd probably play around with the Series command, and the RSolve command, to simplify your work. Look up the help on those commands, if you decide on Mathematica. But first, you need to get your DE looking like an actual recurrence relation. I'd work with that first, referencing the usual series solution method, which you should be able to adapt to your DE, as nasty as it is.
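Since the thread stops short of concrete commands, here is a sketch of the order-by-order approach described in posts 4 and 6. It is my addition, written in Python with SymPy rather than Maple or Mathematica; the truncation order N and all symbol names are mine, and whether each order of the expansion determines the next b_k uniquely is a property of the equation that this sketch does not guarantee:

```python
import sympy as sp

t, n, lam, delta, u = sp.symbols('t n lambda delta u')
N = 4                                   # truncation order; raise as needed
b = sp.symbols('b1:%d' % (N + 1))       # b1, ..., bN

# Truncated ansatz H = lambda*n^2 * sum_k b_k / t^k, as in post 4.
H = lam * n**2 * sum(b[k - 1] / t**k for k in range(1, N + 1))
Hp, Hpp = sp.diff(H, t), sp.diff(H, t, 2)

# Residual of the ODE: LHS minus RHS should vanish order by order in 1/t.
residual = (t*Hpp)**2 - (Hp*(2*n + t + lam) - H)**2 \
           + 4*(t*Hp - H + delta)*(Hp**2 + lam*Hp)

# Substitute u = 1/t; the truncated residual becomes a polynomial in u.
poly = sp.Poly(sp.expand(residual.subs(t, 1/u)), u)

sols = {}
for coeff in reversed(poly.all_coeffs()):         # lowest powers of u first
    eq = sp.expand(coeff.subs(sols))
    new = [v for v in b if eq.has(v) and v not in sols]
    if eq != 0 and new:
        sols[new[-1]] = sp.solve(eq, new[-1])[0]  # take one solution branch
print(sols)
```

The same idea transcribes directly into Maple (series plus solve) or Mathematica (Series plus Solve, or RSolve once the recurrence relation has been written down explicitly, as post 6 suggests).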
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461267590522766, "perplexity_flag": "middle"}
http://en.wikibooks.org/wiki/Microprocessor_Design/FPU
# Microprocessor Design/FPU Similar to the ALU is the Floating-Point Unit, or FPU. The FPU performs arithmetic operations on floating point numbers. An FPU is complicated to design, although the IEEE 754 standard helps to answer some of the specific questions about implementation. It isn't always necessary to follow the IEEE standard when designing an FPU, but it certainly does help. ## Floating point numbers This section is just going to serve as a brief refresher on floating point numbers. For more information, see the Floating Point book. Floating point numbers are specified in two parts: the exponent (e) and the mantissa (m). The value of a floating point number, v, is generally calculated as: $v = m \times 2^e$ ### IEEE 754 IEEE 754 format numbers are calculated as: $v = (1 + m) \times 2^e$ The mantissa, m, is "normalized" in this standard, so that the significand 1 + m falls between 1.0 and 2.0. ### Floating Point Multiplication Multiplying two floating point numbers is done as follows: $v_1 \times v_2 = (m_1 \times m_2) \times 2^{(e_1 + e_2)}$ Likewise, division can be performed by: $\frac{v_1}{v_2} = \frac{m_1}{m_2} \times 2^{(e_1 - e_2)}$ To perform floating point multiplication then, we can follow these steps: 1. Separate out the mantissa from the exponent 2. Multiply (or divide) the mantissa parts together 3. Add (or subtract) the exponents together 4. Combine the two results into the new value 5. Normalize the result value (optional). ### Floating Point Addition Floating point addition—and by extension, subtraction—is more difficult than multiplication. The only way that floating point numbers can be added together is if the exponents of both numbers are the same. This means that when we add two numbers together, we first need to scale the numbers so that they have the same exponent. Here is the algorithm: 1. Separate the mantissa from the exponent of each number 2. Compare the two exponents, and determine the difference between them. 3. Add the difference to the smaller exponent, to make both exponents the same. 4. Logically right-shift the mantissa of the number with the smaller exponent a number of spaces equal to the difference. 5. Add the two mantissas together 6. Normalize the result value (optional). ## Floating Point Unit Design As we have seen from the two algorithms above, an FPU needs the following components: For addition/subtraction • A comparator (subtracter) to determine the difference between exponents, and to determine the smaller of the two exponents. • An adder unit to add that difference to the smaller exponent. • A shift unit, to shift the mantissa the specified number of spaces. • An adder to add the mantissas together For multiplication/division • A multiplier (or a divider) for the mantissa part • An adder for the exponent parts Both operation types require a complex control unit. Both algorithms require some kind of addition/subtraction unit for the exponent part, so it seems likely that we can use just one component to perform both tasks (since both addition and multiplication won't be happening at the same time in the same unit). Because the exponent is typically a smaller field than the mantissa, we will call this the "Small ALU". We also need an ALU and a multiplier unit to handle the operations on the mantissa. If we combine the two together, we can call this unit the "Large ALU". We can also integrate the fast shifter for the mantissa into the large ALU. Once we have an integer ALU designed, we can copy those components almost directly into our FPU design.
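To make the addition algorithm concrete, here is a toy software model in Python. It is my addition, not from the wikibook, and it deliberately ignores signs, rounding modes, and special values (zero, infinity, NaN) that a real FPU must handle:

```python
def fp_add(m1, e1, m2, e2, mantissa_bits=8):
    """Add two unsigned values m1 * 2**e1 and m2 * 2**e2; return (m, e)."""
    # Steps 1-3: compare exponents and make e1 the larger one.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    # Step 4: logically right-shift the smaller number's mantissa.
    m2 >>= (e1 - e2)
    # Step 5: add the mantissas.
    m, e = m1 + m2, e1
    # Step 6: normalize so the mantissa fits in mantissa_bits bits.
    while m >= (1 << mantissa_bits):
        m >>= 1
        e += 1
    return m, e

# Example with 8-bit mantissas, interpreting the value as (m / 2**7) * 2**e:
# 1.5 * 2**3 + 1.25 * 2**1 = 12 + 2.5 = 14.5
print(fp_add(0b11000000, 3, 0b10100000, 1))  # (232, 3): 232/128 * 8 = 14.5
```

The right shift in step 4 is exactly where precision is lost, which is why hardware FPUs keep guard, round, and sticky bits before rounding the result.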
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.86119145154953, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/fresnel-integrals+calculus
# Tagged Questions ### An integral involving Fresnel integrals $\int_0^\infty \left(\left(2\ S(x)-1\right)^2+\left(2\ C(x)-1\right)^2\right)^2 x\ \mathrm dx,$ I need to calculate the following integral: $$\int_0^\infty \left(\left(2\ S(x)-1\right)^2+\left(2\ C(x)-1\right)^2\right)^2 x\ \mathrm dx,$$ where $$S(x)=\int_0^x\sin\frac{\pi z^2}{2}\mathrm dz,$$ ... ### Evaluating $\int_0^1 \! C(x) \, \mathrm dx$ through integration by parts $$\int_0^1 \! C(x) \, \mathrm{d} x.$$ where $C(x) = \int_0^x \cos(t^2) \, \mathrm{d} t$. I am really not quite sure how to go about this one, especially given that it needs to be calculated ...
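As a numerical sanity check on the first integral, one can evaluate it with SciPy, whose fresnel function uses the same sin(πz²/2), cos(πz²/2) convention as the S and C defined above. This snippet is my addition, not part of the question; the improper integral converges because the integrand decays like x^(-3):

```python
import numpy as np
from scipy.special import fresnel
from scipy.integrate import quad

def integrand(x):
    S, C = fresnel(x)
    return ((2*S - 1)**2 + (2*C - 1)**2)**2 * x

value, abserr = quad(integrand, 0, np.inf, limit=500)
print(value, abserr)
```

This gives a target number against which any proposed closed form for the integral can be compared.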
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9182789325714111, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/138234/how-does-a-conformal-mapping-preserve-angles-in-hyperbolic-geometry/138292
# How does a conformal mapping preserve angles in hyperbolic geometry? Suppose I have a sector $D = \{0 < \arg z < \alpha\}$ where $\alpha \leq 2\pi$. If I apply the function $w = \frac{\zeta - i}{\zeta + i}$ from the upper half plane to the unit disc ($\zeta = z^{\frac{\pi}{\alpha}}$), I get that the vertex of the sector goes to -1 and $z = \infty$ goes to 1. I get the unit circle essentially. My question for this example is: How do we know that the angles are preserved? - ## 2 Answers The answer is that the function sending $\zeta$ to $\frac{\zeta-i}{\zeta+i}$ is holomorphic, and holomorphic functions are those with a complex derivative. If we think of derivatives as being linear maps of best approximation, then that means $f$ is holomorphic if and only if $f(z+w)=f(z)+f^\prime(z)\cdot w+o(w)$. In other words, we are locally just translating and multiplying by $f^\prime(z)$, which is nonzero for this map. Translation of course preserves angles, and multiplying by a nonzero complex number is the same as a scaling combined with a rotation, which also both preserve angles. - Care to explain the downvote? – Brett Frankel May 1 '12 at 0:46 Here's an approach that doesn't require much foundation in complex analysis. The map you have described is among the mappings known as Möbius transformations. These are mappings of the form: f(z)=(az+b)/(cz+d) where a,b,c,d are complex and ad-bc is non-zero. What you should check is that all Möbius transformations are generated by the following more elementary ones, namely z->z+l, z->-conj(z), z->kz for k>0 and finally z->1/conj(z). These, you'll notice, are all transformations of the upper half plane. In particular, the first two are actually ordinary isometries of the plane. Thus, they preserve angles. The second-to-last one only dilates by a constant factor, so it, too, preserves angles rather trivially. All that's left to be checked is that the last one preserves angles. You'll notice that this is a composition of z->-conj(z) and z->-1/z. So in fact, we need only check that z->-1/z preserves angles. It is now a not-so-hard geometric problem to see that this is indeed the case. So in particular, this general argument implies that your specific Möbius transformation preserves angles. -
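Both answers can be checked numerically: transport two tangent directions at a point through w = (z - i)/(z + i) by finite differences and compare the angle before and after the map. This Python snippet is my addition; the test point and directions are arbitrary choices:

```python
import cmath

def f(z):
    return (z - 1j) / (z + 1j)

z0 = 0.7 + 1.3j               # a point in the upper half plane
v1, v2 = 1 + 0j, 0.6 + 0.8j   # two tangent directions at z0
h = 1e-6                      # finite-difference step

def angle_between(a, b):
    return abs(cmath.phase(b / a))

# Images of the tangent vectors under the differential of f at z0.
w1 = (f(z0 + h*v1) - f(z0)) / h
w2 = (f(z0 + h*v2) - f(z0)) / h

print(angle_between(v1, v2))  # angle before the map
print(angle_between(w1, w2))  # numerically the same angle after the map
```

Both prints agree to the accuracy of the finite difference, since each w_i is approximately f'(z0) times v_i, and multiplying by a common nonzero factor does not change angles.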
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394519329071045, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/313152/the-edge-set-grown-in-kruskals-algorithm
# The Edge Set Grown in Kruskal's Algorithm Let G = (V, E) be a weighted, connected and undirected graph. Let T be the edge set that is grown in Kruskal's algorithm and stopped after k iterations (so T might contain fewer than |V|-1 edges). Let W(T) be the sum of the weights of this set. Let T' be an acyclic edge set such that |T| = |T'|. Prove that W(T) <= W(T') I understand the original proof of the algorithm and I've tried several approaches to tackle this; none worked. For example: I thought an induction on |T| might work. For |T| = 1 it's obvious. We assume correctness for |T|=k and prove (or not…) for k+1. Assume by contradiction that there exists an edge set T' such that |T'|=k+1 and W(T') < W(T). Let e be the last edge added by Kruskal's algorithm. So for any edge f in T', W(f) < W(e) (otherwise we remove the edges from the 2 sets and get a contradiction). This can only happen if every edge in T' is already in T or forms a cycle with T – {e}. … I have no idea what to do next. I would really appreciate any help, Thanks in advance - This isn't true unless $T^\prime$ is (the edge set of) a spanning tree. You say you understand the original proof of the algorithm, but I don't see how that is possible if you can't answer this question. – Alexander Gruber Feb 24 at 18:20 You're right. I forgot to mention that T' doesn't contain any cycles – Robert777 Feb 24 at 18:25 So how did it work in the original proof of Kruskal? How did you know at the end that the spanning tree it produces was really minimal? – Alexander Gruber Feb 24 at 19:14 You can show that at each stage of the algorithm, when an edge e is added to the grown edge set, then the new set is contained in at least one minimal spanning tree. – Robert777 Feb 24 at 19:32 1 Right, so my suggestion is this: put your edges in order by weight, smaller first, and prove that at the $k$th step, having $W(T^\prime)<W(T)$ would produce a contradiction because $T^\prime$ would have been selected by the algorithm as the smaller spanning tree. Think about when the last edge could have been added. – Alexander Gruber Feb 24 at 20:09
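For experimentation, here is a minimal Python sketch of the object in the question: the edge set T grown by Kruskal's algorithm and stopped after k iterations, with a union-find structure for the cycle test. This is my addition, and it reads "k iterations" as "k accepted edges":

```python
def kruskal_prefix(n_vertices, edges, k):
    """Return the first k edges accepted by Kruskal's algorithm.

    edges: iterable of (weight, u, v) tuples with 0 <= u, v < n_vertices.
    """
    parent = list(range(n_vertices))

    def find(x):                           # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    T = []
    for w, u, v in sorted(edges):          # consider edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) creates no cycle
            parent[ru] = rv
            T.append((w, u, v))
            if len(T) == k:
                break
    return T

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kruskal_prefix(4, edges, 2))  # [(1, 0, 1), (2, 1, 2)]
```

Enumerating all acyclic edge sets of the same size on small random graphs and comparing total weights is a cheap way to convince yourself of the claim before attempting the proof.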
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428656101226807, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/04/28/the-gram-schmidt-process/?like=1&source=post_flair&_wpnonce=fe7f791e1e
# The Unapologetic Mathematician ## The Gram-Schmidt Process Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be “orthonormal”: each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection $\left\{e_i\right\}_{i=1}^n$ is orthonormal if $\langle e_i,e_j\rangle=\delta_{i,j}$. These can be useful things to have, but how do we get our hands on them? It turns out that if we have a linearly independent collection of vectors $\left\{v_i\right\}_{i=1}^n$ then we can come up with an orthonormal collection $\left\{e_i\right\}_{i=1}^n$ spanning the same subspace of $V$. Even better, we can pick it so that the first $k$ vectors $\left\{e_i\right\}_{i=1}^k$ span the same subspace as $\left\{v_i\right\}_{i=1}^k$. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt. We proceed by induction on the number of vectors in the collection. If $n=1$, then we simply set $\displaystyle e_1=\frac{v_1}{\lVert v_1\rVert}$ This “normalizes” the vector to have unit length, but doesn’t change its direction. It spans the same one-dimensional subspace, and since it’s alone it forms an orthonormal collection. Now, let’s assume the procedure works for collections of size $n-1$ and start out with a linearly independent collection of $n$ vectors. First, we can orthonormalize the first $n-1$ vectors using our inductive hypothesis. This gives a collection $\left\{e_i\right\}_{i=1}^{n-1}$ which spans the same subspace as $\left\{v_i\right\}_{i=1}^{n-1}$ (and so on down, as noted above). But $v_n$ isn’t in the subspace spanned by the first $n-1$ vectors (or else the original collection wouldn’t have been linearly independent). So it points at least somewhat in a new direction. To find this new direction, we define $\displaystyle w_n=v_n-\langle e_1,v_n\rangle e_1-...-\langle e_{n-1},v_n\rangle e_{n-1}$ This vector will be orthogonal to all the vectors from $e_1$ to $e_{n-1}$, since for any such $e_j$ we can check $\displaystyle\begin{aligned}\langle e_j,w_n\rangle&=\langle e_j,v_n-\langle e_1,v_n\rangle e_1-...-\langle e_{n-1},v_n\rangle e_{n-1}\rangle\\&=\langle e_j,v_n\rangle-\langle e_1,v_n\rangle\langle e_j,e_1\rangle-...-\langle e_{n-1},v_n\rangle\langle e_j,e_{n-1}\rangle\\&=\langle e_j,v_n\rangle-\langle e_j,v_n\rangle=0\end{aligned}$ where we use the orthonormality of the collection $\left\{e_i\right\}_{i=1}^{n-1}$ to show that most of these inner products come out to be zero. So we’ve got a vector orthogonal to all the ones we collected so far, but it might not have unit length. So we normalize it: $\displaystyle e_n=\frac{w_n}{\lVert w_n\rVert}$ and we’re done.
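The procedure transcribes almost line for line into code. Here is a short NumPy version (my addition, not part of the original post), with the rows of V as the vectors $v_i$; np.vdot conjugates its first argument, matching the inner product convention $\langle e, v\rangle$ above:

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the linearly independent rows of V."""
    E = []
    for v in V:
        # w_n = v_n - <e_1, v_n> e_1 - ... - <e_{n-1}, v_n> e_{n-1}
        w = v - sum(np.vdot(e, v) * e for e in E)
        E.append(w / np.linalg.norm(w))   # normalize to unit length
    return np.array(E)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(V)
print(np.round(E @ E.conj().T, 10))  # identity matrix: rows are orthonormal
```

In floating point this "classical" ordering can lose orthogonality; numerical libraries prefer modified Gram-Schmidt or Householder reflections, but the mathematics above is unchanged.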
Posted by John Armstrong | Algebra, Linear Algebra ## 37 Comments » 1. Are there any applications of Gram-Schmidt to e.g. polynomials or Hilbert spaces of L^2 summable analytic functions on a domain? Comment by Zygmund | April 28, 2009 | Reply 2. To finite-dimensional subspaces, sure, but the procedure is essentially finite. Comment by | April 28, 2009 | Reply 3. I don’t think that’s quite true. If you start with an infinite sequence (v_n), then you can certainly keep applying the Gram-Schmidt process: at level n, you only need to know v_1 up to v_n, so it is “finite” in this sense. 
Essentially because of the (abstract form) of Parseval’s Theorem the infinite series you end up with is still nicely related to what you started with (same closed linear span, for example). Lots of classical polynomials can be found in this case, if you start with [a,b] and some inner-product on the continuous functions of [a,b] (normally not just integration). For example, the Hermite polynomials, the Legendre polynomials etc. Comment by | April 29, 2009 | Reply 4. Well… the answer is “yes”. If H is a separable (pre)Hilbert space, and if you have a sequence x_n whose span is dense in H, then surely you can apply Gram-Schmidt to produce an orthonormal basis of H, in other words a complete/maximal orthonormal set. That would apply in particular to L^2 functions on a domain. When applied to polynomials on [-1, 1], starting with basis x^n, this gives essentially the Legendre polynomials. Comment by | April 29, 2009 | Reply 5. Sure, Matt, but at any point you’ve got a finite subspace. My point (and this also goes to Todd’s comment) is that you can apply Gram-Schmidt as far down the infinite sequence as you’d like, but you’ll never get them all done. Comment by | April 29, 2009 | Reply 6. What?! It’s simple induction! Comment by | April 29, 2009 | Reply 7. Induction tells you that for any finite $n$ you can orthonormalize the first $n$ vectors, but not that you can orthonormalize an entire infinite sequence. Comment by | April 29, 2009 | Reply 8. [...] Now that we have the Gram-Schmidt process as a tool, we can use it to come up with orthonormal [...] Pingback by | April 29, 2009 | Reply 9. Sigh. Actually, technically this extremely commonplace type of construction is correctly called recursive, but most non-logicians just call it a construction by induction. (One constructs things by recursion, but proves things by induction.) I’m a little surprised that you’re debating this, but let me try to keep this light and humorous. Please imagine me saying, in the friendliest way possible: I’m gonna have to get formal on yo’ ass! I think you’re probably familiar with Lawvere’s universal characterization of the natural numbers. It says that given a set X, a point p_0: 1 –> X, and a function f: X –> X, there is a unique function h: N –> X such that h(0) = x_0 and h(n+1) = f(h(n)). Let’s take this as given. Now on to Gram-Schmidt. Suppose given an inner product space H and a sequence x_n in H, which we’ll construe as an element of the function space H^N. Just for the sake of convenience, let me assume x_0 is a unit vector. Put X = N x H^N. Then define f: X –> X by the rule f(n; y_0, y_1, …) = (n+1; z_m := N(y_m – Sum_{j < n+1} (y_m, y_j)y_j) ) where N( ) denotes normalization v |–> v/|v|, and I’ve used ( , ) inside the sum for inner product. Then let p_0 in N x H^N be the point (0; x_0, x_1, …), where again x_j is the original sequence. So, by the universal property, we get a function N –> N x H^N. Follow this by projection N x H^N –> H^N. We get a function N –> H^N which we can construe as a double sequence N x N –> H. Take the diagonal of this double sequence, i.e., compose with the diagonal map N –> N x N. We get a map N –> H. Unless I’ve made an (easily correctable) mistake, this map is the orthonormal Gram-Schmidt sequence we’re after. Does this help clear it up? Comment by | April 29, 2009 | Reply 10. 
Todd, I understand your point, but I would say that the universal construction (which I used to define the natural numbers) shows that the result of applying Gram-Schmidt increasingly many times gives a unique sequence of vectors, but it doesn’t construct such a sequence. This is the universal failing of universal properties. Gram-Schmidt constructs the first $n$ terms of the orthonormal basis for any finite $n$, but it can’t get all of them at once. A high-level view (such as invoking the universal property) can show that they all exist, but doesn’t construct them all. Other views, tailored to specific circumstances, can give a closed form for the $n$th vector, and we can check that the results satisfy orthonormality, but Gram-Schmidt only hints at the pattern, or it can be used to check the inductive step of one possible avenue of checking. The point that I’m making is that Gram-Schmidt is a process and a construction, not a mere theorem. Comment by | April 29, 2009 | Reply 11. John, I have absolutely no idea what you’re talking about. I gave a rigorous explicit recursive construction of an orthonormal set in terms of a given sequence, which spans the same subspace. It’s an explicit construction, not just an existence statement. I can’t see a thing wrong with it. You surely agree (unless you’re a finitist and I’m learning this for the first time?!) that the recursion property of the natural numbers provides an absolutely standard typical way of constructing sequences. That’s how universal properties are used: to construct maps. For example, if you accept this universal characterization of N, as I thought you did, then you surely understand how it can be used to construct maps such as addition on N, etc., etc. The construction I gave above is no different in underlying principle. In other words, reading over your first sentence [where you actually use the word "construction", and then say it doesn't construct]: are you saying you disagree that the universal property of N can be used to _construct_ addition on N — that all we get from it is a “hint of the pattern”? Are you disavowing general constructions of this type? I’d be amazed and dumbstruck if you did! At any rate, whatever the objection may be, it sounds more philosophical than mathematical (and not in keeping with the realist that I thought you were). (Does anybody reading this understand what John is trying to say?) Comment by | April 29, 2009 | Reply 12. It’s not that I’m a finitist. It’s more that I’m being very careful about the distinction between “for any finite $n$” and “for all finite $n$ [at once]”. Yes, finitists reject one side of this distinction, but just because I don’t follow them that far doesn’t mean I can’t notice the distinction. I also accept the axiom of choice, while still noting where I use it. The universal property you invoke just says that a certain function exists. It doesn’t “construct” the function any more than the universal property of products in $\mathbf{Set}$ “constructs” functions to the product set. No, the construction in that case comes from an explicit realization of the categorical product as the Cartesian product. Different models will give different — uniquely isomorphic — functions. I’m not even sure that your construction works, modulo that invocation. The vector you normalize is $y_m - \sum_{j < n+1} (y_m, y_j)y_j$ which I’m not certain is actually orthogonal to the earlier vectors. Maybe this comes out in the wash when you invoke the universal property, but it’s not clear to me. 
Anyway, I don’t see what you’re getting so bent out of shape about. In practice you usually only need a finite-dimensional subspace in an application that calls for an explicit basis anyway. And there are other ways to get your hands on bases and show that they’re orthonormal. I’m never denying the validity of any of your constructions. I’m just saying that they take the willingness to jump from “for any finite $n$” to “for all finite $n$ at once”, and that it’s worthwhile to call explicit attention to that jump and not make it unless there’s no other way around. Comment by | April 29, 2009 | Reply 13. If you’re not denying the validity of the constructions, then you’re not denying the validity of the statements made in comments 3 and 4, the way I see it. So then I don’t see why you felt a need to argue against them. But you avoided my question: do you see addition of natural numbers as something globally defined by a recursive construction — “all at once” as you put it — or do you see it otherwise: “for any given pair (m, n), one can ‘construct’ m + n, but one can never define all such additions at once”. Because if you accept that global construction, then I can’t understand why you wouldn’t accept more elaborate applications of the same underlying recursive principle. That was the basic point of my comment 11. I do grasp this distinction you’re making, but not why you are particularly drawing attention to it. For this type of thing may have been a burning issue in debates over potential and actual infinities more than a century ago, but it doesn’t really impinge on the way most mathematicians (except e.g. finitists) carry out discussions these days. In particular, I don’t understand why it should make any substantial difference in how one should reply to comment 1. In other words, comments 3 and 4 stand and are completely valid, according to standard mathematics as practiced today. Case closed. I don’t see much substance in your paragraph 2. As you know, I have a stylistic preference for foundational systems like ETCS which stress universal properties (which by the way don’t assert mere existence, but unique existence of suitable maps). Such systems are in no way dependent on other systems like ZF, and in particular do not depend on “explicit realizations” (as you put it) of objects like cartesian products. They are axiomatic theories, no more and no less than ZF, and are (for the purposes of discussing Gram-Schmidt, for example) equally as good. To repeat: they are neither more nor less “explicit” or “constructive” (in the way you are using the word) than ZF. (In other senses, one could say that certain topos-oriented forms of categorical set theory are *more* constructive, in that the axiom of choice is not assumed, and so on. See, I also pay attention to such issues.) Notice that in either case, when we read aloud the axioms of some set theory, we usually say phrases like “there exists”. It is customary not to read the same symbol as “we can construct” — even in the most constructive versions of set theory. But so what? In point of fact, we never in any literal sense ‘construct’ mathematical objects, and we don’t [or I don't] positively assert that objects like N truly ‘exist’! All that’s philosophy, not mathematics. But in ordinary language use, when it comes to applying statements of universal properties, which involve unique determination of maps, it’s completely okay to say things like “using the universal property ___, we may construct… ” — no one ever bats an eye. 
I’m sure if I comb through UM, I could spot you saying the same type of thing. So I can’t imagine why you’re quibbling over that of all things — all my constructions were completely constructive in the best mathematical sense of that word, and can be made perfectly rigorous in a perfectly acceptable ‘foundational system’ if you like, and that’s that. This is just completely standard stuff! And despite the occasional exclamation mark, I’m not getting bent out of shape. But you seemed to be arguing against the validity of 3 and 4, and for the sake of poor Zygmund in comment 1, I thought we should suss it out a bit. You say my constructions are valid, and I read that as your assenting to comments 3 and 4. Or are you actually still seriously debating this? I’ll let you have the last word, if you wish. I’ve spent more time on what I think is a really silly discussion than I should have. Comment by | April 30, 2009 | Reply 14. I still don’t feel entirely comfortable saying that it’s Gram-Schmidt doing the heavy lifting in comments 3 and 4. There are other ways around it, and to the extent Gram-Schmidt is useful we’re only actually using a finite-dimensional subspace anyhow. In fact, that’s exactly what Matt said, and what I agreed with there. I really don’t think that there’s ever been as much difference here as you seem to think. And, by the way, statements that amount to “are you seriously questioning me” go a lot further than exclamation points in giving the impression you’re upset about it. If nothing else it’s not exactly a light touch. Comment by | April 30, 2009 | Reply 15. Okay. I said you could have the last word, but I’ll agree with you that I got a little heavy there. Shake? Comment by | April 30, 2009 | Reply 16. Sure. Comment by | April 30, 2009 | Reply 17. Compare Paul Garrett’s notes http://www.math.umn.edu/~garrett/m/fun/Notes/02_hsp.pdf on Hilbert Space. Section 13: the Gram-Schmidt process. (I don’t understand either why John Armstrong doesn’t want to call this the Gram-Schmidt process.) Comment by | May 3, 2009 | Reply 18. Well, for one thing Garrett is using a different definition of “basis”, where finite linear combinations are only required to be dense in the space. But I suppose little details like definitions don’t matter in mathematics, right? We can be as sloppy as engineers and everything still works the same. I’ve already explained my position repeatedly, and Todd understands it even if he violently disagrees with the philosophy I’m acknowledging the existence of. If all you have is “me-too”ism, don’t bother. If you really want to understand, try reading. Comment by | May 3, 2009 | Reply 19. “Well, for one thing Garrett is using a different definition of “basis”, where finite linear combinations are only required to be dense in the space.” Right, this is one of the two kinds of basis which is used in the study of topological vector spaces. The term “Hamel basis” is used for a basis in the sense of linear algebra. Usually though in the study of Hilbert spaces the former notion of basis is more useful. I brought up Hilbert space because Zygmund asked about it. “But I suppose little details like definitions don’t matter in mathematics, right? We can be as sloppy as engineers and everything still works the same.” I didn’t say this, nor do I agree with it (nor is it fair to engineers!). I am surprised and disappointed that my brief comment engendered such a negative reaction. I am sorry to say that it may have the effect of discouraging me from posting here in the future. 
Comment by | May 3, 2009 | Reply 20. Just to be clear, my beef wasn’t actually against any philosophy per se. I was disagreeing with the mathematical content of comments 5 and 7 according to standard mathematical practice (i.e., valid constructions and arguments according to an accepted axiomatic system like ZF or ETCS). That is, I was asserting you can “get them all done” (cf. comment 5), and I was disagreeing with the asserted limitation on induction or recursion expressed in comment 7. I thought comment 9 had covered the underlying principle at stake, although I acknowledge a mistake in my “Gram-Schmidt formula”: I should have expressed the f: N x H^N –> N x H^N as f(n; y_0, y_1, …) = (n+1, z_m := y_m if m != n+1, z_m = N(y_m – Sum_{j < n+1} (y_m, y_j)y_j) if m = n+1) I was trying to be too slick before and avoid a casewise definition. I’m sorry for the error. (I also had a typo, where I wrote an x_0 where I should have had a p_0). I didn’t understand why you continued to argue back against this in comment 10 (where you wrote, “Gram-Schmidt constructs the first n terms of the orthonormal basis for any finite n, but it can’t get all of them at once”), which simply repeats what you said before, and which I thought was already effectively refuted in comment 9. For your continuing to argue that point only made sense to me if you were in effect denying the recursive property of N (which I formulated in comment 9 a la Lawvere), a prospect which I obviously considered highly surprising. For it would fly in the face of valid constructions and arguments according to standardly agreed-upon axiomatic foundations (e.g. ZF or ETCS) of working mathematicians — the kind of mathematics you’ve been expounding all along. In other words, it seemed to me your comments 5, 7, and 10 could be defended only if you were actively rejecting such standardly agreed-upon axiomatics (as a finitist might). If you weren’t rejecting it, then those comments were simply wrong on mathematical grounds. That was the core thing I was arguing. John, I sincerely apologize for getting overly aggressive, but I just couldn’t understand why you were arguing in the way you were about the limitations of induction or recursion, especially in light of comment 9, which I thought should have settled the matter definitively. I was nonplussed and frustrated that you weren’t owning up to a simple mathematical mistake [we all make them], but just arguing back, and on things (like existence and construction) which IMHO were not pertinent to the core of the discussion. But I shouldn’t have lost my cool, and again I’m sorry about that. If there’s some mathematical point that I didn’t make clear, I’ll be happy to try again. Comment by | May 3, 2009 | Reply 21. Todd, I don’t think that in practice there’s any limitation beyond what you claim can be done. In practice, almost all applications I see only ever need a sufficiently large finite-dimensional subspace. This is why in practice there’s only a semantic difference between “for any finite $n$” and “for all finite $n$ [at once]“, and it’s a subtlety that there’s almost no harm in sweeping aside. But just like one day you may find yourself working in a topos that rejects the Axiom of Choice, you might find it useful to accept finitist restrictions for a moment. Comment by | May 3, 2009 | Reply 22. John, here’s what you said: You can apply Gram-Schmidt as far down the infinite sequence as you’d like, but you’ll never get them all done. 
This is not a statement about what one needs in practice or whatnot. It’s a blunt positive declaration that flies in the face of the standard mathematics you’ve been expounding thus far. It is true only to the extent that you declare yourself to be working in some nonstandard (e.g., finitist) context. Which you haven’t. Not even now. I’m happy to work in whatever axiomatic framework seems useful for my purposes. You seem to believe that I have a bias against finitism. I don’t, but anyway that’s not the point. You’ve been expounding ordinary standard mathematics all along, and therefore that’s the context in which Zygmund’s question should have been addressed. Comment by | May 3, 2009 | Reply 23. I’m usually making existence statements, not giving constructive processes. And I fully recognize the validity of using a constructive process, extrapolating a pattern, and then proving that the result of that extrapolation satisfies the desired property. I just say that that constitutes an extra step beyond the constructive process itself. You disagree (still rather sneeringly, it would seem), and that’s your prerogative. Comment by | May 3, 2009 | Reply 24. Do you still stand behind this other positive declaration? Induction tells you that for any finite n you can orthonormalize the first n vectors, but not that you can orthonormalize an entire infinite sequence. Because aside from whatever semantic limitations you are imposing on the word “process”, this statement above sums up what in your line of argument seems obviously mistaken to me. It is this statement which prompted my comment 9. Comment by | May 3, 2009 | Reply 25. I’ll state explicitly the clause “at once” before I agree. The constructive process is, as I’ve put it before, inherently finite. It can orthonormalize as large a finite number of the vectors in the sequence as you’d like, but by means of the process alone you will never reach the end of the sequence and have it all done. Comment by | May 3, 2009 | Reply 26. Read again carefully the exact quote, where neither the non-mathematical phrase “at once” appears, nor does the phrase “constructive process” (which according to your pronouncement can’t be finite, although others may choose not to accept that semantic limitation), and please tell me if you still stand by it. It’s a statement about what induction (or perhaps better, recursion) can or cannot do. Comment by | May 3, 2009 | Reply 27. As written, I can see it being interpreted with neither of those clauses in mind. However, with those elisions replaced that has always been the position I’ve been holding. Gram-Schmidt on its own is a constructive process and cannot finish the infinite sequence. Comment by | May 4, 2009 | Reply 28. I meant “can’t be infinite” in the second line. Comment by | May 4, 2009 | Reply 29. Well, okay. I guess you understand clearly that contra comment 7, induction/recursion can do what I said it could do in comment 9, and that you can understand why comment 9 was a justifiable reaction to comment 7. To wit, that there is a simple recursive construction using a formula based on [ahem] the idea of the Gram-Schmidt prescription, which produces an orthonormal sequence given an initial linearly independent sequence. This isn’t philosophy, and this isn’t semantics about terms like “process”. It’s straight mathematics. 
Naturally you are free to declare, if you really want, that “the Gram-Schmidt process” must by definition be finitary — this is just semantics over the term “process” (which AFAIK is not a term with a standard widely-accepted definition). It seems to me a little doctrinaire and self-limiting, and not in the spirit of recognizing that the idea of Gram-Schmidt can be applied to infinitary contexts as well (speaking back to comments 1 and 2), but hey, it’s your blog and your prerogative. Comment by | May 4, 2009 | Reply 30. I haven’t managed to grasp every twist and turn of this argument, but here’s an issue I’m having with the idea that Gram-Schmidt ‘gets done’. If we start with the linearly independent set $\left{v_i\}_{i\in N}$, we can successively orthonormalize so as to get a sequence of increasingly inclusive subspaces with orthonormal bases, whose union will exist and be a subspace of the original that has an orthonormal basis. If I could prove that any vector in the original space appeared in this subspace, then I’d be entirely happy with the claim that Gram-Schmidt had been completed, but I can’t, so I’m not. Comment by Avery Andrews | May 4, 2009 | Reply 31. I mean $\{v_i\}_{\in N}$ for the non-parsing formula Comment by Avery Andrews | May 4, 2009 | Reply 32. Avery, the span of the set {v_i} equals the union of the spans of {v_i: i ≤ n}. When we apply Gram-Schmidt to the sequence to produce a new sequence {u_i}, we have span {v_i: i ≤ n} = span {u_i: i ≤ n} and so the union of the spans on the right equals the union of the spans on the left, which equals the span of the original set. Or was that what you were asking? See also Garrett’s notes referred to in comment 17. Comment by | May 5, 2009 | Reply 33. I don’t understand much of the Garrett notes (deficiency of time and background to crank thru details) but from the last paragraph on pg 10 it looks to me like the proof is inherently nonconstructive (not [union of chain = the whole space] => contradiction]; therefore [union of chain = whole space]), so if there is no constructive proof that the union of the chain is the whole space, could that perhaps justify John’s intuition here? Or am I way out in left field, which is sometimes the case Comment by Avery Andrews | May 5, 2009 | Reply 34. I think it’s much easier than that: every vector in the space spanned by the set {v_i} is by definition a finite linear combination, i.e., of the form a_0 v_0 + … + a_n v_n for some choice of n and a_0, …, a_n, and therefore in Span(v_0, …, v_n), one of the finite-dimensional spaces in the collection we are taking the union of. The word “choice” here is superfluous, since n is uniquely determined if we take it minimal, and then the a_0, …, a_n are uniquely determined by linear independence. Comment by | May 5, 2009 | Reply 35. Just to add a little more to that: Garrett on page 10 is discussing different issues from what we’ve been discussing above. For Gram-Schmidt, look at his section 13 instead. To tie it together with my previous comment, you can assume the v_i are a countable linearly independent set whose span is dense in the given Hilbert space H. (By “dense”, I guess you know that we mean that we also think of H as a metric space where the metric is defined via the inner product, hence H carries a topology, and “dense set” carries the usual topological meaning: its closure is all of H. 
In other words, every element of H is a limit of a sequence of finite linear combinations of the v_i which is a Cauchy sequence with respect to the metric.) Comment by | May 5, 2009 | Reply 36. [...] an orthonormal basis which also gives us an upper-triangular matrix. And of course, we’ll use Gram-Schmidt to do [...] Pingback by | May 8, 2009 | Reply 37. [...] this by picking a basis of and declaring it to be orthonormal. We don’t anything fancy like Gram-Schmidt, which is used to find orthonormal bases for a given inner product. No, we just define our inner [...] Pingback by | September 28, 2010 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 39, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303334951400757, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/121113/do-finitely-generated-groups-of-polynomial-growth-satisfy-a-uniform-covering-pro
## Do finitely generated groups of polynomial growth satisfy a "uniform covering property?" Let $G$ be a finitely generated discrete group with a finite symmetric generating set $S=S^{-1}\subset G$. For every group element $g$, define $\|g\|_S$ to be the length with respect to $S$, i.e. the minimal length of any word in $S$ that represents $g$. For a natural number $n$, define $B_n=\{g\in G : \|g\|_S\leq n\}$. $G$ is said to have polynomial growth if there exist constants $C,r$ such that $$|B_n|\leq C\cdot n^r$$ for all $n$. Note that this is really a property of the group itself, i.e. it does not depend on the choice of the generating set. One particular implication of this is that by going from $B_n$ to $B_{2n}$, the ratio of the cardinalities $\frac{|B_{2n}|}{|B_n|}$ can be bounded uniformly by a constant. Now I would like to strengthen this a little bit. For that, I mean the following sort of uniform covering property: (just a pragmatic notion for now) The pair $(G,S)$ has the uniform covering property if there exists a constant $C\in\mathbb{N}$ such that for all $n$, there exist $g_1,\dots,g_C\in G$ such that $B_{2n}\subset \bigcup_{i=1}^C g_i\cdot B_n$. This means that there exists a constant that controls the number of copies (i.e. translates) of $B_n$ that one needs to cover $B_{2n}$. Presumably, this should not depend on the choice of $S$ either, although I have not bothered to check. So I state my question this way: If $G$ has polynomial growth, is there a generating set $S$ such that the pair $(G,S)$ has the uniform covering property? If not, could there be an alternative description of the class of groups that have this property? I should say that I have only limited knowledge/skills in questions concerning advanced group theoretic problems. But the question seems elementary enough to hope that someone with more expertise in these areas could answer it. My motivation for asking this question is that this uniform covering property came up in my research as a sufficient condition for a theorem that states something very nice about free topological dynamical systems $(X,\alpha,G)$, where $G$ acts continuously via $\alpha$ on a compact metric space $X$. On the one hand, a positive answer to this question would mean that the theorem says something nice about a very large class of amenable groups, not just standard examples like $G=\mathbb{Z}^m$. On the other hand, if the answer is no, there could still be a description of what groups I actually talk about. My intuition says that assuming this property is something very artificial/technical and should be replaced by something more natural, e.g. polynomial growth. - 1 Note that the uniform covering property is often called the doubling property, and is linked to Poincaré inequalities in metric-measure spaces. – Benoît Kloeckner Feb 8 at 11:53 Thank you for pointing this out – Gabor Szabo Feb 8 at 12:31 ## 1 Answer The answer is yes. Moreover, given a symmetric generating set $S$, the sets $B_n$ have the uniform covering property for all large $n$. Indeed, let $C_n$ be the minimal number of translates $g\cdot B_n$ needed to cover $B_{2\cdot n}$. Let $d$ be the word metric for $S$. Then the sequence $(G,d/n)$ is precompact in the Gromov--Hausdorff topology. In particular, any partial limit of $(B_{2\cdot n},d/n)$ is compact. It follows that $C_n$ is bounded. - Thank you very much for this quick answer!
So I can find all of the details of this argument in the proof of Gromov's theorem? – Gabor Szabo Feb 7 at 21:28 Yes, the precompactness is proved in Gromov's paper (it is the easiest but also the game-changing part of the paper). – Anton Petrunin Feb 8 at 0:15
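As a concrete illustration of the growth condition in the question (my own toy computation in Python, not part of the thread): in $\mathbb{Z}^2$ with the standard generators, the word-metric balls satisfy $|B_n| = 2n^2+2n+1$, so the ratio $|B_{2n}|/|B_n|$ stays bounded, approaching 4.

```python
from itertools import product

# Word-metric ball sizes in Z^2 with generators (±1,0), (0,±1):
# ||(x,y)|| = |x| + |y|, and |B_n| = 2n^2 + 2n + 1.
def ball_size(n):
    return sum(1 for x, y in product(range(-n, n + 1), repeat=2) if abs(x) + abs(y) <= n)

for n in (5, 10, 20):
    print(n, ball_size(n), round(ball_size(2 * n) / ball_size(n), 3))  # ratio -> 4
```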
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938052237033844, "perplexity_flag": "head"}
http://mathoverflow.net/questions/59305?sort=votes
## Parabolic induction for GL(2,Z/p^n) Fix a finite extension $F$ of $\mathbb{Q}_p$. Consider its ring of integers $\mathfrak{o}$ with maximal ideal $\mathfrak{p}$. Set $R_n = \mathfrak{o}/\mathfrak{p}^n$. Let $\mathrm{B}$ be the upper triangular matrices. Consider a character $\mu : \mathrm{B}(R_n)\rightarrow \mathbb{C}^\times$. When is the induced representation $\mathrm{Ind}_{\mathrm{B}(R_n)}^{\mathrm{GL}_2(R_n)} \mu$ irreducible? - 1 I guess the exponent $r$ here should be $n$? – Jim Humphreys Mar 23 2011 at 15:44 ## 2 Answers A sufficient criterion for irreducibility is given, for example, in Theorem 4.6 in Hill: Semisimple and cuspidal characters of $\mathrm{GL}_n(\mathcal{O})$. Hill's result is more general, and holds for certain representations of $\mathrm{GL}_n(\mathcal{O})$, for $n\geq 2$. For $\mathrm{GL}_2(\mathcal{O})$ it says the following. Let $T$ be the diagonal torus, so that $T(\mathcal{O}_r)\cong\mathcal{O}_r^{\times}\times\mathcal{O}_r^{\times}$. Let $\theta=\theta_1\theta_2$ be a character of $T(\mathcal{O}_r)$, where $\theta_1$ and $\theta_2$ are characters of $\mathcal{O}_r^\times$. Suppose that the restriction of $\theta_1$ to $1+\mathfrak{p}^{r-1}$ is non-trivial, and that the restriction of $\theta_1$ to $1+\mathfrak{p}^{r-1}$ is not equal to that of $\theta_2$. Let $\tilde{\theta}$ denote the pull-back of $\theta$ to $B(\mathcal{O}_r)$. Then the representation $$\mathrm{Ind}^{\mathrm{GL}_2(\mathcal{O}_r)}_{B(\mathcal{O}_r)}\tilde{\theta}$$ is irreducible. - For PGL(2) I have the following reference: Silberger, A.J.: PGL 2 over the p-adics: its representations, spherical functions, and Fourier analysis. Springer Lecture Notes in Mathematics 166. Berlin, Heidelberg, New York: Springer 1970. I don't have this book with me, but the basic idea is to apply Mackey's irreducibility criterion. For this you have to determine the double cosets of $G={\rm GL}_2 (R_n)$ mod $B=B(R_n)$. For $n=1$, just use the Bruhat decomposition. To get a set of representatives of the double quotient for $n>1$, you introduce the "Iwahori" subgroup $I$ of matrices that are upper triangular mod ${\mathfrak p}$. Then $G=I\cup IwI$, where $w$ is the standard Weyl group element. Next you have the following set of representatives of right (or left) $B$-cosets in $I$: the lower triangular unipotent matrices with the coefficient varying over ${\mathfrak p}/{\mathfrak p}^{n}$. - 1 In the end, I think the criterion would be that $\mu^{w}$ is not isomorphic to $\mu$. – Joël Cohen Mar 24 2011 at 13:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.897642195224762, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/3744/are-picture-files-random-enough-to-be-usable-as-a-one-time-pad?answertab=votes
# Are picture files "random enough" to be usable as a one-time pad? Say you have a picture of 1 megapixel taken at random and with $2^{24}$ possible colours per pixel (RGB-24). That image would be unique and the possible combinations $(2^{24})^{10^6}$ immense. However, when taking a picture in the real world, say of a clear sky, there will be a lot of repetition. The question is: would such repetition present a security risk when used as a one-time pad, where the requirements of randomness are so high? My hunch is that it is, as true randomness would require the possibility of all pixels being #FF0020 or whatever, but I would like to be proven right or wrong. If I've been unclear at some point, please let me know and I will edit my post. - 1 I am pretty sure that at 1MP it's "okay" if it's not truly random. Also, is it really "less random" or does it just have "less distribution"? – pst Sep 7 '12 at 22:18 Oops, wrong click... And now I voted for a wrong comment.. Sorry – Alex Cohn Sep 10 '12 at 7:51 You need full control of the acquisition process in order to avoid fake images (as demonstrated by John Deters). You usually have better sources of random bits than a camera. – Alex Cohn Sep 10 '12 at 7:54 ## 5 Answers No. This is not safe. The one-time pad requires that the pad be generated by a true-random process, where each bit of the pad is chosen uniformly at random (0 or 1 with equal probability), independent of all other bits. Any deviation from that, and what you have is no longer the one-time pad cryptosystem -- it is some kludgy thing. In particular, once you deviate from that requirement even a little bit (and you're talking about a huge deviation), you are skating on thin ice and there will probably be security problems with your scheme. If you're gonna use the one-time pad, you gotta use it exactly as it is defined, with a truly-random pad. There are no shortcuts, no halfway stuff. Messing around with this sort of thing is exactly what enabled the US to cryptanalyze Soviet use of a "one-time pad" in the VENONA project. But in practice, you probably don't want to use the one-time pad anyway. The key management issues are enough that it is rarely a good choice in practice. - I see, so despite the ridiculous number of combinations the pixels in a photo provide, the fact that there is a limit at all makes it insecure? I apologize if I seem a bit thick, but it's hard to understand how the scattering of light such as that picked up by a camera does not fulfill the above criteria. The thought was to reduce key generation time but now I'm all confused (that's a good thing though :) – youjustreadthis Sep 8 '12 at 1:10 Yes, it is insecure. The "number of configurations" is not the parameter that is relevant to security. The security of the one-time pad relies upon the requirement that the pad be truly random, with no patterns whatsoever. Violate that requirement, and it's not a one-time pad any longer. Anyway, the one-time pad is not really suitable for practical use in any case, so this is pretty much moot in practice anyway. Use modern crypto; its key generation time is negligible. – D.W. Sep 8 '12 at 1:13 I see, thanks for the help. I know that key distribution issues and practicality make it outdated, but the concept of "perfect security" (at least the cryptological part) is intriguing. – youjustreadthis Sep 8 '12 at 1:31 You should not use the raw data of any image as a one time pad. This is even worse with an image of a sky, because of the large amount of blue pixels.
For all images, adjacent pixels tend to be the same colour - which means there is a large amount of repetition. If you want to use some of the data of the image as a one time pad, you will need to condition the data (concentrating the entropy present in the image). A simple example of concentration of entropy is to take 2 not-so-random integers, a and b, perform an operation such as `(a*b+a+b)`, then extract the lower-order bits (probably half). This scheme would eliminate bias present in the original integers. Of course, a more complex scheme is probably required. A simple scheme you could use, which would be quite random, is to use a digest on the data. If, for example, you believe that a third of the bits in the image contain useful entropy, then take every 64 pixels, containing `64*24 = 1536` bits, and feed them into a SHA512 hash function, which will output a `512` bit digest (that is 64 bytes). You can then use that output for your one-time pad. An IEEE article on Bull Mountain, Intel's Random Number Generator, includes some discussion on "concentration" of randomness, when the input data is not random enough. - Thanks for the thorough and useful answer. I do understand your point about repetition, however I'm not sure I managed to follow you on the useful entropy bit. Wouldn't the unpredictable nature of a large picture result in sufficient entropy for every bit involved, be it after a thorough shuffling or for example hashing pixel nr 1 with nr 10, 2 with 11 and so on? – youjustreadthis Sep 8 '12 at 0:01 2 I agree with the first paragraph, but the remaining paragraphs are problematic. Any scheme that begins with "take something not-so-random, then process it a bunch, and use it as a pad with the one-time pad" is deeply dubious, and probably violates all of the security benefits of a one-time pad. The main benefit of the one-time pad is it is "provably secure", but this only holds if the pad is truly random (all bits iid uniform random). If the pad is generated by taking something sorta-random and then processing it, the security proof no longer applies, and you're on sketchy ground. – D.W. Sep 8 '12 at 0:35 1 Fully random means having enough entropy for the number of bits. For example, if you have 10 bits of random data, it should have entropy equivalent to 10 bits. If you have instead 20 bits of data with entropy equivalent to 10 bits, it is not completely random. However, if you are able to shrink the data to 10 bits, so that it is 10 bits with 10 bits of entropy, it becomes fully random. – ronalchn Sep 8 '12 at 4:42 It means that even though the image might be 1MB, you do not have 1MB of fully random data; by concentrating the entropy down to, say, 100KB, you might then have random data. – ronalchn Sep 8 '12 at 4:43 4 This will only work if you have a good randomness extractor, which is not as trivial as you seem to think. – Paŭlo Ebermann♦ Sep 8 '12 at 14:34 show 1 more comment To see why repetition is so dangerous, imagine trying to attack a worst-case scenario: a BMP picture file that contains all black. The contents of the image file will be #000000 #000000 #000000 #000000 ... Now consider how a one-time pad works: it XORs the cleartext with the bit stream. So if your plaintext was "ATTACK ON 10 SEPT", and you XORed it with an image that started with some repeating black pixels, the resulting "cipher text" would be "ATTACK ON 10 SEPT". I wouldn't be surprised if your enemy is not surprised. Any swath of repeating bytes in the key file will do the same.
The attacker just has to try 255 guesses to look for stretches of intelligible ASCII text. Long ago a friend of mine wrote a proxy that used XOR "encryption" like this. My first attempt to discover his key was to download a black .GIF file, and his "secret key" printed itself in front of my eyes. - ... although if the process involves controlled capture with an attached camera, the `#000000` problem can be excluded. – Alex Cohn Sep 10 '12 at 7:50 1 @AlexCohn, certainly, a controlled environment can be different. That was the basis for LavaRand. But taking pictures of a stochastic system is different than taking random pictures, which is what the question was about. Regardless, you wouldn't use the photos of LavaRand directly anyway, as there was a lot of repeat in the background. LavaRand used the images as input to a hash, and derived relatively few bytes per frame from the pictures. – John Deters Sep 11 '12 at 3:54 methinks that taking random pictures should be OK. No way as a 1M*3/2*8 random bits, but (wild guess) 256K random bits, provided the lighting conditions are in natural range. – Alex Cohn Sep 11 '12 at 6:15 The amount of randomness in common pictures has actually been studied thoroughly, just not for applications to encryption, but rather for steganography. An artifact of images is that the least significant bit (it is what changes between slightly different shades of blue) has the highest entropy. A simple stego-system is to overwrite the least significant bits of a picture with, say, a ciphertext or key (both of which are random—either pseudo or truly random respectively), a compressed plaintext (high entropy) or a raw plaintext (which is of low entropy). In each case, it is generally practical to distinguish the true distribution of LSBs of an image from data of higher or lower entropy. A consequence of this is that an image does not make a good one-time pad, as even the most random aspect of the image is not random enough. - Great question. I had actually been thinking about the same thing some time ago, but I realized that using an image as a one-time pad isn't a good idea. Try to take some random pictures and then open the pictures with a hex editor (like XVI32). I did that and noticed that the bytes were not all that random, for example many picture files have a lot of 0x00 bytes. Even though this is only for a part of the picture, it would still give someone a head start on trying to decrypt. -
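To make two of the points above concrete, here is a small Python sketch (my own illustration, not from the answers): the zero-byte pad failure John Deters describes, and the SHA-512 block-hashing idea from ronalchn's answer. The image bytes and 192-byte block size (64 pixels × 3 bytes) are placeholder assumptions, and, as D.W. stresses, hashing a weak source does not restore the one-time pad's security proof.

```python
import hashlib

def xor_bytes(a, b):
    # XOR two equal-length byte strings, as a one-time pad would.
    return bytes(x ^ y for x, y in zip(a, b))

# A pad taken from a run of black pixels (all 0x00 bytes) leaves the
# plaintext completely exposed:
plaintext = b"ATTACK ON 10 SEPT"
black_pixel_pad = bytes(len(plaintext))          # b'\x00\x00...' zero bytes
print(xor_bytes(plaintext, black_pixel_pad))     # b'ATTACK ON 10 SEPT'

# Entropy-concentration sketch: hash 192-byte blocks of raw pixel data
# (64 pixels * 24 bits = 1536 bits in, 512 bits out per block).
def concentrated_pad(image_bytes, block=192):
    out = bytearray()
    for i in range(0, len(image_bytes) - block + 1, block):
        out += hashlib.sha512(image_bytes[i:i + block]).digest()  # 64 bytes out
    return bytes(out)
```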
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499579071998596, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/189130/prove-that-if-n-is-not-the-square-of-a-natural-number-then-sqrtn-is-irra?answertab=active
# Prove that if $n$ is not the square of a natural number, then $\sqrt{n}$ is irrational. [duplicate] Possible Duplicate: $\sqrt a$ is either an integer or an irrational number. I have this homework problem that I can't seem to be able to figure out: Prove: If $n\in\mathbb{N}$ is not the square of some other $m\in\mathbb{N}$, then $\sqrt{n}$ must be irrational. I know that a number being irrational means that it cannot be written in the form $\displaystyle\frac{a}{b}$ with $a, b\in\mathbb{N}$, $b\neq0$ (in this case; ordinarily it'd be $a\in\mathbb{Z}$, $b\in\mathbb{Z}\setminus\{0\}$), but how would I go about proving this? Would a proof by contradiction work here? Thanks!! - 2 – copper.hat Aug 31 '12 at 4:54 ## marked as duplicate by copper.hat, Sasha, sdcvvc, Andres Caicedo, The Chaz 2.0 Aug 31 '12 at 6:16 ## 3 Answers Let $n$ be a positive integer such that there is no $m$ such that $n = m^2$. Suppose $\sqrt{n}$ is rational. Then there exist $p$ and $q$ with no common factor (besides 1) such that $\sqrt{n} = \frac{p}{q}$. Then $n = \frac{p^2}{q^2}$. However, $n$ is a positive integer and $p$ and $q$ have no common factors besides $1$. So $q = 1$. This gives that $n = p^2$. Contradiction, since it was assumed that $n \neq m^2$ for any $m$. - Oh, I see. We basically use the fact that $n$ must be an integer to conclude that the denominator must be 1. Got it, thank you very much! I got up until then and couldn't progress further, but this makes it clear! – roboguy12 Aug 31 '12 at 4:57 2 It is important to explicitly mention that the proof uses unique factorization (or Euclid's Lemma) to deduce that $q = 1$. This can fail in rings lacking such properties. – Gone Aug 31 '12 at 5:07 This can also be done with the rational root test: consider the polynomial equation $$x^2 - n = 0$$ and suppose that it has a rational root. Then, this rational root must be an (integer) factor of $n$. So, if $\sqrt{n}$ is rational, then there exists $t\in \mathbb{N}$ (since $x^2 - n$ is an even function of $x$, we may assume, without loss of generality, that $t>0$) with $t \vert n$ such that $$t^2 - n = 0$$ which is to say $$n = t^2$$ and hence $n$ is the square of a natural number. In fact, this argument generalizes to showing that if $\sqrt[m]{n}$ is rational, then $n$ is an $m^{th}$ power. - Here's an explanation that I find clearer, and that uses unique factorization explicitly: If a positive number $n$ is not the square of any integer, then when you write it as a product of primes, at least one prime shows up to an odd power. Let one such prime be $p$, and look at the supposed equation $\sqrt{n}=a/b$, with $a$ and $b$ positive integers. This gives $n=a^2/b^2$, hence $nb^2=a^2$. How many times does $p$ show up in the factorization of the left side and of the right? Oddly many times on the left, evenly many on the right. Contradiction to the unique factorization of $nb^2$. -
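The upshot of all three arguments is testable in one line: $\sqrt{n}$ is rational exactly when $n$ is a perfect square. A quick Python check (using `math.isqrt`, my own choice of tool):

```python
from math import isqrt

def sqrt_is_rational(n):
    # By the arguments above, sqrt(n) is rational iff n is a perfect square.
    return isqrt(n) ** 2 == n

print([n for n in range(1, 30) if sqrt_is_rational(n)])  # [1, 4, 9, 16, 25]
```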
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9539137482643127, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/240687/recursive-integration-over-piecewise-polynomials-closed-form
# Recursive Integration over Piecewise Polynomials: Closed form? Is there a closed form to the following recursive integration? $$f_0(x) = \begin{cases} 1/2 & |x|<1 \\ 0 & |x|\geq1 \end{cases} \\ f_n(x) = 2\int_{-1}^x(f_{n-1}(2t+1)-f_{n-1}(2t-1))\mathrm{d}t$$ It's very clear that this converges to some function, and quite rapidly, as seen in this image, showing the first 8 terms: Furthermore, the derivatives of it have some very special properties. Note how the (renormalized) derivatives consist of repeated and rescaled functions of the previous degree, which is obviously a result of the definition of the recursive integral: EDIT: Basically by experimenting, I found the following, probably more usable, Fourier transform of the expression above. I do not have a formal proof, but it holds for all terms I tried it with (first 11). $$\mathcal{F}_x\left[f_n(x)\right](t)=\frac{\sin \left(2^{-n} t\right) \left(\prod _{k=1}^n \frac{2^{k+1} \sin \left(2^{-k} t\right)}{t}\right)}{\sqrt{2 \pi } t}$$ Here is an image of how that looks (first 10 terms in the interval $[-8\pi,8\pi]$): With this, my question becomes: "What, if there is one, is the closed-form inverse Fourier transform of $\mathcal{F}_x\left[f_n(x)\right](t)=\frac{\sin \left(2^{-n} t\right) \left(\prod _{k=1}^n \frac{2^{k+1} \sin \left(2^{-k} t\right)}{t}\right)}{\sqrt{2 \pi } t}$, especially for the case $n\rightarrow\infty$?" On an unrelated note: This is my first question here, so I hope I posed it decently. I searched through mathstack, to the best of my knowledge using terms of the involved math, without success. (minor Edit: it seems like the image sharing service I'm using tends to be somewhat unreliable. I'm sorry for that. If some or all of the images don't show up, that's why.) - 2 The integral has unbalanced parentheses and no limits; the right-hand side isn't a well-defined function of $x$. Also, in mathematics, when defining functions by cases one doesn't usually follow the programming convention of implying a conjunction with earlier conditions in later conditions, but states mutually exclusive conditions. – joriki Nov 19 '12 at 16:46 @joriki: I tried to improve what you said. Is that correct now? I'm not sure what I'm supposed to do with the conditions. Can you clarify, please? – kram1032 Nov 19 '12 at 17:03 1 The limits are OK now; I would have added an opening parenthesis instead of removing the closing one, to clarify the scope of the integral; about the conditions, I'd write $\begin{cases}1/2&|x|\lt1\;,\\0&|x|\ge1\;.\end{cases}$ – joriki Nov 19 '12 at 17:13 @joriki thank you for your input! Is this how you'd like it? – kram1032 Nov 19 '12 at 17:17 1 Note f_n(0) = 1 for n>0, so (f_n)'(-1/2) = 4 for n>1. So it can't be the box function. – Peter Sheldrick Nov 20 '12 at 21:52 show 9 more comments ## 2 Answers Here is a formula for $f_n$: $$f_n(x) = \sum_{j=0}^{2^n} \left( \frac{c_n(j) - c_n(j-1)}{2}\frac{\left(2^n x + 2^n - 2j\right)^n H\left(2^nx + 2^n - 2j\right)} {n!2^{n(n-1)/2}} \right).$$ Here $H$ is the Heaviside step function, $c_n$ is defined by $$c_n(j) = \begin{cases} 0 & \text{if $j<0$}\\ (-1)^{s(j)} & \text{if $0\leq j < 2^n$} \\ 0 & \text{if $j\geq 2^n$} \end{cases}$$ and $s(j)$ is the sum of the digits of the binary representation of $j$. (For example $s(13) = s(0\text{b}1101) = 3$.)
While the Heaviside function is crucial to deriving the formula, it can be removed from the final result using the floor function (denoted $\lfloor \cdot \rfloor$): $$f_n(x) = \sum_{j=0}^{\lfloor2^{n-1}(x+1)\rfloor} \left( \frac{c_n(j) - c_n(j-1)}{2}\frac{\left(2^n x + 2^n - 2j\right)^n} {n!2^{n(n-1)/2}} \right).$$ Here is a plot of $f_{15}$ using this formula: # Deriving the formula First, separate the definition into two integrals and change variables, $2t+1 \mapsto t$ in the first, and $2t-1\mapsto t$ in the second, giving $$f_{n+1}(x) = \int_{-1}^{2x+1} f_n(t)\ dt - \int_{-3}^{2x-1}f_n(t)\ dt$$ Of course, we can change the -3 to -1 and combine these to a single integral: $$f_{n+1}(x) = \int_{2x-1}^{2x+1} f_n(t)\ dt$$ Then rewrite $f_0=(1/2)(H(t+1) - H(t-1))$. Note that the integral of $H(t)$ is $tH(t)$, whose integral is $(t^2/2)H(t)$, and so forth. Now we can write $f_n$ as a single iterated integral, for example $$f_3(x) = \frac12 \int_{2x-1}^{2x+1} \int_{2y-1}^{2y+1} \int_{2z-1}^{2z+1} (H(t+1) - H(t-1))dt\ dz\ dy$$ Each integration can be done using several different changes of variables. This gives rise to the powers of 2 in the denominator. # Notes Each $f_n$ is symmetric. The part from -1 to -0.5 is repeated four times. Due to the way that Heaviside functions work, it is computationally easiest to compute values for $f_n(x)$ for $x$ closer to -1. # Code Here is some Python code to compute $f_n(x)$ (note that the summation range includes the floor value itself, to match the formula above). ````from math import factorial

def c(j, n):
    # c_n(j) = (-1)^(binary digit sum of j) for 0 <= j < 2^n, and 0 otherwise
    if j < 0 or j >= 2**n:
        return 0
    return (-1)**bin(j).count("1")

def f(x, n):
    numerator = 0
    # the sum runs up to and including floor(2^(n-1) * (x+1)), hence the + 1
    for j in range(int(2**(n-1) * (x + 1)) + 1):
        numerator += (c(j, n) - c(j-1, n)) * (2**n * x + 2**n - 2*j)**n
    denominator = 2 * 2**(n*(n-1)//2) * factorial(n)
    return numerator / denominator

print(f(-0.75, 10))
```` - That is awesome! Definitely upvoted. I'll wait with accepting for now, since you said it's WIP. – kram1032 Nov 22 '12 at 9:37 Suppose $f$ is a fixed point of the iterations. Then $$f(x) = 2\int_{-1}^x\big(f(2t+1)-f(2t-1)\big)\,\mathrm{d}t,$$ which, upon differentiating both sides with respect to $x$, implies that $$f'(x) = 2\big(f(2x+1)-f(2x-1)\big).$$ I'll assume that $f$ vanishes outside $[-1,1]$, which you can presumably prove from the initial conditions. Then we get $$f'(x) = \begin{cases} 2f(2x+1) & \text{if }x\le0, \\ -2f(2x-1) & \text{if }x>0. \end{cases}$$ This is pretty close to the definition of the Fabius function. In fact, your function would be $\frac{\text{Fb}'(\frac{x}{2}+1)}{2}$. The Fabius function is smooth but nowhere analytic, so there isn't going to be a nice closed form for your function. - 1 – Rahul Narain Nov 24 '12 at 16:32 Ah, nice. So my type of function now has a name. Good to know. - It's always hard to research stuff when you have no idea what it's called. I already suspected this kind of thing to be tried. - It's too simple an idea to be new. – kram1032 Nov 24 '12 at 17:16 – Rahul Narain Nov 24 '12 at 17:31 Nice, so I guess our two questions are related. – kram1032 Nov 24 '12 at 17:35 – kram1032 Nov 25 '12 at 0:43
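As a sanity check on the recursion itself (my own numerical sketch, not from the answers; the grid size and linear interpolation are arbitrary choices): iterate the defining integral on a grid and confirm that $f_n(0)=1$ for $n>0$, as noted in the comments above.

```python
import numpy as np

# Numerically iterate f_n(x) = 2 * int_{-1}^{x} (f_{n-1}(2t+1) - f_{n-1}(2t-1)) dt
# on a uniform grid over [-1, 1].
x = np.linspace(-1.0, 1.0, 4001)
f = np.where(np.abs(x) < 1, 0.5, 0.0)                 # f_0

for n in range(8):
    # sample the previous iterate at 2x+1 and 2x-1 (zero outside [-1, 1])
    g = np.interp(2*x + 1, x, f, left=0.0, right=0.0) \
      - np.interp(2*x - 1, x, f, left=0.0, right=0.0)
    dx = x[1] - x[0]
    # cumulative trapezoid rule for the integral from -1 to x, times 2
    f = 2 * np.concatenate(([0.0], np.cumsum(dx * (g[1:] + g[:-1]) / 2)))

print(f[len(x) // 2])                                  # f_8(0), close to 1
```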
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9270374178886414, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=580832
## Writing Tensor Equations in Matrix Form
I'm trying to express the tensor equation $F'^{\mu\nu}=\Lambda^{\mu}_{\sigma}\Lambda^{\nu}_{\rho}F^{\sigma\rho}$ in matrix form. Here the indices range from 0 to 3, so we need 4 by 4 matrices. Let F', F, and $\Lambda$ be the matrices associated with the tensors appearing in our equation. Which of the following is the correct matrix translation of the tensor equation? $F'=\Lambda F \Lambda$ $F'=\Lambda \Lambda F$ $F'=\Lambda^{\top} F \Lambda$ $F'=\Lambda F \Lambda^{\top}$ Or something else entirely? I tried testing some of these out on the actual four-by-four matrices, but the algebra got too cumbersome. Usually when I figure out what order to put things in and where to put the transposes, I'm in a situation where I'm dealing with matrices and vectors, so that if you put it in the wrong order then the numbers of rows and columns don't match up. But in this case everything is four-by-four, so there is plenty of room for error. Any help would be greatly appreciated. Thank You in Advance.
Quote by lugita15 I'm trying to express the tensor equation $F'^{\mu\nu}=\Lambda^{\mu}_{\sigma}\Lambda^{\nu}_{\rho}F^{\sigma\rho}$ in matrix form. $F'=\Lambda^{\top} F \Lambda$ $F'=\Lambda F \Lambda^{\top}$
It will be one of these two, depending on how you decide to map your tensors to matrices. You see, a tensor and a matrix are not quite the same thing. A matrix is most naturally thought of as a "mixed" tensor, with one index up and one down: $M^a{}_b$. Then the matrix product is quite natural to write: $M^a{}_c = K^a{}_b L^b{}_c$ On the other hand, a tensor with two "up" indices is technically considered a column vector whose elements are column vectors. However, the multiplication algorithm will end up being the same as matrix multiplication. To translate it to matrices, you just need to follow the indices carefully: $$\Lambda^a{}_c \Lambda^b{}_d F^{cd} = \Lambda^a{}_c F^{cd} \Lambda^b{}_d = \Lambda^a{}_c F^{cd} (\Lambda^\top)_d{}^b$$ where in the last step, we take the transpose because we need to switch the order of b and d to make it look like a matrix product. So we can write $$F' = \Lambda F \Lambda^\top$$ provided we interpret the first index of F as a row index, and the second as a column index. However, you should be careful with this notation to be clear what you mean. If we had a mixed tensor $G^a{}_b$, then in matrix notation its transformation would be $$G' = \Lambda G \Lambda^{-1}$$ For the pure "up" tensor $F^{ab}$, the most mathematically-correct way to write its transformation law is $$F' = (\Lambda \otimes \Lambda) \cdot F$$ where now F is unfolded into a single column vector with $n \times n$ entries.
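A quick numerical check of the translation above (my own NumPy sketch; the random 4×4 matrices are placeholders, not actual Lorentz transformations): the explicit index contraction agrees with $\Lambda F \Lambda^\top$.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))   # stand-in for a transformation matrix
F = rng.standard_normal((4, 4))
F = F - F.T                        # antisymmetric, like the field tensor

# F'^{mu nu} = L^mu_sigma L^nu_rho F^{sigma rho}, as an explicit contraction
F1 = np.einsum('ms,nr,sr->mn', L, L, F)

# ...and as the matrix product F' = L F L^T
F2 = L @ F @ L.T

print(np.allclose(F1, F2))        # True
```

Running it prints True, confirming that interpreting the first index of F as the row index makes $F' = \Lambda F \Lambda^\top$ the right translation.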
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222500920295715, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/191400-how-long-does-take-3-people-do-job.html
# Thread:
1. ## How long does it take 3 people to do a job?
8. If Sam can do a job in 4 days that Lisa can do in 6 days and Tom can do in 2 days, how long would the job take if Sam, Lisa, and Tom worked together to complete it? A. 0.8 days B. 1.09 days C. 1.23 days D. 1.65 days E. 1.97 days So I know that the answer is B but I have no idea what formula I am supposed to use to actually get that answer.... My friend and I did this but there has to be an easier way to do it, right? [03:57:51 p.m.] Blaze: sam can do 25% in a day... [03:58:13 p.m.] Blaze: lisa can do... 16.66% [03:58:23 p.m.] Blaze: and tom can do 50%.... [03:58:59 p.m.] Blaze: in 1 day they can do 91.66%... right? which also means that together they accomplish about 3.82% an hour so far i've got 1 day 2 hours... let me go further [04:01:49 p.m.] Blaze: its also about 0.063% a minute [04:02:55 p.m.] Blaze: now im at 1 day 2 hours and 11 minutes and like 30 or so seconds So as you can see... this is not going well... ((DISCLAIMER: I am new to the site and have no idea what I am doing. xD So I am trying to follow the rules but I am not sure if I am putting this in the right section or giving the right amount of information or anything. Sorry if I am doing anything wrong ^^; ))
2. ## Re: How long does it take 3 people to do a job? Originally Posted by ButterflyGirl18 8. If Sam can do a job in 4 days that Lisa can do in 6 days and Tom can do in 2 days, how long would the job take if Sam, Lisa, and Tom worked together to complete it? Change the units and you'll be done in a moment. Sam: 1 job in 4 days ==> 1/4 of the job in 1 day ==> 1/4 jobs per day Lisa: 1 job in 6 days ==> 1/6 of the job in 1 day ==> 1/6 jobs per day Tom: 1 job in 2 days ==> 1/2 of the job in 1 day ==> 1/2 jobs per day How many jobs can they complete in one day? 1/4 + 1/6 + 1/2 Did we get there yet?
3. ## Re: How long does it take 3 people to do a job? But I am not looking for how many jobs they can do in one day; I am looking for how many days it takes to do one job. If they can do 11/12 of the job in one day..... (1/4 + 1/6 + 1/2) How do I change it to find out how many days it takes to do the job?
4. ## Re: How long does it take 3 people to do a job? Originally Posted by ButterflyGirl18 But I am not looking for how many jobs they can do in one day; I am looking for how many days it takes to do one job. If they can do 11/12 of the job in one day..... (1/4 + 1/6 + 1/2) How do I change it to find out how many days it takes to do the job? $\frac{12}{11}=?$
5. ## Re: How long does it take 3 people to do a job? (combined rates)(number of days) = 1 job $\left(\frac{1 \, job}{4 \, days} + \frac{1 \, job}{6 \, days} + \frac{1 \, job}{2 \, days}\right) (t \, days) = 1 \, job \, done$ solve for $t$
6. ## Re: How long does it take 3 people to do a job? So 11/12 * T = 1 Ah! Ok! I get it now. =^.^= Thank you so much guys!
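To make the arithmetic in posts 2-5 concrete, here is a short check in Python using exact fractions (my own choice of tool, not from the thread):

```python
from fractions import Fraction

# Combined rate in jobs per day, then days per job, working together.
rate = Fraction(1, 4) + Fraction(1, 6) + Fraction(1, 2)
days = 1 / rate
print(rate, days, float(days))   # 11/12 12/11 1.0909... (answer B)
```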
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385362267494202, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/111008/use-of-axiom-of-choice-for-creating-an-infinitely-decreasing-sequence?answertab=oldest
# Use of Axiom of Choice for creating an infinitely decreasing sequence Suppose I have a totally ordered set $S$ with no least element. I create an infinitely decreasing sequence $\{ a_i : i \in \mathbb N \}$ in the following way. First I pick one element $a_1 \in S$. Then I pick $a_n$ such that $a_n < a_i$ for all $i \in \{1,..., n-1\}.$ Is the above construction correct, or do I have to use the axiom of choice for this sequence? - I think he means that he will create such a sequence, and picks $a_1$ to initiate such a construction. I believe this does require the axiom of choice, but only the countable version. – Isaac Solomon Feb 19 '12 at 18:48 1 – Alexander Thumm Feb 19 '12 at 18:50 ## 1 Answer You used the fact that the set $S$ indeed has a countable subset which is unbounded from below. Indeed, the first model of ZF without the axiom of choice was one where the real numbers have a subset $D$ which is of course linearly ordered; this subset was unbounded but alas had no decreasing sequence. In fact every function from $\mathbb N$ into $D$ has a finite range (either increasing, decreasing, or both). Note that indeed you only need to have some choice; in particular you need enough to allow an inductive definition on linear orders. This would be the principle of dependent choice or at least its restriction to linear orders. The answer, in a nutshell, is that you have to use some choice to create this sequence. Enough to allow the inductive definition to work "enough" to create the infinite sequence. The set $D$ which I have mentioned is a Dedekind-finite set of real numbers; its existence contradicts the axiom of choice (indeed, even the axiom of countable choice). However, since the existence of such a set is not provable from ZF alone (it requires a rather strong negation of the axiom of choice), it is impossible to describe the set without resorting to a full tutorial about forcing, symmetric extensions and the axiom of choice. Your proof is using the axiom of choice. Since you do not specify which element you choose at each step but rather just "pick" one, you have implicitly used some form of the axiom of choice. It isn't wrong; indeed, as I remark above, you cannot avoid it (in the general case). I do think that it is important to know where and how it comes into play. - You mean I need to apply some version of AoC to create the decreasing sequence of my question? And what was the subset $D$ of $\mathbb R$ which was unbounded but had no decreasing sequence? – Mohan Feb 19 '12 at 19:08 @user774025: Did my edit answer your question? – Asaf Karagila Feb 19 '12 at 20:27 Yes. Thank You. :) – Mohan Feb 20 '12 at 3:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9535251259803772, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/32266/dual-modules-and-first-cohomology
# Dual modules and first cohomology Let $G$ be a finite group, $K$ a characteristic-$p$ algebraically closed field (say $p$ divides $|G|$), and let $M$ be a finite-dimensional $KG$-module. What hypotheses are needed on $G$, $M$ to ensure that $H^{1}(G,M) \cong H^{1}(G,M^{*})$ where $M^{*}$ denotes the dual module Hom$(M,K)$? In fact, in the situation I'm faced with, $G$ is a finite simple group and calculation suggests the stronger result that $Ext^1(M,N) \cong Ext^1(N,M)$ when $M$ and $N$ are irreducible, and at least one of $M$,$N$ is self-dual. There are counterexamples when the latter condition fails, e.g. Alt$_9$ has non-self-dual modules of dimension 8 and 20 with $Ext^1(M,N) \neq Ext^1(N,M)$. I feel like this result should be well-known but I've been unable to find a reference for it yet. A weaker result that would also satisfy me is any criterion to ensure $H^1(G,M) \neq 0 \Rightarrow H^1(G,M^{*}) \neq 0$. Thanks in advance! - 1 There is a cup-product pairing $H^1(G,M) \times H^1(G,M^*) \to H^2(G,K)$. I don't think this will typically be a perfect pairing, but in some situations it may be. (This is the kind of thing that came to mind when I read your question.) – Matt E Apr 12 '11 at 0:08 1 If it helps, it is not generally true that $H^1(G,M) \neq 0 \implies H^1(G,M^*) \neq 0$. One can take G to be a smallish symmetric group and M a Specht (or dual Specht, depending on your definition). I found this particularly irritating since M and M* lifted to isomorphic QG modules, but had different first cohomology mod 2. In this example, neither G nor M is simple, but I believe it should be easy to find examples where G is simple (and M is not). One should be able to find examples where G has a cyclic Sylow p-subgroup. – Jack Schmidt May 19 '11 at 16:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260063767433167, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/113549-quick-question-homeomorphisms.html
# Thread:
1. ## Quick question... homeomorphisms... Can a homeomorphism exist between an open and a half-open set? (i.e. (0,1) and [0,1)) I know that to be a homeomorphism, a bijection must exist, it must be continuous, and it must have a continuous inverse... Where to go from here? The fact that 0 is not included within the first set, would that alone make it so that they are not homeomorphic? Thank you. Jia Lin
2. I don't think there is even a surjective continuous function from $(0,1)$ to $[0,1)$ (I'm still trying to prove it though)
3. I know what the answer should be... they definitely aren't homeomorphic, but it is an awkward proof.
4. Nevermind, I just found a counterexample to my argument.
5. Originally Posted by Majialin Can a homeomorphism exist between an open and a half-open set? (i.e. (0,1) and [0,1)) I know that to be a homeomorphism, a bijection must exist, it must be continuous, and it must have a continuous inverse... Where to go from here? The fact that 0 is not included within the first set, would that alone make it so that they are not homeomorphic? Thank you. Jia Lin Let A = [0, 1) and B = (0, 1). Suppose there exists a homeomorphism f between A and B. Then f(A\{0}) = f((0,1)) has to be a connected set, because (0,1) is a connected set and "connectedness" is a topological invariant. But f(A\{0}) = B\{f(0)}, which is a disconnected set, since removing any point from the open interval B disconnects it. Contradiction! Thus there exists no homeomorphism between A and B.
6. Thank you very much!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9629627466201782, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/51195/winning-strategy-at-chomp-a-chocolate-bar-game/51199
## Winning strategy at chomp (a chocolate bar game)? The game of chomp is an example of a game with very simple rules, but no known winning strategy in general. I copy the rules from Ivars Peterson's page http://www.maa.org/mathland/mathtrek_03_24_03.html "Chomp starts with a rectangular array of counters arranged neatly in rows and columns. A move consists of selecting any counter, then removing that counter along with all the counters above and to the right of it. In effect, the player takes a rectangular or square "bite" out of the array—just as if the array were a rectangular, segmented chocolate bar. Two players take turns removing counters. The loser is the one forced to take the last "poisoned" counter in the lower left corner." A nice non-constructive argument shows that the first player has a winning strategy. The winning strategy can be made explicit in very specific cases. As far as I know, the most general setting for which the winning strategy is known is when we have 3 rows and any number of columns, see http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/chomp.html My question is: Are there any recent advances on chomp? - 3 I don't think this is "recent", but in case you're not aware, it's an interesting exercise to see who has the winning strategy in $n\times \infty$ chomp ($2\leq n\leq \infty$). The usual argument doesn't work, but one can construct an explicit winning strategy. I heard this from Shmuel Zamir at BWGT $2010$. – Noah Stein Jan 5 2011 at 14:03 2 A closely related question is mathoverflow.net/questions/41913/…. – Richard Stanley Jan 5 2011 at 17:08 I don't understand the question. Can't you just calculate nimbers for all possible chocolate bars inductively? See en.wikipedia.org/wiki/Nimber – Theo Johnson-Freyd Jan 5 2011 at 18:00 1 A small point, but technically the array can't be 1x1. – Henry Towsner Jan 5 2011 at 19:00 @Theo: More easily, for any size of the bar, you can inductively determine winning and losing positions. But this task is very long and probably intractable for more than 10 rows and 10 columns, say. – Pierre Dehornoy Jan 6 2011 at 10:12 ## 2 Answers Here are two papers, from 2002 and 2007; not sure if they are new to you. Jan Draisma and Sander van Rijnswou show in their paper, "How to chomp forests, and some other graphs" (2002), that Chomp can be completely solved when the underlying graph $G$ is a forest. From the Abstract: Interesting consequences are: first, that the starting player has a winning strategy for any non-empty tree; and second, that he has a winning strategy for the complete graph on $n$ vertices if and only if $n$ is not a multiple of 3. A second relatively recent paper is "Scaling, Renormalization, and Universality in Combinatorial Games: The Geometry of Chomp," by Eric J. Friedman and Adam Scott Landsberg (Combinatorial Optimization and Applications, Lecture Notes in Computer Science, 2007, Volume 4616/2007, 200-207). Their results resist succinct summary, but, for example, they are able to compute "the expected number of winning moves" under certain circumstances. - Thanks, I did not know the second paper. I will have a look. – Pierre Dehornoy Jan 6 2011 at 10:12
The non-constructive proof you refer to is proving a $\Pi_2$ statement, and therefore can be unwound to give an explicit proof. (This was pointed out to me by Mints, in the context of the game Hex, for which the same situation occurs.) If the argument is what I expect it to be: the strategy is simply to produce the tree of all possible moves and then label them as winning or losing (for player 1) by induction: an end-state is winning if player 2 takes the poison, a node where player 1 moves is winning if any of its children are winning, and a node where player 2 moves is winning if all of its children are winning. Roughly the same argument as in the non-constructive proof shows that the root node is winning, and so player 1's strategy is just to always move so they end up on a winning node. (Of course, this isn't an elegant strategy, so there's still a reasonable open question there, but it is a known winning strategy in any formal sense of the term.) - I'd call it a known winning strategy for a fixed size of rectangle... – Cam McLeman Jan 5 2011 at 18:58 Technically, this gives a computable algorithm which transforms the parameters of the rectangle (assuming they're not 1x1) to a strategy for the game on that rectangle. Most of the strategies we consider explicit seem like they're more uniform in the parameters, in some sense, but I think you'd run into difficulty making that formal. – Henry Towsner Jan 5 2011 at 19:04 1 @Henry: This gives a (long) algorithm to determine the strategy, but you will agree that it is not very satisfying: it does not make me understand this game better or give any structure to the set of positions. Thanks nevertheless – Pierre Dehornoy Jan 6 2011 at 10:16 Here's a question that may help clarify the issue: Consider the problem of determining whether an arbitrary Ferrers shape is a first-player win. Is this problem PSPACE-complete? As far as I know, this is an open problem. (I believe that the analogous question for Hex, which also admits a similar strategy-stealing argument, is known to be yes.) If the answer is yes then that means that for general (finite) Chomp positions, there is no winning strategy in the sense of something polynomial in length that can be used to find a winning move in polynomial time, unless NP = PSPACE. – Timothy Chow Nov 22 2011 at 21:27 Note, however, that it would still be technically possible for there to be a satisfactory winning strategy for rectangular shapes, if the Ferrers shapes arising from perfect play from a rectangular starting position happened to be particularly tractable for some reason. – Timothy Chow Nov 22 2011 at 21:31
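The label-the-game-tree procedure described above is short enough to write down. Here is a minimal memoized sketch in Python (the encoding of positions as non-increasing tuples of column heights, i.e. Ferrers shapes, is my own choice, not from the answer):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(pos):
    """pos: non-increasing tuple of column heights; the poison square is (0, 0)."""
    if pos == (1,):                      # only the poisoned counter remains:
        return False                     # the player to move must take it and loses
    for i in range(len(pos)):
        for j in range(pos[i]):
            if (i, j) == (0, 0):         # taking the poison directly is never a good move
                continue
            # bite at (i, j): truncate every column from i onward to height j
            nxt = tuple(h if k < i else min(h, j) for k, h in enumerate(pos))
            nxt = tuple(h for h in nxt if h > 0)
            if not first_player_wins(nxt):
                return True              # found a move leaving the opponent losing
    return False

# Every rectangle bigger than 1x1 is a first-player win, matching the
# strategy-stealing argument:
print(all(first_player_wins((m,) * n)
          for m in range(1, 6) for n in range(1, 6) if (m, n) != (1, 1)))
```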
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386307001113892, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4182399
## Linear Independence/Dependence
Quote by Studiot Vectors of the form {a,b} with a, b contained in R. (I wish I could find the appropriate symbols more easily)
You just mean the couple (a,b)?? That's very different from {a,b}. No book I've ever seen talks about sets {a,b} and {c,d} being linearly dependent... When you check such a book sometimes, as in this case, you need to check more than one definition. You will find the mention of sets in the cross-referenced definition of 'linear combination'. You have to follow the chain of definitions through. However, the entry did reference sets in another way as well: linearly dependent. adjective. such that there is a linear combination (note cross-reference) of the given elements with not all the coefficients equal to zero. Now sets have elements; do vectors?
Quote by Studiot When you check such a book sometimes, as in this case, you need to check more than one definition. You will find the mention of sets in the cross-referenced definition of 'linear combination'. You have to follow the chain of definitions through.
OK, so here are the two relevant definitions: linearly dependent, adj. such that there is a LINEAR COMBINATION of the given elements, with not all coefficients zero, that equals zero. For example, u, v and w are linearly dependent vectors if there exist scalars a, b and c, not all zero, such that $$au + bv + cw = 0.$$ Elements are said to be K-linearly dependent if there is a set of such constants that are elements of some given K; for example, the vectors $(1,\pi)$ and $(\pi, \pi^2)$ are $\mathbb{R}$-linearly dependent, but not $\mathbb{Q}$-linearly dependent (where $\mathbb{R}$ is the set of real numbers and $\mathbb{Q}$ is the set of rationals), since one of the required coefficients is a multiple of the irrational number $\pi$. See also BASIS. linear combination, n. a sum of the respective products of the elements of some set with constant coefficients. (It is sometimes required that not all the constants are zero.) For example, a linear combination of vectors u, v and w is any sum of the form $$au + bv + cw,$$ where a, b, and c are scalars. OK, I'm still not seeing anything about sets $\{1,\pi\}$ and $\{\pi,\pi^2\}$ being linearly independent... And I'm also not reading anything about equations being linearly independent. Now sets have elements; do vectors? No, vectors do not have elements in general. So let us start with the definition given of linear combination. It specifies a method of combining elements of a set. It does not prohibit those elements being sets, numbers or other elements capable of forming combinations according to the rules specified. It does allow those elements to be vectors and uses this case as an example. But it does not restrict those elements to being vectors. I am quite sure that had the authors wanted this restriction they would have specified it as such, as for instance they have done in their definition of a linear mapping. Since it specifies the word elements in the plural it implies that there is more than one, which tallies with one of my earlier comments. I am equally sure that I have seen many (mostly engineering, it is true) books that talk of linearly dependent equations.
Along the lines, for instance, of: 3x+2y=6 and 12x+8y=24 are linearly dependent, since there exists a coefficient (4) by which you can multiply equation 1 to obtain equation 2. The above example is very obvious, but some are not, and I was going to develop this to help the OP when these other issues arose. Please also look at the second half of my post #13, where I note that the definition scheme is the complement of that offered by Fredrik, and also give my reasons for preferring it.

Blog Entries: 8 | Recognitions: Gold Member, Science Advisor, Staff Emeritus

Quote by Studiot: So let us start with the definition given of linear combination. It specifies a method of combining elements of a set. It does not prohibit those elements being sets, numbers or other elements capable of forming combinations according to the rules specified. It does allow those elements to be vectors and uses this case as an example.

OK. But in order for the definition to even make sense, we need to have a notion of addition and multiplication. So we need to make sense of $\mathbf{u}+\mathbf{v}$ and $a\mathbf{u}$, where a is a scalar. Furthermore, you need a notion of equality and of a zero. So, for sets and equations, how do you define these notions?

To keep it simple, a linear combination is just the columns of a matrix A weighted by the corresponding entries of a column vector x. In other words, it's just Ax.

Studiot, I'm quite sure that equations/planes/spaces cannot be added and subtracted. Only vectors and matrices can, and hence only vectors and matrices are expressible as linear combinations of one another. My professor has drilled this into my head quite thoroughly, and I find the concept quite convenient from a rigorous point of view.

BiP

Blog Entries: 8 | Recognitions: Gold Member, Science Advisor, Staff Emeritus

Quote by Bipolarity: Studiot, I'm quite sure that equations/planes/spaces cannot be added and subtracted. Only vectors and matrices can. My professor has drilled this into my head quite thoroughly, and I find the concept quite convenient from a rigorous point of view. BiP

Well, you certainly can add sets by $A+B=\{a+b~\vert~a\in A,~b\in B\}$. That's even going to be a vector space if A and B are. But this is merely a convenient notation. I have not yet seen sets being linearly independent of each other or something of the kind. To be honest, I am quite interested in the "sets being linearly independent" idea. Too bad Studiot doesn't have any references except an entry in a math encyclopedia. So if anybody has some actual references, I would be very happy to read about it!!

Quote: OK. But in order for the definition to even make sense, we need to have a notion of addition and multiplication. So we need to make sense of u+v and au, where a is a scalar. Furthermore, you need a notion of equality and of a zero.

Agreed, but I find the procedure I was taught when I was 12 or 13 perfectly adequate and still in perfect accord with both the Fredrik definition and the Borowski definition. Going back to my post#, taking equations 1 and 2 and adding a third, 3x+y=18, I can form the linear combinations:

4(equation 1) + (-1)(equation 2): this is seen to equal 0, so the equations are linearly dependent and no unique solution is possible.
1(equation 1) + (-1)(equation 3): this is seen to be ≠ 0, and therefore the equations can be solved for x and y.

Nothing in the Borowski definition implies we need the entire space of expressions of the form gx+hy=k to be able to perform these manipulations, although Fredrik's definition does seem to suggest this, though I won't deny that such a space is useful.

Quote by Bipolarity: Studiot, I'm quite sure that equations/planes/spaces cannot be added and subtracted. Only vectors and matrices can, and hence only vectors and matrices are expressible as linear combinations of one another. My professor has drilled this into my head quite thoroughly, and I find the concept quite convenient from a rigorous point of view.

I wonder if, perhaps, there was a particular context in which this was said by your professor? halo31 has put it pretty succinctly, and the equations I presented can be expressed in that format, but the expression 3x+2y is neither a matrix nor a vector. Of course you can add equations. The technique is extremely widely used in Physics and Engineering and of great importance. It is called superposition. Further, if those equations represent some region, then that is equivalent to adding those regions, but not all equations represent regions and not all regions have single equations.

Blog Entries: 2

Quote: I have not yet seen sets being linearly independent of each other or something of the kind.

Oddly enough, I do recall a minor problem in a textbook that described linear dependence and independence as a property of sets. I don't remember much, but here were some basic properties. If there exist two linearly independent sets, then [their intersection] must be linearly independent. The complement of a linearly independent set is linearly dependent. I wish I could find this book again and see if it relates...

Quote: wish I could find this book again and see if it relates

Nering p11? Kreyszig p53? Griffel p89? Gupta 2.23, 1.17? Hoffman and Kunze p40?

Quote: Too bad Studiot doesn't have any references except an entry in a math encyclopedia. So if anybody has some actual references, I would be very happy to read about it!!

Don't know how to take these and other remarks. My post was written first and I used Borowski to check my statements against.

Mentor

Quote by MarneMath: Oddly enough, I do recall a minor problem in a textbook that described linear dependence and independence as a property of sets. I don't remember much, but here were some basic properties. If there exist two linearly independent sets, then [their intersection] must be linearly independent. The complement of a linearly independent set is linearly dependent. I wish I could find this book again and see if it relates...

When we say that x and y are linearly independent, we really mean that the set {x,y} is linearly independent. The thing that's linearly independent or linearly dependent is always a subset of a vector space. If we say that {x,y} and {u,v} are linearly independent, it just means that {x,y} is linearly independent and {u,v} is linearly independent. There is no notion of sets being linearly independent of each other that I know of. It's fairly obvious that the intersection of two linearly independent sets is either empty or linearly independent. Maybe that's the result you had in mind.

Mentor

Quote by Studiot: Nering p11? Kreyszig p53? Griffel p89? Gupta 2.23, 1.17?

I looked at Kreyszig (Introductory Functional Analysis with Applications). His definition is the same as mine. It tells us what it means for a subset of a vector space to be linearly independent.
It doesn't tell us what it means for one subset to be linearly independent of another.

Quote: When we say that x and y are linearly independent, we really mean that the set {x,y} is linearly independent. The thing that's linearly independent or linearly dependent is always a subset of a vector space. If we say that {x,y} and {u,v} are linearly independent, it just means that {x,y} is linearly independent and {u,v} is linearly independent. There is no notion of sets being linearly independent of each other that I know of.

Yes, but consider the set of all values of p, q for which 3p+2q=6 and the set of all values for which 12p+8q=24. Put these into your x, y format and you can see that you have two sets which are linearly dependent, since they are essentially the same set.

Mentor

Do you mean that I should write these two lines as ##K=\{(p,q)\in\mathbb R^2|3p+2q=6\}## and ##L=\{(p,q)\in\mathbb R^2|12p+8q=24\}##? I wouldn't say that K and L are linearly dependent. I would just say that they're equal.

Blog Entries: 8 | Recognitions: Gold Member, Science Advisor, Staff Emeritus

Quote by Studiot: Agreed, but I find the procedure I was taught when I was 12 or 13 perfectly adequate and still in perfect accord with both the Fredrik definition and the Borowski definition.

So why did you ignore my question?? It was a standard question: how do you define addition, multiplication and equality of sets? And how did you define the "zero" set??

Quote by Studiot: Nering p11? Kreyszig p53? Griffel p89? Gupta 2.23, 1.17? Hoffman and Kunze p40?

I searched for Griffel, but I couldn't find it. As for Kreyszig and Gupta, they have multiple books, so I don't know which one you mean. As for Nering and Hoffman & Kunze:

Quote by Nering: A set of vectors is said to be linearly dependent if there exists a non-trivial linear relation among them. Otherwise, the set is said to be linearly independent.

Quote by Hoffman and Kunze: Definition. Let V be a vector space over F. A subset S of V is said to be linearly dependent (or simply, dependent) if there exist distinct vectors $\alpha_1,\alpha_2,...,\alpha_n$ in S and scalars $c_1, c_2,...,c_n$ in F, not all of which are 0, such that
$$c_1\alpha_1+c_2\alpha_2 + ... + c_n\alpha_n=0.$$
A set which is not linearly dependent is called linearly independent. If the set S contains only finitely many vectors $\alpha_1,...,\alpha_n$, we sometimes say that $\alpha_1,...,\alpha_n$ are dependent (or independent) instead of saying S is dependent (or independent).

So the notion defined here is the linear independence of a set. I do not see a definition here of the linear independence of two sets or the linear independence of equations. These definitions are perfectly compatible with what Fredrik has said. So none of these books actually agree with what you are saying. No offense, but I am starting to think that you are just misunderstanding the entire concept.

Quote: Don't know how to take these and other remarks.

Take it how you want. I meant what I said: I am interested in finding out more of this "linear dependence of sets", but I have yet to find a reference about it.
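To make the worked example in this thread concrete: stacking each equation ax + by = c as the row (a, b, c), linearly dependent equations leave the rank of the matrix below the number of rows, while an independent pair can be solved. The short numpy sketch below is an added illustration, not a post from the thread, and the variable names are mine.

```python
# Rank test for the equations discussed in the thread (illustrative sketch).
import numpy as np

eq1 = [3, 2, 6]     # 3x + 2y = 6
eq2 = [12, 8, 24]   # 12x + 8y = 24, which is 4 * eq1
eq3 = [3, 1, 18]    # 3x + y = 18

# 4*(equation 1) + (-1)*(equation 2) vanishes identically, so the pair is dependent:
print(np.linalg.matrix_rank(np.array([eq1, eq2])))   # -> 1

# equation 1 and equation 3 are independent, so the system has a unique solution:
print(np.linalg.matrix_rank(np.array([eq1, eq3])))   # -> 2
A = np.array([eq1[:2], eq3[:2]], dtype=float)
b = np.array([eq1[2], eq3[2]], dtype=float)
print(np.linalg.solve(A, b))                          # -> [ 10. -12.]
```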
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517536759376526, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/3742/find-solutions-that-solve-the-equations?answertab=votes
# Find solutions that satisfy the equations

I am implementing a timing attack on RSA for school and I need to generate two sets of messages $Y$ and $Z$ for which the following holds: $(Y^d \mod N) \cdot Y < N$ and $(Z^d \mod N) \cdot Z > N$, where $d$ and $N$ are known. How can I efficiently find solutions for $Y$ and $Z$? I have tried using a random function, but it takes too long to complete or it doesn't complete at all.

- I agree with @CodesInChaos. If we assume $d$ is random, then $Y=k$ has probability $1/k$ of working; likely there is a small $Y$ that would work, and starting at the smallest and working your way up seems like the obvious approach (unless, of course, $Y=0$ or $Y=1$ is disallowed for some reason). – poncho Sep 7 '12 at 19:57
- Do you have a typo in the condition for $Z$? The $*Z>Z$ part is weird. – CodesInChaos Sep 7 '12 at 19:57
- For Z, try any Z > N/2; that should work. – poncho Sep 7 '12 at 20:04
- You state $d$ is known. By convention, $d$ stands for the private exponent (which is one of the targets if you're trying to recover the private key; if you get that, you've won). Do you really mean the private exponent $d$? Or do you really mean the public exponent (and are using nonstandard terminology to describe it)? – poncho Sep 7 '12 at 20:19

## 2 Answers

Finding a $Z$ is easy. This condition is fulfilled for all $Z>N/2$, and by most smaller values of $Z$ too. There are only a few values fulfilling the condition for $Y$; since the condition for $Z$ is almost the negation of the condition for $Y$, most numbers will fulfill it.

When looking at the condition for $Y$, you can model $Y^d \mod N$ as a random number between 0 and $N$. This leads to a success chance of approximately $1/Y$. Thus a good strategy for finding a $Y$ is starting with $Y = 2$ and incrementing by one on each attempt, i.e. trying 2, 3, 4, ... This should find a number fulfilling $(Y^d \mod N) \cdot Y < N$ quite quickly. The total chance of finding a $Y \le n$ is
$$1-\prod^n_{i=2}\left(1-\frac{1}{i}\right) = 1 - \frac{(n-1)!}{n!} = 1- \frac{1}{n},$$
which quickly converges to 1.

- Sorry, there was a typo in the Z condition. For the Y condition, what would be a good increase on each attempt? (N is 512-bit at least.) – blejzz Sep 7 '12 at 20:03
- @blejzz The best increase would be 1. You simply want to try the smallest $Y$s first, since those have the highest success chance. – CodesInChaos Sep 7 '12 at 20:10

@CodesInChaos gives an excellent answer. I'll add one more point: if you are not able to find a $Y$ satisfying your condition using CodesInChaos's approach, here is one more approach you can try as a fallback. Pick a small value $i$, set $Y=i^e \bmod N$, and try that $Y$. Note that $Y^d \bmod N$ will be equal to $i$, and $Y$ can be modelled as a random number between 0 and $N$. This means that you have a success chance of approximately $1/i$ with this strategy. So you can try $i=2$, $i=3$, $i=4$, $\dots$, in succession until you find the first success. By the same argument CodesInChaos gives, there is likely to be a small $i$ for which this succeeds.

Again, this doesn't really add anything: there is no particular reason to prefer this strategy over CodesInChaos's if both $e$ and $d$ are known. However, this is available as a fallback if CodesInChaos's method fails. Also, this method is available if $d$ is not known but $e$ is known.
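A short Python sketch of the strategies in the two answers above; the function names and the toy key are illustrative assumptions, not taken from the thread. It increments $Y$ from 2 as CodesInChaos suggests, uses $Y = i^e \bmod N$ as the fallback, and draws $Z$ from the upper half of the range, verifying the condition each time.

```python
# Illustrative sketch of the search strategies from the answers above.
import random

def find_Y_incrementing(d, N, limit=10**6):
    """CodesInChaos's strategy: smallest Y >= 2 with (Y^d mod N) * Y < N."""
    for Y in range(2, limit):
        if pow(Y, d, N) * Y < N:
            return Y
    return None  # failure is astronomically unlikely for a random-looking d

def find_Y_fallback(e, N, limit=10**6):
    """Fallback: Y = i^e mod N, so that Y^d mod N == i by construction."""
    for i in range(2, limit):
        Y = pow(i, e, N)
        if Y * i < N:          # this is (Y^d mod N) * Y < N, with Y^d mod N == i
            return Y
    return None

def find_Z(d, N):
    """Almost every Z > N/2 works; verify the condition anyway."""
    while True:
        Z = random.randrange(N // 2 + 1, N)
        if pow(Z, d, N) * Z > N:
            return Z

# Toy demo with the classic textbook key N = 61 * 53 = 3233, e = 17, d = 2753.
N, e, d = 3233, 17, 2753
print(find_Y_incrementing(d, N), find_Y_fallback(e, N), find_Z(d, N))
```

With a realistic 512-bit key the same loops apply unchanged, since Python's built-in `pow(x, y, N)` does modular exponentiation on arbitrarily large integers.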
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.955691397190094, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/225875/calculating-the-absolute-value-of-a-complex-number-am-i-right
# Calculating the absolute value of a complex number - am I right?

To calculate the absolute value of a complex number you must use the following formula: $(a^2+b^2)^{1/2}=|a+bi|$. So, for instance, $-4-5i$ would have the absolute value $((-4)^2+(-5)^2)^{1/2}=(16+25)^{1/2}=\sqrt{41}$. But how come it is so?

## 2 Answers

In short, that's how it's defined. Geometrically, it might be more intuitive if you identify each complex number $a + bi$ with a vector of the form $(a,\ b)$. The length of the vector is then given by the familiar Pythagorean theorem, from which the norm of the complex number is derived. Much of the intuition regarding complex numbers comes from their interpretation as essentially $2$-dimensional analogues of real numbers. In particular, this definition is especially natural when taking advantage of the "built in" polar coordinate representation.

- Thanks, it makes a lot more sense with that description than just using a "random" formula :) – Alek Oliver Oct 31 '12 at 7:09

Think of the absolute value as the distance from zero. If you represent your number $z=a+bi$ in the complex plane, what is the distance of $z$ from zero? Use the Pythagorean theorem.
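As a quick numerical check of the worked example (an added snippet, not part of the original page): Python's built-in complex modulus computes exactly $\sqrt{a^2+b^2}$, so all three lines below print the same value.

```python
import math

z = complex(-4, -5)
print(abs(z))              # built-in modulus of -4 - 5i
print(math.hypot(-4, -5))  # sqrt((-4)**2 + (-5)**2), the same formula
print(math.sqrt(41))       # ~6.4031242374328485 in every case
```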
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9072556495666504, "perplexity_flag": "head"}
http://mathoverflow.net/questions/79360/independence-of-rotated-spherical-harmonics/79368
## Independence of rotated spherical harmonics

Hi, consider a spherical harmonic of degree $l$, denoted by $y_l^m$. I rotate this harmonic using $2l+1$ different rotations. The set of functions I get is not an orthogonal set, but the functions are still harmonics. The question is: does this set still span the entire space of spherical harmonics of degree $l$? My intuition is that it almost always does, but I can't say what the non-trivial configurations are where it does not. Thanks a lot for any help! Cyril

- It seems (because of $2\ell+1$) that your harmonic polynomials depend on three variables $x_1,x_2,x_3$. Right? – Denis Serre Oct 28 2011 at 8:49
- You can see them as harmonic polynomials of 3 variables restricted to the unit sphere. – Cyril Soler Oct 28 2011 at 9:39

## 1 Answer

Let $G$ be the isometry group of a polyhedron (tetrahedron, ..., icosahedron), its order being $2n$. The natural representation of $G$ over the space $H_\ell$ of harmonic polynomials of degree $\ell$ may or may not be irreducible. It is certainly not if $2\ell+1\ge\sqrt{2n}$. Thus let us take $(n/2)^{1/2}\le \ell\le n$. Because the representation is reducible, there exists a strict invariant subspace, thus a non-zero $P\in H_\ell$ such that the set of $P\circ R$ with $R\in G$ does not span $H_\ell$. Because $|G|>2\ell+1$, this is a counter-example.

Update. Suppose that the representation of $G$ over $H_\ell$ admits an irreducible component of multiplicity $\ge2$ (I suspect that there are examples; does somebody know one?). Then there does not exist a spherical harmonic $P$ such that the $P\circ R$ span $H_\ell$ as $R$ ranges over $G$. This is because we may decompose $H_\ell=F\oplus^\bot K$ with $K$ an irreducible component and $P\in F$.

- We're talking about spherical harmonics specifically. The dimension of the space of spherical harmonics of degree $l$ is $2l+1$. I don't understand in what way your answer can be helpful to me. – Cyril Soler Oct 28 2011 at 9:45
- The sum of the squares of the degrees of the irreducible representations of $G$ is $|G|$. Thus an irreducible representation cannot have a large degree. In my construction, the representation cannot be irreducible, so there is a non-trivial invariant subspace $E$. If you take $P\in E$, the set of $P\circ R$, with $R\in G$, is in $E$, and thus does not span $H_\ell$. – Denis Serre Oct 28 2011 at 12:48
- So it means that if I take my $2l+1$ rotations to be distinct elements of the isometry group of a polyhedron, the rotated spherical harmonics will not be independent? – Cyril Soler Oct 28 2011 at 13:15
- Yes and no: for some choice of the first spherical harmonic, the rotated ones will not be independent. 'Some' becomes 'every' if the group $G$ is finite (a polyhedron isometry group) and its representation over $H_\ell$ admits an irreducible component of multiplicity $\ge2$. – Denis Serre Oct 28 2011 at 14:44
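A numerical way to probe the question, added here as a sketch rather than taken from the thread (the sampling-based rank test, the helper names, and the tolerance are my assumptions): a rotated degree-$l$ harmonic stays in the degree-$l$ space, so one can evaluate the $2l+1$ rotated copies at many sample points and check whether the evaluation matrix has full rank $2l+1$. Note that SciPy's legacy `sph_harm(m, l, theta, phi)` takes the azimuthal angle first.

```python
# Rank test for rotated copies of a single degree-l spherical harmonic.
import numpy as np
from scipy.special import sph_harm                  # sph_harm(m, l, theta, phi)
from scipy.spatial.transform import Rotation

def to_angles(xyz):
    """Unit vectors (K,3) -> scipy's (theta = azimuth, phi = polar angle)."""
    theta = np.arctan2(xyz[:, 1], xyz[:, 0])
    phi = np.arccos(np.clip(xyz[:, 2], -1.0, 1.0))
    return theta, phi

def rotated_harmonic_rank(l, m0, n_pts=400, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n_pts, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on the sphere
    rots = Rotation.random(2 * l + 1, random_state=seed)
    cols = []
    for k in range(2 * l + 1):
        # (R.f)(x) = f(R^{-1} x): evaluate y_l^{m0} at rotated sample points
        theta, phi = to_angles(rots[k].inv().apply(pts))
        cols.append(sph_harm(m0, l, theta, phi))
    return np.linalg.matrix_rank(np.column_stack(cols), tol=1e-8)

l = 3
print(rotated_harmonic_rank(l, m0=1), "vs dim =", 2 * l + 1)  # generically equal
```

If the rank equals $2l+1$, the rotated copies are linearly independent and hence span the whole degree-$l$ space, since that space has dimension $2l+1$. Generic random rotations give full rank, matching the intuition in the question, while degenerate configurations (such as rotations that all fix the symmetry axis of $y_l^0$) drop it.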
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9076207876205444, "perplexity_flag": "head"}
http://www.birs.ca/events/2008/5-day-workshops/08w5061
# Recent Developments in Elliptic and Degenerate Elliptic Partial Differential Equations, Systems and Geometric Measure Theory (08w5061)

Arriving Sunday, March 30 and departing Friday, April 4, 2008

## Organizers

David Cruz-Uribe (Trinity College)
Steven Hofmann (University of Missouri-Columbia)
Marius Mitrea (University of Missouri, Columbia)
Salvador Perez Esteva (Universidad Nacional Autonoma de Mexico)
Cristian Rios (University of Calgary)
Eric Sawyer (McMaster University)

## Objectives

The proposed workshop is envisioned as the culmination of a flurry of fruitful, recent collaborative activities between mathematicians based at institutions in Canada, Mexico and the United States of America. The purpose and mission of this workshop fully concur with the vision and objectives of the Pacific Rim Mathematical Association (PRIMA). The workshop will focus on the exciting recent advances in the theory of elliptic and parabolic partial differential equations with rough coefficients, as well as equations and systems which fail to be elliptic in a traditional sense due to various sources of degeneracy. The proposed areas of focus have a broad impact, and they have been the scene of several major developments in recent years. The timing of a workshop emphasizing the intricate connections between such developments in this interdisciplinary field is therefore most opportune. Leading international researchers from related areas of expertise will gather to exchange scientific communications, ideas and suggestions that will boost and foster new developments and collaborations. This interaction will greatly promote each particular specialty by incorporating the enriching experience, approaches and ideas from other areas. Some of the topics to be discussed include:

- The mathematics of the Kato problem. Given a second-order elliptic operator $L$ in divergence form with complex coefficients, the square root operator $\sqrt{L}$ satisfies $\|\sqrt{L}\,f\|_{L^2} \approx \|\nabla f\|_{L^2}$ for all $f$ in $H^1(\mathbb{R}^n)$. The path originating from the very formulation of the conjecture and leading all the way to the recently obtained full solution of this major problem contains a wealth of innovative ideas and techniques, and opens new frontiers to explore. Some of the themes orbiting around the Kato problem are: Heat Kernels, Evolutionary Equations, Operator Theory, Semigroup Theory, Functional Analysis, Functional Calculus, Holomorphic Calculus, Singular Integrals and Calderon-Zygmund theory, Riesz Transforms, and Carleson measure criteria (T1/Tb) for the solvability of boundary problems.

- The degenerate Kato problem and related topics. It is natural to conjecture that the Kato square root problem could be generalized to operators with degenerate ellipticity, where the degeneracy is controlled by a weight in some Muckenhoupt class. Recent developments in this direction indicate that a suitable "Weighted Kato Conjecture" might indeed hold for certain degenerate elliptic operators. The battery of related questions arising in the context of the classical Kato problem can be phrased in the more general context where certain types of degeneracies are allowed. In this workshop we propose to discuss the state of the art of this area and explore the aforementioned issues.

- Geometric measure theory and PDEs. The theory of uniformly rectifiable sets, and applications in elliptic and parabolic PDEs and in the theory of quasiconformal mappings.
The relationship between the geometry of a domain and the regularity of its harmonic measure. Free boundary regularity problems, singular sets. Following the ground-breaking work of Kenig-Toro on the regularity of the Poisson kernel in vanishing chord-arc domains, and of David-Semmes on singular integral operators and rectifiability, a naturally emerging direction is exploring the effectiveness of the method of layer potentials for BVPs in vanishing chord-arc domains. Some of the participants in the workshop have already made substantial progress in this direction.

- A priori regularity of solutions of subelliptic systems of equations. The Dirichlet problem. Systems with infinitely degenerate ellipticity. Nash-Moser techniques and the Campanato and Schauder methods were originally developed to obtain a priori estimates and interior regularity for elliptic problems. These paradigms have evolved into more sophisticated, broad-ranging, and powerful methods. Some examples are the treatment of quasilinear subelliptic systems satisfying Hörmander's commutation condition (Xu and Zuily (1997)), and recent generalizations of the subelliptic regularity theorem to systems by P. Guan (1997).

- Applications to Monge-Ampère equations. The n-dimensional Partial Legendre Transform (PLT) (Rios-Sawyer-Wheeden, Adv. Math. 2005) converts the Monge-Ampère (MA) equation into a system of equations. This is the first PLT-based technique to be successfully implemented to treat equations of MA type in dimensions higher than two since Alexandrov first used the PLT in the plane, over half a century earlier, for that purpose. In this workshop, recent applications of this technique to subelliptic and infinitely degenerate MA equations will be presented. One goal is to explore possible generalizations of this approach to treat other degenerate nonlinear equations.

The schedule will include an opening colloquium at an introductory level surveying the state of the art of some important areas in elliptic and degenerate elliptic PDEs. The plenary talks will address specific topics and individual projects. The program will also allocate time for short communications in order to maximize the exposure opportunities for younger researchers participating in the workshop. This workshop will provide an excellent opportunity for researchers from related areas to work in concert and pave the way for integrating the recent developments in their respective fields into a coherent, unified body of results which, at its core, can be viewed as a far-reaching extension of the classical theory. The most prominent open problems and future directions for the areas described above will be discussed. For these reasons this event is very suitable for young researchers and advanced graduate students, who will be encouraged to attend, and the timing of such an event is very appropriate and favorable to the progress of the different subjects.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9086621403694153, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/57965/list
## Return to Question

Revision 2: added 148 characters in body.

# Oracle, Relativization, and P vs NP [Philosophical]

I can understand why $P^A = NP^A$ does not imply $P=NP$: $A$ can "contain" the power of NP. However, why does $P^B \neq NP^B$ not imply $P \neq NP$? It seems like if $P$ and $NP$ denote the same class, then we should be able to substitute one for the other arbitrarily (as long as the only thing of interest is the computational model), and everything should stay the same. More generally, why is $A^X \neq B^X$ not a proof that $A \neq B$? I feel like there's a very fundamental piece of logic / reasoning I'm missing here. Thanks!

EDIT: I understand the construction of the oracles A and B. However, I still don't understand why the existence of $B$ does not prove that $P \neq NP$.

Revision 1: the original version of the question, with the same body minus the EDIT paragraph.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518994092941284, "perplexity_flag": "head"}