http://math.stackexchange.com/questions/163779/ocr-some-app-to-calculate-the-derivative-on-a-line-graph-with-iphone-ipad?answertab=votes
# OCR: some App to calculate the derivative on a line graph with iPhone/iPad? Problem I have a set of points like the ones shown on the right hand side of the image. So for each 'Ships Head' there is a corresponding value for 'Deviation'. In this example we can treat west as negative and east as positive values. On the left of the image there is a graph made from the points on the right. What I need to do is find a way of using a setup like this to work out the deviation for any given point on the graph. So for example, given the value of 10 degrees, we should be able to calculate the deviation as something around 3.8 degrees. Obviously it's easy to manually draw a graph and then read off values, however I need a way of doing this in code. I have never had to solve a problem like this before and I don't know where to start. I was thinking I could make use of trigonometry and maybe some sort of cosine wave, but I don't know how to do this. What is the way of calculating the deviation for a given degree value based on the values on the right hand side of the image? - Are you asking how to calculate a derivative? – Cake Jun 27 '12 at 14:03 Welcome to math.SE. Do you mean you'd like to perform a regression on the given table of values? – user2468 Jun 27 '12 at 15:38 ## 2 Answers It looks like a sine wave, so I would just use Deviation (deg E) = $6 \cos (\text{heading} - 135^\circ )$. - I understand your request as the procedure below. There are different options for it; I cover both scenarios below. Procedure 1. shoot a photo of the picture 2. move your finger along the graph while the program tells you the derivative along the curve Options 1. if you have some software, it may have OCR capabilities to do this, but you haven't mentioned which software you have 2. try to find some OCR software which calculates the derivative along the curve I haven't yet found anything like this; perhaps such software will emerge after this feature request and app request. The DocScan app basically has similar technology, but it is meant for storing notes, not for analyzing photos and certain details in them. I am still investigating whether some math software with OCR capabilities exists. You can find here a similar question, but for formulae rather than graphs -- anyway the same problems arise with cluster analysis, feature extraction, feature selection, image processing, pre-processing and decision-making. -
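As a concrete illustration of the fitting approach suggested in the first answer, here is a minimal Python sketch that fits a model $A\cos(\text{heading} - \phi) + c$ to a deviation table and then evaluates it at any heading. The table values below are hypothetical stand-ins for the ones in the image, and `scipy` is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (heading, deviation) table; replace with the real card values.
headings = np.array([0, 45, 90, 135, 180, 225, 270, 315], dtype=float)
deviations = np.array([-4.2, 0.1, 4.3, 6.0, 4.1, -0.2, -4.4, -5.9])

def model(h_deg, amplitude, phase_deg, offset):
    """Deviation as a shifted cosine of the ship's heading."""
    return amplitude * np.cos(np.radians(h_deg - phase_deg)) + offset

params, _ = curve_fit(model, headings, deviations, p0=(6.0, 135.0, 0.0))

# Interpolate the deviation at an arbitrary heading, e.g. 10 degrees.
print(model(10.0, *params))
```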
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466300010681152, "perplexity_flag": "head"}
http://mathoverflow.net/questions/61659?sort=votes
## Spaces of filters

This question arose more from curiosity than from an actual problem. There are situations when you embed some space $X$ in a set of filters on $X$, which inherits properties of $X$ or has even better properties. The prominent examples are (and in fact the only ones I am aware of): (1) $X$ is a topological space and we consider the Stone-Cech compactification $\beta X$, which can be constructed from the set of all ultrafilters on $X$. We gain the obvious advantage of compactness. (2) $X$ is a uniform space, and the completion of $X$ can be constructed from a set of Cauchy filters on $X$. In some examples algebraic structures can be preserved. For example when $X$ is a SIN topological group, the completion is a complete topological group. The question is: What other examples with an interesting application of spaces of filters like in the above examples are there? For a motivation consider the Stone-Cech compactification $\beta \mathbb{N}$ of the natural numbers, which has a natural structure of a (noncommutative) monoid with a multiplication, which is only left continuous. Nevertheless it can be used to show interesting results, for example about IP-sets. (http://en.wikipedia.org/wiki/IP_set) (I hope this question is not too vague, in order to qualify as a real question.) - 1 Wait, when $X$ is a topological group, it is not the case that $\beta X$ is also a group. See for example $\beta \mathbb Z$, which is also just a left topological semigroup. – Greg Graviton Apr 14 2011 at 7:39 2 Also, since this question seems to be asking for a big list rather than a specific answer, I think it should be made community wiki – Yemon Choi Apr 14 2011 at 11:54 @Greg: Right. I messed this up with the Bohr compactification. Sorry for being too rash! – Abel Stolz Apr 14 2011 at 16:05

## 5 Answers

Some partial answers taken from Neil Hindman, Dona Strauss, "Algebra in the Stone-Čech compactification", chapter 21 (aptly named "Other Semigroup Compactifications"). The most general is probably Theorem 21.31: • If $X$ is discrete, $Y$ compact, $g: \beta X \rightarrow Y$ continuous and onto, then $Y$ is isomorphic to a space of filters on $X$ (simply intersect the preimages of points in $Y$) • In particular, every compactification of $X$ can be viewed as a space of filters. If you're interested in algebraic aspects, there's section 21.3 of the book. If $(X,\cdot)$ is a semigroup (not necessarily discrete, but completely regular), "nice" semigroup compactifications such as the AP and WAP compactifications, i.e., the (maximal) topological and semitopological semigroup compactifications respectively, are very interesting objects. They also have nice descriptions as filters. - The ultrafilters are, of course, measures on the natural numbers and can therefore be considered as elements of the dual space of $\ell^\infty$. A generalization of the semigroup construction on $\beta\mathbb{N}$ is the Arens product on the double dual of a Banach algebra. Many of the concepts that arise in the study of $\beta\mathbb{N}$ have analogues in this context. - I would argue that, morally, any kind of topological completion can be described as a space of filters.
In other words, interesting spaces of filters are equivalent to interesting completions and the question reduces to "What are other interesting examples of completions?". I don't know any good examples for the latter, though. Consider an ultrafilter $p$ and write $A \in p$ as "$p \in A$" for a moment. Then, the ultrafilter axioms read 1. $p\not\in \emptyset$ 2. $p \in A \wedge A\subseteq B \implies p \in B$ 3. $p \in A \wedge p\in B \implies p \in A\cap B$ 4. $p \in A \vee p\in A^c$ In other words, ultrafilters are just an axiomatization of the notion of "point". Now, completing a space means adding points. But since points can be described by ultrafilters, it is no surprise that a completion can be described as a collection of ultrafilters, or a quotient thereof. In any case, that's how I like to think about ultrafilters. Another point of view with the same effect would be the observation that in order to call a space $Y$ a completion of $X$, it should be compact, which immediately gives a surjective morphism $$\beta X \to Y$$ allowing us to write $Y$ as a quotient of a space of ultrafilters. - You can view the profinite completion of a group as the space of ultrafilters on the Boolean algebra of finite unions of cosets of finite index subgroups. A similar thing is true for semigroups. For example, points of the free profinite monoid are ultrafilters of regular languages. - I like this answer for the lattice theoretic point of view. Do you have a reference for your example? – Abel Stolz Jun 30 2011 at 6:49 Look at the books Finite semigroups and Universal Algebra by Almeida, The q-theory of finite semigroups by Rhodes and Steinberg or Pippenger's paper diku.dk/hjemmesider/ansatte/henglein/papers/…. Also Gehrke and Pin have been exploiting this to study equational theories of lattices of regular languages. – Benjamin Steinberg Jun 30 2011 at 14:24 I will try your references. Thank you! – Abel Stolz Jul 1 2011 at 9:38 Let $X$ be a separating proximity space. Then the Smirnov compactification of $X$ can be defined in terms of filters. A filter $F$ on a proximity space $X$ is said to be a round filter if for each $R\in F$, there is an $S\in F$ with $S\preceq R$. An end is a round filter $F$ such that if $A\preceq B$, then $B\in F$ or $X\setminus A\in F$. It can be shown that the ends are precisely the maximal round filters. The Smirnov compactification of $X$ is simply the collection of all ends. See [1] for details on proximity spaces and the Smirnov compactification. In my research, I used the Smirnov compactification in finding the maximal ideal space of $L^{\infty}(\mu)$ for any measure $\mu$. One can generate other examples of spaces of ultrafilters by looking at zero-dimensional spaces. Recall that a space is zero-dimensional if it has a basis consisting of clopen sets. The advantage of looking at zero-dimensional spaces is that the ultrafilters are generally ultrafilters on Boolean algebras. If $X$ is a zero-dimensional space, then the Banaschewski compactification of $X$ is simply the collection of all ultrafilters on the Boolean algebra of clopen sets of $X$. A space $X$ is said to be strongly zero-dimensional if whenever $Z_{1},Z_{2}\subseteq X$ are disjoint zero sets, then there is a clopen set $C$ with $Z_{1}\subseteq C,Z_{2}\subseteq C^{c}$. For example, every zero-dimensional Lindelof space is strongly zero-dimensional. One can show that a space $X$ is strongly zero-dimensional if and only if $\beta X$ is zero-dimensional. 
It turns out that the Banaschewski compactification of a zero-dimensional space $X$ is the Stone-Cech compactification if and only if $X$ is strongly zero-dimensional. 1. Naimpally, S. A., and B. D. Warrack. Proximity Spaces. Cambridge: Cambridge University Press, 1970. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913558840751648, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/107425-related-rates-water-running-into-trough-print.html
# related rates, water running into trough

• October 11th 2009, 02:57 PM oblixps related rates, water running into trough Water is running into a trough at a rate of 2 ft^3/min. The cross section of the trough is an isosceles trapezoid with two bases 5 ft and 11 ft and a height of 4 ft. The length of the trough is 20 ft. How fast is the water level rising one hour later? So I have the equation V = (1/2) h (b1 + b2) (20) = 10 h (b1 + b2). I drew the trapezoid so that the smaller base is on top, and since water is filling the trough the bottom base won't be changing, so my volume equation is now V = 10 h (b1 + 11). Since I am given dV/dt and I need to find dh/dt, I need to somehow get rid of b1 and express it in terms of h. That's where I'm stuck. How do I express b1 in terms of h? Do I have to use similar triangles? If so, which triangles do I use?

• October 11th 2009, 03:30 PM skeeter Quote: Originally Posted by oblixps Water is running into a trough at a rate of 2 ft^3/min. The cross section of the trough is an isosceles trapezoid with two bases 5 ft and 11 ft and a height of 4 ft. The length of the trough is 20 ft. How fast is the water level rising one hour later? So I have the equation V = (1/2) h (b1 + b2) (20) = 10 h (b1 + b2). I drew the trapezoid so that the smaller base is on top, and since water is filling the trough the bottom base won't be changing, so my volume equation is now V = 10 h (b1 + 11). Since I am given dV/dt and I need to find dh/dt, I need to somehow get rid of b1 and express it in terms of h. That's where I'm stuck. How do I express b1 in terms of h? Do I have to use similar triangles? If so, which triangles do I use? the 3-4-5 triangles on both end sides of the trapezoid. trapezoidal cross-section of water in the trough ... upper base = $5+2x$ lower base = $5$ height = $h$ relationship between x and h ... $\frac{x}{h} = \frac{3}{4}$ $x = \frac{3h}{4}$ area of the trapezoidal cross-section of water ... $A = \frac{h}{2}\left[(5+2x) + 5\right]$ $A = \frac{h}{2}\left[\left(5+ \frac{3h}{2}\right) + 5\right]$ can you take it from here?
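To see how the computation finishes, here is a short sympy sketch (one way to complete the exercise, not the thread's own solution) that carries skeeter's cross-section area through to dh/dt after one hour, assuming the trough starts empty:

```python
import sympy as sp

H = sp.symbols('H', positive=True)  # water depth in feet

# skeeter's cross-section area, times the 20 ft length of the trough:
V = 20 * (H / 2) * ((5 + sp.Rational(3, 2) * H) + 5)

# Water enters at 2 ft^3/min, so after one hour V = 120 ft^3.
depth = [r for r in sp.solve(sp.Eq(V, 120), H) if r.is_positive][0]

# Chain rule: dV/dt = V'(H) * dH/dt = 2  =>  dH/dt = 2 / V'(H)
rate = (2 / sp.diff(V, H)).subs(H, depth)
print(depth.evalf(), rate.evalf())  # ~1.038 ft deep, rising ~0.0153 ft/min
```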
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9116789102554321, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/128299-numbers-out-hat.html
# Thread: 1. ## numbers out of hat A hat contains 5 tickets numbered 1,2,3,4,5. Two tickets are drawn at random from the hat. Find the chance that the numbers on the two tickets differ by two or more if the draws are made 2. Originally Posted by billy A hat contains 5 tickets numbered 1,2,3,4,5. Two tickets are drawn at random from the hat. Find the chance that the numbers on the two tickets differ by two or more if the draws are made You didn't finish your question "if the draws are made..." Made what? Are the draws with replacement or without? With replacement, sometimes it is easier with these kinds of problems just to enumerate the choices. If you draw a 1 (1/5 probability), you have a 3/5 (3,4,5) chance that the numbers will be two or more apart. So you multiply these two events to get the probability. The rest of the probabilities are similar. 1: (1/5) * (3/5) 2: (1/5) * (2/5) 3: (1/5) * (2/5) 4: (1/5) * (2/5) 5: (1/5) * (3/5) Since these events are mutually exclusive, you add them all together, giving 12/25 or a bit less than 1/2. Without replacement, the same idea applies, but now you remove the card you drew, making it easier to get a number two or more apart: 1: (1/5) * (3/4) 2: (1/5) * (2/4) 3: (1/5) * (2/4) 4: (1/5) * (2/4) 5: (1/5) * (3/4) Giving 12/20 = 3/5, or a bit more than 1/2. 3. Hello, billy! A hat contains 5 tickets numbered 1,2,3,4,5. Two tickets are drawn at random from the hat. Find the probability that the numbers on the two tickets differ by two or more. With such a small number of outcomes, we can list all of them. If the tickets are drawn with replacement, the outcomes are: . . $\begin{array}{ccccc}(1,1) & (1,2) & (1,3) & (1,4) & (1,5) \\ (2,1) & (2,2) & (2,3) & (2,4) & (2,5) \\ (3,1) & (3,2) & (3,3) & (3,4) & (3,5) \\ (4,1) & (4,2) & (4,3) & (4,4) & (4,5) \\ (5,1) & (5,2) & (5,3) & (5,4) & (5,5) \end{array}$ . . 25 outcomes If they are drawn without replacement, the outcomes are: . . $\begin{array}{ccccc} & (1,2) & (1,3) & (1,4) & (1,5) \\ (2,1) & & (2,3) & (2,4) & (2,5) \\ (3,1) & (3,2) & & (3,4) & (3,5) \\ (4,1) & (4,2) & (4,3) & & (4,5) \\ (5,1) & (5,2) & (5,3) & (5,4) & \end{array}$ . . 20 outcomes
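Both answers can be checked by brute-force enumeration; here is a short Python sketch (my own addition, not part of the thread):

```python
from itertools import product, permutations

tickets = [1, 2, 3, 4, 5]

# With replacement: all 25 ordered pairs.
with_repl = list(product(tickets, repeat=2))
hits = sum(1 for a, b in with_repl if abs(a - b) >= 2)
print(hits, "/", len(with_repl))      # 12 / 25

# Without replacement: all 20 ordered pairs of distinct tickets.
without_repl = list(permutations(tickets, 2))
hits = sum(1 for a, b in without_repl if abs(a - b) >= 2)
print(hits, "/", len(without_repl))   # 12 / 20 = 3/5
```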
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8990637063980103, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/54991/whitehead-doubles-of-any-knots
## Whitehead doubles of any knots

I was curious about the fact that the Whitehead doubles of all knots have Alexander polynomial equal to 1, which is the same as an unknot. How can one prove this? -

## 3 Answers

One way to view this: take the infinite cyclic cover of the Whitehead link complement over one component. We get an infinite chain link complement: If we fill in the other component of the Whitehead link by a solid torus ($\infty$ surgery), then we get the unknot, and it lifts to this cyclic covering space since the linking number is zero. The cover is then just $\mathbb{R}^3$ (just fill each component of the chain link back in with a solid torus). Taking a Whitehead double corresponds to gluing in a knot complement to the other link, which is a homology solid torus. The cyclic cover is no longer contractible, but it has the same homology as a contractible space, and in particular has trivial first homology (we take out infinitely many solid tori which are chains in the link, and replace them with infinitely many homology solid tori, so this follows by Mayer-Vietoris). The Alexander polynomial is the generator of the annihilator ideal of the first homology of this infinite cover (as a $\mathbb{Z}[\mathbb{Z}]$ module). Since $H_1=0$, the Alexander polynomial is 1. - 2 Nice picture Ian! – Jim Conant Feb 10 2011 at 11:39

Ian's answer is very elegant, but in case you're looking for a more computational approach, you could use the Seifert form. Namely, if you take a Seifert surface $\Sigma$ for a knot, look at the form $\Theta\colon H_1(\Sigma)\otimes H_1(\Sigma)\to \mathbb Z$ given by $\Theta(x,y)=lk(x^+,y)$ where $x^+$ is a push-off of $x$ along a consistently chosen positive normal direction. Then one can show that the Alexander polynomial is expressible as $\det(t\Theta-\Theta^T)$. Note that for a Whitehead double, there is an obvious Seifert surface with one band being a thickening of the original knot, and one band being a small twisted dual band. In particular, the Seifert form looks something like `$$\left(\begin{array}{cc}0&1\\0&1\end{array}\right)$$` which yields a trivial Alexander polynomial (in this formula the polynomial is only well-defined up to powers of $t$). Or you could notice that the unknot has a Seifert surface with the same Seifert form as this, by Whitehead doubling the unknot! - I haven't figured out how to do matrices well. – Jim Conant Feb 10 2011 at 11:52 1 Backticks, the universal solvent. – Greg Kuperberg Feb 10 2011 at 11:54

This is a slightly more mechanistic variant of Ian's response. When a knot is a Whitehead double, that means you've obtained it by doing a splicing construction (Larry Siebenmann's terminology, originally -- this is a particular formalism for describing satellite operations on knots, one that's friendly with the JSJ-decomposition for 3-manifolds). Alexander polynomials behave very nicely with respect to splicing; I'll describe it a bit below. The input in a splicing construction is an $(n+1)$-component link $L=(L_0,L_1,\cdots,L_n)$ such that the sublink $(L_1,L_2,\cdots,L_n)$ is the trivial $n$-component link. The other input is $n$ knots $K_1,\cdots,K_n$. The splice knot I like to denote $L \bowtie K$.
In the above Whitehead double case, the input link $L$ is the Whitehead link, $n=1$ and the input knot $K$ is the figure-8 knot. For the knot above, the input link $L$ has the same complement as the Borromean rings (but it isn't quite the Borromean rings). $n=2$ and the two knots are both trefoils (with the same handedness). Alexander polynomials behave well under splicing, in particular: $$\Delta_{L\bowtie K}(t) = \Delta_{L_0}(t) \Delta_{K_1}(t^{l_1}) \Delta_{K_2}(t^{l_2}) \cdots \Delta_{K_n}(t^{l_n})$$ where $l_i$ is the linking number between $L_i$ and $L_0$ in the link $L$. So the "reason" the Alexander polynomial of a Whitehead double is trivial (from this perspective) is that (1) the linking numbers of the components of the Whitehead link are zero, and (2) the "base" component of the Whitehead link ($L_0$) has trivial Alexander polynomial (both components are unknots so they have to have trivial Alexander polynomials). Anyhow, the proof of this is just the generalization of Agol's argument to this setting, by lifting everything to the abelian cover, Mayer-Vietoris type arguments. But just staring at the above formula you see there are all kinds of other constructions on knots that produce new knots with trivial Alexander polynomials. More precisely, the example below is of a spliced knot which is the splice of the Borromean rings with two figure-8 knots, so its Alexander polynomial is trivial. There's no requirement to use figure-8 knots; any knots work, but the way splicing works means you have to be quite careful about your framing conventions. A diagram that indicates the framing conventions: So this knot also has trivial Alexander polynomial because $L_0$ is the unknot and all the respective linking numbers are zero. -
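As a quick sanity check of the Seifert-form computation in the second answer, here is a small sympy sketch (my own verification, not part of the thread) evaluating $\det(t\Theta - \Theta^T)$ for the stated matrix:

```python
import sympy as sp

t = sp.symbols('t')

# Seifert matrix for the obvious genus-1 Seifert surface of a Whitehead double.
theta = sp.Matrix([[0, 1],
                   [0, 1]])

alexander = (t * theta - theta.T).det()
print(sp.expand(alexander))  # prints t, i.e. trivial up to a unit +/- t^k
```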
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924774169921875, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/194880-period-trig-function-print.html
# period of trig function

• January 3rd 2012, 05:49 AM furor celtica period of trig function Write down the period of the graph of y = 3/(2 + (cos 2x)^2), and also the coordinates of a maximum value of y. For the second question I answered correctly. For the first, however, I didn't. It seems obvious to me that the period is 90, due to x being multiplied by 2 and the expression being squared; however, the answer given is 180. It's also clear from the graph that the period is 90; is this a typo?

• January 3rd 2012, 06:58 AM Siron Re: period of trig function modified.

• January 3rd 2012, 12:41 PM Opalg Re: period of trig function Quote: Originally Posted by furor celtica Write down the period of the graph of y = 3/(2 + (cos 2x)^2), and also the coordinates of a maximum value of y. For the second question I answered correctly. For the first, however, I didn't. It seems obvious to me that the period is 90, due to x being multiplied by 2 and the expression being squared; however, the answer given is 180. It's also clear from the graph that the period is 90; is this a typo? If it's $y = \frac3{2+(\cos(2x))^2}$ then your answer 90º is correct. If it was supposed to be $y = \frac3{(2+\cos(2x))^2}$ then it would be 180º (but in that case the coordinates for the max value of y would be different).
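A quick numeric check (my own addition, assuming numpy) distinguishes the two readings Opalg describes:

```python
import numpy as np

# Sample both candidate functions and test whether a 90-degree shift in x
# leaves them unchanged.
x = np.linspace(0.0, 360.0, 721)   # degrees, 0.5-degree spacing

f = 3.0 / (2.0 + np.cos(np.radians(2 * x)) ** 2)   # y = 3/(2 + cos^2(2x))
g = 3.0 / (2.0 + np.cos(np.radians(2 * x))) ** 2   # y = 3/(2 + cos(2x))^2

shift = 180  # 90 degrees = 180 samples at 0.5-degree spacing
print(np.allclose(f[shift:], f[:-shift]))  # True  -> period is 90 degrees
print(np.allclose(g[shift:], g[:-shift]))  # False -> period is 180 degrees
```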
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9678086638450623, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Divergence_Theorem&oldid=6468
# Divergence Theorem

### From Math Images

Revision as of 15:48, 29 June 2009 by Bjohn1

Fountain Flux The water flowing out of a fountain demonstrates an important property of vector fields, the Divergence Theorem. Field: Calculus Created By: Brendan John

# Basic Description

Consider a fountain like the one pictured, particularly its top layer. The rate at which water flows out of the fountain's spout is directly related to the amount of water that flows off the top layer. Because water, unlike air, isn't easily compressed, if more water is pumped out of the spout, then more water will have to flow over the boundaries of the top layer. This is essentially what the Divergence Theorem states: the total fluid being introduced into a volume is equal to the total fluid flowing out of the boundary of the volume.

# A More Mathematical Explanation

Note: understanding of this explanation requires: *Some multivariable calculus

The Divergence Theorem in its pure form applies to vector fields. Flowing water can be considered a vector field because at each point the water has a position and a velocity vector. Faster moving water is represented by a larger vector in our field. The divergence of a vector field is a measurement of the expansion or contraction of the field; if more water is being introduced then the divergence is positive. Analytically, the divergence of a field $F$ is $\nabla\cdot\mathbf{F} =\partial{F_x}/\partial{x} + \partial{F_y}/\partial{y} + \partial{F_z}/\partial{z}$, where $F_i$ is the component of $F$ in the $i$ direction. Intuitively, if F has a large positive rate of change in the x direction, the partial derivative with respect to x in this direction will be large, increasing total divergence.

The divergence theorem requires that we sum divergence over an entire volume. If this sum is positive, then the field must indicate some movement out of the volume through its boundary, while if this sum is negative, the field must indicate some movement into the volume through its boundary. We use the notion of flux, the flow through a surface, to quantify this movement through the boundary, which itself is a surface. The divergence theorem is formally stated as: $\iiint\limits_V\left(\nabla\cdot\mathbf{F}\right)dV=\oiint\limits_{\partial V} \mathbf F\cdot\mathbf n\,{d}S .$ The left side of this equation is the sum of the divergence over the entire volume, and the right side of this equation is the sum of the field perpendicular to the volume's boundary at the boundary, which is the flux through the boundary.

### Example of Divergence Theorem Verification

The following example verifies that given a volume and a vector field, the Divergence Theorem is valid. Consider the vector field $F = \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix}$. For a volume, we will use a cube of edge length two, with one corner at the origin and all points within positive coordinates.
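The page breaks off before carrying the verification through; a sketch of the missing computation (my completion, not the original author's text) runs as follows. The divergence is $\nabla\cdot\mathbf{F} = \frac{\partial}{\partial x}(x^2) = 2x$, so the volume integral is

$$\iiint_V 2x\,dV = \int_0^2\int_0^2\int_0^2 2x\,dx\,dy\,dz = 4\,[x^2]_0^2 = 16.$$

For the flux, $\mathbf{F}$ is tangent to the four faces parallel to the x-axis (their outward normals have zero x-component), and on the face $x=0$ the field vanishes, so only the face $x=2$ (area 4, outward normal in the +x direction) contributes:

$$\oiint_{\partial V}\mathbf F\cdot\mathbf n\,dS = (2^2)(4) = 16.$$

Both sides equal 16, as the theorem requires.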
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9165257215499878, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/07/14/homogenous-linear-systems/?like=1&source=post_flair&_wpnonce=134bd2d71b
# The Unapologetic Mathematician

## Homogenous Linear Systems

An important special case of a linear system is a set of homogenous equations. All this means (in this case) is that the right side of each of the equations is zero. In matrix notation (using the summation convention), we have the equation $a_i^jx^i=0$. Remember that this is actually a collection of $n$ equations, one for each value of the index $j$. And in our more abstract notation we write $Ax=0$, where the right hand side is the zero vector in $\mathbb{F}^n$.

So what is a solution of this system? It’s a vector $x\in\mathbb{F}^m$ that gets sent to $0\in\mathbb{F}^n$ by the linear transformation $A$. But a vector that gets sent to the zero vector is exactly one in the kernel $\mathrm{Ker}(A)$. So solving the homogenous system $a_i^jx^i=0$ is equivalent to determining the kernel of the linear transformation $A$.

We don’t yet have any tools for making this determination, but we can say some things about the set of solutions. For one thing, they form a subspace of $\mathbb{F}^m$. That is, the sum of any two solutions is again a solution, and a constant multiple of any solution is again a solution. We’re interested, then, in finding linearly independent solutions, because from them we can construct more solutions without redundancy. A maximal collection of linearly independent solutions will be a basis for the subspace of solutions — for the kernel of the linear map. As such, the number of solutions in any maximal collection will be the dimension of this subspace, which we called the nullity of the linear transformation $A$.

The rank-nullity theorem then tells us that we have a relationship between the number of independent solutions to the system (the nullity), the number of variables in the system (the dimension of $\mathbb{F}^m$), and the rank of $A$, which we will also call the rank of the system. Thus if we can learn ways to find the rank of the system then we can determine the number of independent solutions.

Posted by John Armstrong | Algebra, Linear Algebra

## 5 Comments »

1. [...] Linear Systems In distinction from homogenous systems we have inhomogenous systems. These are systems of linear equations where some of the constants on [...] Pingback by | July 15, 2008 | Reply

2. [...] we’re still considering the solution set of an inhomogenous linear system and its associated homogenous system . Remember that we also wrote these systems in more abstract notation as and . The solution space [...] Pingback by | July 16, 2008 | Reply

3. [...] of them. Given a particular solution, it defines a coset of the subspace of the solutions to the associated homogenous system. And that subspace is the kernel of a certain linear [...] Pingback by | July 18, 2008 | Reply

4. [...] is the dimension of the kernel of . In terms of the linear system, this is the dimension of the associated homogenous system . If there are any solutions of the system under consideration, they will form an affine space of [...] Pingback by | July 22, 2008 | Reply

5. [...] is the matrix of the system. If is the zero vector we have a homogeneous system, and otherwise we have an inhomogeneous system. So let’s use the singular value [...] Pingback by | August 18, 2009 | Reply
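To make the kernel computation in the post concrete, here is a small sympy sketch (an illustration with a made-up matrix, not from the post) that finds a basis of the solution subspace of $Ax=0$ and checks the rank-nullity relationship:

```python
import sympy as sp

# A made-up 2x4 system a_i^j x^i = 0 with more variables than equations.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 3]])

kernel = A.nullspace()   # basis of the solution subspace Ker(A)
rank = A.rank()

print(rank + len(kernel) == A.cols)  # True: rank + nullity = number of variables
for v in kernel:
    print(v.T, (A * v).T)            # each basis vector really solves A x = 0
```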
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9197394251823425, "perplexity_flag": "head"}
http://mathoverflow.net/questions/119240/majority-vote-of-total-orders
## Majority vote of total orders

Fix an odd natural number $k$. Suppose we have $k$ total orders on the same (finite) set $X$. Define a tournament on the vertex set $X$ by putting a directed edge $x\rightarrow y$ if a majority of the total orders compare $x > y$. 1. What tournaments can be obtained this way? Of course, if $k = 1$, only linearly ordered tournaments are possible. I am most interested in the case of small $k$. For example, is there an excluded-substructure characterization of these tournaments? 2. What if we make the problem harder and ask whether a given directed graph $G$ can be extended to a tournament $T$ such that $T$ can be obtained in this way? Again, if $k = 1$, there are various simple characterizations, such as all digraphs that contain no directed cycles. 3. What can be said about the computational problem of determining the smallest $k$ that can represent a given tournament or digraph? I assume, perhaps naively, that this problem already occurs in the literature, perhaps in the theory of voting/social choice, so I would be happy with references instead of solutions if that's easier. - 2 Are the total orders allowed to occur with multiplicity? Then clearly if a tournament occurs then all subtournaments occur. So there is some list of excluded substructures, and the question is if we can characterize it. But if not then it is not even obvious that this is closed under substructures. – Will Sawin Jan 18 at 8:06 Yes, I'm allowing multiplicity. So I would be interested, for example, in a description of the excluded sub-tournaments for k=3. – aorq Jan 18 at 16:50

## 4 Answers

Every possible tournament on $n$ vertices is realisable with polynomially many voters. This recent paper cites D. C. McGarvey, A theorem on the construction of voting paradoxes, Econometrica 21 (1953), 608-610. - I think Gil Kalai proved some generalization of this result, but I'm not sure. – Michael Greinecker Jan 26 at 10:14

You say you are interested in small $k$. This makes sense, because allowing an arbitrarily large $k$ makes the question trivial (provided you allow repetition of a linear order with any multiplicity as well). You can get any tournament as the majority vote of some number of linear orders. Indeed, suppose you have $n$ vertices (where $3 \le n$) and a tournament on this you want to obtain. For every arc $(u, v)$ in the tournament, take all $(n - 1)!$ linear orders in which $v$ is greater than $u$ and they are adjacent so there is no vertex between them. In these linear orders, any edge other than $\{u, v\}$ occurs the same number of times in the two directions. Gather these linear orders for all edges in the tournament (that's $n(n-1)(n-1)!/2$ linear orders), and add any one linear order to make the total odd. The majority vote of these will give your tournament. Remark. I don't claim this construction to be optimal; indeed, instead of the factorial growth here, I think that you might be able to choose $k$ to grow only polynomially in $n$. Update: it seems Ben Barber was a bit faster than me to post an answer that proves a bit more than this one. -

These tournaments are called majority tournaments and are studied in several papers, e.g.
http://www.math.dartmouth.edu/~pw/papers/dice.pdf http://arxiv.org/abs/1109.6172 - For $k=3$, the following paper shows an example of a non-3-majority tournament with 8 vertices. http://www2.isye.gatech.edu/~ctovey/publications/papers/voting__19_may_08.pdf A few years ago, I checked that every tournament with 7 vertices (even the Paley tournament) is 3-majority by using a computer. -
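For experimenting with small cases, here is a short Python sketch (my own illustration, not from any of the answers) that builds the majority tournament from $k$ total orders; the three input orders below form the classic Condorcet cycle:

```python
from itertools import combinations

def majority_tournament(orders):
    """Given k total orders (k odd), each a list ranking X from greatest to
    least, return the set of directed edges (x, y) meaning x -> y."""
    k = len(orders)
    assert k % 2 == 1, "need an odd number of orders to avoid ties"
    rank = [{v: i for i, v in enumerate(order)} for order in orders]
    edges = set()
    for x, y in combinations(orders[0], 2):
        # x beats y if a majority of orders place x above y
        wins = sum(1 for r in rank if r[x] < r[y])
        edges.add((x, y) if wins > k / 2 else (y, x))
    return edges

# Condorcet's 3-voter paradox yields the 3-cycle a -> b -> c -> a.
print(majority_tournament([['a', 'b', 'c'], ['b', 'c', 'a'], ['c', 'a', 'b']]))
```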
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371383786201477, "perplexity_flag": "head"}
http://arachnoid.com/gravitation_equations/ballistics.html
Ballistics Projectile paths without air resistance — P. Lutus

Acceleration In this section we will cover the dynamic motion of objects in free-fall, that is, in a state of motion where gravitational mass and inertial mass cancel, objects of different masses fall at the same rate, and acceleration is the predominant issue rather than force. This is not to say there are no forces — there always are — but that two forces acting on a small free-falling mass (gravitational and inertial) cancel, leaving only the gravitational attraction of a primary mass like a planet: (1) $\displaystyle a_g = \frac{GM}{r^2}$ Where: • $a_g$ = gravitational acceleration • G = universal gravitational constant • M = mass of attracting body • r = distance between the accelerated object and the center of mass of the attracting body Equation (1) is a formality — in most cases we will use little-g to accelerate our objects, because we know its value, because our results will then agree with those from other sources, and because most of the equations require that g be a constant. On that issue, most of the methods to come require some simplifying assumptions: • That the gravitational acceleration is constant (only really true for small altitude changes) • That there is no air resistance (true on the moon but not on Earth) • That there is no gravitational field curvature (only apparently true on a small scale)

Differential Equations In setting up the theoretical basis for what follows, it's best to use some ideas from the field of differential equations. Those who don't want to dwell on this aspect of ballistics can just skip forward — the ideas that follow won't suddenly become incomprehensible because you skipped this section. We will construct a differential equation for the position of a mass with respect to time. The terms for the unknown function are: • $p''(t) = -g$ (acceleration) • $p'(0) = v_0$ (velocity) • $p(0) = p_0$ (position) Where: • $t$ = Time, seconds • $g$ = Little-g, m/s^2 • $v_0$ = Initial velocity, m/s • $p_0$ = Initial position, meters For those unfamiliar with differential equation nomenclature, the above describes an unknown function p(t) that is meant to define position with respect to time. The term p'(t) (note the prime mark) refers to the first derivative of position, or the rate of change in position, i.e. velocity. The term p''(t) refers to the second derivative of position, or acceleration. In what follows, remember: • Velocity is the rate of change in position with respect to time, or the time derivative of position. • Acceleration is the rate of change in velocity with respect to time, or the time derivative of velocity. Here is the solution for the above terms, the result of a mathematical process that provides the unknown function: (2) $\displaystyle p(t) = p_{0} + t v_{0} -\frac{g t^{2}}{2}$ Equation (2) is a position function that produces a y (vertical) coordinate for a free-falling object as a function of time.
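As a concrete check of equation (2), here is a small Python sketch (my own example, not the article's applet code) that evaluates the position function for a thrown ball and recovers the apex height and landing time quoted in the applet examples below:

```python
g = 9.80665   # little-g, m/s^2

def position(t, p0=0.0, v0=0.0):
    """Equation (2): height as a function of time."""
    return p0 + t * v0 - g * t**2 / 2

# Ball thrown straight up at 30 m/s from ground level (the applet's defaults).
v0 = 30.0
t_apex = v0 / g                   # ~3.06 s to the apex
print(position(t_apex, v0=v0))    # apex height ~45.88 m
print(2 * v0 / g)                 # returns to the ground at ~6.12 s
```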
We will be using this equation as the basis for a number of derived equations and results below, so it might be wise to describe it in some detail: • Variable p0 is used to set an initial position. For example if we wanted to model an object falling from height h, we would set p0 = h. • Variable v0 is used to set an initial velocity, separate from gravitation. This might be used while simulating throwing a ball into the air — v0 would be the ball's initial upward velocity. • Variable g is the gravitational acceleration. In many cases this will be set to little-g from our earlier discussion, but it could be any gravitational acceleration — that of the moon, Mars, or another planet. • Variable t is time. The time variable is the only value not assumed to be a constant. I have written Equation (2) in a very general way, with plenty of flexibility to handle different kinds of physical problems. For example: • For the case of an object released from a height h, we would set p0 to h, the initial height, and v0 to zero, since we're just releasing the object, not giving it an initial velocity. • For the case of a ball thrown into the air, we would set p0 to zero (ground level) and v0 to the vertical part of the throw velocity. Given time arguments, the function would show the arc described by the object as it flew through the air to its landing spot.

Ballistics Applet I include a Java Ballistics Applet on this page (Figure 3) so the reader can set up and run dynamic ballistic experiments. Here are some examples: Zoom = mouse wheel, Translate = mouse drag Figure 3: Ballistics Applet • Ball throw — the default settings. Press "Defaults" or: • Set p0 = 0 (always press Enter when making a change) • Set v0 = 30 • Verify that g = 9.80665 • If the image doesn't update, click your mouse in the applet window. • The graph line should achieve a maximum height of 45.88 meters and cross zero at a time of 6.12 seconds. • Ball drop from a height: • Set p0 = 40 • Set v0 = 0 • Verify that g = 9.80665 • If the image doesn't update, click your mouse in the applet window. • The graph line should cross zero at a time of 2.85 seconds. • Ball drop from higher: • Change only p0 = 200 • Point your mouse at the applet and use your mouse wheel to zoom out to see the drop point at 200 meters. • Drag your mouse on the applet to change the viewing position. • Now zoom in to the point where the line crosses zero. • The graph line should cross zero at a time of 6.38 seconds. Feel free to experiment with this applet — you can't break it, and you can always recover by pressing the "Defaults" button. Remember that the outcomes rely on some simplifying assumptions, like no air resistance — the drop from 200 meters wouldn't really work as shown if there were any air resistance.

Variations on an Equation Equation (2) has many practical applications, as long as we remember that it is only accurate at velocities below that at which air resistance becomes significant.
Here are some variations on equation (2) to solve specific problems: • When using these equations, remember these associated values: • $t$ = Time, seconds • $g$ = Little-g • $v_0$ = Initial velocity • $p_0$ = Initial position • Ballistic flight values as a function of time t: • Vertical velocity: (3) $v = v_0 - g t$ • Height: (4) $h = p_0 + t v_0 - \frac{g t^2}{2}$ • Apex Values: • Velocity: 0 • Time: (5) $t = \frac{v_0}{g}$ • Height: (6) $h = \frac{v_0^2}{2 g}$ • Terminal Values: • Velocity: (7) $v = -v_0$ • Time: (8) $t = \frac{2 v_0}{g}$ • Height: 0 • Freefall: • Fall distance d for time t: (9) $d = \frac{g t^{2}}{2}$ • Time t for fall distance d: (10) $t = \sqrt{\frac{2d}{g}}$ • Velocity v for time t: (11) $v = gt$ • Velocity v for fall distance d: (12) $v = \sqrt{2 d g}$ • Kinetic energy ke of a mass m moving at velocity v: (13) $ke = \frac{mv^2}{2}$ • Kinetic energy ke of a falling mass m for time t: (14) $ke = \frac{g^2 m t^2}{2}$

Free-Fall with Air Resistance It is important to stress that a very accurate result for an object falling through air cannot be expressed in a closed-form equation — it must be modeled using numerical methods. But if we accept certain limitations, there are useful closed-form solutions. We can't assume that the result rigorously takes changes in altitude into account (air pressure varies with altitude, and gravitational acceleration does also), but it's useful as long as these limitations are kept in mind. Here are new differential terms for an unknown equation that takes air resistance into account: • $\displaystyle p''(t) = -g + p'(t)^2\ k$: This acceleration term incorporates the fact that air resistance varies as the square of velocity ($p'(t)^2$) times an empirical factor k that depends on the object's shape and surface roughness. In free-fall, gravitation is a constant, but as air resistance builds up, it eventually equals the gravitational force, acceleration falls to zero, and velocity becomes constant (see Figure 2). • $\displaystyle p'(0) = 0$: Initial velocity is zero • $\displaystyle p(0) = h$: Initial height h Figure 4: Free-fall altitude profile As before, a mathematical analysis process leads to this result: (15) $\displaystyle p(t) = \frac{h k - \log\left(\cosh\left(t \sqrt{g k}\right)\right)}{k}$ Where: • p(t) = position with respect to time • t = Time, seconds • h = Initial height • k = An empirical factor that takes the object's air friction into account • g = Little-g, described previously Figure 4 shows the altitude profile for this function, where the chosen air friction coefficient leads to relatively fast convergence on an unchanging descent rate of 53 m/s or 190.8 kph, a typical skydiver terminal velocity. There is more on this topic here.
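Here is a short Python sketch (my own illustration) evaluating equation (15); the only extra fact used is that setting $p''(t)=0$ in the differential terms gives a terminal velocity of $\sqrt{g/k}$, so a skydiver-like 53 m/s corresponds to an assumed $k \approx 0.0035$:

```python
import math

g = 9.80665
v_terminal = 53.0        # m/s, typical skydiver terminal velocity
k = g / v_terminal**2    # from -g + v^2 k = 0 at terminal velocity

def altitude(t, h=2000.0):
    """Equation (15): free-fall altitude with air resistance from height h."""
    return (h * k - math.log(math.cosh(t * math.sqrt(g * k)))) / k

# Late in the fall the descent rate should converge on v_terminal.
for t in (5.0, 15.0, 25.0):
    rate = altitude(t) - altitude(t + 1.0)   # meters lost over the next second
    print(t, round(altitude(t), 1), round(rate, 2))
```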
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8831663727760315, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/142932-trig-identity.html
# Thread: 1. ## Trig Identity Hello all. I am having trouble with a trig identity, and was wondering if you could help. This is a problem from a take home quiz, and unfortunately I can't find anything in my text about how to deal with the $\cos{4\theta}$. Here is the identity. $\sin^2\theta\cos^2\theta=\frac{1}{8}[1-\cos{4\theta}]$ 2. Originally Posted by Deimos Hello all. I am having trouble with a trig identity, and was wondering if you could help. This is a problem from a take home quiz, and unfortunately I can't find anything in my text about how to deal with the $\cos{4\theta}$. Here is the identity. $\sin^2\theta\cos^2\theta=\frac{1}{8}[1-\cos{4\theta}]$ You can write the given expression as: $8 \sin^2\theta\cos^2\theta=[1-\cos{4\theta}]$ Firstly, I assume you know that: $1 - \cos 2x = 2\sin^{2}x$ $\therefore \text{RHS} = 1 - \cos(4\theta) = 1 - \cos 2(2\theta) = 2 \sin^{2}(2\theta) = 2 \times [\sin(2\theta)]^2$ $= 2 \times [ 2 \sin(\theta) \cos(\theta)]^2 = 2 \times 4 \sin^{2}(\theta) \cos^{2}(\theta) = 8 \sin^{2}(\theta) \cos^{2}(\theta) = \text{LHS}$ 3. Thank you very much!
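A quick sympy check of the identity (my own addition, not part of the thread):

```python
import sympy as sp

theta = sp.symbols('theta')

lhs = sp.sin(theta)**2 * sp.cos(theta)**2
rhs = sp.Rational(1, 8) * (1 - sp.cos(4 * theta))

# simplify(lhs - rhs) reduces to 0, confirming the identity for all theta.
print(sp.simplify(lhs - rhs))  # 0
```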
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9705724120140076, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Henon_Attractor&diff=32960&oldid=32959
# Henon Attractor

### From Math Images

Revision as of 10:21, 28 June 2012

Henon Attractor Fields: Dynamic Systems and Fractals Image Created By: Piecewise Affine Dynamics Website: Lozi Maps Henon Attractor This image is a Henon Attractor (named after astronomer and mathematician Michel Henon), which is a fractal belonging to the class of chaotic strange attractors.

# Basic Description

The Henon Attractor is a special kind of fractal that belongs in a group called Strange Attractors, and can be modeled by two general equations. The Henon Attractor is created by applying this system of equations to a starting value over and over again and graphing each result.

### Making the Henon Attractor

Say we took a single starting point (x,y) and plotted it on a graph. Then, we applied the two Henon Attractor equations to the initial point and emerged with a new point that we graphed. Next, we took this new point and again applied the two equations to it and graphed the next new point. If we continued to apply the two equations to each new point in a process called iteration and plotted every outcome from this iteration, we would create a Henon Attractor. See the page on iterated functions to learn more. Furthermore, if we plotted each outcome one at a time, we would observe that the points jump from one random location to another within the image. If you take a look at the animation, you can see the irregularity of the plotted points. Eventually, the individual points become so numerous that they appear to form lines and an image emerges.

### Magnification of the Henon Attractor (1X, 8X, 64X, 512X)

If you magnify this image, you would find that the lines (really many, many points) that appear to be single lines on the larger image are actually sets or bundles of lines, that, if magnified closer, are bundles of lines and so on. This property is called self-similarity, which means that even as you look closer and closer into the image, it continues to look the same. In other words, the larger view of the image is similar to a magnified part of the image.

### History of the Henon Attractor

Michel Henon was a French mathematician and astronomer who developed the Henon Attractor in the 1970s. At that time, Henon was interested in dynamical systems and especially the complicated orbits of celestial objects. The Henon Attractor emerged from Henon's attempt to model the chaotic orbits of celestial objects (like stars) in the midst of a gravitational force.

# A More Mathematical Explanation

Note: understanding of this explanation requires: *Algebra

## Fractal Properties

Zooming in on the Henon Attractor The Henon Attractor is often described as being similar to the Cantor Set. Let us zoom into the Henon Attractor near the doubled-tip of the fractal (as seen in the animation).
# A More Mathematical Explanation

Note: understanding of this explanation requires: Algebra

## Fractal Properties

(Animation: zooming in on the Henon Attractor.)

The Henon Attractor is often described as being similar to the Cantor Set. Let us zoom into the Henon Attractor near the doubled-tip of the fractal (as seen in the animation). We can see that as we continue to magnify the lines that form the structure of the Henon Attractor, these lines become layers of increasingly deteriorating lines that appear to resemble the Cantor Set.

The fractal dimension of the Henon Attractor is not calculable using a single equation such as $D = \frac{\log(n)}{\log(e)}$, but it is estimated to be about 1.261.

## Chaotic System

(Image: Original Henon Attractor, a = 1.4, b = 0.3.)

The Henon system can be described as chaotic and random. However, the system does have structure in that its points settle very close to an underlying pattern called a chaotic attractor. The basic Henon Attractor can be described by the following equations, where $x_n$ is the x-value at the nth iteration:

$x_{n+1} = y_n + 1 - ax^2_n$

$y_{n+1} = bx_n\,$

Astronomer Michel Henon created the original Henon Attractor using the values a = 1.4 and b = 0.3 and starting point (1,1). These are also the values used by the artist to create the featured image at the top of the page. However, by changing the values of a and b, we can obtain Henon Attractors that look slightly different.

## Changing "a" and "b"

Although the original Henon Attractor uses a = 1.4 and b = 0.3, we can alter those values slightly to produce various-looking Henon Attractors. However, the values of a and b are limited to a small range of values, outside of which the fractal ceases to resemble the Henon Attractor. Here are some more examples of Henon Attractors with different a and b values (gallery captions):

• a = 1; b = 0.542
• a = 1.2; b = 0.3
• a = 1.3; b = 0.3 (points need to be enlarged)
• a = 1.4; b = 0.3 (original Henon Attractor)
• a = 1.5; b = 0.2
• a = 1.4; b = 0.1

## Fixed Points

Looking at the system of equations $x_{n+1} = y_n + 1 - ax^2_n$ and $y_{n+1} = bx_n\,$ that describe the fractal, the Henon Attractor uses only two variables (x and y) that are evaluated into themselves. This results in two equilibrium or fixed points for the attractor. Fixed points are such that if the two Henon Attractor equations are applied to a fixed point, the resulting point is the same fixed point. In algebraic terms:

$x_{n+1} = x_n\,$ and $y_{n+1} = y_n\,$

where $x_n$ is the x-value at the nth iteration and $x_{n+1}$ is the x-value at the next iteration. Therefore, if the system ever landed exactly on a fixed point, the fractal would become stagnant. By solving the Henon Attractor's system of equations with a = 1.4 and b = 0.3, we can find that the fixed points for the original Henon Attractor are (0.6314, 0.1894) and (-1.1314, -0.3394).

To solve the system of equations

$x_{n+1} = y_n + 1 - ax^2_n$

$y_{n+1} = bx_n\,$

note that since $x_{n+1} = x_n\,$ and $y_{n+1} = y_n\,$, we can simplify the equations and refer to the variables as just $x$ and $y$, respectively:

$x = y + 1 - ax^2$

$y = bx\,$

By substituting the value of $y$ defined by the second equation into the $y$ in the first equation, we get

$x = bx + 1 - ax^2$

Using the quadratic formula $x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac} }{2a}$ applied to $-ax^2 + (b-1)x + 1 = 0$:

$x = \frac{-(b-1) \pm \sqrt{(b-1)^2 - 4(-a)(1)}}{2(-a)} = \frac{-(b-1) \pm \sqrt{(b-1)^2 + 4a}}{-2a}$

Using a = 1.4, b = 0.3:

$x = 0.6314, -1.1314 \,$

Using y = bx:

$y = 0.1894, -0.3394 \,$
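As a quick numerical check of this algebra, here is a short sketch (illustrative, using the same a and b) that solves the quadratic and verifies that applying the Henon equations to each root returns the same point:

```python
import math

# Fixed points of the Henon map: solve -a*x^2 + (b-1)*x + 1 = 0, then y = b*x.
a, b = 1.4, 0.3
disc = math.sqrt((b - 1) ** 2 + 4 * a)
roots = [(-(b - 1) + s * disc) / (-2 * a) for s in (+1, -1)]
for x in roots:
    y = b * x
    # Applying the Henon equations to a fixed point must return the same point.
    x_next, y_next = y + 1 - a * x * x, b * x
    print(f"({x:.4f}, {y:.4f}) -> ({x_next:.4f}, {y_next:.4f})")
```

Running this prints the two fixed points (-1.1314, -0.3394) and (0.6314, 0.1894), each mapped to itself.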
There are two types of fixed points, stable and unstable. The first fixed point (0.6314, 0.1894), labeled "1" on the image, is located within the attractor itself and is stable. This means that if a point is plotted close to the fixed point, the next iterated points will remain close to the fixed point. The second fixed point (-1.1314, -0.3394), labeled "2", is considered unstable, and it is located outside of the bounds of the attractor. An unstable fixed point is such that if the system gets close to the fixed point, the next iterated points rapidly move away from the fixed point.

# About the Creator of this Image

Piecewise Affine Dynamics is a wiki site created by a group of French mathematicians, dedicated to providing information about "dynamic systems defined by piecewise affine transformations".

# Future Directions for this Page

A better, less vague description of how sections of the Henon Attractor resemble the Cantor Set. Also, the description of the Henon Attractor can be expanded to include a discussion about the fractal's "basin of attraction".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156772494316101, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/25593?sort=oldest
## Functoriality of Poincaré duality and long exact sequences

Hi all,

Today I was playing with the cohomology of manifolds and I noticed something intriguing. Although I might just have been caught out by a couple of enticing coincidences, it feels enough like there might be something going on that I thought I'd put it out here.

Let $M$ be an $n$-manifold with boundary $\partial M$. We write out the long exact homology sequence for the pair $(M, \partial M)$: $$\cdots \to H_k(M) \to H_k(M, \partial M) \to H_{k - 1}(\partial M) \to \cdots$$ Let's apply Poincaré duality termwise, and keep the arrows where they were out of sheer faith. What we get is $$\cdots \to H^{n - k}(M, \partial M) \to H^{n - k}(M) \to H^{n - k}(\partial M) \to \cdots$$ Surprisingly, this is the long exact cohomology sequence for the pair $(M, \partial M)$!

To my mind, two things here are weird. The first is that intuitively, any functoriality properties Poincaré duality possesses should be arrow-reversing. The second is that we have a shift - but not a shift by a multiple of 3. So the boundary map in the homology sequence 'maps' to something that doesn't change degree in the cohomology sequence.

Let's play the same game with the Mayer-Vietoris sequence. For simplicity, suppose now $M$ is without boundary. Write $M = A \cup B$ where $A$ and $B$ are $n$-submanifolds-with-boundary of $M$ and $A \cap B$ is a submanifold of $M$ with boundary $\partial A \cup \partial B$. Then we have $$\cdots \to H_k(A \cap B) \to H_k(A) \oplus H_k(B) \to H_k(M) \to \cdots$$ Hitting it termwise with Poincaré duality, and cruelly and unnaturally keeping the arrows where they are once again, we get $$\cdots \to H^{n - k}(A \cap B, \partial A \cup \partial B) \to H^{n - k}(A, \partial A) \oplus H^{n - k}(B, \partial B) \to H^{n - k}(M) \to \cdots$$ This looks unfamiliar, but by looking at cochains it's not hard to see that there actually is a long exact sequence with these terms. However, this time we don't have the weird shift.

Now is there anything going on here, or just happenstance? Is there really a sense in which Poincaré duality is functorial with respect to long exact sequences? If so, what's the 'Poincaré dual' of the long exact sequence of the pair $(M, N)$ where $N$ is a tamely embedded submanifold of $M$?

Edit: Realised that in the final l.e.s. the arrows should actually go the other way, which is slightly less impressive. Even so...

- I'm having trouble seeing why your first pair of sequences isn't arrow-reversing. It looks contravariant to me ... or was that the point? – S. Carnahan♦ May 22 2010 at 18:55
- What I meant was we have ... -> K -> L -> M -> ... mapping to ... -> PD(K) -> PD(L) -> PD(M) -> ... rather than ... -> PD(M) -> PD(L) -> PD(K) -> ... – Saul Glasman May 23 2010 at 10:12
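As a quick sanity check of the claimed shift (a worked example on the simplest case, not part of the original thread), take $M = D^2$, so $n = 2$ and $\partial M = S^1$. The degree-one segment of the homology sequence of the pair and its termwise dual line up as $$H_1(D^2) \to H_1(D^2, S^1) \to H_0(S^1) \qquad\text{and}\qquad H^1(D^2, S^1) \to H^1(D^2) \to H^1(S^1),$$ and indeed $H_1(D^2) = 0 = H^1(D^2, S^1)$, $H_1(D^2, S^1) = 0 = H^1(D^2)$, and $H_0(S^1) = \mathbb{Z} = H^1(S^1)$. This is exactly what Lefschetz duality $H_k(M) \cong H^{n-k}(M, \partial M)$ and $H_k(M, \partial M) \cong H^{n-k}(M)$, together with Poincaré duality $H_{k-1}(\partial M) \cong H^{n-k}(\partial M)$ on the closed boundary, predicts.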
## 1 Answer

One of the standard proofs of Poincaré duality, at least for those manifolds that have handle decompositions, provides a reason for some of these naturality properties. Every piecewise linear manifold, or every smooth manifold, has a handle decomposition, and many but not all topological manifolds also do. (Amazingly enough, the only exceptions are in 4 dimensions.) A handle decomposition gives rise to two different CW cellulations on the manifold, one using cores and the other using co-cores. Then this proof of Poincaré duality posits that the CW chain complex of one cellulation is identical to the CW cochain complex of the other cellulation. You can extend this coincidence of chain complexes to both of your examples, the Mayer-Vietoris sequence and the exact sequence of a pair. Obtaining identical chain complexes also gives you other information, for instance that the Bockstein maps are the same.

- I'd never heard/read «cellulation». Is that your coinage? – Mariano Suárez-Alvarez May 22 2010 at 20:15
- I heard that term from Mike Freedman. It sounded good to me. – Greg Kuperberg May 22 2010 at 20:19
- This is very interesting; I have seen this proof of PD, but I didn't imagine it would lie behind the phenomena I asked about. Does this mean that similar things don't happen for other cohomology theories? – Saul Glasman May 23 2010 at 10:14
- I don't know. I would suppose that similar things could happen for other cohomology theories, if you figured out how to compute them from a CW complex. The CW chain complex is really a spectral sequence in disguise. I suppose that that spectral sequence always exists, I just don't know what it does for generalized cohomology theories. It should also be said that this CW model is an explanation for your question, but probably not the explanation. – Greg Kuperberg May 23 2010 at 16:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512273669242859, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/19890-factorising-explaination-print.html
# Factorising explanation

• October 2nd 2007, 08:07 PM, Local_lunatic

I've just been doing some revising and I came across a couple of factorisation questions that I'm not quite sure how to work out: $x^4 - a^6$ and $x - x^5$. If anyone could explain how to do them it would be really useful.

• October 2nd 2007, 08:17 PM, earboth

Hello, I assume that you mean $x^4-a^6$. This is a difference of squares which can be factored to:

$x^4-a^6 = (x^2 - a^3)(x^2 + a^3)$

For $x-x^5 = x(1-x^4)$, factor out the common factor. You get the result in the bracket by dividing each summand by the factor in front of the bracket:

$x-x^5 = x\left(\frac xx-\frac{x^4}{x}\right) = x(1-x^4)$

• October 2nd 2007, 09:46 PM, Local_lunatic

That's the answer I got for the second one too, but the book had a different one. Actually, the answer comes to the same thing, it's just written differently: $x(1+x^2)(1+x)(1+x)$. Which one am I meant to give in a test?

• October 3rd 2007, 05:24 PM, SnipedYou

Are you sure it doesn't say $x(1+x^{2})(1+x)(1-x)$? You wouldn't want to make a typo like that on a test (I have done it before and lost 5 points for one sign, so watch for it).

• October 3rd 2007, 08:23 PM, CaptainBlack

If the question asks you to factorise something, full marks will be given for as complete a factorisation as possible under the constraints you are expected to observe (probably factors with real coefficients).

RonL
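For checking factorisations like these mechanically, here is a small illustrative sketch using SymPy, which factors over the rationals by default (the exact printed term order may differ from what is shown in the comments):

```python
import sympy as sp

x, a = sp.symbols('x a')

# Difference of squares: (x^2)^2 - (a^3)^2
print(sp.factor(x**4 - a**6))   # a product of (x**2 - a**3) and (x**2 + a**3)

# Full factorisation, equivalent to x*(1 - x)*(1 + x)*(1 + x**2)
print(sp.factor(x - x**5))
```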
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9676436185836792, "perplexity_flag": "middle"}
http://www.physicsforums.com/library.php?do=view_item&itemid=798
magnetic field

Definition/Summary

Magnetic field ($\boldsymbol{B}$) is force per charge per speed. It is a vector (strictly, a pseudovector), measured in units of N.s/C.m = N/A.m = T (tesla).

The force from a magnetic field $\boldsymbol{B}$ on a charge $q$ with velocity $\boldsymbol{v}$ is $q\,(\boldsymbol{v}\times\boldsymbol{B})$. The force from a magnetic field $\boldsymbol{B}$ on a current $I$ in a straight wire with vector length $\boldsymbol{l}$ is the Laplace force, $I\,(\boldsymbol{l}\times\boldsymbol{B})$.

A magnetic field is produced (induced) by moving electric charge, such as a current. An endless electric solenoid with current $I$ and constant pitch (turns per length) $n$ has a magnetic moment density $\boldsymbol{h}$ along the solenoid, with $|\boldsymbol{h}|\ =\ nI$, which produces a magnetic field along the middle, $\boldsymbol{B}\ =\ \mu_o\boldsymbol{h}$. In empty space, an endless electric solenoidal field with magnetic moment density field $\boldsymbol{H}$ is the same as a magnetic field $\boldsymbol{B}\ =\ \mu_o\boldsymbol{H}$.

Magnetic moment density ($\boldsymbol{H}$) is a vector, current times pitch, measured in amp-turns per metre (A/m).

$\mu_o$ is a universal constant which should really be $1\ N/A^2$, but is actually defined as $4\pi\ 10^{-7}\ N/A^2$, so as to make the amp (A) a useful everyday unit.

Equations

Lorentz force: $q\,(\boldsymbol{E}\ +\ \boldsymbol{v}\times\boldsymbol{B})$

Laplace force: $I\,(\boldsymbol{l}\times\boldsymbol{B})$

Force on a moment: $\nabla(\boldsymbol{m}\cdot\boldsymbol{B})$

Biot-Savart law: $$\boldsymbol{B} = \frac{\mu_oI}{4\pi} \int \frac{d\boldsymbol{l} \times \hat{\boldsymbol{r}}}{r^2}$$

Gauss' Law for Magnetism: $\nabla\cdot\mathbf{B}\ =\ 0$

Faraday's Law: $\nabla\times\mathbf{E}\ =\ - \frac{\partial\mathbf{B}}{\partial t}$

$\boldsymbol{B} = \mu_o(\boldsymbol{H} + \boldsymbol{M})$

See Also: electric field, electric units, Maxwell's equations, susceptibility

Extended explanation

Electromagnetic field: The magnetic field $\boldsymbol{B}$ is a vector, but it transforms (between observers with different velocities) as three of the six components of a 2-form, the electromagnetic field, $(\boldsymbol{E};\boldsymbol{B})$.

Lorentz force: The magnetic force is part of the whole Lorentz force, $q\,(\boldsymbol{E}\ +\ \boldsymbol{v}\times\boldsymbol{B})$. The magnetic force on a stationary charge ($\boldsymbol{v}\ =\ 0$) is zero. (Unless that charge has a magnetic moment, see next section.) On a moving charge, it changes the direction but not the speed … so (in a constant magnetic field) the charge moves with constant speed in a circle … its kinetic energy stays the same … the field does no work on it.

Force on a current: The Lorentz force on an uncharged stationary conductor (such as a wire) is zero, unless a current is flowing through it: that means that the (positively charged) nuclei are stationary, but some of the (negatively-charged) electrons are moving, and therefore are affected by a magnetic force, $(\sum\ q\boldsymbol{v})\times\boldsymbol{B}$. $\sum\ q\boldsymbol{v}$ for the moving electrons is the sum of charge times distance per time, = distance times charge per time, = distance times current. Accordingly, the force on a current-carrying stationary straight wire of vector length $\boldsymbol{l}$ is the Laplace force $I\ (\boldsymbol{l}\times\boldsymbol{B})$.
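To make the two force formulas concrete, here is a small numerical sketch; the charge, fields, current, and wire length below are arbitrary illustrative values, not taken from the entry:

```python
import numpy as np

# Lorentz force F = q (E + v x B)
q = 1.6e-19                      # charge (C)
E = np.array([0.0, 0.0, 0.0])    # electric field (V/m)
v = np.array([1.0e5, 0.0, 0.0])  # velocity (m/s)
B = np.array([0.0, 0.0, 2.0])    # magnetic field (T)
F_lorentz = q * (E + np.cross(v, B))
print("Lorentz force (N):", F_lorentz)

# Laplace force F = I (l x B) on a straight wire of vector length l
I = 3.0                          # current (A)
l = np.array([0.5, 0.0, 0.0])    # wire length vector (m)
F_laplace = I * np.cross(l, B)
print("Laplace force (N):", F_laplace)
```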
Force on a moment: Even a stationary electron has a magnetic moment, as if it were spinning with a finite radius and angular speed. The force from a magnetic field $\boldsymbol{B}$ on a magnetic moment $\boldsymbol{m}$ is $\nabla(\boldsymbol{m}\cdot\boldsymbol{B})$. This is the force which enables a magnetic field to attract stationary magnetisable material in which the electron moments are not random.

Field from a current: The Biot-Savart law states that the magnetic field at point $\boldsymbol{r}$ induced by a current $I$ in a wire whose line element is $d\boldsymbol{l}$ is $\boldsymbol{B} = (\mu_oI/4\pi) \int (d\boldsymbol{l} \times \hat{\boldsymbol{r}})/r^2$. More convenient is the law for the magnetic field along the middle of a solenoid of constant pitch $n$ and current $I$: $|\boldsymbol{B}|\ =\ \mu_onI$.

Magnetic moment density (dipole moment density): The magnetic moment of one turn of a current $I$ enclosing a planar area $A$ is the vector $\boldsymbol{m}$ along the normal (perpendicular) direction, with $|\boldsymbol{m}|\ =\ IA$. So the magnetic moment density of a solenoid with a current $I$ and with pitch $n$ (in turns per metre) is the vector $\boldsymbol{h}$ along the solenoid, of magnitude $|\boldsymbol{m}|$ times number of turns over volume, i.e. $|\boldsymbol{h}|\ =\ nI$. Magnetic moment density is a vector, measured in units of amp-turns per metre $(A/m)$. A magnetised material also has magnetic moment density, from the loops of "bound current" constituting its magnetisation.

Two ways of measuring a magnetic field: A magnetic field can be measured according to its effect, or its cause. Its effect comes from the Lorentz force: force per charge per speed. So it can be measured in units of N/C(m/s), or N/(C/s).m, or N/A.m (newton per amp-metre). This is defined as the tesla (T). Its cause, in most cases, is loops of current (artificially, from solenoids, or naturally, from "bound current" in magnetised material), and its strength is proportional to the magnetic moment density of such solenoids or material. So it can be measured in the same units as magnetic moment density: amp-turns per metre (A/m). Historically, the B field has always been measured in tesla, and the H and M fields in amp-turns per metre: but there is no reason why they cannot be measured in the same units.

What is µ0? µo is the conversion factor between tesla ($T\ =\ N/A.m$) and amp-turns per metre ($A/m$): so it has units of $N/A^2$.

Why isn't µo = $1\ N/A^2$ (so that it needn't be mentioned)? Well, it would be, buuuut …

i] in SI units, a factor of 4π keeps cropping up! … so we multiply by 4π

ii] that would make the amp that current which in a pair of wires a metre apart would produce a force between them of 2 N/m … which would make most electrical appliances run on micro-amps! … so, for practical convenience only, we make µo $10^7$ smaller, and the amp $10^7$ larger!

(So the amp is that current which in a pair of wires a metre apart would produce a force between them of $2\times 10^{-7}$ N/m, and µo is $4\pi\times 10^{-7}\ N/A^2$ (= $4\pi\times 10^{-7}$ H/m).) (For historical details, see http://en.wikipedia.org/wiki/Magnetic_constant.)
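The definition of the amp quoted above is easy to verify numerically. Using the standard parallel-wire formula $F/L = \mu_o I_1 I_2 / 2\pi d$ (a textbook result, not stated in the entry itself), a short sketch reproduces the quoted $2\times 10^{-7}$ N/m:

```python
import math

mu0 = 4 * math.pi * 1e-7   # N/A^2, the defined value discussed above
I1 = I2 = 1.0              # currents (A)
d = 1.0                    # separation (m)

force_per_metre = mu0 * I1 * I2 / (2 * math.pi * d)
print(force_per_metre)     # 2e-07 N/m, matching the text
```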
How can it be appropriate to say that empty space contains a magnetic moment density, varying from point to point, when there are no actual loops of current anywhere near? Because any magnetic field can be replaced by an identical solenoidal field, as follows:

Let's define a solenoidal field as a region R of space with a "honeycomb" of thin hexagonal solenoids (they needn't be hexagonal, but that makes them fit nicely), each with a (different) current $I_i$, and a (different) pitch $n_i$ (pitch is turns per length). The solenoids aren't straight; they can be curved into any shape. That causes ("induces") the whole region R to be filled with a magnetic field, of "Lorentzian" strength $\mu_o n_i I_i = \mu_o h_i$ inside each solenoid, where $h_i$ is the magnetic moment density of each solenoid, measured in amp-turns per metre.

Now consider any B field in any region R. We can fill R with an imaginary honeycomb of solenoids whose sides follow the B field lines (i.e. lines of constant |B|, and whose tangent at each point is parallel to the B field at that point), and whose current or pitch (or both) are adjusted so that the solenoidal field equals the B field along the centre line of each solenoid … and by making the number of solenoids large enough (i.e., the diameters small enough), we can make the solenoidal field match the whole B field to any required degree of accuracy.

In other words: in the limit, any actual B field can be replaced by a purely solenoidal field, which is naturally described in amp-turns per metre.

Commentary

boisebrats @ 09:08 AM Mar3-13: It would seem that if this were so, all other planets (Mars, Mercury, etc.) would exhibit the same magnetic field with varying mathematical predictability...

Velikovsky @ 03:24 PM Mar1-13: Is it a possibility that the Earth's magnetic field is a result of its rotation within the electrical/plasmic field/flux given off by the Sun? If the Earth were to theoretically stop its axial rotation about itself and also to cease its orbit about the Sun, would its magnetic field not fall to comparatively feeble levels?

SteveDave @ 12:34 AM Feb21-13: Per Dr. J. Marvin Herndon in 2007, the Earth's electromagnetic field is generated by a possible fission reactor system at the Earth's core, enlayered by an unknown liquid system flowing in the opposite direction, then the liquid core just beneath the rock mantle layer that again turns opposite of its lower layer, but the same direction as the iron ore core; this generates a magnetic convection oven, if you will, that then generates our magnetic field. I'm not sure if this makes sense; if someone could please elaborate. Thanks.

girish k @ 01:54 AM Jul30-12: What will be the effect of current or voltage on the substrate's magnetic properties?

Kigami @ 05:14 PM May24-12: lol, the Earth's magnetic field is a result of electron shedding in the liquid iron core; the core forms random vortices in the liquid, which in turn excite the electrons of the iron, and the ones that wiggle free flow out as a magnetic field: geodynamics as explained by Kigami (that's me)

Redbelly98 @ 06:24 PM Apr25-12: Fomular, please post questions like this in one of the subject areas that you'll find at www.physicsforums.com

Fomular @ 03:36 AM Apr25-12: What causes the Earth's magnetic field?

MissingPerson @ 10:32 AM Apr16-12: Could there be a polar opposite to the electromagnetic spectrum?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917426347732544, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/72354/embedding-of-local-representation-into-automorphic-representation/81552
## embedding of local representation into automorphic representation

Assume $v$ is a place of a number field $k$, finite or not. Let $\pi_v$ be an irreducible admissible generic representation of $GL_n(k_v)$. Is it always true that we can find some irreducible generic automorphic representation $\Pi$ of $GL_n(\mathbb{A}_k)$ with $v$-component exactly isomorphic to $\pi_v$?

A form of the famous generalized Ramanujan conjecture says that if $\Pi$ is cuspidal, then every component is tempered. So the above question is a kind of converse to the Ramanujan conjecture.

It is known that if $v$ is a finite place, and $\pi_v$ is supercuspidal, then $\Pi$ always exists, and in fact we can take $\Pi$ to be a cuspidal representation.

Many thanks for any answer or references related to this question.

- What do you mean by generic, in this context? – Alain Valette Aug 8 2011 at 15:03
- Here it means there exists some automorphic form $\phi$ in $\Pi$, such that the associated Whittaker function $W_{\phi}$ is not identically zero. – unknown (google) Aug 8 2011 at 15:30
- Aren't there uncountably many local representations, but only countably many automorphic representations? – anton Aug 8 2011 at 19:28
- @AD: Yes, if you restrict to cuspidal GL(n) aut. reps. – David Hansen Aug 8 2011 at 21:20
- @David: What exactly do you mean by automorphic representation? I used to understand a representation which occurs discretely in $L^2(G(\mathbb{Q})\backslash G(\mathbb{A}))$. If you fix a level, i.e. a compact open at the finite places, you get countably many automorphic representations by spectral geometry (cuspidals plus residues of Eisenstein series). As there are essentially countably many levels, you're done. What am I missing? – anton Aug 9 2011 at 7:20

## 1 Answer

I don't know if you're still interested in the question, but Arthur proves something like this in his upcoming book on representations of classical groups (http://www.claymath.org/cw/arthur/pdf/Book.pdf):

Lemma 6.2.2 says that for $G=SO(n)$ or $Sp(2n)$, a local field $F\neq\Bbb C$, and a square-integrable irreducible representation $\pi$ of $G(F)$, there is a global field $K$, a place $v$, and an automorphic representation $\Pi$ in the discrete spectrum that has $\pi$ as the $v$-component and is spherical at all other finite places. (He needs the lemma to use the trace formula over the constructed global field $K$.)

And I think I've seen something like this also for other groups somewhere else, at least for $GL(n)$ (with weaker conditions on the remaining places).

- Thank you very much! – unknown (google) Nov 21 2011 at 21:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9132847785949707, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/90825/resolving-a-paradox-concerning-an-expected-value?answertab=active
# Resolving a paradox concerning an expected value

We have a coin that has a probability $p>1/2$ of coming up heads (and probability $1-p$ of coming up tails). We now play the following game:

1. We start with a fortune of one dollar.
2. We toss the coin. If it comes up heads, we double our current fortune, and we repeat this step (step 2). However, if it comes up tails, we lose all of our fortune, i.e. go broke, and stop playing.

Note that after $n$ plays, our fortune is $2^n$ with probability $p^n$, and zero with probability $1-p^n$. This means that the expected value of our fortune after $n$ plays is $2^np^n = (2p)^n$. Since $p>1/2$, we have $2p>1$, which tells us that our expected value after $n$ plays approaches infinity as $n$ approaches infinity.

On the other hand, our probability of going broke after $n$ plays, $1-p^n$, approaches one as $n$ approaches infinity. In other words, we know for sure that we will eventually go broke.

My question is this: How can we, in the infinity, both possess an infinite fortune and yet be completely broke? How can this paradox be resolved?

- This isn't exactly the St. Petersburg paradox. (Not that you were necessarily implying that; just to clarify.) – joriki Dec 12 '11 at 16:11
- This is much like a sequence of functions on $[0,1]$, all of whose integrals are $1$, but whose pointwise limit is the $0$ function. It is more a problem with how we think of limits. – Thomas Andrews Dec 12 '11 at 16:39

## 2 Answers

How can we, in the infinity, both possess an infinite fortune and yet be completely broke?

First, as already explained by @joriki, this statement should be corrected as: one possesses a mean infinite fortune and one is broke almost surely. Second, rephrasing things like I just did hints at an occurrence of the interplay between different convergence modes.

Indeed, let $X_n$ denote the random fortune after $n$ plays and $X$ the random fortune after infinitely many plays. Both exist; in particular $X_n\to X$ almost surely and $X=0$ almost surely. Hence, in a sense, the mean fortune after infinitely many plays is $\mathrm E(X)=0$. On the other hand, the mean fortune after $n$ plays is $\mathrm E(X_n)=(2p)^n$, which does not go to zero if $p\geqslant\frac12$.

All this means that we are facing a case where the expectation of the limit is not the limit of the expectations, that is, $\mathrm E(X_n)\not\to\mathrm E(X)$ although $X_n\to X$ (almost surely). Well, these things just happen...

How can we, in the infinity, both possess an infinite fortune and yet be completely broke?

The problem is an imprecise use of concepts like "what happens at infinity". The following are correct:

• The expected value of your fortune eventually increases above any finite bound.
• The probability that you haven't gone broke eventually decreases below any positive bound.

This is usually expressed by

• The expected value goes to infinity.
• The probability goes to zero.

Note, however, that these latter statements derive their meaning solely from having been defined to mean precisely the former statements. Whatever you may associate with the second set of statements that goes beyond the first set of statements is not part of their mathematical definition. Thus, there is no paradox because there is no point "at infinity" at which you are simultaneously expectedly wealthy and certainly broke.
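A small Monte Carlo sketch (with illustrative values $p = 0.6$ and $n = 10$ tosses, not taken from the thread) shows the two limits side by side: the empirical mean tracks $(2p)^n$ while the surviving fraction tracks $p^n$:

```python
import random

p, n, trials = 0.6, 10, 200_000

total_fortune = 0
still_playing = 0
for _ in range(trials):
    fortune = 1
    for _ in range(n):
        if random.random() < p:
            fortune *= 2     # heads: double the fortune
        else:
            fortune = 0      # tails: broke, stop playing
            break
    total_fortune += fortune
    still_playing += fortune > 0

print("empirical mean fortune:", total_fortune / trials)
print("theoretical mean (2p)^n:", (2 * p) ** n)        # about 6.19
print("fraction not yet broke:", still_playing / trials)
print("theoretical p^n:", p ** n)                      # about 0.006
```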
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543649554252625, "perplexity_flag": "middle"}
http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Simple_linear_regression
# Simple linear regression

(Figure: Okun's law in macroeconomics is an example of simple linear regression. Here the dependent variable, GDP growth, is presumed to be in a linear relationship with the changes in the unemployment rate.)

In statistics, simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable. In other words, simple linear regression fits a straight line through the set of n points in such a way that makes the sum of squared residuals of the model (that is, vertical distances between the points of the data set and the fitted line) as small as possible.

The adjective simple refers to the fact that this regression is one of the simplest in statistics. The slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that it passes through the center of mass (x, y) of the data points.

Other regression methods besides the simple ordinary least squares (OLS) also exist (see linear regression model). In particular, when one wants to do regression by eye, people usually tend to draw a slightly steeper line, closer to the one produced by the total least squares method. This occurs because it is more natural for one's mind to consider the orthogonal distances from the observations to the regression line, rather than the vertical ones as the OLS method does.

## Fitting the regression line

Suppose there are n data points {yi, xi}, where i = 1, 2, …, n. The goal is to find the equation of the straight line

$y = \alpha + \beta x, \,$

which would provide a "best" fit for the data points. Here the "best" will be understood as in the least-squares approach: such a line that minimizes the sum of squared residuals of the linear regression model. In other words, numbers α and β solve the following minimization problem:

$\text{Find }\min_{\alpha,\,\beta}Q(\alpha,\beta),\text{ where } Q(\alpha,\beta) = \sum_{i=1}^n\hat{\varepsilon}_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2\$

By using either calculus, the geometry of inner product spaces, or simply expanding to get a quadratic in α and β, it can be shown that the values of α and β that minimize the objective function Q [1] are

$\begin{align} \hat\beta & = \frac{ \sum_{i=1}^{n} (x_{i}-\bar{x})(y_{i}-\bar{y}) }{ \sum_{i=1}^{n} (x_{i}-\bar{x})^2 } = \frac{ \sum_{i=1}^{n}{x_{i}y_{i}} - \frac1n \sum_{i=1}^{n}{x_{i}}\sum_{j=1}^{n}{y_{j}}}{ \sum_{i=1}^{n}({x_{i}^2}) - \frac1n (\sum_{i=1}^{n}{x_{i}})^2 } \\[6pt] & = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \overline{x^2} - \bar{x}^2 } = \frac{ \operatorname{Cov}[x,y] }{ \operatorname{Var}[x] } = r_{xy} \frac{s_y}{s_x}, \\[6pt] \hat\alpha & = \bar{y} - \hat\beta\,\bar{x}, \end{align}$

where rxy is the sample correlation coefficient between x and y, sx is the standard deviation of x, and sy is correspondingly the standard deviation of y.
A horizontal bar over a variable means the sample average of that variable. For example: $\overline{xy} = \tfrac{1}{n}\textstyle\sum_{i=1}^n x_iy_i\ .$

Substituting the above expressions for $\hat\alpha$ and $\hat\beta$ into $y = \hat\alpha + \hat\beta x, \,$ yields

$\frac{ y-\bar{y}}{s_y} = r_{xy} \frac{ x-\bar{x}}{s_x}$

This shows the role $r_{xy}$ plays in the regression line of standardized data points.

### Linear regression without the intercept term

Sometimes, people consider a simple linear regression model without the intercept term: y = βx. In such a case, the OLS estimator for β simplifies to $\hat\beta = (\overline{x y}) / (\overline{x^2})$.

## Numerical properties

1. The line goes through the "center of mass" point (x, y).
2. The sum of the residuals is equal to zero, if the model includes a constant: $\textstyle\sum_{i=1}^n\hat\varepsilon_i=0.$
3. The linear combination of the residuals, in which the coefficients are the x-values, is equal to zero: $\textstyle\sum_{i=1}^nx_i\hat\varepsilon_i=0.$

## Model-based properties

Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.

### Unbiasedness

The estimators $\hat\alpha$ and $\hat\beta$ are unbiased. This requires that we interpret the estimators as random variables and so we have to assume that, for each value of x, the corresponding value of y is generated as a mean response α + βx plus an additional random variable ε called the error term. This error term has to be equal to zero on average, for each value of x. Under such an interpretation, the least-squares estimators $\hat\alpha$ and $\hat\beta$ will themselves be random variables, and they will unbiasedly estimate the "true values" α and β.

### Confidence intervals

The formulas given in the previous section allow one to calculate the point estimates of α and β — that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are. That is, how much the estimators $\hat\alpha$ and $\hat\beta$ can deviate from the "true" values of α and β. The latter question is answered by the confidence intervals for the regression coefficients. In order to construct the confidence intervals, usually one of two possible assumptions is made: either that the errors in the regression are normally distributed (the so-called classic regression assumption), or that the number of observations n is sufficiently large so that the actual distribution of the estimators can be approximated using the central limit theorem.

### Normality assumption

Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean β and variance $\sigma^2/\sum(x_i-\bar{x})^2,$ where $\sigma^2$ is the variance of the error terms (see Proofs involving ordinary least squares).
At the same time the sum of squared residuals Q is distributed proportionally to $\chi^2$ with (n−2) degrees of freedom, and independently from $\hat\beta.$ This allows us to construct a t-statistic

$t = \frac{\hat\beta - \beta}{s_{\hat\beta}}\ \sim\ t_{n-2},$   where $s_\hat{\beta} = \sqrt{ \frac{\tfrac{1}{n-2}\sum_{i=1}^n \hat{\varepsilon}_i^{\,2}} {\sum_{i=1}^n (x_i -\bar{x})^2} }$

which has a Student's t-distribution with (n−2) degrees of freedom. Here $s_{\hat\beta}$ is the standard error of the estimator $\hat\beta.$

Using this t-statistic we can construct a confidence interval for β:

$\beta \in \Big[\ \hat\beta - s_{\hat\beta} t^*_{n-2},\ \hat\beta + s_{\hat\beta} t^*_{n-2}\ \Big]$   at confidence level (1−γ), where $t^*_{n-2}$ is the (1−γ/2)-th quantile of the $t_{n-2}$ distribution. For example, if γ = 0.05 then the confidence level is 95%.

Similarly, the confidence interval for the intercept coefficient α is given by

$\alpha \in \Big[\ \hat\alpha - s_{\hat\alpha} t^*_{n-2},\ \hat\alpha + s_{\hat\alpha} t^*_{n-2}\ \Big]$   at confidence level (1−γ), where

$s_{\hat\alpha} = s_{\hat\beta}\sqrt{\tfrac{1}{n}\textstyle\sum_{i=1}^n x_i^2} = \sqrt{\tfrac{1}{n(n-2)}\left(\textstyle\sum_{j=1}^n \hat{\varepsilon}_j^{\,2} \right) \frac{\sum_{i=1}^n x_i^2} {\sum_{i=1}^n (x_i -\bar{x})^2} }$

(Figure: the US "changes in unemployment - GDP growth" regression with the 95% confidence bands.)

The confidence intervals for α and β give us the general idea where these regression coefficients are most likely to be. For example, in the "Okun's law" regression shown at the beginning of the article the point estimates are $\hat\alpha=0.859$ and $\hat\beta=-1.817.$ The 95% confidence intervals for these estimates are

$\alpha\in\big[\,0.76,\,0.96\,\big], \quad \beta\in\big[\,{-2.06},\,{-1.58}\,\big]$   with 95% confidence.

In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown that at confidence level (1−γ) the confidence band has hyperbolic form given by the equation

$\hat{y}|_{x=\xi} \in \Bigg[ \hat\alpha + \hat\beta \xi \pm t^*_{n-2} \sqrt{ \textstyle\frac{1}{n-2} \sum\hat{\varepsilon}_i^{\,2} \cdot \Big(\frac{1}{n} + \frac{(\xi-\bar{x})^2}{\sum(x_i-\bar{x})^2}\Big) } \Bigg] .$

### Asymptotic assumption

The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile $t^*_{n-2}$ of Student's t-distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1⁄(n−2) is replaced with 1⁄n. When n is large such a change does not alter the results considerably.

## Numerical example

This example concerns the data set from the Ordinary least squares article. This data set gives average weights for humans as a function of their height in the population of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.
| xi — Height (m) | 1.47 | 1.50 | 1.52 | 1.55 | 1.57 | 1.60 | 1.63 | 1.65 | 1.68 | 1.70 | 1.73 | 1.75 | 1.78 | 1.80 | 1.83 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| yi — Mass (kg) | 52.21 | 53.12 | 54.48 | 55.84 | 57.20 | 58.57 | 59.93 | 61.29 | 63.11 | 64.47 | 66.28 | 68.10 | 69.92 | 72.19 | 74.46 |

There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums:

$\begin{align} & S_x = \sum x_i = 24.76,\quad S_y = \sum y_i = 931.17 \\ & S_{xx} = \sum x_i^2 = 41.0532, \quad S_{xy} = \sum x_iy_i = 1548.2453, \quad S_{yy} = \sum y_i^2 = 58498.5439 \end{align}$

These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.

$\begin{align} & \hat\beta = \frac{nS_{xy}-S_xS_y}{nS_{xx}-S_x^2} = 61.272 \\ & \hat\alpha = \tfrac{1}{n}S_y - \hat\beta \tfrac{1}{n}S_x = -39.062 \\ & s_\varepsilon^2 = \tfrac{1}{n(n-2)} \big( nS_{yy}-S_y^2 - \hat\beta^2(nS_{xx}-S_x^2) \big) = 0.5762 \\ & s_\beta^2 = \frac{n s_\varepsilon^2}{nS_{xx} - S_x^2} = 3.1539 \\ & s_\alpha^2 = s_\beta^2 \tfrac{1}{n} S_{xx} = 8.63185 \end{align}$

The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is t*13 = 2.1604, and thus confidence intervals for α and β are

$\begin{align} & \alpha \in [\,\hat\alpha \mp t^*_{13} s_\alpha \,] = [\,{-45.4},\ {-32.7}\,] \\ & \beta \in [\,\hat\beta \mp t^*_{13} s_\beta \,] = [\, 57.4,\ 65.1 \,] \end{align}$

The product-moment correlation coefficient might also be calculated:

$\hat{r} = \frac{nS_{xy} - S_xS_y}{\sqrt{(nS_{xx}-S_x^2)(nS_{yy}-S_y^2)}} = 0.9945$

This example also demonstrates that sophisticated calculations will not overcome the use of badly prepared data. The heights were originally given in inches, and have been converted to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm, this is not a correct conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric: if this is done, the results become

$\begin{align} & \hat\beta = 61.6746 \\ & \hat\alpha = -39.7468 \\ \end{align}$

Thus a seemingly small variation in the data has a real effect.

## See also

• Proofs involving ordinary least squares - derivation of all formulas used in this article in the general multidimensional case
• Deming regression - simple linear regression with errors measured non-vertically
• Linear segmented regression

## References

1. Kenney, J. F. and Keeping, E. S. (1962). "Linear Regression and Correlation." Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252-285.
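The worked example is easy to reproduce in a few lines; the following sketch recomputes the sums and the point estimates from the table above:

```python
n = 15
x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
     1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
     63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

Sx, Sy = sum(x), sum(y)
Sxx = sum(v * v for v in x)
Sxy = sum(u * v for u, v in zip(x, y))

# The estimator formulas from the article.
beta = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
alpha = Sy / n - beta * Sx / n
print(round(beta, 3), round(alpha, 3))   # expect roughly 61.272 and -39.062
```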
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 37, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8679067492485046, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Range_of_a_function
# Range (mathematics)

For other uses, see range.

(Figure: $f$ is a function from domain X to codomain Y. The smaller oval inside Y is the image of $f$. Sometimes "range" refers to the codomain and sometimes to the image.)

In mathematics, the range of a function refers to either the codomain or the image of the function, depending upon usage. The codomain is a set containing the function's output, whereas the image is only the part of the codomain where the elements are outputs of the function. For example, the function $f(x) = x^2$ is often described as a function from the real numbers to the real numbers, meaning that the codomain is R, but its image is the set of non-negative real numbers.

Some books say that the range of this function is its codomain, the set of all real numbers, reflecting that the function is real-valued. These books call the actual output of the function the image. This is the current usage for range in computer science. Other books say that the range is the function's image, the set of non-negative real numbers, reflecting that a number can be the output of this function if and only if it is a non-negative real number. In this case, the larger set containing the range is called the codomain.[1] This usage is more common in modern mathematics.

## Examples

Let f be a function on the real numbers $f\colon \mathbb{R}\rightarrow\mathbb{R}$ defined by $f(x) = 2x$. This function takes as input any real number and outputs a real number two times the input. In this case, the codomain and the image are the same (i.e., the function is a surjection), so the range is unambiguous; it is the set of all real numbers.

In contrast, consider the function $f\colon \mathbb{R}\rightarrow\mathbb{R}$ defined by $f(x) = \sin(x)$. If the word "range" is used in the first sense given above, we would say the range of f is the codomain, all real numbers; but since the output of the sine function is always between -1 and 1, "range" in the second sense would say the range is the image, the closed interval from -1 to 1.

## Formal definition

Standard mathematical notation allows a formal definition of range. In the first sense, the range of a function must be specified; it is often assumed to be the set of all real numbers, and {y | there exists an x in the domain of f such that y = f(x)} is called the image of f. In the second sense, the range of a function f is {y | there exists an x in the domain of f such that y = f(x)}. In this case, the codomain of f must be specified, but is often assumed to be the set of all real numbers.

In both cases, image f ⊆ range f ⊆ codomain f, with at least one of the containments being equality.

## References

1. Walter Rudin, Functional Analysis, Second edition, p. 99, McGraw Hill, 1991, ISBN 0-07-054236-8
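For a finite sample of the domain, the distinction is easy to see computationally. In this illustrative sketch the declared codomain would be "all integers", while the image is just the set of values actually attained:

```python
# f(x) = x^2 on a finite sample of the domain.
domain_sample = range(-3, 4)
image = {x * x for x in domain_sample}

print(sorted(image))   # [0, 1, 4, 9]: only non-negative values are attained,
                       # even though the declared codomain contains all integers.
```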
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8980754613876343, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/58694/numbers-p-with-the-property-that-the-sum-of-the-divisors-of-p-including-1-and-p?answertab=active
# Numbers p with the property that the sum of the divisors of p (including 1 and p) equals that of p + 1

So, I wondered if the property described in the title (namely, the property that the sum of the divisors of $n$ equals the sum of the divisors of $n+1$) ever occurred, and went to compute it. Here are the numbers with this property up to 20,000 (inclusive):

14, 206, 957, 1334, 1364, 1634, 2685, 2974, 4364, 14841, 18873, 19358, ...

Can anyone explain this growth? Are there infinitely many of them? (It sure looks that way.) Is there a formula for the nth term of this sequence, or something?

- I've edited to incorporate the title into the body, to make the question self-contained. – Gerry Myerson Aug 20 '11 at 23:28

## 2 Answers

Whether or not this sequence goes on forever is an unsolved problem. You can find its terms and more information at the OEIS. It is A002961.

Edit: these are the numbers for which $n$ and $n+1$ have the same sum of divisors.

It seems that the numbers you listed are squarefree numbers or numbers of the form $p^{k}q$, where $p$ is the smallest prime factor of such a number and $q$ is a squarefree number.

- The OEIS page has a link to the first 4800 numbers with $\sigma(n)=\sigma(n+1)$. I'd be more impressed if you checked more than just the 12 numbers OP listed. – Gerry Myerson Aug 21 '11 at 12:57
- My previous observation is false: $193893 = 3 \cdot 7^2 \cdot 1319$. – Sylvain Julien Aug 21 '11 at 14:51
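A brute-force search reproduces the terms listed in the question; here is a minimal sketch (the trial-division divisor sum is an illustrative choice, fast enough for this range):

```python
def sigma(n):
    """Sum of all divisors of n, including 1 and n."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:       # avoid double-counting a square root divisor
                total += n // d
        d += 1
    return total

hits = [n for n in range(1, 20001) if sigma(n) == sigma(n + 1)]
print(hits)   # [14, 206, 957, 1334, 1364, 1634, 2685, 2974, 4364, 14841, 18873, 19358]
```

For instance, $\sigma(14) = 1 + 2 + 7 + 14 = 24$ and $\sigma(15) = 1 + 3 + 5 + 15 = 24$.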
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227136373519897, "perplexity_flag": "head"}
http://mathoverflow.net/questions/43313/good-references-for-rigged-hilbert-spaces/43506
## Good references for Rigged Hilbert spaces?

Every now and then I attempt to understand better quantum mechanics and quantum field theory, but for a variety of possible reasons, I find it very difficult to read any kind of physicist account, even when the physicist is trying to be mathematically respectable. (I am not trying to be disrespectful or controversial here; take this as a confession of stupidity if it helps.) I am generally interested in finding online mathematical accounts which ideally would come close to being of "Bourbaki standard": definition-theorem-proof and written for mathematicians who prefer conceptual explanations, and ideally with tidy or economical notation (e.g., eschewing thickets of subscripts and superscripts).

More specifically, right now I would like a (mathematically trustworthy) online account of rigged Hilbert spaces, if one exists. Maybe I am wrong, but the Wikipedia account looks a little bit suspect to me: they describe a rigged Hilbert space as consisting of a pair of inclusions $i: S \to H$, $j: H \to S^\ast$ of topological vector space inclusions, where $S^\ast$ is the strong dual of $S$, $H$ is a (separable) Hilbert space, $i$ is dense, and $j$ is the conjugate linear isomorphism $H \simeq H^\ast$ followed by the adjoint $i^\ast: H^\ast \to S^\ast$. This seems a little vague to me; should $S$ be more specifically a nuclear space or something? My guess is that a typical application would be where $S$ is Schwartz space on $\mathbb{R}^4$, with its standard dense inclusion in $L^2(\mathbb{R}^4)$, so $S^\ast$ consists of tempered distributions.

I also hear talk of a nuclear spectral theorem (due to Gelfand and Vilenkin) used to help justify the rigged Hilbert space technology, but I don't see precise details easily available online.

## 5 Answers

Some time ago I was interested in rigged Hilbert spaces to get a better understanding of quantum physics. On that occasion I collected some references on this subject, see below. It's quite comprehensive. A good starting point for an overview could be the works of Madrid and Gadella. Note that there are different versions of "rigged Hilbert space" (in the context of quantum physics) in the literature.

J.-P. Antoine. Dirac formalism and symmetry problems in quantum mechanics. I. General Dirac formalism. Journal of Mathematical Physics, 10(1):53--69, 1969.

N. Bogoliubov, A. Logunov, and I. Todorov. Introduction to Axiomatic Quantum Field Theory, chapter 1 (Some Basic Concepts of Functional Analysis), section 4 (The Space of States), pages 12--43, 113--128. Benjamin, Reading, Massachusetts, 1975.

R. de la Madrid. Quantum Mechanics in Rigged Hilbert Space Language. PhD thesis, Departamento de Física Teórica, Facultad de Ciencias, Universidad de Valladolid, 2001. (available here)

M. Gadella and F. Gómez. A unified mathematical formalism for the Dirac formulation of quantum mechanics. Foundations of Physics, 32:815--869, 2002. (available here)

M. Gadella and F. Gómez. On the mathematical basis of the Dirac formulation of quantum mechanics. International Journal of Theoretical Physics, 42:2225--2254, 2003.

M. Gadella and F. Gómez. Dirac formulation of quantum mechanics: Recent and new results. Reports on Mathematical Physics, 59:127--143, 2007.

I.M. Gelfand and N.J. Vilenkin. Generalized Functions, vol. 4: Some Applications of Harmonic Analysis, chapters 2-4, pages 26--133. Academic Press, New York, 1964.

A.R. Marlow.
Unified dirac-von neumann formulation of quantum mechanics. i. mathematical theory. Journal of Mathematical Physics, 6:919--927, 1965. E.Prugovecki. The bra and ket formalism in extended hilbert space. J. Math. Phys., 14:1410--1422, 1973. J.E. Roberts. The dirac bra and ket formalism. Journal of Mathematical Physics, 7(6):1097--1104, 1966. J.E. Roberts. Rigged hilbert spaces in quantum mechanics. Commun. math. Phys., 3:98--119, 1966. (available here) Tjøstheim. A note on the unified dirac-von neumann formulation of quantum mechanics. Journal of Mathematical Physics, 16(4):766--767, 4 1975. Edit I remember that there is also a discussion about Gelfand triples in physics in the Funktionalanalysis books by Siegfried Großmann but I don't have a copy handy the moment. Though it is in german it might be interesting for you, too. - Thanks, student. I am not particularly near a good library where I can access these to see which would suit my purposes. I remember seeing something by Madrid on the arXiv and it wasn't quite what I was looking for, but I'll look at his thesis. Which of these do you think are in definition-theorem-proof format at a level of rigor that would satisfy mathematicians? – Todd Trimble Oct 23 2010 at 20:18 When I collected the references I was interested to have as much rigor as possible. If I remember correctly all of those are written in a usual mathematical style. In Madrid's thesis there are many examples concerning quantum mechanics. For a more general approach I would look at M.Gadella and F.Gómez. A unified mathematical formalism for the dirac formulation of quantum mechanics. Foundations of Physics, 32:815--869, 2002 In this paper they tried to unify some (perhaps most) versions of rigorous frameworks for rigged Hilbert space in view of quantum physics. – student Oct 23 2010 at 21:20 I think the problem with this subject is, that there are many different attempts to give a rigorous framework for rigged Hilbert spaces in physics. I don't know if there is a generally accepted useful version. Hence it's not surprising that there are now good free online resources about this subject. – student Oct 23 2010 at 21:27 @student: thanks again for all your references, but regarding your last comment, you've so far given me one online resource (Madrid), which gives no proofs. So this is not in a style that I was asking for above. – Todd Trimble Oct 23 2010 at 22:10 I added links to those articles which are currently open access (only two more, sorry) – student Oct 23 2010 at 23:27 show 2 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. "Generalized functions volume 4" by Gelʹfand, Vilenkin, (Math review number 0146653) has a long an detailed discussion of rigged Hilbert spaces and nuclear spaces. The book by Glimm and Jaffe has a brief summary of the theory. - Thank you, Richard. At physicsforums.com/showthread.php?t=294488 I read some hearsay about some alleged flaws in their arguments; do you know what they're talking about and whether a mathematician should be worried? – Todd Trimble Oct 23 2010 at 17:48 The Springer online Encyclopedia of Mathematics' entry on RHS looks more rigorous albeit also more succinct than Wikipedia; for another online intro see the nlab entry. In addition to the references listed there, a rigorous discussion of the RHS can be found (as far as I recall -- I do not have a copy handy) e.g. 
in the two-volume book Principles of Advanced Mathematical Physics by Robert D. Richtmyer. Also, it appears that, unlike the physics community, the name Gelfand triple (rather than RHS) is more commonly used by the mathematicians. - Thanks for the tips, mathphysicist. The first sentence of that Springer Encyclopedia reference gives the same definition as wikipedia (so maybe that definition is perfectly adequate after all), but then a little later it says, "The most interesting case is that in which is $\Phi$ [my $S$] is a nuclear space." Then they cite a spectral theorem, but I can't tell if they mean to include the nuclear hypothesis in the theorem or not. – Todd Trimble Oct 23 2010 at 17:27 Um, mathphysicist, I know you don't realize it, but I was the one who wrote (most of) that nLab article! – Todd Trimble Oct 23 2010 at 20:09 Well, Todd, unfortunately I didn't, sorry :( Should have looked at the edits history :) – mathphysicist Oct 24 2010 at 2:24 Please feel free to remove the nLab reference from my answer if you find this necessary and/or appropriate. – mathphysicist Oct 24 2010 at 2:25 This is not precisely related to your question, but a certain notion of rigged Hilbert space occurs in the theory of C*-algebras. Particularly, one should look at the work of Marc Rieffel, e.g. http://math.berkeley.edu/~rieffel/papers/morita_equivalence.pdf. I figured I'd mention this because it is decidedly mathematical, and a useful idea. - Thanks, Jon. Interesting blast from the not-too-distant past. – Todd Trimble Oct 25 2010 at 12:50 You're welcome, Todd! – Jon Bannon Oct 25 2010 at 14:21 Correct me if I'm wrong, but Rieffel's rigged spaces are what we call Hilbert C*-modules, right? Is that related to the rigged spaces everyone here is talking about? I'm asking because I really don't understand... Anyway, if so, a nice book would also be Lance's "Hilbert C*-Modules: A Toolkit for Operator Algebraists", since its a toolkit and all, very concise and extremely pedagogic. – Yul Otani Nov 14 at 19:00 @Yul: I was wondering the same thing way back when I asked this (hence my first sentence). I don't see a direct relation, come to think of it...except via the Gelfand triple bit...but that is not a precise relation. If you happen to find out, please let me know. – Jon Bannon Nov 14 at 23:00 I would highly recommend looking at the chapter on Sobolev Towers in the book by Engel and Nagel One-Parameter Semigroups for Linear Evolution Equations or the "baby edition" A Short Course on Operator Semigroups. It provides a really nice example of rigged Hilbert spaces. For example, if $A:D(A) \subset L^2 \to L^2$ is the (Dirichlet) Laplacian, then one can identify $D(A^n)$, $n=1,2,\ldots$ with Sobolev spaces and $D(A^{-n})$ with the negative Sobolev spaces (i.e. extrapolation spaces of $A$). This concept can be taken further if one considers analytic semigroups and fractional powers of operators and also into the Banach space setting (see Amann's book Linear and Quasilinear Parabolic Problems: Abstract linear theory). Basically, the concept of rigged Hilbert spaces becomes really natural if one keeps PDEs and Sobolev spaces in mind. Finally, the book by Reed and Simon Methods of Modern Mathematical Physics - Vol 1: Functional analysis provides a number of references for rigged Hilbert spaces at the end of Section VII (page 244). -
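To make the Sobolev-tower example concrete, here are two standard instances of a Gelfand triple (these are well-known facts, stated for the reader's convenience rather than taken from the thread). For a bounded domain $\Omega \subset \mathbb{R}^n$, the first rung of the tower is

$$H^1_0(\Omega) \;\subset\; L^2(\Omega) \;\subset\; H^{-1}(\Omega) = \big(H^1_0(\Omega)\big)^\ast,$$

with both inclusions continuous and dense; and the nuclear example anticipated in the question is $\mathcal{S}(\mathbb{R}^4) \subset L^2(\mathbb{R}^4) \subset \mathcal{S}'(\mathbb{R}^4)$, with $\mathcal{S}$ the Schwartz space and $\mathcal{S}'$ the tempered distributions.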
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028428792953491, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/192599-what-will-determinant-matrix.html
# Thread:

1. ## what will be the determinant of the matrix A

Let A be a 3 by 3 matrix over the real numbers satisfying A⁻¹ = I − 2A. Then what will be the determinant of the matrix A? thanks in advance. regards.

2. ## Re: what will be the determinant of the matrix A

Originally Posted by sorv1986: Let A be a 3 by 3 matrix over the real numbers satisfying A⁻¹ = I − 2A. Then what will be the determinant of the matrix A?

Perhaps (perhaps) that is not the exact formulation of the problem. Could you review it?

3. ## Re: what will be the determinant of the matrix A

There are two possible answers. First, the determinant of a matrix is the same as the product of all of its eigenvalues. Second, if A satisfies that equation, its eigenvalues must also. You can manipulate the equation into a quadratic that has two solutions. Since A is 3 by 3, it has three eigenvalues. Since only two numbers satisfy that equation, one of them must be a duplicate eigenvalue. The determinant depends upon which of them is the duplicate.

4. ## Re: what will be the determinant of the matrix A

Originally Posted by FernandoRevilla: Perhaps (perhaps) that is not the exact formulation of the problem. Could you review it?

I'm sorry, but that was the only information given about the matrix A.

5. ## Re: what will be the determinant of the matrix A

det(AB) = det(A)det(B). The determinant is non-zero if the matrix is invertible, and det(A) = 1/det(A⁻¹).

6. ## Re: what will be the determinant of the matrix A

Originally Posted by HallsofIvy: The determinant depends upon which of them is the duplicate.

But that is not possible. We have $A^{-1}=I-2A\Leftrightarrow 2A^2-A+I=0$. That is, $p(\lambda)=2(\lambda-\lambda_1)(\lambda-\lambda_2)$ is an annihilating polynomial of $A$, where $\lambda_1=(1+\sqrt{7}i)/4$ and $\lambda_2=(1-\sqrt{7}i)/4$. As $A\in\mathbb{R}^{3\times 3}$, $A-\lambda_1I\neq 0$ and $A-\lambda_2I\neq 0$. This means that the minimal polynomial of $A$ is just $(1/2)p(\lambda)$, and $\lambda_1,\lambda_2$ are eigenvalues of $A$, as you said. The third one, $\lambda_3$, would necessarily be real, and $A$ is diagonalizable in $\mathbb{C}$. So, $A$ is similar (in $\mathbb{C}$) to $D=\textrm{diag}\;(\lambda_1,\lambda_2,\lambda_3)$ and as a consequence: $2D^2-D+I=0\Rightarrow 2\lambda_i^2-\lambda_i+1=0\;(i=1,2,3)$. Again, as you said, $\lambda_3=\lambda_1$ or $\lambda_3=\lambda_2$, which implies $\det A=\det D=\lambda_1\lambda_2\lambda_3=(1/2)\lambda_3\not\in\mathbb{R}$ (contradiction).

7. ## Re: what will be the determinant of the matrix A

Originally Posted by CaramelCardinal: The determinant is non-zero if the matrix is invertible, and det(A) = 1/det(A⁻¹).

Rigorously true.

8. ## Re: what will be the determinant of the matrix A

Originally Posted by FernandoRevilla: But that is not possible. We have $A^{-1}=I-2A\Leftrightarrow 2A^2-A+I=0$. [...]

How is $\lambda_3=\lambda_1$ or $\lambda_3=\lambda_2$ possible? $\lambda_3$ must be real as the matrix is 3 by 3. Is the problem wrong or something? Still I've got no clue. $2A^2-A+I=0$ can't be the minimal polynomial, as every annihilating polynomial is a multiple of the minimal polynomial. Please help me.

9. ## Re: what will be the determinant of the matrix A

Originally Posted by CaramelCardinal: det(AB) = det(A)det(B). The determinant is non-zero if the matrix is invertible, and det(A) = 1/det(A⁻¹).

How would the determinant of A⁻¹, i.e. det(I − 2A), be calculated?

10. ## Re: what will be the determinant of the matrix A

Originally Posted by sorv1986: Is the problem wrong or something?

Plainly: $A$ does not exist.

Originally Posted by sorv1986: Still I've got no clue. $2A^2-A+I=0$ can't be the minimal polynomial, as every annihilating polynomial is a multiple of the minimal polynomial. Please help me.

The minimal polynomial $\mu(\lambda)$ of $A$ divides any annihilating polynomial of $A$, so $\mu(\lambda)=\lambda-\lambda_1$ or $\mu(\lambda)=\lambda-\lambda_2$ or $\mu(\lambda)=(\lambda-\lambda_1)(\lambda-\lambda_2)$. As $A-\lambda_1I\neq 0$ and $A-\lambda_2I\neq 0$, we conclude that $\mu(\lambda)=(\lambda-\lambda_1)(\lambda-\lambda_2)$ is the minimal polynomial of $A$.
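To pull the thread's conclusion together in one display (nothing here beyond what the posts above establish): multiplying the hypothesis by $A$ gives

$$A^{-1}=I-2A \;\Longrightarrow\; I = A - 2A^2 \;\Longrightarrow\; 2A^2-A+I=0,$$

so every eigenvalue of $A$ satisfies $2\lambda^2-\lambda+1=0$, whose discriminant is $1-8=-7<0$. But a real $3\times 3$ matrix has a real characteristic polynomial of odd degree and hence at least one real eigenvalue. So no real $3\times 3$ matrix satisfies the hypothesis, and the question has no answer as posed.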
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 55, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174094200134277, "perplexity_flag": "head"}
http://mathoverflow.net/questions/32117/set-of-real-numbers-with-positive-measure-containing-no-midpoints/32124
## Set of real numbers with positive measure containing no midpoints

Does there exist a subset E of R with positive measure containing no midpoints (i.e. for x, y distinct in E, (x+y)/2 is not in E)? -

## 3 Answers

According to James Foran, Non-averaging sets, dimension, and porosity, Canad. Math. Bull. 29 (1986) 60-63, "It follows from the Lebesgue Density Theorem that a measurable, non-averaging subset (of $(0,1]$) cannot have positive measure." -

1 Link to the article (open access): cms.math.ca/cmb/v29/p60 – Yemon Choi Jul 16 2010 at 6:56

As already said by Gerry, the answer to your question is negative. However, it becomes positive if you only ask your set to have Hausdorff dimension 1 instead of positive Lebesgue measure; see Salem, R.; Spencer, D. C., On sets which do not contain a given number of terms in arithmetical progression, Nieuw Arch. Wiskunde (2) 23 (1950), 133--143. For a more general recent result see Tamás Keleti, Construction of one-dimensional subsets of the reals not containing similar copies of given patterns, Analysis and PDE Vol. 1 (2008), No. 1, 29-33 (if you do not know this journal, you should have a look at it, and more generally at the web site of the Mathematical Sciences Publishers, by the way.) -

No, such a set cannot exist, and one can prove this using the Lebesgue Density Theorem and a simple pigeonhole argument. In fact all points $x$ which are density points of $E$ will be midpoints for some $y,z \in E$, i.e., $x=\frac{y+z}{2}$. Let $F \subseteq E$ be the set of density points of $E$, and $x \in F$. Then there exists an $\epsilon > 0$ such that $m( B_{\epsilon}(x)\cap F) > \epsilon$. Now if $x$ is not a midpoint of $E$, then $\forall d \in (0,\epsilon)$ at least one of $x-d$ or $x+d$ does not belong to $F$. But then $m( B_{\epsilon}(x)\cap F)= \int_0^{\epsilon} |F\cap \lbrace x-t,x+t\rbrace|\, dt \le \epsilon$, a contradiction!

A set $A$ of real numbers is called universal if every measurable set of positive measure necessarily contains an affine image of $A$. A simple variation of the above argument gives that all finite sets $A$ are in fact universal. However, no example of an infinite universal set is known, and it is a conjecture of Erdős that no infinite universal set exists. This paper has a nice discussion of, and references for, this problem: M. Kolountzakis: Infinite Patterns That Can Be Avoided by Measure, Bull. London Math. Soc. 29 (1997), 4, 415-424. http://fourier.math.uoc.gr/~mk/ps/universal.pdf

As Gerry and Benoît Kloeckner have mentioned, the problem becomes interesting when one considers Hausdorff measure instead of Lebesgue measure. Recently I. Laba and M. Pramanik proved the existence of 3-term arithmetic progressions even in closed sets of Hausdorff dimension close to 1, 'under the condition that E supports a probability measure obeying appropriate dimensionality and Fourier decay conditions'. I. Laba and M. Pramanik: "Arithmetic progressions in sets of fractional dimension", Geom. Funct. Anal. 19 (2009), 429-456. http://www.math.ubc.ca/~ilaba/preprints/progressions-may15.pdf -
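For reference, here is the form of the Lebesgue Density Theorem both answers lean on (a standard statement, added for completeness): for any measurable $E \subseteq \mathbb{R}$,

$$\lim_{\epsilon \to 0^{+}} \frac{m\big(E \cap (x-\epsilon,\, x+\epsilon)\big)}{2\epsilon} = 1 \qquad \text{for a.e. } x \in E.$$

In particular, a set of positive measure has at least one density point, which is exactly what the pigeonhole argument above exploits.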
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8776082992553711, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/48221/spectral-sequence-for-cohomology-of-open-subset
## Spectral sequence for cohomology of open subset

Let $X$ be a smooth compact orientable manifold (or variety) and let $j: U \subset X$ be the complement of a union $\bigcup_{i \in I} X_i$ of smooth compact orientable manifolds. Suppose that $A$ is a sheaf on $X$ - say, the locally constant sheaf $Z$ (although the question below can be asked for coherent sheaves too). Let $Z_U$ be $j_!j^* Z$, i.e. the locally constant sheaf on $U$ extended by zero to $X$. The usual set-theoretic inclusion-exclusion formula leads to a long exact sequence of sheaves $0 \to Z_U \to Z \to \oplus Z_{X_i} \to \oplus Z_{X_i \cap X_j} \to \ldots$ where for a closed subset $f: W \subset X$ one sets $Z_W = f_* f^* Z$. This leads to a spectral sequence whose first page is given by the cohomology of the finite intersections $X_{i_1} \cap \ldots \cap X_{i_s}$, with the differential induced by the combinatorial inclusion-exclusion formula. Are there any examples in which the differential on $E_2$ is nonzero and known explicitly (which means that we also know the $E_2$ terms)? Maybe something in terms of excess intersection bundles for intersections of $X_i$, some Gysin maps, etc.? -

2 Like this? dpmms.cam.ac.uk/~bt219/config.pdf – Ryan Budney Dec 3 2010 at 23:20

Thank you Ryan, you could see well beyond the question I stated :) – Vladimir Baranovsky Dec 6 2010 at 0:30
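For orientation, here is the shape of the first page this resolution produces (my own indexing convention; the degree bookkeeping depends on where one places $Z_U$ in the complex). Resolving $Z_U$ by $Z \to \oplus_i Z_{X_i} \to \oplus_{i<j} Z_{X_i \cap X_j} \to \cdots$ and taking hypercohomology gives

$$E_1^{p,q} \;=\; \bigoplus_{i_1 < \cdots < i_p} H^q\big(X_{i_1} \cap \cdots \cap X_{i_p};\, \mathbb{Z}\big) \;\Longrightarrow\; H^{p+q}(X;\, Z_U), \qquad E_1^{0,q} = H^q(X;\mathbb{Z}),$$

with $d_1$ the alternating sum of restriction maps; the question asks for computable examples of the differentials from $d_2$ onward.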
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9086965322494507, "perplexity_flag": "head"}
http://mathoverflow.net/users/29555?tab=recent
# Martin

233 Reputation · 216 views

## Registered User

Name: Martin · Member for 5 months · Seen 3 hours ago

- 3h · comment · Importance of separability vs. second-countability: What makes the simple fact that images of separable spaces are separable an important theorem?
- May 2 · comment · Reason for studying coherent sheaves on complex manifolds: I think it is due to using * which confuses the renderer because it wants to set things between * as italics. If you replace * by \ast everywhere, then the preview looks fine.
- Apr 27 · comment · Tensor product of C*-algebras of bounded, uniformly continuous functions on metric spaces: Does the following work? If $X_1$ and $X_2$ are not totally bounded, they contain $\varepsilon$-discrete infinite subsets $D_1$ and $D_2$ for some $\varepsilon$. Their closures in the Samuel compactifications should be $\beta D_1$ and $\beta D_2$. Moreover, $D_1 \times D_2$ is $\varepsilon$-discrete, so its closure in the Samuel compactification of $X_1 \times X_2$ is $\beta (D_1 \times D_2)$. But $\beta (D_1 \times D_2) \neq \beta D_1 \times \beta D_2$.
- Apr 25 · comment · separable spaces-QM vs. functional analysis: Since $\psi_n$ is supposed to be a Cauchy sequence, it is contained in some ball of radius $R$ around zero. Assuming $R \gt \varepsilon$, no vector $\psi$ of norm $\geq 2R$ can be near any $\psi_n$.
- Apr 24 · comment · separable spaces-QM vs. functional analysis: "There exists a Cauchy sequence ..." seems to be a typo (since Cauchy sequences are bounded, the condition is obviously nonsensical). Delete "Cauchy" and you get a dense sequence $\psi_n$, hence a countable dense set. For the other direction enumerate the countable dense set to get a sequence satisfying Zettili's condition.
- Mar 29 · comment · if $S \times \Re$ is diffeomorphic to $T \times \Re$ then are S and T diffeomorphic?: No worries, I removed my comment :-)
- Mar 26 · comment · Are there results in "Digit Theory"?
- Mar 26 · awarded ● Critic
- Mar 24 · comment · What information about a locally compact group $G$ is encoded in $C_r^\ast(G)$ which is not in $L^1(G)$?
- Mar 20 · comment · Product of Baire sigma-algebras: NB: this also tells us how to find a meager set that doesn't belong to $\mathcal{E} \otimes \mathcal{E}$: take a universal analytic set $A$ in $X \times X$ and take an open set $U$ such that $M = A \mathbin{\Delta} U$ is meager. Then $M \notin \mathcal{E} \otimes \mathcal{E}$.
- Mar 20 · answered · Product of Baire sigma-algebras
- Mar 17 · comment · Compactness of sigma-algebra for the $L^1$ metrics: @Didier Piau: Dunford and Schwartz, Part I, Section III.7 has a paragraph The metric space $\Sigma(\mu)$ plus some exercises in III.9. It appears in the index "measure space, as a metric space".
- Feb 27 · comment · The point of view of semicats in functional analysis: Thanks for the clarification. You could be interested in reading about operator ideals. These give a vast number of interesting functional-analytic examples of "semicats" different from the compact operators.
- Feb 27 · comment · The point of view of semicats in functional analysis: No immediate point, really. I suppose I'm puzzled about the beating around the bush with monics and epics in title, body of the question and the comments, while the question, as you say, asks something different.
- Feb 26 · comment · The point of view of semicats in functional analysis: What is the difficulty in determining monics and epics? If an operator has non-trivial kernel, it kills some map from $\mathbb{R}$, and if the range is not dense, Hahn-Banach yields a non-zero functional vanishing on the range. Operators with finite-dimensional source or target are compact. This hardly needs a reference, does it?
- Feb 23 · awarded ● Enlightened
- Feb 23 · accepted · Is the closed unit ball of the Hilbert space homeomorphic to the unit sphere?
- Feb 20 · comment · How to see such space is Lindelof?
- Feb 13 · awarded ● Nice Answer
- Feb 13 · answered · Is the closed unit ball of the Hilbert space homeomorphic to the unit sphere?
- Feb 11 · comment · Complexifying a real Banach space and its dual: I believe the standard way is due to Dieudonné: dx.doi.org/10.1090/S0002-9939-1952-0047252-8 where he also proves that the James space is not the underlying real space of a complex Banach space, thus disproving a conjecture of Banach. I think Ivan Singer's Bases in Banach spaces, I contains a discussion of the complexification in quite some detail on the first few pages.
- Feb 10 · comment · pointwise ergodic theorem and mean sojourn time
- Feb 10 · comment · Must the Minkowski sum of a Borel set and a *closed* ball be Borel?
- Feb 9 · comment · isometric embeddings of Cayley graphs in "nice" spaces: Thank you. I found the article: W. Holsztyński, $\mathbf R^n$ as a universal metric space, Notices AMS 25 (3) (1978) A-367.
- Feb 9 · comment · isometric embeddings of Cayley graphs in "nice" spaces: Could you please give a more precise reference to your short note? It is impossible for me to guess either the title of the paper or its author from the information on this page.
- Feb 2 · comment · Fredholm and Compact Operators
- Feb 2 · comment · Equivariant forms and localization
- Feb 2 · comment · Equivariant integration (localization formula)
- Jan 30 · comment · Metrization of weak convergence of signed measures: It seems easier to argue that $X^\ast$ has uncountable dimension as a vector space. Every weak*-neighborhood contains a linear subspace of finite codimension, so the intersection of countably many $0$-neighborhoods contains a subspace of countable codimension; in particular it can't be reduced to $\{0\}$.
- Jan 28 · comment · Behaviour of power series on their circle of convergence: The question came up on SE again: math.stackexchange.com/q/288765 user mrf refers to Lukašenko S. Ju., Sets of divergence and nonsummability for trigonometric series, Vestnik Moskov. Univ. Ser. I Mat. Mekh. 1978, no. 2, 65–70 for the result that there is a $G_\delta$-set which is not a set of convergence.
- Jan 27 · comment · Showing a Banach space is reflexive: The examples you mention are very easy to recognize as non-reflexive. $X$ is reflexive iff $X^\ast$ is reflexive, and closed subspaces of reflexive spaces are reflexive. Identify $(c_0)^\ast = \ell_1$ and $(c_0)^{\ast\ast} = \ell_\infty$. Thus, $c_0$ is not reflexive. It follows that $\ell_1$ and $\ell_\infty$ are not reflexive either. Now show that you can embed $c_0$ as a closed subspace into a space of (continuous) bounded functions, or $\ell_1$ into the dual of such a space, and this covers all of your examples.
- Jan 15 · comment · Algebraic Morse theory
- Jan 6 · comment · definition of operator valued integral with spectral measure
- Jan 6 · comment · Absolute norms and 1-unconditional sums
- Jan 3 · comment · Baire sets of $X$ possess the required Cartesian product property
- Dec 31 · awarded ● Nice Answer
- Dec 30 · comment · Old books still used: I'm really surprised at the suggestion that a book first published in the 1980s should be a serious contender for "oldest books regularly used" in classical analysis.
- Dec 29 · revised · Old books still used (deleted 10 characters in body)
- Dec 29 · answered · Old books still used
- Dec 6 · awarded ● Commentator
- Dec 5 · awarded ● Supporter
- Dec 3 · awarded ● Citizen Patrol
- Dec 3 · answered · Degeneracies for semi-simplicial Kan complexes
- Nov 30 · awarded ● Teacher
- Nov 30 · awarded ● Editor
- Nov 30 · revised · Product of Borel sigma algebras (added 33 characters in body)
- Nov 30 · answered · Product of Borel sigma algebras
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9004071354866028, "perplexity_flag": "middle"}
http://adamazzam.wordpress.com/2012/05/18/sperners-lemma/
# Sperner's Lemma

Posted on May 18, 2012 by Adam Azzam

Sperner's Lemma is an elegant result in discrete mathematics. While it originated in the context of combinatorics and graph colorings, it also has surprisingly powerful applications to analysis and topology. In particular, it provides a combinatorial proof of the Intermediate Value Theorem (below) and the Brouwer Fixed Point Theorem (next time). Its powerful continuous corollaries have earned it the nickname: "The Discrete Mathematician's Intermediate Value Theorem."

To motivate the later and more technical definitions, let's begin with a special case. Let $T$ be a triangle that is triangulated into several smaller subtriangles, whose vertices are labeled either with a $1$, $2$, or $3$. The labeling chosen is special, in that:

1. The main vertices of $T$ are all given different labels.
2. The label of a vertex along any edge of $T$ matches the label of one of the main vertices spanning that edge.

For example: on the bottom-most edge between the corners labeled $1$ and $2$, each vertex in between is either labeled with a $1$ or a $2$. It is similarly the case for the edge whose corners are labeled $2$ and $3$, etc. The vertices on the inside are labeled arbitrarily. Given any triangulation of $T$, a labeling that satisfies these conditions is called a Sperner labeling.

Sperner's Lemma (Triangles). Any Sperner-labeled triangulation of $T$ contains an odd number of elementary triangles possessing all labels.

Like most discrete existence theorems, it's fun to play with it for a while and try to convince yourself that you can't find a counterexample. I should mention not all triangulations are admissible, only a simplicial subdivision, which I'll explain below. Now that we have an intuition for what we are to prove, I'll state it in its full generality. However, I will only furnish the proof of Sperner's lemma for the triangle.

Definition. An $n$-dimensional simplex $S$ is the convex hull of $(n+1)$ affinely independent points in $\mathbb{R}^{m}$ for $m\ge n$. We call these points the vertices of the simplex. For example, a $1$-simplex is a line segment, a $2$-simplex is a triangle, a $3$-simplex is a tetrahedron, and so on. A $k$-face of an $n$-simplex is the convex hull of any $(k+1)$-subset of its vertices. An $(n-1)$-face of an $n$-simplex is called a facet. For example, the facets of a line segment are its endpoints, the facets of a triangle are its edges, and the facets of a tetrahedron are its triangular faces.

Definition. A simplicial subdivision of an $n$-simplex $S$ is a collection of (distinct) smaller $n$-simplices whose union is $S$, with the property that any two of them intersect in a common face, or not at all. The smaller $n$-simplices are called subsimplices, and their vertices are called the vertices of the subdivision.

Definition. Let $S$ be an $n$-simplex and number the facets of $S$ by $1,2,\dots,n+1$. Given a simplicial subdivision of $S$, a Sperner labeling is any labeling of the vertices of the subdivision that satisfies:

• Each vertex of the simplex is labeled by a distinct $1,2,\dots,n+1$.
• Each vertex on the boundary of $S$ falling on the facet labeled $j$ receives the label $j$. (You may label the interior vertices however you like.)

Sperner's Lemma. Any Sperner-labeled triangulation of an $n$-simplex must contain an odd number of fully labeled elementary $n$-simplices. In particular, there is at least one.

To test your understanding so far, it's a good exercise to prove the case for $n=1$. I'll prove it in the case of $n=2$.
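Before the proof, here is a quick computational sanity check of the statement. This is my own sketch, not part of the original post: the grid triangulation, the random labels, and the function name `sperner_count` are all choices made purely for illustration.

```python
import random

def sperner_count(n, seed=0):
    """Count fully labeled cells in a random Sperner labeling of the
    triangle with corners (0,0), (n,0), (0,n), cut into n^2 cells."""
    rng = random.Random(seed)
    label = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            if (i, j) == (0, 0):
                label[(i, j)] = 1
            elif (i, j) == (n, 0):
                label[(i, j)] = 2
            elif (i, j) == (0, n):
                label[(i, j)] = 3
            elif j == 0:                  # edge joining corners 1 and 2
                label[(i, j)] = rng.choice([1, 2])
            elif i == 0:                  # edge joining corners 1 and 3
                label[(i, j)] = rng.choice([1, 3])
            elif i + j == n:              # edge joining corners 2 and 3
                label[(i, j)] = rng.choice([2, 3])
            else:                         # interior vertices: arbitrary
                label[(i, j)] = rng.choice([1, 2, 3])
    full = 0
    for i in range(n):
        for j in range(n - i):
            # "upward" cell with vertices (i,j), (i+1,j), (i,j+1)
            if {label[(i, j)], label[(i + 1, j)], label[(i, j + 1)]} == {1, 2, 3}:
                full += 1
            # "downward" cell, present only away from the hypotenuse
            if i + j <= n - 2 and {label[(i + 1, j)], label[(i, j + 1)],
                                   label[(i + 1, j + 1)]} == {1, 2, 3}:
                full += 1
    return full

for trial in range(5):
    count = sperner_count(10, seed=trial)
    assert count % 2 == 1  # Sperner's lemma predicts an odd count
    print(count)
```

Every run prints an odd number of fully labeled cells, exactly as the lemma demands.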
The higher dimensional cases follow by an induction argument, whose proof is more or less identical.

Proof. Let $T$ be a triangle with a Sperner-labeled triangulation. Think of the triangle as a house, triangulated into triangular rooms. Let's call every line segment in the subdivision labeled $12$ or $21$ a "door" (labeled below). Notice that a fully labeled subtriangle has exactly one door, while any other room with a door has exactly two. This is because any room with at least one door either has no repeated labels (it is fully labeled), or it has one repeated label that appears twice. Since the triangulation is given a Sperner labeling, the only way into the house from the outside is through the facet whose corners are labeled $1$ and $2$. Thus, by Sperner's Lemma for $n=1$, we know that there are an odd number of doors that lead from the outside to the inside.

Let's start at any of the doors on the boundary and walk inside through the door. Either the room you're in has another door for you to exit through, or there are no other doors. If there are no other doors, then you're standing in a room with one door. So you've found a completely labeled room and we've proven existence. Otherwise, continue walking through doors into the one adjacent room, repeating this process (convince yourself why we can never double back on a room). Since the number of rooms is finite, the procedure must terminate, and so at the end you've either found yourself outside or stopped somewhere inside. If you stop inside, then by the previous argument you've found your completely labeled room. If you stop outside, then you've just paired up precisely two doors on the boundary of $T$ that do not lead inside. However, since the number of boundary doors is odd, there must be at least one door that leads, and stays, inside, proving the existence of at least one completely labeled room.

In fact, this shows that there are an odd number of boundary doors that locate a room inside. Moreover, any fully labeled rooms not reachable by paths from the boundary must come in pairs, since we can repeat the same process of walking about, which must terminate at some other such room. Let's summarize our findings. There are an odd number of fully labeled rooms reachable from the outside doors. If there are fully labeled rooms unreachable from the boundary, there are an even number of them. Thus, there are an odd number of fully labeled rooms. $\square$

The continuous corollaries of Sperner's Lemma are primarily deduced by devising a creative Sperner labeling, taking successive approximations, and exploiting compactness in some way or another. This process will be cleared up after the following result, but even clearer after proving the Brouwer Fixed Point Theorem in the next post.

Intermediate Value Theorem. Suppose $f:\mathbb{R}\to \mathbb{R}$ is continuous. Suppose $a,b\in \mathbb{R}$ with $a<b$ and $f(a)\le f(b)$. If $y\in \mathbb{R}$ and $f(a)< y< f(b)$, then there exists $c\in (a,b)$ so that $f(c)=y$.

Proof. Suppose there exists no such $c$. Let $T^1$ be a partition of $[a,b]$ with $\text{mesh}(T^1)\le 2^{-1}$. For each vertex $x\in T^1$, label

• $\lambda(x)=1$ if $f(x)<y$ and
• $\lambda(x)=2$ if $f(x)>y$.

Our initial assumption ensures this is a Sperner labeling. Hence there exists a completely labeled subinterval, call it $[a_1,b_1]$, and assume without loss of generality that $\lambda(a_1)=1$ and $\lambda(b_1)=2$. We may assume this since, as we will see, the order of $a_1$ and $b_1$ doesn't matter.

Let $T^2$ be a partition of $[a_1,b_1]$ with $\text{mesh}(T^2)<2^{-2}$, labeled as before. Then we can find a subinterval $[a_2,b_2]$ that is completely labeled (again assuming without loss of generality that $\lambda(a_2)=1, \lambda(b_2)=2$). Continue by induction to furnish sequences $\{a_n\}$ and $\{b_n\}$ with $\left|{b_n-a_n}\right|<2^{-n}$, $\lambda(a_n)=1$, and $\lambda(b_n)=2$. By compactness, each admits a convergent subsequence, say $\{a_{n_{k}}\}$ and $\{b_{n_{j}}\}$. However, since $\left|b_n-a_n\right|<2^{-n}$ for all $n$, these two subsequences share a common limit, call it $x$. But then we see by the continuity of $f$ that

• $f(x)=f\left(\lim_{k\to \infty}a_{n_{k}}\right)=\lim_{k\to \infty}f(a_{n_{k}})\le y$
• $f(x)=f\left(\lim_{j\to \infty}b_{n_{j}}\right)=\lim_{j\to \infty}f(b_{n_{j}})\ge y$

So $f(x)=y$, contradicting our assumption that no such point exists. $\square$

Thanks for reading. The definitions and proof of Sperner's Lemma are due to Francis Su. The proof of the intermediate value theorem given here, to the best of my memory, is my own. Tomorrow I will deduce the Brouwer Fixed Point Theorem from Sperner's Lemma.

This entry was posted in Discrete Mathematics, Fixed Point Theory, Topology by Adam Azzam.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 89, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919329047203064, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/167927-integrals-sigma-notation.html
# Thread:

1. ## Integrals & Sigma Notation

Consider the definite integral [shown as an image in the original post]. Then the right Riemann sum obtained by subdividing the interval into $n$ equal parts is [expression not shown], where [expression not shown]. (Your answer should contain the variables $n$ and $i$.) After simplifying this algebraically and breaking it up into three parts, we can write this Riemann sum as [expression not shown], where [the coefficients] should only depend on $n$ and not on $i$. After applying the formulas [not shown], we can rewrite the Riemann sum as the following function of $n$: [not shown]. Taking the limit of this as $n \to \infty$, we obtain [not shown].

I'm stuck at the first part. I know that $\Delta x$ is supposed to be $(b-a)/n$, so that would be $2/n$; other than that I am not sure where to go.

2. $f(x) = 3x^2 + 2x + 3$

The right Riemann sum on the interval $[a,b]$ with $n$ equal parts is:

$$\sum_{i=1}^n f(x_i)\Delta x$$

where $\Delta x = \frac{b-a}{n}$ and $x_i = a + i\Delta x$.

3. Also, this thread could be useful.

Fernando Revilla
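Here is a quick numerical illustration of the formula in post 2 (a sketch; since the limits of integration were in an image, the interval $[0,2]$ is assumed purely for concreteness, consistent with the OP's $\Delta x = 2/n$):

```python
def f(x):
    return 3 * x**2 + 2 * x + 3

def right_riemann(f, a, b, n):
    """Right Riemann sum: sum_{i=1}^n f(a + i*dx) * dx with dx = (b-a)/n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

# On [0, 2] the exact integral is x^3 + x^2 + 3x evaluated at 2, i.e. 18.
for n in (10, 100, 1000, 100000):
    print(n, right_riemann(f, 0.0, 2.0, n))
```

As $n$ grows, the printed sums decrease toward the exact value $18$ (right sums overestimate an increasing integrand).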
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8961805701255798, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/100042/additive-integer-valued-functions-on-the-module-category/103867
## Additive integer-valued functions on the module category

This is inspired by the theorem mentioned in http://mathoverflow.net/questions/99916/why-is-this-theorem-attributed-to-serre. But I'm not sure if it's research level. If not, please feel free to vote for closing.

Let $R$ be a ring and let $Mod_R$ be the category of finitely generated $R$-modules. What are examples of additive integer-valued functions on $Mod_R$, i.e. functions $\lambda: Mod_R \to \mathbb{Z}$ satisfying $\lambda(M) = \lambda(M') + \lambda(M'')$ for short exact sequences $$0 \to M' \to M \to M'' \to 0$$ in $Mod_R$?

Two obvious examples that come to my mind are:

1. $\lambda(M)=\dim_k M$ if $R=k$ is a field.
2. $\lambda(M)=\text{length}(M)$ if $R$ is Artinian.

-

If $R$ is Artinian, any such function is uniquely and freely determined by what it does to simple modules, isn't it? – Qiaochu Yuan Jun 19 at 23:30

@Ralph: Spelling of length is wrong in the last statement. – Chandrasekhar Jun 20 at 1:29

@Qiaochu: Yes it is. – Ralph Jun 20 at 6:59

@Chandrasekhar: Thanks for the hint. – Ralph Jun 20 at 7:01

@Ralph: When you think about these sorts of questions and try to construct the universal example, you end up with G-theory automatically. – Martin Brandenburg Jun 20 at 7:14

## 5 Answers

Such functions are the same as homomorphisms $G_0(R)\rightarrow\mathbb{Z}$ from the Grothendieck group of your category, the $G$-theory group of degree $0$. The answer is only trivial from this formal point of view. The computation of $G_0(R)$ is non-trivial in general. If your ring is commutative noetherian and regular, then $G_0(R)=K_0(R)$ is the $K$-theory group of degree $0$, i.e. additive functions only depend on the behaviour on projectives. Let me complete my answer with the examples you consider in your question. If $R=k$ is a field, $G_0(k)=K_0(k)=\mathbb{Z}$ generated by the isomorphism class of $k$, therefore all additive functions are multiples of the dimension. If $R$ is artinian, then $G_0(R)$ is the free abelian group on simple $R$-modules, hence not all additive functions are multiples of the length in general, but for each simple module $S$, $\lambda(S)=n_S\cdot\operatorname{length}(S)$, and any choice of such $n_S$ determines an additive function $\lambda$. -

1 $G_0(R)=K_0(R)$ even holds when $R$ is a left-noetherian left-regular ring (see Rosenberg's book). – Martin Brandenburg Jun 20 at 7:13

This has been studied for group rings to some extent. It is a theorem of Wolfgang Lück that a homomorphism $\varphi \colon G_0(\mathbb Z \Gamma) \to \mathbb R$ can be constructed with the property $\varphi([\mathbb Z \Gamma]) = 1$ if $\Gamma$ is amenable. Moreover, such a homomorphism cannot exist if $\Gamma$ contains a non-abelian free group. It is conjectured that the existence is a characterization of amenability. Moreover, if $\Gamma$ is torsionfree and amenable, the conjecture is that the range of $\varphi$ is $\mathbb Z$; this is called Atiyah's conjecture.

Sometimes, maps like the one you consider exist on subcategories of the category of f.g. modules. An easy example is the category of f.g. abelian groups $A$ such that $A \otimes_{\mathbb Z} \mathbb Q=0$, i.e. torsion groups. Then, the map $A \mapsto \log |A|$ is additive.

There is also a version for f.g. modules over the group ring of an amenable group. It can be shown that assigning to a f.g. module $M$ over $\mathbb Z \Gamma$ ($\Gamma$ is amenable here) the entropy of the natural $\Gamma$-action on the Pontryagin dual of $M$ is additive. This is Yuzvinskii's Additivity Formula as proved by Hanfeng Li in: Hanfeng Li, Compact group automorphisms, addition formulas and Fuglede-Kadison determinants, Ann. of Math. (2) 176 (2012), no. 1, 303--347. If $\Gamma$ is finite, then this entropy is essentially the logarithm of the cardinality of $M$. For infinite $\Gamma$, this invariant of $M$ is equal to the so-called $\ell^2$-torsion of $M$, if it can be defined. For $\Gamma = \mathbb Z^d$, this invariant is related to the Mahler measure and is of number-theoretic significance. -

Thanks for this very interesting answer. A comment and a question: 1) The log-example can also be used if $R$ is finite (i.e. $\lambda(M)=\log|M|$), giving some variant of the length. 2) In your explanation you relate the Atiyah conjecture and Lück's map $\varphi$. From a historical point of view, was the Atiyah conjecture first? – Ralph Jun 20 at 9:37

Atiyah asked in the late 70s if $\ell^2$-Betti numbers are always rational. This question has turned into a conjecture over the years, with various modifications for torsionfree groups and groups with bounded torsion. Finally, some have been disproved by Grigorchuk-Zuk, and later Austin, Grabowski and Schick-Zuk (in various papers). The torsionfree case is still open. If true, then Lück's map above is integer-valued. Lück defined the map in his study of dimension functions in the 90s; also Elek has a definition of rank of modules over the group ring of an amenable group. – Andreas Thom Jun 21 at 9:28

If R is a domain and $\lambda$ has non-negative values on objects of $Mod_R$, then $\lambda$ is a multiple of generic rank. See this question: Nonnegative additive functions on coherent sheaves. -

Let me suggest some references:

Northcott and Reufel, "A generalization of the concept of length" (http://qjmath.oxfordjournals.org/content/16/4/297.full.pdf)

Vamos, "Additive functions and duality over Noetherian rings" (http://qjmath.oxfordjournals.org/content/19/1/43.full.pdf)

Zanardo, "Multiplicative invariants and length functions over valuation domains" (http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jca/1323364358)

If your interest goes in this direction, I know that people in Padova are working on generalizations of the work of Zanardo to classify length functions of Prüfer domains. The classification given by Vamos for Noetherian rings was generalized by him in his (unpublished) PhD thesis to a classification for rings with Gabriel-Krull dimension. I recently gave an alternative proof of Vamos' result for Grothendieck categories with Gabriel-Krull dimension based on the formalism of torsion theories; contact me if you are interested. -

In general there are many such functions, and the non-trivial ones are pretty interesting. I will focus on the commutative case, as that's what I know best. So let $R$ be a commutative noetherian ring and $X \subseteq \text{Spec}(R)$ be an artinian subscheme. Let $F_{\bullet}$ be a perfect complex such that the homologies are supported on $X$. Then it defines a function $G_0(R) \to \mathbb Z$: $$\chi_{F_{\bullet}}: M \mapsto \chi(F_{\bullet}\otimes M)$$ Here $\chi$ of a complex with support in $X$ is just the alternating sum of the lengths of the homologies.

When $R$ is artinian and $F_{\bullet}$ is just the single module $R$, one recovers the length example in your question. The interesting problem is: when is such a function a "new" one? As Mahdi pointed out, if the function is non-negative on the modules, then it would just be a multiple of rank (suitably defined). Thus to make it interesting one would need it to be negative on some modules. The issue now has deep consequences in intersection theory. Namely, if such a complex exists, one can often (say if $R$ is local and Cohen-Macaulay) replace it with the resolution of an artinian module of finite projective dimension. But then the definition would agree with Serre's intersection multiplicity. Hence such an example would imply that Serre's definition does not work in a singular setting (as intersection multiplicity should not be negative!). This was an open problem for a while. The first example was constructed in a famous paper by Dutta-Hochster-McLaughlin. The construction is very complicated (involving a $60\times60$ matrix built essentially by hand). This result has been extended by Levine, Roberts-Srinivas, Miller-Singh, Kurano and others. In fact, such an example is now understood in the more general framework of numerically nontrivial elements of Grothendieck groups of local rings. -

That's quite interesting. Thanks for the example. – Ralph Aug 9 at 6:44
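As a concrete footnote to the formal answer above (a standard example, added here for illustration; it is not from the thread): for $R=\mathbb{Z}$, the short exact sequence

$$0 \to \mathbb{Z} \xrightarrow{\;n\;} \mathbb{Z} \to \mathbb{Z}/n \to 0$$

forces $\lambda(\mathbb{Z}/n) = \lambda(\mathbb{Z}) - \lambda(\mathbb{Z}) = 0$ for every additive $\lambda$, so $\lambda(M) = \operatorname{rank}(M)\cdot\lambda(\mathbb{Z})$ for any finitely generated abelian group $M$. This matches $G_0(\mathbb{Z}) \cong \mathbb{Z}$, generated by $[\mathbb{Z}]$, and shows why the torsion subcategory in Andreas Thom's answer is exactly where genuinely different (real-valued) invariants like $A \mapsto \log|A|$ live.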
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9017124772071838, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/119960-domain-range.html
# Thread:

1. ## Domain And Range

1) State the domain and range of each of the following functions:

a) $y = 3x + 4$
b) $y = 3(x-4)^2 + 5$
c) $y = \sqrt{2x-10}$ (the root is over all the numbers for c)
d) $y = \sqrt{-x^2 + 4}$ (the square root sign is over all the numbers for d as well)
e) $y = 2|1-x| - 17$
f) $y = x/(x^2-x-6)$

Can someone help me find these, plus explain why, for d), e) and f)?

2. Originally Posted by foreverbrokenpromises: 1) State the domain and range of each of the following functions: [...]

You can think of the domain as all the numbers that $x$ is 'allowed' to be. As in c)... You know that you can't take the square root of a negative number and get a real number. So, $x$ can only be those numbers that make the radicand greater than or equal to zero. In other words, solve $2x-10\geq0$ and you will have your domain.

3. (f) You cannot include in the domain points where the function is undefined; therefore values where the denominator of a rational function is equal to zero should not be included in the domain. So the domain is $(-\infty, -2) \cup (-2, 3) \cup (3, \infty)$.

4. Thanks, I've figured out a), b) and c) but I still can't figure out how to complete the following:

d) $y = \sqrt{-x^2 + 4}$
e) $y = 2|1-x| - 17$
f) $y = x/(x^2-x-6)$

Please help!

5. Originally Posted by foreverbrokenpromises: Thanks, I've figured out a), b) and c) but I still can't figure out how to complete the following: [...]

For d) apply the principle VonNemo19 outlined in post 2.

d) $-x^2+4 \geq 0 \: \rightarrow \: 4-x^2 \geq 0$, so $(2-x)(2+x) \geq 0$. Spoiler: $-2 \leq x \leq 2$

e) What values of $x$ are not allowed in this equation? Spoiler: There are none, so $x$ is all the real numbers.

f) $x^2-x-6=(x+2)(x-3)$. Remember that any value that makes the denominator equal to 0 is not allowed. Spoiler: $x \in \mathbb{R} \: , \: x \neq -2,3$
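Since the thread settles only the domains, here is the range for d), worked in the same spirit (my addition, not from the thread): on the domain $-2 \le x \le 2$ we have

$$0 \;\le\; 4 - x^2 \;\le\; 4 \quad\Longrightarrow\quad 0 \;\le\; \sqrt{4-x^2} \;\le\; 2,$$

and both endpoints are attained (at $x=\pm 2$ and at $x=0$), so the range of d) is $[0,2]$. For e), since $|1-x|\ge 0$ with equality at $x=1$, the range is $[-17,\infty)$.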
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8639565706253052, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Exponential_growth
# Exponential growth

The graph illustrates how exponential growth (green) surpasses both linear (red) and cubic (blue) growth.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function's current value. Exponential decay occurs in the same way when the growth rate is negative. In the case of a discrete domain of definition with equal intervals it is also called geometric growth or geometric decay (the function values form a geometric progression). The exponential growth model is also known as the Malthusian growth model.

The formula for exponential growth of a variable x at the (positive or negative) growth rate r, as time t goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is

$x_t = x_0(1+r)^t$

where x0 is the value of x at time 0. For example, with a growth rate of r = 5% = 0.05, going from any integer value of time to the next integer causes x at the second time to be 1.05 times (i.e., 5% larger than) what it was at the previous time.

## Examples

• Biology
  • The number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted. Typically the first organism splits into two daughter organisms, who then each split to form four, who split to form eight, and so on.
  • A virus (for example SARS, or smallpox) typically will spread exponentially at first, if no artificial immunization is available. Each infected person can infect multiple new people.
  • Human population, if the number of births and deaths per person per year were to remain at current levels (but also see logistic growth). For example, according to the United States Census Bureau, over the last 100 years (1910 to 2010), the population of the United States of America has been increasing exponentially at an average rate of one and a half percent a year (1.5%). This means that the doubling time of the American population (depending on the yearly growth in population) is approximately 50 years.[1]
  • Many responses of living beings to stimuli, including human perception, are logarithmic responses, which are the inverse of exponential responses; the loudness and frequency of sound are perceived logarithmically, even with very faint stimulus, within the limits of perception. This is the reason that exponentially increasing the brightness of visual stimuli is perceived by humans as a linear increase, rather than an exponential increase. This has survival value. Generally it is important for organisms to respond to stimuli in a wide range of levels, from very low levels to very high levels, while the accuracy of the estimation of differences at high levels of stimulus is much less important for survival.
  • Genetic complexity of life on Earth has doubled every 376 million years. Extrapolating this exponential growth backwards indicates life began 9.7 billion years ago, potentially predating the Earth by 5.2 billion years.[2][3]
• Physics
  • Avalanche breakdown within a dielectric material. A free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons. The resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
  • Nuclear chain reaction (the concept behind nuclear reactors and nuclear weapons). Each uranium nucleus that undergoes fission produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms, causing them to fission in turn. If the probability of neutron absorption exceeds the probability of neutron escape (a function of the shape and mass of the uranium), k > 0 and so the production rate of neutrons and induced uranium fissions increases exponentially, in an uncontrolled reaction. "Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations. It is a reasonable approximation to think of the first 53 generations as a latency period leading up to the actual explosion, which only takes 3–4 generations."[4]
  • Positive feedback within the linear range of electrical or electroacoustic amplification can result in the exponential growth of the amplified signal, although resonance effects may favor some component frequencies of the signal over others.
  • Heat transfer experiments yield results whose best-fit curves are exponential decay curves.
• Economics
  • Economic growth is expressed in percentage terms, implying exponential growth. For example, U.S. GDP per capita has grown at an exponential rate of approximately two percent per year for two centuries.
  • Multi-level marketing. Exponential increases are promised to appear in each new level of a starting member's downline as each subsequent member recruits more people.
• Finance
  • Compound interest at a constant interest rate provides exponential growth of the capital. See also rule of 72.
  • Pyramid schemes or Ponzi schemes also show this type of growth, resulting in high profits for a few initial investors and losses among great numbers of investors.
• Computer technology
  • Processing power of computers. See also Moore's law and technological singularity. (Under exponential growth, there are no singularities; the singularity here is a metaphor.)
  • In computational complexity theory, computer algorithms of exponential complexity require an exponentially increasing amount of resources (e.g. time, computer memory) for only a constant increase in problem size. So for an algorithm of time complexity $2^x$, if a problem of size x = 10 requires 10 seconds to complete, and a problem of size x = 11 requires 20 seconds, then a problem of size x = 12 will require 40 seconds. This kind of algorithm typically becomes unusable at very small problem sizes, often between 30 and 100 items (most computer algorithms need to be able to solve much larger problems, up to tens of thousands or even millions of items in reasonable times, something that would be physically impossible with an exponential algorithm). Also, the effects of Moore's Law do not help the situation much, because doubling processor speed merely allows you to increase the problem size by a constant: if a slow processor can solve problems of size x in time t, then a processor twice as fast could only solve problems of size x + constant in the same time t. So exponentially complex algorithms are most often impractical, and the search for more efficient algorithms is one of the central goals of computer science today.
  • Internet traffic growth.
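The point about exponential time complexity can be checked with a toy computation (a sketch of my own; the speeds and the simple operations-per-second cost model are invented for illustration, not taken from the article):

```python
# For a 2^x-time algorithm, doubling the machine's speed only adds a
# constant (+1) to the largest problem size solvable in a fixed budget.
def largest_size(ops_per_sec, seconds, cost=lambda x: 2 ** x):
    budget = ops_per_sec * seconds
    x = 0
    while cost(x + 1) <= budget:
        x += 1
    return x

for speed in (1e6, 2e6, 4e6, 8e6):
    print(f"{speed:>10.0f} ops/s -> largest solvable size {largest_size(speed, 1.0)}")
# prints sizes 19, 20, 21, 22: each doubling of speed buys exactly one unit
```

Each doubling of processor speed raises the feasible input size by exactly one, matching the "size x + constant" claim above.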
## Basic formula

A quantity x depends exponentially on time t if

$x(t)=a\cdot b^{t/\tau}\,$

where the constant a is the initial value of x,

$x(0)=a\, ,$

the constant b is a positive growth factor, and τ is the time constant, the time required for x to increase by one factor of b:

$x(t+\tau)=a \cdot b^{\frac{t+\tau}{\tau}} = a \cdot b^{\frac{t}{\tau}} \cdot b^{\frac{\tau}{\tau}} = x(t)\cdot b\, .$

If τ > 0 and b > 1, then x has exponential growth. If τ < 0 and b > 1, or τ > 0 and 0 < b < 1, then x has exponential decay.

Example: If a species of bacteria doubles every ten minutes, starting out with only one bacterium, how many bacteria would be present after one hour? The question implies a = 1, b = 2 and τ = 10 min.

$x(t)=a\cdot b^{t/\tau}=1\cdot 2^{(60\text{ min})/(10\text{ min})}$

$x(1\text{ hr})= 1 \cdot 2^6 =64.$

After one hour, or six ten-minute intervals, there would be sixty-four bacteria.

Many pairs (b, τ) of a dimensionless non-negative number b and an amount of time τ (a physical quantity which can be expressed as the product of a number of units and a unit of time) represent the same growth rate, with τ proportional to log b. For any fixed b not equal to 1 (e.g. e or 2), the growth rate is given by the non-zero time τ. For any non-zero time τ the growth rate is given by the dimensionless positive number b. Thus the law of exponential growth can be written in different but mathematically equivalent forms, by using a different base. The most common forms are the following:

$x(t) = x_0\cdot e^{kt} = x_0\cdot e^{t/\tau} = x_0 \cdot 2^{t/T} = x_0\cdot \left( 1 + \frac{r}{100} \right)^{t/p},$

where x0 expresses the initial quantity x(0). Parameters (negative in the case of exponential decay):

• The growth constant k is the frequency (number of times per unit time) of growing by a factor e; in finance it is also called the logarithmic return, continuously compounded return, or force of interest.
• The e-folding time τ is the time it takes to grow by a factor e.
• The doubling time T is the time it takes to double.
• The percent increase r (a dimensionless number) in a period p.

The quantities k, τ, and T, and for a given p also r, have a one-to-one connection given by the following equation (which can be derived by taking the natural logarithm of the above):

$k = \frac{1}{\tau} = \frac{\ln 2}{T} = \frac{\ln \left( 1 + \frac{r}{100} \right)}{p}\,$

where k = 0 corresponds to r = 0 and to τ and T being infinite.

If p is the unit of time, the quotient t/p is simply the number of units of time. Using the notation t for the (dimensionless) number of units of time rather than the time itself, t/p can be replaced by t, but for uniformity this has been avoided here. In this case the division by p in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including unit.

A popular approximate method for calculating the doubling time from the growth rate is the rule of 70, i.e. $T \simeq 70 / r$.

## Reformulation as log-linear growth

If a variable x exhibits exponential growth according to $x_t = x_0(1+r)^t$, then the log (to any base) of x grows linearly over time, as can be seen by taking logarithms of both sides of the exponential growth equation:

$\log x_t = \log x_0 + t \cdot \log (1+r).$

This allows an exponentially growing variable to be modeled with a log-linear model. For example, if one wishes to empirically estimate the growth rate from intertemporal data on x, one can linearly regress log x on t.
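To make these conversions concrete, here is a short Python sketch (an editorial addition, not part of the article) that runs the bacteria example through the relations stated in this section; all variable names are ours:

```python
import math

# Bacteria example: doubling time T = 10 min, so k = ln(2)/T.
T = 10.0                      # doubling time in minutes
k = math.log(2) / T           # growth constant, per minute
tau = 1 / k                   # e-folding time in minutes

x0 = 1
t = 60.0                      # one hour, in minutes
x = x0 * math.exp(k * t)      # continuous form x(t) = x0 * e^(k t)
print(round(x))               # 64, matching 2**(60/10)

# Percent increase r over a period p = 1 minute:
p = 1.0
r = 100 * (math.exp(k * p) - 1)
print(r)                      # about 7.18 (% per minute)

# Rule-of-70 approximation for the doubling time:
print(70 / r)                 # about 9.75 min, close to the exact T = 10
```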
## Differential equation

The exponential function $x(t)=ae^{kt}$ satisfies the linear differential equation:

$\frac{dx}{dt} = kx$

saying that the growth rate of x at time t is proportional to the value of x(t), and it has the initial value

$x(0)=a.\,$

For a > 0 the differential equation is solved by the method of separation of variables:

$\frac{dx}{dt} = kx$

$\Rightarrow \frac{dx}{x} = k\, dt$

$\Rightarrow \int \frac{dx}{x} = \int k \, dt$

$\Rightarrow \ln x = kt + \text{constant}\, .$

Incorporating the initial value gives:

$\ln x = kt + \ln a\,$

$\Rightarrow x = ae^{kt}\,$

The solution $x = ae^{kt}$ also applies for a ≤ 0, even though the logarithm step is not defined there. For a nonlinear variation of this growth model see logistic function.

## Difference equation

The difference equation

$x_t = a \cdot x_{t-1}$

has solution

$x_t = x_0 \cdot a^t,$

showing that x experiences exponential growth.

## Other growth rates

In the long run, exponential growth of any kind will overtake linear growth of any kind (the basis of the Malthusian catastrophe) as well as any polynomial growth, i.e., for all α:

$\lim_{t\rightarrow\infty} {t^\alpha \over ae^t} =0.$

There is a whole hierarchy of conceivable growth rates that are slower than exponential and faster than linear (in the long run). See Degree of a polynomial#The degree computed from the function values. Growth rates may also be faster than exponential. In the above differential equation, if k < 0, then the quantity experiences exponential decay.

### Comparison with convex growth

In popular use, because the exponential function is one of the best-known examples of a convex function (which is a less familiar concept), these concepts are often confused, and people refer to any convex growth (increasing at an increasing rate, or decreasing at a decreasing rate; for twice differentiable functions, second derivative positive) as "exponential growth".[citation needed] This is incorrect: exponential growth narrowly means increasing at a rate proportional to the current value, while convex is more general, meaning increasing at an increasing rate. Exponential growth is a special case of convex growth: since the value of an exponentially growing function is increasing, and it increases proportionally to its value, it is increasing at an increasing rate, but in a very specific way. Convex functions in general are not exponential; convex is more general than exponential. For example, quadratic growth (growing like $x^2$) is convex but not exponential.

## Limitations of models

Exponential growth models of physical phenomena only apply within limited regions, as unbounded growth is not physically realistic. Although growth may initially be exponential, the modelled phenomena will eventually enter a region in which previously ignored negative feedback factors become significant (leading to a logistic growth model) or other underlying assumptions of the exponential growth model, such as continuity or instantaneous feedback, break down.

Further information: Limits to Growth, Malthusian catastrophe, Apparent infection rate
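As a symbolic sanity check of the "Differential equation" derivation above (an editorial addition; sympy is assumed available), a computer algebra system recovers the same general solution:

```python
from sympy import Function, Eq, dsolve, symbols

t, k = symbols('t k')
x = Function('x')

# Solve the linear ODE dx/dt = k*x symbolically.
solution = dsolve(Eq(x(t).diff(t), k * x(t)), x(t))
print(solution)   # Eq(x(t), C1*exp(k*t)) -- the family x = a*e^(k t)
```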
## Exponential stories

### Rice on a chessboard

See also: Wheat and chessboard problem

According to an old legend, vizier Sissa Ben Dahir presented an Indian King Sharim with a beautiful, hand-made chessboard. The king asked what he would like in return for his gift, and the courtier surprised the king by asking for one grain of rice on the first square, two grains on the second, four grains on the third, etc. The king readily agreed and asked for the rice to be brought. All went well at first, but the requirement for $2^{n-1}$ grains on the nth square demanded over a million grains on the 21st square, more than a million million (aka trillion) on the 41st, and there simply was not enough rice in the whole world for the final squares. (From Swirski, 2006)

For a variation of this, see second half of the chessboard, in reference to the point where an exponentially growing factor begins to have a significant economic impact on an organization's overall business strategy.

### The water lily

French children are told a story in which they imagine having a pond with water lily leaves floating on the surface. The lily population doubles in size every day and, if left unchecked, will smother the pond in 30 days, killing all the other living things in the water. Day after day the plant seems small, and so it is decided to leave it to grow until it half-covers the pond before cutting it back. They are then asked on what day that will occur. This is revealed to be the 29th day, and then there will be just one day to save the pond. (From Meadows et al. 1972, p. 29 via Porritt 2005)

## References

1. Bob Yirka (2013-04-18), Researchers use Moore's Law to calculate that life began before Earth existed, phys.org, retrieved 2013-04-22
2. Larger than Life Indeed. Economic Times. April 2013. Retrieved 2013-04-23.
3. Sublette, Carey. "Introduction to Nuclear Weapon Physics and Design". Nuclear Weapons Archive. Retrieved 2009-05-26.

### Sources

• Meadows, Donella H., Dennis L. Meadows, Jørgen Randers, and William W. Behrens III. (1972) The Limits to Growth. New York: University Books. ISBN 0-87663-165-0
• Porritt, J. Capitalism as if the world matters, Earthscan 2005. ISBN 1-84407-192-8
• Swirski, Peter. Of Literature and Knowledge: Explorations in Narrative Thought Experiments, Evolution, and Game Theory. New York: Routledge. ISBN 0-415-42060-1
• Thomson, David G. Blueprint to a Billion: 7 Essentials to Achieve Exponential Growth, Wiley Dec 2005, ISBN 0-471-74747-5
• Tsirel, S. V. 2004. On the Possible Reasons for the Hyperexponential Growth of the Earth Population. Mathematical Modeling of Social and Economic Dynamics / Ed. by M. G. Dmitriev and A. P. Petrov, pp. 367–9. Moscow: Russian State Social University, 2004.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9068334102630615, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/83380?sort=newest
## Manin-Drinfeld and constructing a finite morphism with two given ramification points

Fix a smooth projective connected curve $X$ over $\overline{\mathbf{Q}}$ of genus $g\geq 1$ and distinct points $x,y \in X$ such that $x-y$ has infinite order in the Jacobian.

Can we always find a finite morphism $X\to \mathbf{P}^1$ which ramifies at $x$ and $y$?

If not, can we always find $x,y\in X$ such that $x-y$ has infinite order in $\mathrm{Jac}(X)$ and a finite morphism $X\to \mathbf{P}^1$ which ramifies at $x$ and $y$?

Application: If one of the above questions has a positive answer, Belyi's theorem gives the existence of a finite morphism $X\to \mathbf{P}^1$ which ramifies over exactly three points and such that $x$ and $y$ ramify. From this it is easy to see that there exists a subgroup $\Gamma\subset \Gamma(2)$ of finite index such that $\Gamma$ gives a Belyi uniformization of $X$ and such that the Manin-Drinfeld theorem doesn't hold for $\Gamma$.

-

## 1 Answer

The answer is yes (assuming you are not demanding that the map be unramified away from $x$ and $y$). Choose any Belyi map $f: X \to \mathbb{P}^1$. The points $f(x)$ and $f(y)$ are defined over some number field. As part of the proof of his theorem, Belyi shows how to construct finite maps $g: \mathbb{P}^1 \to \mathbb{P}^1$ that are unramified away from $0,1,\infty$ on the target, whose ramification locus on the source contains any finite collection of points defined over a number field, so we can choose $g$ to be ramified at $f(x)$, $f(y)$, 0, 1, $\infty$. The composition $g \circ f$ is what you want.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398133754730225, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/101292-trigonometric-equation.html
# Thread: trigonometric equation

1. ## trigonometric equation

Solve for A: cos 9A = (cos 15A)(cos 8A)

2. I have tried to use a math program, and the graph looks like this; the red dots are the roots. I think there are very many of them. (Attached thumbnail: the graph with the roots marked.)

3. Hello pacman

Originally Posted by pacman
Solve for A: cos 9A = (cos 15A)(cos 8A)

I can't see how to find the complete solution yet, but the most likely approach is probably to use $\cos(A+B) + \cos(A-B) = 2\cos A\cos B$ on the RHS and say:

$2\cos9A = 2\cos15A\cos8A = \cos23A + \cos7A$

$\Rightarrow \cos9A - \cos7A= \cos23A -\cos9A$

Now use $\cos A - \cos B = -2\sin\tfrac12(A+B)\sin\tfrac12(A-B)$ on each side:

$-2\sin8A\sin A=-2\sin16A\sin7A$

$\sin8A\sin A= 2\sin8A\cos8A\sin7A$

$\sin8A = 0$ or $\sin A = 2\cos8A\sin7A$

So that, at least, gives some solutions:

$\sin8A=0\Rightarrow A = \frac{n\pi}{8}$

But what happens next with the remaining expression, I'm not sure. You could use

$\sin7A = \sin A(7-56\sin^2A+112\sin^4A-64\sin^6A)$

and then remove the factor $\sin A$ from both sides to get: $\sin A=0$ (which doesn't give any more answers that we haven't already got from $\sin8A=0$), or:

$1 = 2\cos8A(7-56\sin^2A+112\sin^4A-64\sin^6A)$

which you could then express simply in terms of $\cos A$, but it would be pretty horrendous!

Grandad

4. Hello, this is the solution:

$\cos(9a) - \cos(15a)\cos(8a) = 0$

but:

$\cos(9a) = \cos(8a + a) = \cos(a)$

$\cos(15a) = \cos(16a - a) = \cos(-a) = \cos(a)$

Substituting:

$\cos(a)\left(1 - \cos(8a)\right) = 0$

$\cos(a) = 0:\ a = \frac{\pi}{2} + \pi k,\ k \in \mathbb{Z}$

$\cos(8a) = 1:\ a = \frac{\pi k}{4},\ k \in \mathbb{Z}$

5. Hello dhiab

Originally Posted by dhiab
(the solution above)

Sorry, but this doesn't make sense. You can't simply replace $\cos9A$ and $\cos15A$ by $\cos A$.

Grandad

6. dhiab, this puzzled me; a is not 20 degrees or what? As Grandad has said, your post seems puzzling. But thanks though, it cracks my mind.
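As a numerical cross-check of the thread (an editorial addition; numpy is assumed available), one can confirm Grandad's family $A = \frac{n\pi}{8}$ and see that the equation has further roots besides it, as pacman's graph suggests:

```python
import numpy as np

f = lambda A: np.cos(9*A) - np.cos(15*A) * np.cos(8*A)

# The family sin(8A) = 0, i.e. A = n*pi/8, found by Grandad:
for n in range(9):
    A = n * np.pi / 8
    assert abs(f(A)) < 1e-12   # all of these are genuine roots

# A sign-change scan on [0, pi] reveals additional roots coming
# from the leftover factor sin(A) = 2*cos(8A)*sin(7A).
# (Counts sign changes only; roots of even multiplicity may be missed.)
A = np.linspace(0, np.pi, 100001)
v = f(A)
crossings = np.sum(np.sign(v[:-1]) != np.sign(v[1:]))
print(crossings)   # noticeably more than the 9 roots of the form n*pi/8
```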
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443332552909851, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/31627/symmetrical-spinors-and-symmetrical-tensors
# Symmetrical Spinors and Symmetrical Tensors

In Quantum Electrodynamics by Landau and Lifshitz there is the following:

The correspondence between the spinor $\zeta^{\alpha \dot{\beta}}$ and the 4-vector is a particular case of a general rule: any symmetrical spinor of rank $(k,k)$ is equivalent to a symmetrical 4-tensor of rank $k$ which is irreducible (i.e. which gives zero upon contraction with respect to any pair of indices).

L&L also write this out for a 4-vector:

$$\zeta^{1\dot{1}}=\zeta_{2\dot{2}}=a^3+a^0 ,\quad \zeta^{2\dot{2}}=\zeta_{1\dot{1}}=a^0-a^3,$$
$$\zeta^{1\dot{2}}=-\zeta_{2\dot{1}}=a^1-ia^2 ,\quad \zeta^{2\dot{1}}=-\zeta_{1\dot{2}}=a^1+ia^2,$$

Surely there must be an established method to do this in general, just like it says above. I would like to know this method, so if someone would be kind enough to show me or refer me to a reference I would be grateful (e.g. suppose I would like to know the components of $\zeta^{\alpha\beta\dot{\gamma}\dot{\delta}}$ in terms of the symmetric traceless rank-2 4-tensor).

Thanks,

-

You might want to look into Penrose and Rindler's Spinors and Space-Time, which looks into this in great detail (it's what the two volumes are entirely about!). – Alex Nelson Jul 9 '12 at 4:51

## 1 Answer

The method is to contract every index with the $\sigma$ four-vector:

$$\sigma^\mu_{\alpha\dot{\beta}}$$

where $\sigma^0$ is the identity, and $\sigma^i$ for i=1,2,3 is the Pauli spin matrix. If you have a symmetric tensor with all lower indices, you contract each $\mu$ index with a sigma, and you get the dotted-undotted form.

$$M_{\mu\nu} \sigma^\mu_{\alpha\dot{\beta}}\sigma^\nu_{\alpha'\dot{\beta}'} = M_{\alpha\alpha'\dot{\beta}\dot{\beta}'}$$

If M is symmetric on $\mu$ and $\nu$, this is symmetric under permutations of the pairs of corresponding dotted and undotted indices simultaneously. It is not symmetric under separate permutations of the dotted and undotted indices.

$$M_{\alpha\alpha'\dot{\beta}\dot{\beta}'} = M_{\alpha'\alpha\dot{\beta}'\dot{\beta}}$$

But this is because I haven't implemented tracelessness. The inner product of two vectors in spinor form is given by using $\epsilon$ tensors:

$$V_{\alpha\dot{\alpha}} V_{\beta\dot{\beta}} \epsilon^{\alpha\beta}\epsilon^{\dot{\alpha}\dot{\beta}}$$

If you make the contraction of M with g zero, in spinor form, this tells you that

$$M_{\alpha\beta\dot{\alpha}\dot{\beta}} - M_{\beta\alpha\dot{\alpha}\dot{\beta}} = 0$$

which, together with the previous condition, tells you that the M in spinor form is symmetric under separate permutations of the two indices. The higher-rank proof is exactly the same.

-

Thanks Ron. Should the $\beta$s in the second line be dotted? – kηives Jul 9 '12 at 19:03

@kηives: Yes, I'll fix it. I was suspended and couldn't do it. – Ron Maimon Jul 10 '12 at 19:33
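As a concrete check of this recipe (an editorial addition, not from L&L or the answer), building $\sigma^\mu$ numerically and contracting an arbitrary 4-vector reproduces exactly the component table quoted in the question:

```python
import numpy as np

# sigma^mu = (identity, Pauli matrices)
sigma = [
    np.array([[1, 0], [0, 1]], dtype=complex),     # sigma^0
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma^1
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma^2
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma^3
]

a = np.array([2.0, 3.0, 5.0, 7.0])   # an arbitrary 4-vector (a^0, a^1, a^2, a^3)

# zeta^{alpha beta-dot} = a^mu sigma_mu, a 2x2 complex matrix
zeta = sum(a[mu] * sigma[mu] for mu in range(4))

print(zeta[0, 0], a[3] + a[0])       # zeta^{1 1.} = a^3 + a^0
print(zeta[1, 1], a[0] - a[3])       # zeta^{2 2.} = a^0 - a^3
print(zeta[0, 1], a[1] - 1j * a[2])  # zeta^{1 2.} = a^1 - i a^2
print(zeta[1, 0], a[1] + 1j * a[2])  # zeta^{2 1.} = a^1 + i a^2
```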
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471895694732666, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/94163/given-a-finite-list-of-prime-factors-what-is-the-fastest-way-to-find-all-number
# Given a finite list of prime factors, what is the fastest way to find all numbers that can be formed from them

$$\text{Let} \ S = \{p_1,p_2,p_3,...,p_n\}$$
$$\text{where} \ p_i \in \Bbb P$$

What is the fastest known method/algorithm to generate all unique numbers through the product operation on $S$?

$\text{Ex}: S= \{3,5,2\}$

Soln:
$3\times5 = 15$
$3\times2 = 6$
$3\times5\times2 = 30$
$5\times2 = 10$

Currently, my ideas hover around generating all subsets of $S$, multiplying all the members in each of them, and eliminating the duplicates from the list of numbers so generated. This is $O(2^n)$.

-

I assume the primes may not be distinct, correct? – Alex Becker Dec 26 '11 at 7:24

If there are duplicates, $S$ is a multiset. Let's say $S$ contains $m_j$ copies of $p_j$, where $p_1, \ldots, p_k$ are the distinct members of $S$. Then the numbers you can form are all of the form $\prod_{j=1}^k p_j^{d_j}$ where $d_j$ are integers, $0 \le d_j \le m_j$. There are $\prod_{j=1}^k (m_j+1)$ of them. And it's easy to enumerate them, say in lexicographic order. – Robert Israel Dec 26 '11 at 8:11

You should realize that in case $S$ has $n$ (distinct) elements, every one of its $2^n$ subsets has a different product, so there are that many elements in your answer. Do you hope to generate them in less than $O(2^n)$ time? That is of course impossible. – Marc van Leeuwen Dec 26 '11 at 10:26

This book by Nijenhuis and Wilf gives an algorithm for systematically enumerating subsets. See this for a FORTRAN implementation of the algorithms in the book. – J. M. Dec 26 '11 at 11:47

@Alex Yes, primes need not be distinct. But ordering is flexible. – check123 Dec 26 '11 at 11:56

## 2 Answers

Given your example solution, I saw a very nice solution (the answer by @Arturo Magidin) that may be relevant to you.

-

If the primes are distinct and repetitions (e.g. $3\times3\times 5$) are forbidden, then going through the whole list of subsets will yield no duplicates, because the fundamental theorem of arithmetic says prime factorizations are unique. (If unlimited numbers of repetitions are allowed, you get an infinite list. It is a somewhat edifying exercise to prove that the sum of the reciprocals of those infinitely many numbers will be finite if you had only finitely many primes in your initial list.)

-
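For concreteness, here is a direct sketch of the subset-enumeration idea discussed above (an editorial addition; it takes products of at least two factors, matching the question's example, and a Python set handles duplicate products when the primes repeat):

```python
from itertools import combinations
from math import prod

def subset_products(primes):
    """All distinct products of sub(multi)sets of size >= 2."""
    results = set()
    for k in range(2, len(primes) + 1):
        for combo in combinations(primes, k):
            results.add(prod(combo))
    return sorted(results)

print(subset_products([3, 5, 2]))   # [6, 10, 15, 30], as in the question
print(subset_products([2, 2, 3]))   # duplicates collapsed: [4, 6, 12]
```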
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9323377013206482, "perplexity_flag": "head"}
http://myalgebrabook.com/Chapters/Exponential_Functions/exponents_to_rational1.php
# 5.6.1 Rational Exponents: 1/n

• $x^{mn}=(x^m)^n=(x^n)^m$
• $x^{1/n}=\sqrt[n]{x}$ is the nth root of $x$.

The next piece of the puzzle is to try to understand $x^{m/n}$ where m and n are counting numbers; this will help us to compute the population of our wildebeests at time 1.5 years, since 1.5 is just 3/2.

You may recall that we studied $x^{m+n}$ before we looked at $x^{m-n}$; addition is oftentimes easier to understand than subtraction. Now, we're hoping to study $x^{m/n}$. However, we don't yet have a firm understanding of $x^{mn}$. Since division is the inverse process of multiplication and multiplication is much easier to understand than division, let's start with $x^{mn}$ and see where it takes us.

As usual, let's start with a specific example. With m=3 and n=2, can we rewrite $x^{3\times2}$ as something else? If we realize that 3 × 2 = 6 = 3 + 3, then we can see that

$x^{3 \times 2} =x^6=x^3x^3=(x^3)^2$

Here we used the fact that when multiplying exponentials, we can add exponents if we have the same base. But we can also think of 6 as being equal to 2 + 2 + 2. With this in mind, we can see that

$x^{3 \times 2} =x^6=x^2x^2x^2=(x^2)^3$

This tells us that $x^{3 \times 2} =(x^3)^2=(x^2)^3$

Since there was nothing special about 2, 3, and 6, let's jump up a level of abstraction, developing a new general property of exponents. From the work above, if m and n are counting numbers, we can write $x^{mn}$ in two ways:

$x^{mn}=(x^m)^n=(x^n)^m$

It turns out that the same property holds for integers as well. Without going through a formal mathematical proof, let's just explore this concept through one example. If m=2 and n=-3, then we have that:

\begin{align*}\left ( x^{-3} \right )^2&=\left ( \frac{1}{x^3} \right )^2\\&=\frac{1}{x^3}\cdot\frac{1}{x^3}\\&=x^{-3}x^{-3}\\&=x^{-6}\end{align*}

We then have that, if m and n are integers,

$x^{mn}=(x^m)^n=(x^n)^m$

Explore! Evaluate the following without a calculator: $(2^3)^2$ and $(3^{-2})^{-2}$.

Now that we have a deeper understanding of $x$ raised to products of integers, let's go back to our exponential of interest, namely $x^{m/n}$. If we're to treat this expression in a way that follows the same pattern as above, then since $\frac{m}{n}=m \times \frac{1}{n}=\frac{1}{n}\times m$, we have that

$x^{m/n} = x^{m\times\left (1/n\right)} = \left (x^m\right)^{1/n} = \left (x^{1/n}\right)^m$

Let's use specific numbers for m and n to understand what the fractional exponents 1/n and 1/m actually mean. Using the property above, we see that $(4^{1/2})^2=4^{(1/2)\times 2}=4$: even though we don't know what $4^{1/2}$ means yet, we now see that if we square it, we get back 4. Any thoughts about what the symbol could represent?

Let's try another one: $(8^{1/3})^3=8^{(1/3)\times 3}=8$: even though we don't know what $8^{1/3}$ means, we see that if we cube it, we get back 8.

Putting the two examples together, it looks like raising something to the 1/2 power is the same thing as taking the square root; raising it to the 1/3 power must be the same as taking the cube root! In general, then, we have that $x^{1/n}$ is the nth root of $x$. We also write the nth root as $\sqrt[n]{x}$.

Explore! Compute the following without using a calculator: $64^{1/2}$ and $125^{1/3}$.

With the tools from this section, we're ready to understand what it would mean to put 1.5 into our original model:

$P(1.5)=P(3/2)=1000(1.02)^{3/2}$

We'll finish this example in the next section.
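A quick numerical check of this section's claims (an editorial addition; note that floating-point roots are only approximate):

```python
# (x^m)^n = (x^n)^m = x^(mn)
assert (2**3)**2 == (2**2)**3 == 2**6 == 64

# x^(1/n) behaves as the nth root
print(4 ** 0.5)       # 2.0, the square root of 4
print(8 ** (1/3))     # 2.0 up to rounding, the cube root of 8
print(64 ** 0.5)      # 8.0
print(125 ** (1/3))   # approximately 5.0

# The wildebeest model at t = 1.5 years:
print(1000 * 1.02 ** 1.5)   # approximately 1030.1
```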
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9080504179000854, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/16987/the-time-component-is-gamma-m-c-so-shouldnt-e-mc
# The time component is $\gamma m c$, so shouldn't $E=mc$?

Basically, the book is Brian Cox's Why Does $E=mc^2$? (And Why Should We Care?). I just finished Chapter 5, where we derived the spacetime momentum vector (energy-momentum four-vector, as he establishes the physics jargon).

Let $\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$

So, as we found out, the vector's spatial component is $\gamma m v$, leaving the time component as $\gamma m c$. He went on, under the guise of making the outcome more intelligible, to say we could happily multiply the time component by $c$ without changing its meaning. Cool, I thought, no problem.

Next, he pointed out $\gamma\approx1+\frac{1}{2}\frac{v^2}{c^2}$, so $\gamma m c^2 \approx mc^2 + \frac{1}{2}mv^2$. Et voilà, $mc^2$.

Granted, he's obviously trying to simplify things so I can reach an intuitive understanding, but from that point onwards, he uses $mc^2$ as the conversion value. I'm confused. Could someone explain why we use $mc^2$ and not the version scaled down by a factor of $c$?

-

Simple answer: dimensional inconsistency! $mc$ only has units $[\mathrm{kg\,m\,s^{-1}}]$, whereas for energy you need $[\mathrm{kg\,m^2\,s^{-2}}]$. – Vineet Menon Nov 15 '11 at 6:25

You scale it by $c$ so that the time becomes a distance, just like the spatial dimensions. – queueoverflow Apr 12 '12 at 8:57

## 2 Answers

Dimensional analysis is enough to see that $mc^2$ has the same units as energy while $\gamma mc$ or $mc$ doesn't. Concerning the components of a 4-vector, special relativity unifies the spatial and temporal components. But the 4 components of a 4-vector with "uniform units" do not necessarily enjoy the same normalization as the quantities outside relativity. Instead, you must typically multiply the time component by $c$ or $1/c$ to get the usual non-relativistic normalization of the quantity.

The position vector is $(x,y,z,ct)$. Note that all of them have the dimension of length. But the real time is $ct$, the last component, divided by $c$.

Similarly, the energy-momentum vector has components $m_0\gamma v_x,m_0\gamma v_y, m_0\gamma v_z,m_0\gamma c$. Again, all components have the same units, but now you have to multiply the last component $m_0\gamma c$ by $c$ to get the usual normalization for the energy, $E=m_0 \gamma c^2$. This is just a question of units. Nothing guarantees that the simplest identifications and conventions will lead to correct formulae without any extra $c$ in the units used before relativity.

When we say "the time component of a vector is a particular quantity known before relativity", we mean that it contains the same information, but sometimes we need to normalize it differently, i.e. add a universal factor. Before relativity, people used very unnatural units for many quantities, and the temporal and spatial components didn't have the same units even though their key information was always a part of the same 4-vector, which shows that they may be "rotated into each other".

Among physicists who study relativistic phenomena (e.g. particle physicists), this is a complete non-problem and physicists often use units with $c=1$, anyway. This really means that distances are measured in light seconds and the speed of light is one light second per second, and the difference between a light second and a second is suppressed. (Particle physicists typically use units in which one GeV (rather than one second) and its powers are used for everything, i.e. times, distances, energies, momenta, masses, etc.; they also set $\hbar=1$ so that energy and frequency, i.e.
inverse time, have the same units.)

-

Uh, because $mc$ is not energy? And what do you mean by "time component"? Your $\gamma mc$ is momentum.

-

So what makes $mc^2$ energy? Just multiplying by a constant... – dougvk Nov 15 '11 at 6:12

Well, one obvious way would be to check units. Other than that, go check out Einstein's relation: $E^2-(pc)^2 = (m_0c^2)^2$, where $m_0$ is rest-mass. – Chris Gerig Nov 15 '11 at 6:14

I'm going to wait for a more intuitive explanation. Also $\gamma m v$ is momentum. $\gamma m c$ is the time component. – dougvk Nov 15 '11 at 6:17

Sorry, I don't get what you mean by "more intuitive"... by dimensionality, $mc$ cannot be energy and $mc^2$ must be energy. Oh, and now I see: your 'time component' is part of your momentum 4-vector (so it is indeed momentum!). – Chris Gerig Nov 15 '11 at 6:28

Oh okay. In the question I had it differentiated as time and spatial components. Now I see they are all subsumed by the label momentum component. More intuitive just means using many more words to support and explain your equations. – dougvk Nov 15 '11 at 6:48
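To put numbers on this (an editorial addition, not from the book or the answers): for small v, $\gamma mc^2$ splits into the rest energy $mc^2$ plus the Newtonian kinetic energy $\frac{1}{2}mv^2$, while $\gamma mc$ carries units of momentum, not energy.

```python
import math

c = 299_792_458.0          # m/s
m = 1.0                    # kg
v = 3000.0                 # m/s -- small compared to c

gamma = 1 / math.sqrt(1 - v**2 / c**2)

E_exact  = gamma * m * c**2           # relativistic energy, in joules
E_approx = m * c**2 + 0.5 * m * v**2  # rest energy + Newtonian kinetic energy

print(E_exact - m * c**2)    # ~4.5e6 J: the kinetic part of gamma*m*c^2
print(E_approx - m * c**2)   # 4.5e6 J: exactly (1/2) m v^2
# The two agree closely because v << c, mirroring the book's expansion.
```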
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452394247055054, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/114841/proof-of-a-formula-involving-eulers-totient-function
# Proof of a formula involving Euler's totient function

The third formula on the Wikipedia page for the totient function states that

$$\varphi (mn) = \varphi (m) \varphi (n) \cdot \dfrac{d}{\varphi (d)}$$

where $d = \gcd(m,n)$. How is this claim justified? Would we have to use the Chinese Remainder Theorem, as they suggest for proving that $\varphi$ is multiplicative?

-

There might be a direct proof, but of course if you show that $\varphi$ is multiplicative (using the Chinese Remainder Theorem) and that $\varphi(p^a) = p^a - p^{a-1}$, then you get your result. – Joel Cohen Feb 29 '12 at 15:18

@lhf: You mean the group of units mod (m, n, mn)? I certainly don't know how to do that! – The Chaz 2.0 Mar 13 '12 at 2:21

It'd be nice to relate this formula with the natural mapping $U_{mn}\to U_m \times U_n$ by proving that the kernel has size $d$ and the image has index $\phi(d)$. – lhf Mar 13 '12 at 2:22

## 2 Answers

You can write $\varphi(n) = n \prod_{p \mid n} \left( 1 - \frac 1p \right)$. Using this identity, we have

$$\varphi(mn) = mn \prod_{p \mid mn} \left( 1 - \frac 1p \right) = mn \frac{\prod_{p \mid m} \left( 1 - \frac 1p \right) \prod_{p \mid n} \left( 1 - \frac 1p \right)}{\prod_{p \mid d} \left( 1 - \frac 1p \right)} = \varphi(m)\varphi(n) \frac{d}{\varphi(d)}$$

-

Thanks, Dane. That is concise! – The Chaz 2.0 Feb 29 '12 at 15:26

Hint $\$ A multiplicative function $\rm\:f(n)\:$ satisfies said identity if for all primes $\rm\:p\:$

$$\rm\ j\le k\ \Rightarrow\ \ f(p^{j+k}) = \frac{f(p^j)\: f(p^k)\: p^j}{f(p^j)}\ =\ p^j f(p^k)$$

Indeed we have $\rm\ \ \phi(p^{j+k})\ =\ p^{j+k}-p^{j+k-1}\ =\ p^j (p^k-p^{k-1})\ =\ p^j \phi(p^k)$

-

Thanks, Bill. I'll try to wrap my head around that. It seems useful. – The Chaz 2.0 Feb 29 '12 at 16:44
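A brute-force numerical check of the identity (an editorial addition; sympy's totient function is assumed available):

```python
from math import gcd
from sympy import totient

# Verify phi(m*n) == phi(m)*phi(n) * d / phi(d), with d = gcd(m, n)
for m in range(1, 60):
    for n in range(1, 60):
        d = gcd(m, n)
        assert totient(m * n) == totient(m) * totient(n) * d // totient(d)

print("identity verified for 1 <= m, n < 60")
```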
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.905384361743927, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78946/homomophism-from-koszul-complex-to-the-original-ring
## Homomorphism from Koszul complex to the original ring

In an article, I encounter an isomorphism relation as follows:

Let $S$ be a commutative ring, $x$ an element in $S$, and let $K[x,S]$ be the corresponding Koszul complex. The article says "this is a classical isomorphism": $K[x,S]$ is isomorphic to $\Sigma^{-1} \mathrm{Hom}_S (K[x,S],S)$, where $\Sigma$ is the translation functor.

I guess this means we should regard $S$ as a chain complex concentrated in degree 0, and look at the homomorphisms between chain complexes, but I cannot figure out the relation. Explicitly, how does one identify this complex of homomorphisms with $K[x,S]$? Thanks a lot.

-

Just write down what the complex $\hom_S(K[x,S],S)$ is, using the fact that the dual of a free module of rank $1$ is free of rank $1$. – Mariano Suárez-Alvarez Oct 24 2011 at 2:45

@Mariano Suárez-Alvarez: I am just a starter in commutative and homological algebra. Do I understand $\mathrm{Hom}_S (K[x,S],S)$ correctly? Thank you for your patience. – AlgRev Oct 24 2011 at 2:52
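For what it's worth, here is one way to spell the identification out (an editorial sketch following Mariano's hint; signs and the direction of the shift depend on one's conventions, so treat this as a guide rather than the article's own argument). The Koszul complex on one element is

$$K[x,S] \;=\; \bigl(0 \to S \xrightarrow{\,x\,} S \to 0\bigr), \qquad \text{in homological degrees } 1,\,0.$$

Since $\mathrm{Hom}_S(-,S)$ negates degrees and sends the dual of multiplication by $x$ on a free rank-1 module back to multiplication by $x$,

$$\mathrm{Hom}_S(K[x,S],S) \;=\; \bigl(0 \to S \xrightarrow{\,x\,} S \to 0\bigr), \qquad \text{in degrees } 0,\,-1.$$

With the convention $(\Sigma^{-1} C)_n = C_{n-1}$, applying $\Sigma^{-1}$ moves this complex back to degrees $1, 0$, which is $K[x,S]$ again, possibly up to a sign in the differential.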
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8473286032676697, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38573/
## The Fine Structure of The Constructible Hierarchy

I hold in my hand a printout of Jensen's article from Annals of Mathematical Logic 4 (1972), named as in the title of the question. However, the quality of the .pdf I found was very bad and it is very hard to read; furthermore, the article was written in the days before LaTeX and the symbols are non-standard for current articles (e.g. $\bigwedge$ for $\forall$ and such).

Moreover, I was told that there is some mistake somewhere in the beginning of the article. It is not a grave mistake, just some result that he doesn't prove there. But it is a mistake nonetheless.

Regardless of the two problems, I was told that this is a very good article for studying the topic.

Is there a "digitally remastered" version somewhere online with the error(s) corrected and clearer typesetting? (I got mine from the ScienceDirect website)

[I'm not sure that this is the place for this question, however I have no idea where the place for it would be. Any comments and/or directions about this issue would be most welcomed]

-

Keith Devlin's Constructibility is a more recent reference which contains much of Jensen's early results. – François G. Dorais♦ Sep 13 2010 at 13:11

@Francois: Do you mean projecteuclid.org/euclid.pl/1235419477 ? – Asaf Karagila Sep 13 2010 at 13:50

Yes, that's the one. – François G. Dorais♦ Sep 13 2010 at 16:36

I'll see what my advisor thinks about that. Thanks! – Asaf Karagila Sep 13 2010 at 18:16

There are several other intros to fine structure theory, e.g., some chapters of the Handbook of Set Theory, or L. Stanley: A short course on gap-one morasses with a review of the fine structure of L. Surveys in set theory, 197–243, London Math. Soc. Lecture Note Ser., 87, Cambridge Univ. Press, Cambridge, 1983. – Péter Komjáth Sep 14 2010 at 5:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92444908618927, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Second_law_of_black_hole_mechanics
# Black hole thermodynamics

In physics, black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. Much as the study of the statistical mechanics of black body radiation led to the advent of the theory of quantum mechanics, the effort to understand the statistical mechanics of black holes has had a deep impact upon the understanding of quantum gravity, leading to the formulation of the holographic principle.[1]

An artist's depiction of two black holes merging, a process in which the laws of thermodynamics are upheld.

## Black hole entropy

The only way to satisfy the second law of thermodynamics is to admit that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole. The increase of the entropy of the black hole more than compensates for the decrease of the entropy carried by the object that was swallowed.

Starting from theorems proved by Stephen Hawking, Jacob Bekenstein conjectured that the black hole entropy was proportional to the area of its event horizon divided by the Planck area. Bekenstein suggested (½ ln 2)/4π as the constant of proportionality, asserting that if the constant was not exactly this, it must be very close to it. The next year, Hawking showed that black holes emit thermal Hawking radiation[2][3] corresponding to a certain temperature (Hawking temperature).[4][5] Using the thermodynamic relationship between energy, temperature and entropy, Hawking was able to confirm Bekenstein's conjecture and fix the constant of proportionality at 1/4:[6]

$S_{\text{BH}} = \frac{kA}{4\ell_{\mathrm{P}}^2}$

where $A$ is the area of the event horizon, equal to $4\pi R^2$ for a horizon of radius $R$, $k$ is Boltzmann's constant, and $\ell_{\mathrm{P}}=\sqrt{G\hbar / c^3}$ is the Planck length. This is often referred to as the Bekenstein–Hawking formula. The subscript BH either stands for "black hole" or "Bekenstein–Hawking". The black hole entropy is proportional to the area of its event horizon $A$. The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle.[1]

Although Hawking's calculations gave further thermodynamic evidence for black hole entropy, until 1995 no one was able to make a controlled calculation of black hole entropy based on statistical mechanics, which associates entropy with a large number of microstates. In fact, so-called "no hair"[7] theorems appeared to suggest that black holes could have only a single microstate. The situation changed in 1995 when Andrew Strominger and Cumrun Vafa calculated[8] the right Bekenstein–Hawking entropy of a supersymmetric black hole in string theory, using methods based on D-branes. Their calculation was followed by many similar computations of entropy of large classes of other extremal and near-extremal black holes, and the result always agreed with the Bekenstein–Hawking formula.

In loop quantum gravity (LQG)[9] it is possible to associate a geometrical interpretation to the microstates: these are the quantum geometries of the horizon.
LQG offers a geometric explanation of the finiteness of the entropy and of the proportionality to the area of the horizon.[10][11] It is possible to derive, from the covariant formulation of the full quantum theory (spinfoam), the correct relation between energy and area (1st law), the Unruh temperature and the distribution that yields Hawking entropy.[12] The calculation makes use of the notion of dynamical horizon and is done for non-extremal black holes.

## The laws of black hole mechanics

The four laws of black hole mechanics are physical properties that black holes are believed to satisfy. The laws, analogous to the laws of thermodynamics, were discovered by Brandon Carter, Stephen Hawking and James Bardeen.

### Statement of the laws

The laws of black hole mechanics are expressed in geometrized units.

#### The Zeroth Law

The horizon has constant surface gravity for a stationary black hole.

#### The First Law

Change of mass is related to change of area, angular momentum, and electric charge by:

$dM = \frac{\kappa}{8\pi}\,dA+\Omega\, dJ+\Phi\, dQ,$

where $M$ is the mass, $\kappa$ is the surface gravity, $A$ is the horizon area, $\Omega$ is the angular velocity, $J$ is the angular momentum, $\Phi$ is the electrostatic potential and $Q$ is the electric charge.

#### The Second Law

The horizon area is, assuming the weak energy condition, a non-decreasing function of time,

$\frac{dA}{dt} \geq 0.$

This "law" was superseded by Hawking's discovery that black holes radiate, which causes both the black hole's mass and the area of its horizon to decrease over time.

#### The Third Law

It is not possible to form a black hole with vanishing surface gravity. That is, $\kappa = 0$ is not possible to achieve.

### Discussion of the laws

#### The Zeroth Law

The zeroth law is analogous to the zeroth law of thermodynamics, which states that the temperature is constant throughout a body in thermal equilibrium. It suggests that the surface gravity is analogous to temperature: constant T for thermal equilibrium of a normal system is analogous to constant $\kappa$ over the horizon of a stationary black hole.

#### The First Law

The left hand side, dM, is the change in mass/energy. Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right hand side represent changes in energy due to rotation and electromagnetism. Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right hand side the term T dS.

#### The Second Law

The second law is the statement of Hawking's area theorem. Analogously, the second law of thermodynamics states that the change in entropy of an isolated system will be greater than or equal to 0 for a spontaneous process, suggesting a link between entropy and the area of a black hole horizon. However, taken by itself, this picture would let the second law of thermodynamics be violated by matter losing its entropy as it falls in, giving a decrease in entropy. The generalized second law repairs this by taking the total entropy to be the black hole entropy plus the entropy outside the black hole.

#### The Third Law

Extremal black holes[13] have vanishing surface gravity. Stating that $\kappa$ cannot go to zero is analogous to the third law of thermodynamics, which states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state.
Furthermore, ΔS will reach zero at 0 kelvin, but S itself will also reach zero, at least for perfect crystalline substances. No experimentally verified violations of the laws of thermodynamics are known.

### Interpretation of the laws

The four laws of black hole mechanics suggest that one should identify the surface gravity of a black hole with temperature and the area of the event horizon with entropy, at least up to some multiplicative constants. If one only considers black holes classically, then they have zero temperature and, by the no hair theorem,[7] zero entropy, and the laws of black hole mechanics remain an analogy. However, when quantum mechanical effects are taken into account, one finds that black holes emit thermal radiation (Hawking radiation) at temperature

$T_{\text{H}} = \frac{\kappa}{2\pi}.$

From the first law of black hole mechanics, this determines the multiplicative constant of the Bekenstein–Hawking entropy, which is

$S_{\text{BH}} = \frac{A}{4}.$

## Beyond black holes

Hawking and Page have shown that black hole thermodynamics is more general than black holes: cosmological event horizons also have an entropy and temperature. More fundamentally, 't Hooft and Susskind used the laws of black hole thermodynamics to argue for a general holographic principle of nature, which asserts that consistent theories of gravity and quantum mechanics must be lower dimensional. Though not yet fully understood in general, the holographic principle is central to theories like the AdS/CFT correspondence.[14]

## Notes

1. Bousso, Raphael (2002). "The Holographic Principle". Reviews of Modern Physics 74 (3): 825–874. arXiv:hep-th/0203101. Bibcode:2002RvMP...74..825B. doi:10.1103/RevModPhys.74.825.
2. Matson, John (Oct. 1, 2010). "Artificial event horizon emits laboratory analogue to theoretical black hole radiation". Sci. Am.
3. A Brief History of Time, Stephen Hawking, Bantam Books, 1988.
4. Majumdar, Parthasarathi (1998). "Black Hole Entropy and Quantum Gravity". arXiv:gr-qc/9807045. Bibcode:1999InJPB..73..147M.
5. Strominger, A.; Vafa, C. (1996). "Microscopic origin of the Bekenstein-Hawking entropy". Physics Letters B 379: 99. arXiv:hep-th/9601029. Bibcode:1996PhLB..379...99S. doi:10.1016/0370-2693(96)00345-0.
6. Rovelli, Carlo (1996). "Black Hole Entropy from Loop Quantum Gravity". Physical Review Letters 77: 3288–3291. arXiv:gr-qc/9603063. Bibcode:1996PhRvL..77.3288R. doi:10.1103/PhysRevLett.77.3288.
7. Ashtekar, Abhay; Baez, John; Corichi, Alejandro; Krasnov, Kirill (1998). "Quantum Geometry and Black Hole Entropy". Physical Review Letters 80 (5): 904–907. arXiv:gr-qc/9710007. Bibcode:1998PhRvL..80..904A. doi:10.1103/PhysRevLett.80.904.
8. Bianchi, Eugenio (2012). Entropy of Non-Extremal Black Holes from Loop Gravity. arXiv:1204.5122 [gr-qc].
9. For an authoritative review, see Ofer Aharony, Steven S. Gubser, Juan Maldacena, Hirosi Ooguri and Yaron Oz (2000). "Large N field theories, string theory and gravity". Physics Reports 323: 183–386. arXiv:hep-th/9905111. doi:10.1016/S0370-1573(99)00083-6. (Shorter lectures by Maldacena are based on that review.)

## References

• Bardeen, J. M.; Carter, B.; Hawking, S. W. (1973). "The four laws of black hole mechanics". Communications in Mathematical Physics 31 (2): 161–170. Bibcode:1973CMaPh..31..161B. doi:10.1007/BF01645742.
• Bekenstein, Jacob D. (April 1973). "Black holes and entropy". Physical Review D 7 (8): 2333–2346. Bibcode:1973PhRvD...7.2333B. doi:10.1103/PhysRevD.7.2333.
• Hawking, Stephen W.
(1974). "Black hole explosions?". Nature 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0. • Hawking, Stephen W. (1975). "Particle creation by black holes". Communications in Mathematical Physics 43 (3): 199–220. Bibcode:1975CMaPh..43..199H. doi:10.1007/BF02345020. • Hawking, S. W.; Ellis, G. F. R. (1973). The Large Scale Structure of Space–Time. New York: Cambridge University Press. ISBN 0-521-09906-4. • Hawking, Stephen W. (1994). "The Nature of Space and Time". ArΧiv e-print. arXiv:hep-th/9409195v1. Bibcode:1994hep.th....9195H. • 't Hooft, Gerardus (1985). "On the quantum structure of a black hole". Nuclear Phys. B 256: 727–745. Bibcode:1985NuPhB.256..727T. doi:10.1016/0550-3213(85)90418-3.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8570324182510376, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/209733/average-standard-deviation-and-min-max-values
# Average, standard deviation and min/max values

I'm analyzing a computer science paper and just found the following statement in the experimental setup:

• Average (standard deviation) of number of files per peer: 464 (554)
• Min - max number of files per peer: 100 - 4,774

Are these numbers possible at all? It does not say anything about a normal distribution, but how is it possible that the standard deviation is 554 when the min number of files per peer is 100?

-

## 1 Answer

Suppose we have $n$ numbers $x_1, x_2, \ldots, x_n$ ranging from $a$ to $b$, meaning that $x_i = a$ for at least one $i$, $1 \leq i \leq n$, and $x_j = b$ for at least one $j$, $1 \leq j \leq n$. Duplicates are allowed, meaning that two or more of the $n$ numbers could possibly have the same value in $[a, b]$. Define the mean $\bar{x}$ and the variance $\sigma_x^2$ of the set of $n$ numbers as

$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i, ~~ \sigma_x^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2 = \left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) - \bar{x}^2.$$

For convenience, let us change the set to $y_1, y_2, \ldots, y_n$ where $y_i = x_i - a$, so that the new set has values ranging from $0$ to $b-a$. The mean $\bar{y}$ is just $\bar{x}-a$, while the variance is unchanged: $\sigma_y^2 = \sigma_x^2$. Now, it is shown in the answers to this question on stats.SE that the ratio $\sigma_y/\bar{y} = \sigma_x/(\bar{x}-a)$ can be no larger than $\sqrt{n-1}$. Note that the value does not depend on $b$ at all.

You don't say what the value of $n$ is, but given the numbers in your question, the upper bound on the standard deviation gives

$$\sigma_x = 554 \leq (464-100)\sqrt{n-1}$$

which is certainly satisfied except in the unusual circumstance that there are only $3$ (or fewer) peers in the experiment, so that only $3$ numbers $x_1$, $x_2$, and $x_3$ are being described in terms of mean/standard-deviation/min-max: certainly overkill!

In summary, there are no obvious problems with the standard deviation being larger than the mean.

-

Makes very much sense! Thanks a lot! (By the way, n in the experiment is 2000.) – navititious Oct 9 '12 at 20:41
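Plugging the question's numbers into the bound (an editorial check; n = 2000 per the final comment):

```python
import math

mean, minimum, sigma, n = 464, 100, 554, 2000

bound = (mean - minimum) * math.sqrt(n - 1)
print(bound)            # about 16275 -- comfortably above 554
print(sigma <= bound)   # True: nothing suspicious about these statistics

# The bound would only fail for very small samples:
for m in (2, 3, 4):
    print(m, sigma <= (mean - minimum) * math.sqrt(m - 1))
# 2 False, 3 False, 4 True -- matching the "3 or fewer peers" caveat
```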
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945342481136322, "perplexity_flag": "head"}
http://www.abstractmath.org/Word%20Press/?tag=abstraction
# Gyre&Gimble

Posts about math, language and other things that may appear in the wabe

## Abstracting algebra

2012/12/21 — SixWingedSeraph

This post has been turned into a page on WordPress, accessible in the upper right corner of the screen. The page will be referred to by all topic posts for Abstracting Algebra.

## Visible Algebra I

2012/07/30 — SixWingedSeraph

This is the first in a series of articles about how algebra could be implemented without using the standard language of algebra that so many people find difficult. The code for the graphs is in the Mathematica notebook Algebra1.nb.

### An algebra problem

Suppose you are designing a window that is in the shape of a rectangle surmounted by a semicircle, shown above for the window with width 2 and rectangle height 3. This example occurs in a tiresomely familiar calculus problem where you put a constraint on the perimeter of the window, thus turning it into a one-variable problem, then finding the values of the width and height that give the maximum area. In this post, I am not going to get that far. All I will do is come up with a calculation for the area. I will describe a way you might do it on a laptop five or ten years from now.

You have an algebra application that shows a screen with some operations that you may select to paste into your calculation. The ones we use are called plus, times, power, value and input.

You choose a function called value, and label it "Area of window". You recognize that the answer is the sum of the areas of the rectangle and the area of the semicircle, so you choose plus and attach to it two inputs which you label "area of rectangle" and "area of semicircle", like this:

The notational metaphor is that the computation starts at the bottom and goes upward, performing the operations indicated.

You know (or are told by the system) that the area of a rectangle is the product of its width and height, so you replace the value called "area of rectangle" with a times button and attach two values called $w$ and $h$:

You also determine that the area under the semicircle is half the area of a circle of radius $r$ (where $r$ must be calculated). You have a function for the area of a circle of radius $r$, so you attach that:

Finally, you use the fact that you know that the semicircle has a radius which is half the width of the rectangle.

Now, to make the calculation operational, you attach two inputs named "width" and "height" and feed them into the values $w$ and $h$. When you type numbers into these buttons, the calculation will proceed upward and finally show the area of the window at the top.

In a later post I will produce a live version of this diagram. (Added 2012-09-08: the live version is here.) Right now I want to get this post out before I leave for MathFest. (I might even produce the live version at MathFest, depending on how boring the talks are.) You can see an example of a live calculation resembling this in my post A visualization of a computation in tree form.

### Remarks

#### Who

• This calculation might be a typical exercise for a student part way along learning basic algebra.
• College students and scientists and engineers would have a system with a lot more built-in functions, including some they built themselves.

#### Syntax

• Once you have grasped the idea that the calculation proceeds upward from the inputs, carrying out the operations shown, this picture is completely self-explanatory.
• Well, you have to know what the operations do.
• The syntax actually used in later years may not look like mine.
• For one thing, the flow might run top down or left to right instead of bottom up.
• Or something very different might be used. What works best will be discovered by using different approaches.
• The syntax is fully two-dimensional, which makes it simple to understand (because it uses the most powerful tool our brain has: the visual system).
• The usual algebraic code was developed because people used pencil and paper.
• I would guess that the usual code has fractional dimension about 1.2.
• The tree syntax would require too much writing with pencil and paper. That is alleviated on a computer by using menus.
• Once you construct the computation and input some data it evaluates automatically.
• It may be worthwhile to use 3D syntax. I have an experiment with this in my post Showing categorical diagrams in 3D.

#### Later posts will cover related topics:

• The difficulties with standard algebraic notation. They are not trivial.
• Solving equations in tree form.
• Using properties such as associativity and commutativity in tree form.
• Using this syntax with calculus.
• The deep connection with Lawvere theories and sketches.

### References

• A visualization of a computation in tree form (previous post)
• Making visible the abstraction in algebraic notation (previous post)
• Scrubbing Calculator by Bret Victor
• Showing categorical diagrams in 3D (previous post)
• Soulver
• The symbolic language of math (abstractmath)
• Up and down the ladder of abstraction by Bret Victor

## Metaphors in computing science I

2012/05/15 — SixWingedSeraph

Michael Barr recently told me of a transcription of a talk by Edsger Dijkstra dissing the use of metaphors in teaching programming and advocating that every program be written together with a proof that it works. This led me to think about the metaphors used in computing science, and that is what this post is about. It is not a direct answer to what Dijkstra said.

We understand almost anything by using metaphors. This is a broader sense of metaphor than that thing in English class where you had to say "my love is a red red rose" instead of "my love is like a red red rose". Here I am talking about conceptual metaphors (see references at the end of the post).

### Metaphor: A program is a set of instructions

You can think of a program as a list of instructions that you can read and, if it is not very complicated, understand how to carry them out. This metaphor comes from your experience with directions on how to do something (like directions from Google Maps or for assembling a toy). In the case of a program, you can visualize doing what the program says to do and coming out with the expected output. This is one of the fundamental metaphors for programs. Such a program may be informal text or it may be written in a computer language.

#### Example

A description of how to calculate $n!$ in English could be: "Multiply the integers $1$ through $n$". In Mathematica, you could define the factorial function this way:

fac[n_] := Apply[Times, Table[i, {i, 1, n}]]

This more or less directly copies the English definition, which could have been reworded as "Apply the Times function to the integers from $1$ to $n$ inclusive."
Mathematica programmers customarily use the abbreviation "@@" for Apply because it is more convenient:

fac[n_] := Times @@ Table[i, {i, 1, n}]

As far as I know, C does not have list operations built in. This simple program gives you the factorial function evaluated at $n$:

j = 1; for (i = 2; i <= n; i++) j = j*i; return j;

This does the calculation in a different way: it goes through the numbers $1, 2,\ldots,n$ and multiplies the result-so-far by the new number. If you are old enough to remember Pascal or Basic, you will see that there you could use a DO loop to accomplish the same thing.

#### What this metaphor makes you think of

Every metaphor suggests both correct and incorrect ideas about the concept.

• If you think of a list of instructions, you typically think that you should carry out the instructions in order. (If they are Ikea instructions, your experience may have taught you that you must carry out the instructions in order.)
• In fact, you don't have to "multiply the numbers from $1$ to $n$" in order at all: You could break the list of numbers into several lists and give each one to a different person to do, and they would give their answers to you and you would multiply them together.
• The instructions for calculating the factorial can be translated directly into Mathematica instructions, which do not specify an order. When $n$ is large enough, Mathematica would in fact do something like the process of giving it to several different people (well, processors) to speed things up.
• I had hoped that Wolfram Alpha would answer "720" if I wrote "multiply the numbers from $1$ to $6$" in its box, but it didn't work. If it had worked, the instruction in English would not be translated at all. (Note added 7 July 2012: Wolfram has repaired this.)
• The example program for C that I gave above explicitly multiplies the numbers together in order from little to big. That is the way it is usually taught in class. In fact, you could program a package for lists using pointers (a process taught in class!) and then use your package to write a C program that looks like the "multiply the numbers from $1$ to $n$" approach. I don't know much about C; a reader could probably tell me other better ways to do it.

So notice what happened:

• You can translate the "multiply the numbers from $1$ to $n$" directly into Mathematica.
• For C, you have to write a program that implements multiplying the numbers from $1$ to $n$.

Implementation in this sense doesn't seem to come up when we think about instruction sets for putting furniture together. It is sort of like: Build a robot to insert & tighten all the screws. Thus the concept of program in computing science comes with the idea of translating the program instruction set into another instruction set.

• The translation provided above for Mathematica resembles translating the instruction set into another language.
• The two translations I suggested for C (the program and the definition of a list package to be used in the translation) are not like translating from English to another language. They involve a conceptual reconstruction of the set of instructions.

Similarly, a compiler translates a program in a computer language into machine code, which involves automated conceptual reconstruction on a vast scale.
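To make the contrast concrete, here is a small Mathematica sketch (the function names are my own, not from the post) implementing both readings: the declarative "multiply the integers from $1$ to $n$" and the sequential one that mirrors the C loop.

```
(* declarative reading: hand the whole list of integers to Times at once *)
facApply[n_Integer?NonNegative] := Times @@ Range[n]

(* sequential reading, mirroring the C loop: fold the running product
   through 2, 3, ..., n one step at a time *)
facFold[n_Integer?NonNegative] := Fold[Times, 1, Range[2, n]]

facApply[6] == facFold[6] == 720  (* -> True *)
```

Both compute the same function, but only the second commits to an order of evaluation, which is exactly the distinction between translation and implementation drawn above.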
#### Other metaphors

• C or Mathematica as being like a natural language in some ways
• Compiling (or interpreting) as translation

Computing science has used other VIM's (Very Important Metaphors) that I need to write about later:

• Semantics (metaphor: meaning)
• Program as text – this allows you to treat the program as a mathematical object
• Program as machine, with states and actions like automata and Turing machines.
• Specification of a program. You can regard "the product of the numbers from $1$ to $n$" as a specification. Notice that saying "the product" instead of "multiply" changes the metaphor from "instruction" to "specification".

#### References

Conceptual metaphors (Wikipedia)
Images and Metaphors (article in abstractmath)
Images and Metaphors for Sets (article in abstractmath)
Images and Metaphors for Functions (incomplete article in abstractmath)

## An Elaborate Riemann Sums Demo

2012/05/08 — SixWingedSeraph

#### Note

To manipulate the demo in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The demo currently shows a banner that says "This file contains potentially unsafe dynamic content". You can view the diagram by clicking on the "Enable Dynamics" button. If and when I figure out how to get rid of the banner, this paragraph will disappear from the post!

### Riemann Sums

The Riemann Sum is a complicated idea. The integral \[\int_a^b f(x)\,dx\] involves three parameters: two numbers $a$ and $b$ and the function $x\mapsto f(x)$. These are not freely varying parameters: They are subject to the requirements

• The function $x\mapsto f(x)$ must be defined on the closed interval $[a,b]$ (let's pretend improper integrals don't exist).
• The function must be Riemann integrable (continuous will do).

A particular Riemann Sum for this integral looks like \[\sum_{i=1}^n f(p_i)(x_i-x_{i-1})\] It has three more parameters, a number and two lists of numbers satisfying some complicated conditions:

• The number $n$ of subdivisions.
• The partition, which
  • is a list of $n+1$ numbers $\{x_0,x_1,\ldots,x_n\}$
  • satisfies the conditions
    • $x_0<x_1<\ldots<x_n$
    • $x_0=a$
    • $x_n=b$
• The list of evaluation points, which
  • is a list of $n$ numbers $\{p_1,\ldots,p_n\}$
  • satisfies the condition $x_{i-1}\leq p_i \leq x_i$ for $i=1,\ldots,n$.

A Riemann sum may or may not have various important properties.

• The partition can be
  • uniform
  • random
  • chosen by a rule (increase the number of points as the derivative increases, for example)
• The evaluation points can be chosen
  • randomly
  • at the midpoint
  • at the left end
  • at the right end
  • at the lowest point
  • at the highest point.

So the concept is complex, with several constituents and interrelationships to hold in your head all at once. Experienced math people learn concepts like this all the time. Math students have a harder time. Manipulable diagrams can help. Here is an example:

### The Demo

In a class where students use computers with CDF Player installed, you could give them this demo along with instructions about how to use it and a list of questions that they must answer.

Examples of instructions

• Click on the big plus sign in the upper right corner for some options.
• Move the slider labeled $n$ to make more or fewer subdivisions.
• Click on the little plus sign beside the slider for some options such as allowing $n$ to increase automatically.
• The buttons allow you to choose the type of partition, the type of evaluation points, and five functions to play with.

Sample questions

1. Set $n=1$, uniform partition and midpoint and look at the results for each function. Explain what you see.
2. Set $n=4$, uniform partition and midpoint and look at the results for each function. Explain each of the following by referring to the picture:
  • For $x\mapsto x$, the estimate is exact.
  • For $x\mapsto x^2$, the estimate is less than the value of the integral.
  • For $x\mapsto x^5$, the error in the estimate is much worse than for $x^2$.
  • For $x\mapsto \sqrt{1-x^2}$, the estimate is greater than the value of the integral.
3. Go through the examples in 2. and check that when you make $n$ bigger the properties stated continue to be true. Can you explain this?
4. Starting with $n=4$, uniform and midpoint and then using bigger values, note that the error for $x\mapsto \sqrt{1-x^2}$ is always bigger than the error for $x\mapsto \sin \pi x$. Try to explain this. (Don't ask the students to prove it in freshman calculus.)
5. For $n=4$, uniform and midpoint (and then try bigger $n$), for $x\mapsto x^5$, the LeftSide error is always less than the RightSide error. Explain using the picture.
6. For which curves is the LeftSide estimate always the Lower Sum? Always the Upper Sum? Neither? Does using Random instead of Uniform change these answers?

There are many other questions like this you can ask. After answering some of them, I claim (without proof) that the students will have a much better understanding of Riemann sums.

Note that teachers can use this Demo without knowing anything at all about Mathematica. There are hundreds of Demos available in the cloud that can be used in the same way; many of the best are on the Wolfram Demonstrations Project. If you can do some programming in Mathematica, you can take the source code for this demo and modify it, for example to use other functions, to provide functions with changeable parameters and to use partitions following dynamic rules.

You could also have this up on a screen in your classroom for class discussion. But I doubt that is the best use. For classroom demos you probably need simple one-off demos that you prepare ahead or even write on the spot. An example of a simple demo is in the post Offloading Abstraction. I will talk about simple demos more in a later post.

### Rant about why math teachers should use manipulable diagrams

A teacher in the past would draw an example of a Riemann sum on the blackboard and talk about a few features as they point at the board. Nowadays, teachers have slides with accurately drawn Riemann sums and books have pictures of them. This sort of thing gives the student a picture which (hopefully) stays in their head. That picture is a kind of metaphor which enables you to think of the sum in terms of something that you are familiar with, just as you can think of a function as position and its derivative as velocity. (Position and velocity are familiar from driving or any other kind of moving. The picture of a Riemann sum is not something you knew before you studied them, but your brain has remarkable abilities to absorb a picture and the relations between parts of the picture, so once you have seen it you can call it up whenever you think of Riemann sums.)

But there are a lot of aspects of Riemann sums that cannot be demonstrated by a still picture. When the mesh gets finer, the value of the sum tends to be closer to the exact value of the integral.
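For readers who want a bare-bones starting point before building an elaborate demo, here is a minimal sketch of a midpoint-sum Manipulate (my own toy version, far simpler than the demo discussed in this post), for one fixed function on $[0,1]$:

```
(* midpoint Riemann sum for f(x) = sin(pi x) on [0,1]; the exact integral is 2/Pi *)
Manipulate[
 Module[{h = 1./n, mids},
  mids = Table[(i - 1/2.)/n, {i, n}];
  Show[
   Graphics[{Opacity[0.35], EdgeForm[Gray],
     Table[Rectangle[{m - h/2, 0}, {m + h/2, Sin[Pi m]}], {m, mids}]}],
   Plot[Sin[Pi x], {x, 0, 1}],
   Axes -> True, PlotRange -> {{0, 1}, {0, 1.1}},
   PlotLabel -> Row[{"midpoint sum = ", h Total[Sin[Pi mids]]}]]],
 {{n, 4}, 1, 64, 1}]
```

Dragging the slider shows the rectangles hugging the curve, and the displayed sum creeping toward $2/\pi$, as $n$ grows.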
You can stare at the still picture and sort of visualize this. Can you visualize a situation where changing to a finer mesh could make the error worse? If someone suggests a high-frequency sine wave, can you visualize in your head why a finer mesh might make it worse?

An elaborate demo with lots of push buttons is something for students to play with on their own time and thereby gain a better understanding of the topic. Before manipulable diagrams the only way you could do this was to produce physical models. I don't know of anyone who produced a physical model of a Riemann sum. It is possible to do so with some parameters changeable but it would be difficult and not as flexible as the demo given here.

The world has more possibilities. Use them.

Related posts

An elaborate Riemann Sum Demo (Mathematica notebook, source of the demo in this post)
Freezing a family of functions (previous post)
Images and Metaphors (in abstractmath.org)

## Offloading abstraction

2012/02/26 — SixWingedSeraph

Note: To manipulate the diagrams in this post and in most of the files it links to, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website.

The diagram above shows you the tangent line to the curve $y=x^3-x$ at a specific point. The slider allows you to move the point around, and the tangent line moves with it. You can click on one of the plus signs for options about things you can do with the slider. (Note: This is not new. Many other people have produced diagrams like this one.) I have some comments to make about this very simple diagram. I hope they raise your consciousness about what is going on when you use a manipulable demonstration.

### Farming out your abstraction load

A diagram showing a tangent line drawn on the board or in a paper book requires you to visualize how the tangent line would look at other points. This imposes a burden of visualization on you. Even if you are a new student you won't find that terribly hard (am I wrong?) but you might miss some things at first:

• There are places where the tangent line is horizontal.
• There are places where some of the tangent lines cross the curve at another point. Many calculus students believe in the myth that the tangent line crosses the curve at only one point. (It is not really a myth, it is a lie. Any decent myth contains illuminating stories and metaphors.)
• You may not envision (until you have some experience anyway) how when you move the tangent line around it sort of rocks like a seesaw.

You see these things immediately when you manipulate the slider. Manipulating the slider reduces the load of abstract thinking in your learning process. You have less to keep in your memory; some of the abstract thinking is offloaded onto the diagram. This could be described as contracting out (from your head to the picture) part of the visualization process. (Visualizing something in your head is a form of abstraction.) Of course, reading and writing do that, too. And even a static graph of a function lowers your visualization load. What interactive diagrams give the student is a new tool for offloading abstraction. You can also think of it as providing external chunking. (I'll have to think about that more…)

### Simple manipulative diagrams vs. complicated ones

The diagram above is very simple with no bells and whistles. People have come up with much more complicated diagrams to illustrate a mathematical point.
Such diagrams:

• May give you buttons that give you a choice of several curves that show the tangent line.
• May give a numerical table that shows things like the slope or intercept of the current tangent line.
• May also show the graph of the derivative, enabling you to see that it is in fact giving the value of the slope.

Such complicated diagrams are better suited for the student to play with at home, or to play with in class with a partner (much better than doing it by yourself). When the teacher first explains a concept, the diagrams ought to be simple.

### Examples

• The Definition of derivative demo (from the Wolfram Demonstration Project) is an example that provides a table that shows the current values of some parameters that depend on the position of the slider.
• The Wolfram demo Graphs of Taylor Polynomials is a good example of a demo to take home and experiment extensively with. It gives buttons to choose different functions, a slider to choose the expansion point, another one to choose the number of Taylor polynomials, and other things.
• On the other hand, the Wolfram demo Tangent to a Curve is very simple and differs from the one above in one respect: It shows only a finite piece of the tangent line. That actually has a very different philosophical basis: it is representing for you the stalk of the tangent space at that point (the infinitesimal vector that contains the essence of the tangent line).
• Brian Hayes wrote an article in American Scientist containing a moving graph (it moves only on the website, not in the paper version!) that shows the changes of the population of the world by bars representing age groups. This makes it much easier to visualize what happens over time. Each age group moves up the graph — and shrinks until it disappears around age 100 — step by step. If you have only the printed version, you have to imagine that happening. The printed version requires more abstract visualization than the moving version.
• Evaluating an algebraic expression requires seeing the abstract structure of the expression, which can be shown as a tree. I would expect that if the students could automatically generate the tree (as you can in Mathematica) they would retain the picture when working with an expression. In my post computable algebraic expressions in tree form I show how you could turn the tree into an evaluation aid. See also my post Syntax trees.

This blog has a category "Mathematica" which contains all the graphs (many of them interactive) that are designed as an aid to offloading abstraction.

## Prechunking

2011/09/15 — SixWingedSeraph

The emerging theory of how the brain works gives us a new language for discussing how we teach, learn and communicate math.

### Modules

Our minds have many functionalities. They are implemented by what I called modules in Math and modules of the mind because I don't understand very much about what cognitive scientists have learned about how these functionalities are carried out. They talk about a particular neuron, a collection of neurons, electrical charges flowing back and forth, and so on, and it appears there is no complete agreement about these ideas. The modules that implement these functions are physical structures or activities in the brain. At a certain level of abstraction we can ignore the mechanism.

Most modules carry out functionalities that are hidden from our consciousness.

• When we walk, the walking is carried out by a module that operates without our paying (much) attention to it.
• When we recognize someone, the identity of the person pops into our consciousness without us knowing how it got there. Indeed, we cannot introspect to see how the process was carried out; it is completely hidden.

Reasoning, for example if you add 56 and 49 in your head, has part of the process visible to your introspection, but not all of it. It uses modules such as the sum of 9 and 6 which feel like random access memory. When you carry the addition out, you (or at least I) are conscious of the carry: you are aware of it and aware of adding it to 9 to get 10. Good places to find detailed discussion of this hiddenness are references [2] and [4] below.

### Chunking

Math ed people have talked for years about the technique of chunking in doing math.

• You see an algebraic expression, you worry about how it might be undefined, you gray out all of it except the denominator and inspect that, and so on. (This should be the subject of a Mathematica demo.)
• You look at a diagram in the category of topological spaces. Each object in the diagram stands for a whole, even uncountably infinite, space with lots of open and closed subsets and so on, but you think of it just as a little pinpoint in the diagram to discover facts about its relationship with other spaces. You don't look inside the space unless you have to to verify something.

Students have a hard time doing that. When an experienced mathematician does this, they are very likely to chunk subconsciously; they don't think, "Now I am chunking". Nevertheless, you can call it to their attention and they will be aware of the process.

There are modules that perform chunking whose operation you cannot be aware of even if you think about it. Here are two examples.

Example 1. Consider these two sentences from [2], p. 137:

• "I splashed next to the bank."
• "There was a run on the bank."

When you read the first one you visualize a river bank. When you read the second one you visualize a bank as an institution that handles money. If these two sentences were separated by a couple of paragraphs, or even a few words, in a text you are likely not to notice that you have processed the same word in two different ways. (When they are together as above it is kind of blatant.) The point is that when you read each sentence your brain directly presents you with the proper image in each case (different ones as appropriate). You cannot recover the process that did that (by introspection, anyway).

Example 2. I discussed the sentence below in the Handbook. The sentence appears in reference [3].

…Richard Darst and Gerald Taylor investigated the differentiability of functions $f^p$ (which for our purposes we will restrict to $(0,1)$) defined for each $p\geq1$ by …

In this sentence, the identical syntax $(a,b)$ appears twice; the first occurrence refers to the open interval from 0 to 1 and the second refers to the GCD of integers $m$ and $n$. When I first inserted it into the Handbook's citation list, I did not notice that (I was using it for another phenomenon, although now I have forgotten what it was). Later I noticed it. My mind preprocessed the two occurrences of the syntax and threw up two different meanings without my noticing it. Of course, "restricting to (0, 1)" doesn't make sense if (0, 1) means the GCD of 0 and 1, and saying "(m, n) = 1" doesn't make sense if (m, n) is an interval.
This preprocessing no doubt came to its two different conclusions based on such clues, but I claim that this preprocessing operated at a much deeper level of the brain than the preprocessing that results in your thinking (for example) of a topological space as a single unstructured object in a category. This phenomenon could be called prechunking. It is clearly a different phenomenon from zooming in on a denominator and then zooming out on the whole expression as I described in [1].

### This century's metaphor

In the nineteenth century we came up with a machine metaphor for how we think. In the twentieth century the big metaphor was our brain is a computer. This century's metaphor is that of a bunch of processes in our brain and in our body all working simultaneously, mostly out of our awareness, to enable us to live our life, learn things, and just as important (as Davidson [4] points out) to unlearn things. But don't think we have Finally Discovered The Last Metaphor.

### References

1. Zooming and chunking in abstractmath.org.
2. Mark Changizi, The vision revolution. Benbella Books, 2009.
3. Mark Frantz, "Two functions whose powers make fractals". American Mathematical Monthly, v 105, pp 609–617 (1998).
4. Cathy N. Davidson, Now you see it. Viking Penguin, 2011. Chapters 1 and 2.
5. Math and modules of the mind (previous post).
6. Cognitive science in Wikipedia.
7. Charles Wells, The handbook of mathematical discourse, Infinity Publishing Company, 2003.

## Technical meanings clash with everyday meanings

2010/05/12 — Charles Wells

Recently (see note [a]) on MathOverflow, Colin Tan asked [1] "What does 'kernel' mean in 'integral kernel'?" He had noticed the different use of the word in referring to the kernels of morphisms.

I have long thought [2] that the clash between technical meanings and everyday meanings of technical terms (not just in math) causes trouble for learners. I have recently returned to teaching (discrete math) and my feeling is reinforced — some students early in studying abstract math cannot rid themselves of thinking of a concept in terms of familiar meanings of the word.

One of the worst areas is logic, where "implies" causes well-known bafflement. "How can 'If P then Q' be true if P is false??" For a large minority of beginning college math students, it is useless to say, "Because the truth table says so!". I may write in large purple letters (see [3] for example) on the board and in class notes that

The Definition of a Technical Math Concept Determines Everything That Is True About the Concept

but it does not take. Not nearly. The problem seems to be worse in logic, which changes the meaning of words used in communicating math reasoning as well as those naming math concepts. But it is bad enough elsewhere in math.

Colin's question about "kernel" is motivated by these feelings, although in this case it is the clash of two different technical meanings given to the same English word — he wondered what the original idea was that resulted in the two meanings. (This is discussed by those who answered his question.)

Well, when I was a grad student I made a more fundamental mistake when I was faced with two meanings of the word "domain" (in fact there are at least four meanings in math). I tried to prove that the domain of a continuous function had to be a connected open set.
It didn't take me all that long to realize that calculus books talked about functions defined on closed intervals, so then I thought maybe it was the interior of the domain that was a, uh, domain, but I pretty soon decided the two meanings had no relation to each other. If I am not mistaken Colin never thought the two meanings of "kernel" had a common mathematical definition.

It is not wrong to ask about the metaphor behind the use of a particular common word for a technical concept. It is quite illuminating to get an expert in a subject to tell about metaphors and images they have about something. Younger mathematicians know this. Many of the questions on MathOverflow are asking just for that. My recollection of the Bad Old Days of Abstraction and Only Abstraction (1940-1990?) is that such questions were then strongly discouraged.

### Notes

[a] The recent stock market crash has been blamed [4] on the fact that computers make buy and sell decisions so rapidly that their actions cannot be communicated around the world fast enough because of the finiteness of the speed of light. This has affected academic exposition, too. At the time of writing, "recently" means yesterday.

### References

[1] Colin Tan, "What does 'kernel' mean in 'integral kernel'?"
[2] Commonword names for technical concepts (previous blog).
[3] Definitions. (Abstractmath).
[4] John Baez, This Week's Finds in Mathematical Physics, Week 297.

## Templates in mathematical practice

2010/03/11 — Charles Wells

This post is a first pass at what will eventually be a section of abstractmath.org. It's time to get back to abstractmath; I have been neglecting it for a couple of years. What I say here is based mainly on my many years of teaching discrete mathematics at Case Western Reserve University in Cleveland and more recently at Metro State University in Saint Paul.

Beginning abstract math

College students typically get into abstract math at the beginning in such courses as linear algebra, discrete math and abstract algebra. Certain problems that come up in those early courses can be grouped together under the notion of (what I call) applying templates [note 1]. These are not the problems people usually think about concerning beginners in abstract math, of which the following is an incomplete list:

• Reasoning from axioms
• Encapsulating processes (Tall et al, 2000).
• Semantic contamination (abstractmath)
• Translating between mathematical English and logic (abstractmath)

The students' problems discussed here concern understanding what a template is and how to apply it. Templates can be formulas, rules of inference, or mini-programs. I'll talk about three examples here.

The template for quadratic equations

The solution of a real quadratic equation of the form $ax^2+bx+c=0$ is given by the formula

\[x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\]

This is a template for finding the roots of the equation. It has subtleties. For example, the numerator is symmetric in $a$ and $c$ but the denominator isn't. So sometimes I try to trick my students (warning them ahead of time that that's what I'm trying to do) by asking for a formula for the solution of the equation $a+bx+cx^2=0$. The answer is

\[x=\frac{-b\pm\sqrt{b^2-4ac}}{2c}\]

I start writing it on the board, asking them to tell me what comes next. When we get to the denominator, often someone says "$2a$".
The template is telling you that the denominator is 2 times the coefficient of the square term. It is not telling you it is "$a$". Using a template (in the sense I mean here) requires pattern matching, but in this particular example, the quadratic template has a shallow incorrect matching and a deeper correct matching. In detail, the shallow matching says "match the letters" and the deep matching says "match the position of the letters". Most of the time the quadratic being matched has particular numbers instead of the same letters that the template has, so the trap I just described seldom occurs. But this makes me want to try a variation of the trick: Find the solution of $3+5x+2x^2=0$. Would some students match the textual position (getting $a=3$) instead of the functional position (getting $a=2$)? (See note 2.) If they did they would get the solutions $(-1,-\frac{2}{3})$ instead of $(-1,-\frac{3}{2})$.

Substituting in algebraic expressions has other traps, too. What sorts of mistakes would students make solving $3x^2+b^2x-5=0$?

Most students on the verge of abstract math don't make mistakes with the quadratic formula that I have described. The thing about abstract math is that it uses more sophisticated templates

• subject to conditions
• with variations
• with extra levels of abstraction

The template for proof by induction

This template gives a method of proof of a statement of the form $\forall{n}\,\mathcal{P}(n)$, where $\mathcal{P}$ is a predicate (presumably containing $n$ as a variable) and $n$ varies over positive integers. The template says:

Goal: Prove $\forall{n}\,\mathcal{P}(n)$.

Method:
• Prove $\mathcal{P}(1)$
• For an arbitrary integer $n\geq1$, assume $\mathcal{P}(n)$ and deduce $\mathcal{P}(n+1)$.

For example, to prove $\forall n\, (2^n+1\geq n^2)$ using the template, you have to prove that $2^1+1\geq 1^2$, and that for any $n\geq1$, if $2^n+1\geq n^2$, then $2^{n+1}+1\geq (n+1)^2$. You come up with the need to prove these statements by substituting into the template.

This template has several problems that the quadratic formula does not have.

Variables of different types

The variable $n$ is of type integer and the variable $\mathcal{P}$ is of type predicate [note 3]. Having to deal with several types of variables comes up already in multivariable calculus (vectors vs. numbers, cross product vs. numerical product, etc) and they multiply like rabbits in beginning abstract math classes. Students sometimes write things like "Let $\mathcal{P}=n+1$". Multiple types is a big problem that math ed people don't seem to discuss much (correct me if I am wrong).

The variable $n$ occurs as a bound variable in the Goal and a free variable in the Method. This happens in this case because the induction step in the Method originates as the requirement to prove $\forall n(\mathcal{P}(n)\rightarrow\mathcal{P}(n+1))$, but as I have presented it (which seems to be customary) I have translated this into a requirement based on modus ponens. This causes students problems, if they notice it.
("You are assuming what you want to prove!") Many of them apparently go ahead and produce competent proofs without noticing the dual role of $n$. I say more power to them. I think.

The template has variations

• You can start the induction at other places.
• You may have to have two starting points and a double induction hypothesis (for $n-1$ and $n$). In fact, you will have to have two starting points, because it seems to be a Fundamental Law of Discrete Math Teaching that you have to talk about the Fibonacci function ad nauseam.
• Then there is strong induction.

It's like you can go to the store and buy one template for quadratic equations, but you have to buy a package of templates for induction, like highway engineers used to buy packages of plastic French curves to draw highway curves without discontinuous curvature.

The template for row reduction

I am running out of time and won't go into as much detail on this one. Row reduction is an algorithm. If you write it up as a proper computer program there have to be all sorts of if-thens depending on what you are doing it for. For example, if you want solutions to the simultaneous equations

| | | |
|---------|----|----|
| 2x+4y+z | = | 1 |
| x+2y | = | 0 |
| x+2y+4z | = | 5 |

you must row reduce the matrix

| | | | |
|----|----|----|----|
| 2 | 4 | 1 | 1 |
| 1 | 2 | 0 | 0 |
| 1 | 2 | 4 | 5 |

(I haven't yet figured out how to wrap this in parentheses) which gives you

| | | | |
|----|----|----|----|
| 1 | 2 | 0 | 0 |
| 0 | 0 | 1 | 0 |
| 0 | 0 | 0 | 1 |

This introduces another problem with templates: They come with conditions. In this case the condition is "a row of three 0s followed by a nonzero number means the equations have no solutions". (There is another condition when there is a row of all 0's.) It is very easy for the new student to get the calculation right but to never sit back and see what they have — which conditions apply or whatever. When you do math you have to repeatedly lean in and focus on the details and then lean back and see the Big Picture. This is something that has to be learned.

What to do, what to do

I have recently experimented with being explicit about templates, in particular going through examples of the use of a template after explicitly stating the template. It is too early to say how successful this is. But I want to point out that even though it might not help to be explicit with students about templates, the analysis in this post of a phenomenon that occurs in beginning abstract math courses

• may still be accurate (or not), and
• may help teachers teach such things if they are aware of the phenomenon, even if the students are not.

Notes

1. Many years ago, I heard someone use the word "template" in the way I am using it now, but I don't recollect who it was. Applied mathematicians sometimes use it with a meaning similar to mine to refer to soft algorithms–recipes for computation that are not formal algorithms but close enough to be easily translated into a sufficiently high level computer language.
2. In the formula $ax^2+bx+c$, the "$a$" has the first textual position but the functional position as the coefficient of the quadratic term. This name "functional position" has nothing to do with functions. Can someone suggest a different name that won't confuse people?
3. I am using "variable" the way logicians do. Mathematicians would not normally refer to "$\mathcal{P}$" as a variable.
4.
I didn't say anything about how templates can involve extra layers of abstraction. That will have to wait.
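As a footnote to the quadratic-template discussion (this illustration is mine, not part of the original post), the trap is easy to act out in Mathematica: write the template as a function of its three slots and feed it both matchings of $3+5x+2x^2=0$.

```
(* the quadratic template a x^2 + b x + c = 0 as a function of its slots *)
roots[a_, b_, c_] := {(-b - Sqrt[b^2 - 4 a c])/(2 a),
                      (-b + Sqrt[b^2 - 4 a c])/(2 a)}

roots[2, 5, 3]  (* functional matching, a = 2: {-3/2, -1} *)
roots[3, 5, 2]  (* textual matching,    a = 3: {-1, -2/3} *)
```

Only the functional matching solves the equation that was actually given.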
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 102, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171406030654907, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/77240/starting-with-paths-and-the-fundamental-group
# starting with paths and the fundamental group

I have two problems. I'm a little stuck on them; I know they're easy, but I need just a little help. Here are the two problems:

i) Suppose that the identity map $i:X \to X$ is homotopic to a constant path, and the space $Y$ is path connected; then the set of homotopy classes of maps of $X$ into $Y$ has a single element.

ii) Let $p:E \to B$ be a covering map, with $B$ connected. Show that if for some $b_0$ we have $\left| p^{-1}(b_0) \right| = k$, then for every $b$ we have $\left| p^{-1}(b) \right| = k$.

My try: i) I'm not sure if this is OK, but I "proved" that any path $f:X \to Y$ is homotopic to a constant map, and this implies that $Y$ is simply connected. But I did not use the fact that $Y$ is path connected, so I think that something I did is wrong. Let $f$ be a path from $X$ to $Y$. Since the identity in $X$ is nullhomotopic, there exists a homotopy $F:I\times X \to X$ such that $$F(0,x) = i(x) = x, \qquad F(1,x) = k_0 \in X.$$ Then the homotopy $H = f \circ F : I\times X \to Y$ "deforms $f$ to a point", since $$H(0,x) = f(x), \qquad H(1,x) = f(k_0) \in Y.$$

For ii) I suppose that the set of points whose preimage has $k$ elements must be open (for every $k$), but I'm not sure if this is true; if it is not, I have no idea how to do the problem.

-

## 1 Answer

For (i) you've done a lot of the work, but some care is needed. You have indeed shown that any $f\colon X \to Y$ is homotopic to a constant map, but if $Y$ is not path connected then there may be different constant maps from $X$ to $Y$ that are not homotopic. As a silly example, consider the space of maps $\{x\} \to \{y_1, y_2\}$, where these finite sets have the discrete topology.

So, picking up from where you leave off and using that homotopy is an equivalence relation, suppose we have constant maps $f, g\colon X \to Y$. As $Y$ is path connected, there is a path $\gamma$ from $f(k_0)$ to $g(k_0)$. Use this to construct a homotopy between $f$ and $g$. (At this stage, $X$ does not play a significant role.)

I would also avoid thinking of these maps $X \to Y$ as paths. A loop in $Y$ is a map $S^1 \to Y$, and $S^1$ is not nullhomotopic. So you certainly can't conclude that $Y$ is simply connected, and indeed that need not be the case.

For (ii), I think you have the right idea; you just need to make an argument. Take an open neighborhood $U$ of $b$ such that $p^{-1}(U)$ is a disjoint union of $\{V_i\}_{i \in I}$, each $V_i$ mapping homeomorphically onto $U$. If $b'$ is another element of $U$, then define a map $\Phi\colon p^{-1}(b) \to p^{-1}(b')$ as follows. If $e \in p^{-1}(b)$, then $e$ is an element of $V_i$ for some (unique) $i \in I$. Show that there is a unique element $e'$ of $V_i$ such that $p(e') = b'$. Set $\Phi(e) = e'$, and show that this is a bijection.

It follows that if $p^{-1}(b)$ has $k$ elements, then so does $p^{-1}(b')$ for $b'$ in an open neighborhood around $b$. You can say something similar if $p^{-1}(b)$ does not have $k$ elements. Now use connectedness. Let me know if I should say more!

- And how can I use the hypothesis of being path connected in i)? – Daniel Oct 30 '11 at 18:28 @Daniel Updated. – Dylan Moreland Oct 30 '11 at 18:48 Thanks! And about problem ii): the problem is from Munkres, section 53, problem 3. I don't know if I need more hypotheses or not.
– Daniel Oct 30 '11 at 19:22 @Daniel Woops, paths on the brain, I guess. Fixed it up. – Dylan Moreland Oct 30 '11 at 19:45 I'm not sure how to construct the new homotopy D: – Daniel Nov 1 '11 at 20:19
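For the record, here is one way to finish the construction the answer leaves to the asker (this sketch is mine, not part of the original thread). If $f, g\colon X \to Y$ are constant maps and $\gamma\colon I \to Y$ is a path with $\gamma(0) = f(k_0)$ and $\gamma(1) = g(k_0)$, define $$H\colon X \times I \to Y, \qquad H(x,t) = \gamma(t).$$ Then $H(x,0) = f(x)$ and $H(x,1) = g(x)$ for all $x$, and $H$ is continuous because it factors through the projection $X \times I \to I$, so $f$ and $g$ are homotopic.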
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510231018066406, "perplexity_flag": "head"}
http://mathoverflow.net/questions/120219/constructing-expanders-in-z-pz
## Constructing expanders in Z/pZ

Fix a positive integer $k>0$. For $p>k$ a prime, let $A_p$ be a subset of the finite field $\mathbb{Z}/p\mathbb{Z}$ of size $k$ which contains a primitive element. Define $G_p$ to be the (di)graph whose vertices are elements of $\mathbb{Z}/p\mathbb{Z}$, with two vertices $i,j$ joined by an edge provided $j=ia$ or $j=i+a$ for some $a\in A_p$. (I'm mainly interested in the situation where $A_p$ is closed under the operations of taking multiplicative and additive inverses; under these assumptions I can think of $G_p$ as a graph rather than a digraph.)

Question: Is $(G_p)_{p \textrm{ a prime}}$ a family of expanders?

Background: I'm expecting the answer to be either "possibly" or "no" (because if it were "yes" I'd hope I'd have heard about it already). My interest comes in studying the Bourgain-Gamburd machinery for proving expansion from results about growth. For the family $(G_p)$, the relevant growth result is the Bourgain-Katz-Tao sum-product theorem for fields of prime order. One needs more than just a growth result of course; one also needs to have some notion of 'quasirandomness' (but I think I can handle this), as well as a lower bound on the girth of the graph. I've not thought much about this last aspect so I guess this is the most likely to be the sticking point.

- Not an answer, but certainly related is Problem 7.9 from math.haifa.ac.il/~seva/Papers/montpr.dvi . – Seva Jan 29 at 17:07 Is $A$ the same as $A_p$? – Gerry Myerson Jan 29 at 22:38 @Gerry: yes! Will edit... – Nick Gill Jan 30 at 13:27 @Seva, the problem you refer to is very interesting. – Nick Gill Jan 30 at 13:32 This is basically a duplicate of mathoverflow.net/questions/91657/… – Terry Tao Jan 30 at 17:58

## 1 Answer

No, because solvable groups are amenable. You're asking: is there a set in $\mathbb{Z}/p\mathbb{Z}$ almost invariant under $x \mapsto x+1$ and $x \mapsto 2x$? Here's one: take the union of $I, I/2, \ldots, I/2^n$, where $I$ is an interval of length much bigger than $2^n$.

- Could you expand just a little? (Pardon the pun :-) – Nick Gill Jan 30 at 17:15 To my understanding, applied to the Problem 7.9 mentioned in my comment above, this explains why $\lambda=O(1)$ does not work - but does not solve the problem in its full generality. Is this correct? – Seva Jan 30 at 18:27 The link by Tao gives a more complete answer to my question. It is in the same direction as this answer, so I'm accepting it. – Nick Gill Jan 31 at 13:48
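To spell out why this set kills expansion (the estimate below is my own gloss on the answer, not from the thread, and assumes the pieces $I/2^j$ are essentially pairwise disjoint, which holds for a generic interval $I$ when $p$ is large): let $S = I \cup I/2 \cup \ldots \cup I/2^n$, so $|S| \approx (n+1)|I|$. Doubling sends $I/2^j$ onto $I/2^{j-1} \subseteq S$ for every $j \geq 1$, so $$|2S \setminus S| \leq |2I| = |I|,$$ while $(I/2^j) + 1 = (I + 2^j)/2^j$, and the shifted interval $I + 2^j$ leaves $I$ in at most $2^j$ elements, so $$|(S+1) \setminus S| \leq \sum_{j=0}^{n} 2^j < 2^{n+1}.$$ With $2^n \ll |I|$, the set $S$ moves by only an $O(1/n)$ fraction of itself under both generators; letting $n$ grow slowly with $p$ (keeping $(n+1)|I| \leq p$) shows the Cheeger constants of the graphs generated by $x \mapsto x+1$ and $x \mapsto 2x$ tend to $0$, so there is no uniform spectral gap.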
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452728629112244, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/4613-triple-integral-problems-print.html
Triple Integral Problems

• August 1st 2006, 11:49 AM tttcomrader

Triple Integral Problems

Question: List five other iterated integrals that equal

a) $\int_0^1 \int_y^1 \int_0^y f(x,y,z)\, dz\, dx\, dy$

b) $\int_0^1 \int_0^{x^2} \int_0^y f(x,y,z)\, dz\, dy\, dx$

To be honest, I don't even know how to start this problem... Thank you. KK

• August 1st 2006, 01:53 PM ThePerfectHacker

The problem asks you to change the order of iterated integration. By Fubini's Theorem the order in iterated integration is not important. Thus there are six representations of the integral, each having one of the orders
$$\left\{ \begin{array}{c} dx\,dy\,dz \\ dx\,dz\,dy \\ dy\,dx\,dz \\ dy\,dz\,dx \\ dz\,dx\,dy \\ dz\,dy\,dx \end{array} \right.$$
That is why there are 5 others, because one is already given.

• August 1st 2006, 02:37 PM tttcomrader

Yes, that is what I thought initially, but I looked up the answer from the back of the book, and even the limits of the integrals had been changed. Here is the answer to a), but I have no idea how to get it:

Answer a):

$\int_0^1 \int_0^x \int_0^y f(x,y,z)\, dz\, dy\, dx$

$\int_0^1 \int_z^1 \int_y^1 f(x,y,z)\, dx\, dy\, dz$

$\int_0^1 \int_0^y \int_y^1 f(x,y,z)\, dx\, dz\, dy$

$\int_0^1 \int_0^x \int_z^x f(x,y,z)\, dy\, dz\, dx$

$\int_0^1 \int_z^1 \int_z^x f(x,y,z)\, dy\, dx\, dz$

Any thoughts? KK

• August 1st 2006, 03:53 PM galactus

It may help if you list out the respective limits of your original integral; then you can see how they switched things around. z = 0, z = y; y = 0, y = x; x = 0, x = 1. Now, see how they arrived at the second one?

• August 1st 2006, 04:05 PM Soroban

Hello, KK!

Quote: List five other iterated integrals that equal: $a)\;\int_0^1 \int_y^1 \int_0^y f(x,y,z)\, dz\, dx\, dy$

I write in the limits "completely":

$\int^{y = 1}_{y = 0}\int^{x=1}_{x=y}\int^{z=y}_{z=0}f(x,y,z)\,dz\,dx\,dy$

I start with the "outside" limits: $y = 0,\;y = 1$. That is, $y$ ranges from $0$ to $1$.

Code:
```            y
             |
       - - -1+ - - - - -
       :::::|:::::::::
       :::::|:::::::::
       :::::|:::::::::
       ------+---------- x
             |
```

The next limits are: $x = y,\;x = 1$. That is, $x$ ranges between the line $x = y$ and $x = 1$.

Code:
```            y
             |
       - - -1+ - - - * -
             |    /:|
             |  /:::|
             | /:::::|
       ------/-------+-- x
             |      1
```

Now lay this diagram "on the floor"; the z-axis goes straight up.

Code:
```              z
               |
               |
               |
       - - - - *---------- y
             *::*
           *:::::*
         *::::::::*
       * * * * * * *
     /
   x
```

I can't draw this next part... hope you can follow. With no restriction on $z$, we have a triangular prism with the above triangle as a cross-section, extending up and down infinitely. But the limits are: $z = 0$ and $z = y$. That is, $z$ ranges between the "floor" and the slanted plane $z = y$. So our triangular prism is cut off at floor-level below and by the slanted plane $z = y$ above. And that is the solid we are dealing with.

Now if we change the order of integration, the limits are changed. With the first of the answers, we have: $dz\,dy\,dx$. We will try to describe the same solid using this order of limits.

Starting outside: $x$ goes from $x = 0$ to $x = 1$. Then: $y$ goes from $y = 0$ to $y = x$. Then: $z$ goes from $z = 0$ to $z = y$.

So the integral is: $\int_{x=0}^{x=1}\int_{y=0}^{y=x}\int_{z=0}^{z=y}f(x,y,z)\,dz\,dy\,dx$

I hope this helps...
• August 1st 2006, 06:00 PM tttcomrader

Thank you for your explanation. I worked the problem and managed to get the answers for dzdydx, dxdzdy, and dxdydz. However, for dydzdx and dydxdz, I still don't understand why they describe the same solid as the given one: why is $y=x$, $y=z$? Any ideas? KK

• August 1st 2006, 08:32 PM JakeD

Quote: Originally Posted by tttcomrader
Thank you for your explanation. I worked the problem and managed to get the answers for dzdydx, dxdzdy, and dxdydz. However, for dydzdx and dydxdz, I still don't understand why they describe the same solid as the given one: why is $y=x$, $y=z$? Any ideas? KK

As another way of solving these, I work with the inequalities describing the region of integration. Start with the original integral $\int_0^1 \int_y^1 \int_0^y f(x,y,z)\, dz\, dx\, dy$. The inequalities describing the region are
$$0 \le y \le 1, \qquad y \le x \le 1, \qquad 0 \le z \le y.$$
To get the $dy\,dx\,dz$ order you asked for, we work with and rearrange the inequalities. We need the outer limit for $dz$ first. From the first and third inequalities, we have $0 \le z \le y \le 1$, so we can write $0 \le z \le 1$. For $dx$ and $dy$, we have from the second and third inequalities $z \le y \le x \le 1$, so we have for $dx$: $z \le x \le 1$, and for $dy$: $z \le y \le x$. The result is the new set of inequalities
$$0 \le z \le 1, \qquad z \le x \le 1, \qquad z \le y \le x$$
and the integral $\int_0^1 \int_z^1 \int_z^x f(x,y,z)\, dy\, dx\, dz$.

In writing the new inequalities, you can use any implications from the original inequalities. To make sure you haven't left any essential implications out, check that any triple $(x,y,z)$ that satisfies either one set of inequalities also satisfies the other.
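As a postscript (not part of the original thread): the six orders can be checked mechanically in Mathematica with any test integrand. Note that Mathematica nests the later integration ranges innermost, so each range list below reads outer-to-inner.

```
f[x_, y_, z_] := x^2 y + z;  (* arbitrary test integrand, my choice *)
vals = {
  Integrate[f[x, y, z], {y, 0, 1}, {x, y, 1}, {z, 0, y}],   (* dz dx dy, as given *)
  Integrate[f[x, y, z], {x, 0, 1}, {y, 0, x}, {z, 0, y}],   (* dz dy dx *)
  Integrate[f[x, y, z], {z, 0, 1}, {y, z, 1}, {x, y, 1}],   (* dx dy dz *)
  Integrate[f[x, y, z], {y, 0, 1}, {z, 0, y}, {x, y, 1}],   (* dx dz dy *)
  Integrate[f[x, y, z], {x, 0, 1}, {z, 0, x}, {y, z, x}],   (* dy dz dx *)
  Integrate[f[x, y, z], {z, 0, 1}, {x, z, 1}, {y, z, x}]};  (* dy dx dz *)
Equal @@ vals  (* -> True: all six orders give the same value *)
```

All six describe the region $0 \le z \le y \le x \le 1$, which is why the values agree.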
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 55, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219954013824463, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/83150/why-are-jucys-murphy-elements-eigenvalues-whole-numbers/83664
## Why are Jucys-Murphy elements' eigenvalues whole numbers?

The Jucys-Murphy elements of the group algebra of a finite symmetric group (here's the definition in Wikipedia) are known to correspond to operators diagonal in the Young basis of an irreducible representation of this group. As one can see from the Wikipedia entry, all of the elements of such a diagonal matrix (in other words the operator's eigenvalues) are integers. I'm looking for a simple way of explaining this fact (that the eigenvalues are whole numbers). By simple I mean without going into more or less advanced representation theory of the symmetric group (tabloids, Specht modules etc.), so trying to prove the specific formula given in Wikipedia is not an option.

(I'm considering the Young basis as the Gelfand-Tsetlin basis of the representation for the inductive chain of groups $S_1\subset S_2\subset \ldots\subset S_n$, which is uniquely defined thanks to this chain's spectrum's simplicity, not as a set of vectors in correspondence with the standard tableaux.)

In fact, I'm trying to prove the first statement ($a_i\in \mathbb{Z}$) of proposition 4.1 in this article.

- 3 I myself have asked this question to several people to no avail. Not only are the eigenvalues integers; we also have $\prod\limits_{i=-n+1}^{n-1}\left(X-i\right)=0$, where $X=\left(1,n\right)+\left(2,n\right)+...+\left(n-1,n\right)\in\mathbb Z\left[S_n\right]$ is the $n$-th YJM element. I am sure this has a combinatorial proof, probably even a smart elementary induction one - but I had no success whatsoever in finding one over several weeks. – darij grinberg Dec 10 2011 at 23:00 2 I'm sorry for the off topic, but I had to say this. Don't know if you remember, but I know you, me and my twin brother Anton visited you at your home with our parents some 8-10 years ago when we (and you) lived in Karlsruhe. I'm in my 5th year at MSU now =) Quite a coincidence that you were the first person to respond to my first question here. – Igor Makhlin Dec 10 2011 at 23:23 Oh hi! So we meet again. I've just started studying for a PhD at MIT. As you see by the comment I'm doing some algebraic combinatorics, at least as a pastime. Are you, too, or do you need this for some kind of infinite symmetric groups / probability theory? – darij grinberg Dec 11 2011 at 15:03 Well, actually I have been reading this article simply with the purpose of educating myself on the subject of representation theory, which (I guess) is my main field of interest. But there's a lot of combinatorics to it, yes. – Igor Makhlin Dec 11 2011 at 15:55

## 5 Answers

This can be shown using the following two facts:

1. $X_n=(1,n)+(2,n)+\ldots+(n-1,n)$ commutes with any element of $\mathbb Z S_{n-1}$
2. Any irreducible $\mathbb Q S_n$-module $V$ restricts to a multiplicity-free $\mathbb Q S_{n-1}$-module (this follows from the classical branching rule; of course you said you didn't want to use tableaux etc., so I'm not entirely sure whether this is compatible with your idea of "elementary").

The first one implies that $$X_n\in End_{\mathbb Q S_{n-1}}(Res^n_{n-1}V)$$ for any irreducible $\mathbb Q S_n$-module $V$. But, due to the second point and the fact that $\mathbb Q$ is a splitting field for $S_{n-1}$, there is an isomorphism of algebras $$End_{\mathbb Q S_{n-1}}(Res^n_{n-1}V) \cong \mathbb Q \oplus \ldots \oplus \mathbb Q$$ This shows that $X_n$ acts semisimply on $V$ with eigenvalues in $\mathbb Q$.
That the eigenvalues lie in $\mathbb Z$ then actually follows from the fact that $\mathbb Z S_n$ is a $\mathbb Z$-order (elements of $\mathbb Z$-orders have integral characteristic polynomial and therefore their eigenvalues will be integral over $\mathbb Z$ no matter on what module we let them act). The assertion for all JM elements reduces to this (hope this is clear). - 1 A shrewd argument, but I'm not sure this is what Igor is searching for (it is, at least, not what I was searching for). When I asked Pavel Etingof for a proof, he came up with a slightly similar proof (he actually computed the eigenvalues with multiplicities using tableau combinatorics - and that's not more tableau combinatorics than needed to prove the branching rule). This is problem 4.52 (a) in his Introduction to representation theory ( arxiv.org/abs/0901.0827 ). What I am looking for (but may be unrealistic) is a completely elementary argument without tableaux or representations. – darij grinberg Dec 11 2011 at 15:08 Thank you very much, this is the kind of argument I was looking for. The restrictions being multiplicity-free are within my understanding of "simple"; actually this is what I meant by "the chain's spectrum's simplicity" in my question. That fact is a corollary of the centralizer $Z(\mathbb{C}[S_n],\mathbb{C}[S_{n-1}])$ being commutative. – Igor Makhlin Dec 11 2011 at 15:41 Anyway, it would be very nice to have an elementary combinatorial proof of the identity provided by Darij in his first comment. – Igor Makhlin Dec 11 2011 at 15:44 1 Alternate proof that the eigenvalues are integers: They are eigenvalues of a matrix with integer entries, so they are algebraic integers, and you just showed that they are rational. – David Speyer Dec 19 2011 at 15:09 I came up with a more or less elementary proof of the identity from the top comment to my question. It involves nothing more advanced than some basic linear algebra. Namely, let us denote $X_k=\sum\limits_{i=1}^{k-1} (i,k) \in\mathbb{C}[S_n]$; then we are to prove $$\prod\limits_{i=-k+1}^{k-1}(X_k-i)=0$$ for all $1\le k\le n$. For convenience's sake we will also use $X_k$ to denote the linear operator on $\mathbb{C}[S_n]$ of left multiplication by $X_k \in \mathbb{C}[S_n]$. First of all, let us show that $X_k$ is a diagonalizable operator. There are many ways to prove this fact; for example, it is easy to see that $X_k$'s matrix is symmetric in the standard basis consisting of all the elements of $S_n$ (since the matrix corresponding to any $(i,k)$ is obviously such). With the diagonalizability taken into account it is now sufficient to prove that $X_k$'s spectrum is a subset of $\{-k+1,-k+2,\ldots,k-1\}$. Starting with $X_1=0$ we proceed by induction on $k$. Suppose that $\lambda\not\in\{-k,\ldots,k\}$ is an eigenvalue of $X_{k+1}$. $X_{k+1}$ commutes with all of $\mathbb{C}[S_k]$ including $X_k$, which implies that $X_k$ and $X_{k+1}$ are simultaneously diagonalizable. Thus there exists a nonzero $v\in\mathbb{C}[S_n]$ such that $X_{k+1}v=\lambda v$ and $X_kv=\mu v$. Our choice of $\lambda$ together with the inductive hypothesis gives $(\lambda-\mu)\not\in\{-1,0,1\}$, which lets us consider the element $$u=\left(s_k-\frac1{\lambda-\mu}\right)v$$ where $s_k=(k,k+1)$. $u\neq0$, otherwise we would have $s_kv=\frac1{\lambda-\mu}v\implies \lambda-\mu=\pm1$ since $s_k^2v=v$.
Finally $$X_ku=X_ks_kv-\frac1{\lambda-\mu}X_kv=(s_kX_{k+1}-1)v-\frac\mu{\lambda-\mu}v=\lambda s_kv-v-\frac\mu{\lambda-\mu} v=\lambda u$$ where we employ the easily obtainable $s_kX_{k+1}=X_ks_k+1$. However, $\lambda$ being an eigenvalue of $X_k$ contradicts the inductive hypothesis due to our choice of $\lambda$. - "is a an eigenvalue" is a typo. This argument is very much in the spirit of Okounkov-Vershik, but way simpler than what they are doing (then again I might be misunderstanding them; the paper is not among the most readable...). For a representation theorist, it is a very natural argument. – darij grinberg Dec 16 2011 at 3:12 Thanks for the typo. Yes, this proof was definitely inspired by their paper. And yes, as one can see from this question, they made an unexpected choice of statements to be left unproven. – Igor Makhlin Dec 16 2011 at 7:27 I will show that the eigenvalues of $X_{k+1}$ lie in the interval $[-k, k]$. Since Florian has already given a nice proof that the eigenvalues are integers, this answers your question. Lemma: Let $A$ be a symmetric matrix with nonnegative entries whose rows and columns all add up to $k$. Then, for any real vector $v$, we have $-k \langle v,v \rangle \leq \langle v, Av \rangle \leq k \langle v, v \rangle$. Proof: For any vector $v$, we have $$\langle v, Av \rangle = k \sum v_i^2 - \sum_{i<j} A_{ij} (v_i-v_j)^2 \leq k \sum v_i^2 = k \langle v,v \rangle$$ as desired. The equality $\langle v, Av \rangle = -k \sum v_i^2 + \sum_{i<j} A_{ij} (v_i+v_j)^2$ proves the other direction. $\square$ Now, the matrix of $X_{k+1}$ acting on the regular representation clearly obeys the conditions of the lemma. Taking $v$ to be an eigenvector with eigenvalue $\lambda$, we deduce that $-k \leq \lambda \leq k$, as desired. - 2 Nice proof, but I think your $X_k$ is Igor's $X_{k+1}$. – darij grinberg Dec 16 2011 at 3:05 Good catch! Fixed now. – David Speyer Dec 16 2011 at 13:46 It is not an answer, rather a long comment... 1) I am sorry: my previous posts were incorrect, I will correct them below. 2) I would suggest you guys insert the statement and maybe a proof into the Wiki article; it is quite worthwhile, and since it was mainly written by me, imho I might give such a suggestion. The main message is that there is a "certain relation" (described below) between the standard Gelfand-Tsetlin maximal commutative subalgebra in $U(gl_M)$ and the maximal commutative subalgebra in $C[S_M]$ generated by Jucys-Murphy elements. The relation consists of two steps which can be seen as a generalized Schur-Weyl duality and a generalized $gl_M - gl_N$ duality. Both steps involve an intermediate object - the "bending flow" commutative subalgebra in $U(gl_N \oplus ... \oplus gl_N)$ (the sum contains $M$ terms). Briefly speaking, these generalized dualities say that the images in certain representations of these commutative subalgebras coincide. Since I forget some details I would NOT make again the claim that "JM elements go to "quadratic Casimirs"", which might give another (but very long) way to answer Igor's question. I will just simply describe the relation, which might be interesting on its own. Step 1. Generalized Schur-Weyl from JM to "bending flows". (Rather trivial step). Consider $V=C^N \otimes ... \otimes C^N$ ($M$ terms in the tensor product). $C[S_M]$ acts here in a natural way. $U(gl_N \oplus ... \oplus gl_N)$ surjects onto $End(V)$. Since it surjects, we can find certain elements in $U(gl_N \oplus ... \oplus gl_N)$ which are mapped to the JM elements; moreover, we require such elements to be quadratic in the generators of $U(gl_N \oplus ... \oplus gl_N)$, and this fixes these elements. The basic idea is that the permutation operator (12) acting in $C^N\otimes C^N$ is OBVIOUSLY an image of $\sum_{ij} E_{ij}\otimes E_{ji} \in U(gl_N)\otimes U(gl_N)=U(gl_N\oplus gl_N)$ and nothing more than that. By $E_{ij}$ we denote the matrix with $1$ at position $(ij)$ and zeros everywhere else. So we get a certain commutative subalgebra in $U(gl_N \oplus ... \oplus gl_N)$ such that it is "Schur-Weyl dual" to the JM subalgebra, meaning that the images of these subalgebras in $End(V)$ coincide. Such a commutative subalgebra is called "generalized bending flows" or just "bending flows", for the reason commented on below. Step 2. $GL_M-GL_N$-duality from "bending flows" to Gelfand-Tsetlin. (This step is not so trivial). It is mainly due to Flaschka and Millson - section 8 of http://arxiv.org/abs/math.SG/0108191 Consider the vector space $W = S(C^N\otimes C^M) = S(C^N \oplus ... \oplus C^N)$ ($M$ terms in the sum), where $S$ denotes the symmetric algebra of the vector space. The Lie algebras $gl_M$ and $U(gl_N \oplus ... \oplus gl_N)$ act on $W$ in a natural way. Theorem: the images in $End(W)$ of GT and the "bending flows" coincide. In such a form it is Theorem 2, page 9, in our paper: http://arxiv.org/abs/0710.4971 Why the name "bending flows"? If we make similar considerations for $U(so_3 \oplus ... \oplus so_3)$, or more precisely its associated graded Poisson algebra $S(so_3 \oplus ... \oplus so_3)$, we get a (Poisson) commutative subalgebra there. The beautiful fact is that the "JM"-type generators have a very nice geometric interpretation. We can identify $so_3=R^3$ and so elements of $(so_3 \oplus ... \oplus so_3)$ can be seen as $M$-gons in $R^3$. The statement is that if we "bend" the polygon along the non-intersecting diagonals then such flows will be Hamiltonian and will be defined by the JM-type generators in $S(so_3 \oplus ... \oplus so_3)$. Well, I omitted some details and maybe the comment is not so clear; one should draw simple pictures in order to see what is going on. Bending flows were proposed for $S(so_3 \oplus ... \oplus so_3)$ in the paper M. Kapovich, J. Millson, The symplectic geometry of polygons in Euclidean space, J. Differ. Geom. 44, 479–513 (1996). They were generalized further in several papers, in particular in Gregorio Falqui, Fabio Musso, Gaudin Models and Bending Flows: a Geometrical Point of View, J. Phys. A 36 (2003), no. 46, 11655–11676. nlin.SI/0306005 http://arxiv.org/abs/nlin/0306005 - "JM elements will be mapped into quadratic Casimirs". By what? Schur-Weyl duality? How exactly? – darij grinberg Dec 16 2011 at 21:47 Sorry to say, but I don't understand any of the sentences of your post. I am not used to physicists' terminology at all, but not even the maths part is clear to me. Feel free to explain in Russian if it is easier for you; both Igor and me understand it. – darij grinberg Dec 16 2011 at 21:48 Well, I'm trying to wrap my mind around fact 1. Saying that the images of the GT subalgebras in $\mathbb{C}[S_n]$ and $U(\mathfrak{gl}_k)$ in $\mathrm{End}(\mathbb{C}^k\otimes\mathbb{C}^n)$ coincide, you're surely considering some isomorphism between $\mathbb{C}^k\otimes\mathbb{C}^n$ and $\mathbb{C}^k\oplus\ldots\oplus\mathbb{C}^k$. Which one is it?
Since I'm pretty sure that the natural $e_i\otimes e_j\to e_{j,i}$ wouldn't do the trick: in this basis the matrices corresponding to $\mathfrak{gl}_k$ will all be block diagonal ($n$ blocks $k\times k$), while $X_k$'s matrix won't be such. – Igor Makhlin Dec 17 2011 at 14:18 Less formally, fact 1 seems pretty strange to me, since $S_n$ acts only on the second component of a tensor from $\mathbb{C}^k\otimes\mathbb{C}^n$, while $\mathfrak{gl}_k$ only on the first one. This is if we're applying the natural isomorphism mentioned above. – Igor Makhlin Dec 17 2011 at 14:33 AFAIU, for fact 1 to hold it will have to be an isomorphism mapping (any two) GT bases in $\mathbb{C}^k\otimes\mathbb{C}^n$ and $\mathbb{C}^k\oplus\ldots\oplus\mathbb{C}^k$ into each other, because our images in $\mathrm{End}$ are the endomorphisms diagonal in these bases. What I'm asking is whether this is the way you initially define your isomorphism. – Igor Makhlin Dec 17 2011 at 14:48 An awfully sophisticated proof for the fact :) Just to relate it with the question about the Knizhnik-Zamolodchikov equation: http://mathoverflow.net/questions/95183/find-polynom-pz-with-values-in-cs-n-such-that-pz-sum-i-id1i-z-i Consider the following KZ ODE: $p'(z) = \sum_{i=2...n} \frac{ Id + \pi( (1i) )}{z-z_i} p (z)$ As discussed in the MO question above, it is known to have a polynomial solution. The residue at infinity is equal to $Res=-\sum_{i=2...n} { Id + \pi( (1i) )}$, which is our beloved JM element up to sign and $n\cdot Id$. Hence its eigenvalues must be non-positive integers (this is obvious since at infinity the solution looks like $(1/z)^{Res}$, so in order to be polynomial in $z$ they must be non-positive integers). Hence we are done. Moreover, we get that the eigenvalues are greater than or equal to $-n$ (as David Speyer proved directly above). -
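As a numerical sanity check of these claims (an experiment, not a proof), one can build the matrix of left multiplication by $X_n$ on the regular representation $\mathbb{C}[S_n]$ and inspect its spectrum. A small Python sketch; the 0-based indexing and the left-composition convention are choices of this illustration, not something fixed by the thread:

```python
import numpy as np
from itertools import permutations

n = 4
perms = list(permutations(range(n)))
index = {p: i for i, p in enumerate(perms)}

def transposition(i, j):
    t = list(range(n))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

# Matrix of left multiplication by X_n = (1,n) + (2,n) + ... + (n-1,n)
# on C[S_n] in the basis of group elements.
M = np.zeros((len(perms), len(perms)))
for p in perms:
    for i in range(n - 1):
        t = transposition(i, n - 1)              # 0-based version of (i+1, n)
        tp = tuple(t[p[k]] for k in range(n))    # composition t o p
        M[index[tp], index[p]] += 1.0

# M is symmetric (each transposition is an involution), so eigvalsh applies.
eigs = np.linalg.eigvalsh(M)
print(sorted(set(np.round(eigs, 6))))  # integers in {-(n-1), ..., n-1}
```

For $n=4$ this prints integer eigenvalues between $-3$ and $3$, consistent with the spectrum bound proved above and with Darij's product identity from the first comment.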
http://math.stackexchange.com/questions/108140/recurrence-relation-with-non-constant-coefficients?answertab=oldest
# recurrence relation with non-constant coefficients I'm trying to solve a second order differential equation and I got a recurrence. Can someone help me solve $$n(n-1+q)a_{n}-a_{n-3}+e\cdot a_{n-2}=0$$ where $q$, $e$, and $a_{0}$ are some real numbers with $a_{1}=0$ and $a_{2}=-e\cdot a_{0}/(2(1+q))$. - 3 Perhaps you should edit your question so that it looks a little cleaner. Some $a_0$'s just floating in there – Patrick Da Silva Feb 11 '12 at 14:51 1 Recurrences like this don't always have neat solutions. One approach is to calculate the next few terms to see whether there is a pattern. – Gerry Myerson Feb 12 '12 at 0:59 I actually calculated the first six terms; the dependence on $n$ and $q$ clears up, but it remains unclear in $e$ – mohamed benbitour Feb 12 '12 at 10:13 Maybe if you edit into your question what you have found out about $n$ and $q$, someone will be able to help you with $e$. – Gerry Myerson Feb 13 '12 at 3:55 I'm very happy because I found a solution in "Combinatorics Function technics" hfa1.physics.msstate.edu/064.pdf – mohamed benbitour Feb 15 '12 at 13:49
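Gerry Myerson's suggestion of computing the first several terms is easy to automate. A minimal sketch (assuming $q$, $e$, $a_0$ are given real numbers with $1+q\neq 0$ and $n(n-1+q)\neq 0$ for all $n\geq 3$, and $N \geq 2$):

```python
def coefficients(a0, q, e, N):
    """First N+1 coefficients of the series solution.

    Uses a_1 = 0, a_2 = -e*a0/(2(1+q)) and the rearranged recurrence
    a_n = (a_{n-3} - e*a_{n-2}) / (n*(n-1+q))  for n >= 3.
    """
    a = [a0, 0.0, -e * a0 / (2.0 * (1.0 + q))]
    for n in range(3, N + 1):
        a.append((a[n - 3] - e * a[n - 2]) / (n * (n - 1 + q)))
    return a

# Example run with arbitrary illustrative parameter values:
print(coefficients(a0=1.0, q=0.5, e=2.0, N=8))
```

Printing the terms for a few values of $e$ is a quick way to look for the pattern the comments discuss.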
http://math.stackexchange.com/questions/209344/how-to-prove-that-c-mathbbr-does-not-have-the-heine-borel-property
# How to prove that $C(\mathbb{R})$ does not have the Heine-Borel property The space $X=C(\mathbb{R})=\{f:\mathbb{R}\to\mathbb{C}: f \text{ is continuous}\}$ is metric (not normed) and a Frechet space. I want to show that this space does not satisfy the Heine-Borel property (which means that any closed and bounded subset of $X$ is compact) I feel like the collection $\{\exp(2\pi inx):n\in\mathbb{N}\}$ is a suitable candidate for a counterexample, since it is clearly bounded. How can I show that this set is closed, but not compact in the topology generated by the metric? - 1 What metric are you using? – Chris Eagle Oct 8 '12 at 17:08 1 Or if it’s easier to describe, what countable family of seminorms are you using? – Brian M. Scott Oct 8 '12 at 17:16 If you show the sequence has no convergent subsequence, this will show it is closed (because it has no limit points) and not compact (by Bolzano-Weierstrass). – Nate Eldredge Oct 8 '12 at 17:25 I’m going to guess that $\|f\|_k=\sup_{x\in[-k,k]}|f(x)|$. – Brian M. Scott Oct 8 '12 at 17:26 ## 1 Answer I’m going to guess that $\|f\|_k=\sup_{x\in[-k,k]}|f(x)|$. If $m<n$, then $$\exp(2\pi inx)-\exp(2\pi imx)=\exp(2\pi imx)\Big(\exp(2\pi i(n-m)x)-1\Big)\;,$$ which at $x=\frac1{2(n-m)}$ is $-2\exp\left(\frac{\pi im}{n-m}\right)$. Let $f_n(x)=\exp(2\pi inx)$; then $\|f_n-f_m\|_k=2$ for all $k\in\Bbb Z^+$, and $\|f_n-f_m\|_0=0$, so $$d(f_n,f_m)=\sum_{k\ge 1}\frac{\|f_n-f_m\|_k}{1+\|f_n-f_m\|_k}2^{-k}=\frac23\sum_{k\ge 1}2^{-k}=\frac23\;.$$ Clearly $\langle f_n:n\in\Bbb N\rangle$ can have no Cauchy subsequence and hence no convergent subsequence. - It is simpler to just pick any non-zero function with support contained in $[0,1]$ and consider the set of its integer translates. – Mariano Suárez-Alvarez♦ Oct 8 '12 at 18:08 @MarianoSuárez-Alvarez This is boring and not challenging :) Brian M. Scott (+1). – Norbert Oct 8 '12 at 18:59
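To see the constant $\frac23$ emerge concretely, here is a small numerical sketch of Brian's computation (it assumes the seminorms $\|f\|_k=\sup_{x\in[-k,k]}|f(x)|$ that he guessed; the grid-based supremum and the truncated series are only approximations):

```python
import numpy as np

def seminorm(f, k, grid=20001):
    x = np.linspace(-k, k, grid)
    return np.abs(f(x)).max()          # approximates sup over [-k, k]

def dist(f, g, K=40):
    # truncation of d(f,g) = sum_{k>=1} 2^{-k} ||f-g||_k / (1 + ||f-g||_k)
    total = 0.0
    for k in range(1, K + 1):
        s = seminorm(lambda x: f(x) - g(x), k)
        total += 2.0 ** (-k) * s / (1.0 + s)
    return total

f = lambda n: (lambda x, n=n: np.exp(2j * np.pi * n * x))
print(dist(f(3), f(5)))  # approximately 2/3, for any distinct n, m
```

Since $\|f_n-f_m\|_k=2$ for every $k\ge1$, each term contributes $2^{-k}\cdot\frac23$, so the printed value is close to $\frac23$ regardless of which distinct frequencies are chosen, matching the answer.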
http://math.stackexchange.com/questions/tagged/markov-chains+reference-request
# Tagged Questions 1 answer, 22 views ### Time Periodic Homogeneous Markov Chain I want to find a textbook or survey article reference with a treatment of discrete-time, inhomogeneous, yet time periodic, Markov chains on finite state spaces. Elaboration: I have an inhomogeneous ... 0 answers, 21 views ### Transition matrix after lumping Markov Chain? I managed to partition the set of states $P=\{A_1,A_2,...\}$ so that they satisfy the lumpability criterion (for all states in $A_i$ the sum of the outgoing rates to the states in the target partition ... 0 answers, 119 views ### Boundedness of expected reward Markov chain (may be related to discrete $M/M/\infty$ queue) [EDIT]: I read a bit on the $M/M/\infty$ queue and it may not be the right comparison and my notation may be confusing (I'm in discrete time and $\lambda,\mu$ look like rates when they are probabilities). ... 0 answers, 36 views ### Efficient random number generation for sojourn times in semi-Markov processes I'm doing a self-study of semi-Markov processes and was wondering if there are efficient methods for generating random numbers for sojourn times. For example, generating a bunch of random numbers from ... 0 answers, 23 views ### Name for maximum transition probability Let $p(x,y)$ denote the transition probability of a Markov chain. Similarly, let $p^n(x,y)$ be the n-step transition probability. My question is, is there a formal name for $S(x,y):=\sup_n p^n(x,y)$. ... 0 answers, 24 views ### Books about Markov Models I am looking for books on Markov chains, with recent findings such as autoregressive HMM, HMM with inputs, multiple HMM connected together. Is there anything I can look at? 0 answers, 40 views ### Markov chain supplementary literature I'm studying Markov chains through Durrett and I'm finding it quite hard to read. Does anyone have a good idea for a supplementary book, preferably one on his level of generality and which studies some ... 1 answer, 736 views ### Markov Models simple introduction We're studying Markov models (still at the basics: transitory states, periodic states, etc.) but the professor isn't very good at teaching and I feel I'm getting lost soon. I'd love to have a simple ... 0 answers, 130 views ### Does every continuous time minimal Markov chain have the Feller property? Consider a Q-matrix on a countable state space. (A Q-matrix is a matrix whose rows sum up to $0$, with nonpositive finite diagonal entries and nonnegative offdiagonal entries). As explained for ... 0 answers, 45 views ### Green kernel of Hunt processes Let $X$ be a regular Hunt process on $R^+$ starting at $x$ and $T_y :=\inf\{t>0: X_t=y\}, y\in R^+$. $G_q(\cdot,\cdot), q >0$ denotes the Green kernel of the process $X$. We have the following ... 0 answers, 145 views ### Potential theory: discrete-time Markov processes Recently I've found lecture notes on "Analysis on Graphs" where the potential theory methods were used to study discrete-time, time-reversible Markov chains (i.e. the state space is countable). ... 1 answer, 120 views ### Steady distribution for the reflected random walk Let us consider the state space being $0,1,\dots,M$ for some $M\in \mathbb N$ and put there $N$ walkers: $$X = (X_1,\dots,X_N).$$ Each of the walkers moves independently; they can be in the same ... 1 answer, 79 views ### Shuffling cards and the horseshoe map I wonder if there is a connection between the dynamics of repeated cut & shuffle operations on a deck of cards, and topological chaotic maps such as the horseshoe map? I ask this entirely naively. ... 2 answers, 167 views ### "Small sets" in Markov chains I came across a definition for a "small set" (of the state space) $A \subset \Omega$: there exists a $\delta > 0$ and a measure $\mu$ such that $p^{(k)}(x, \cdot) \geq \delta \mu (\cdot)$ for every ... 1 answer, 451 views ### First time passage decomposition for continuous time Markov chain For a discrete-time finite Markov chain, the first passage time $T_j$ to visit state $j$ is determined from the recurrence equation: $p^{(n)}_{ij} = \sum_{k=0}^n f_{ij}^{(k)} p^{(n-k)}_{jj} = \ldots$ ... 3 answers, 199 views ### From a deterministic discrete process to a Markov chain: conditions? When will a probabilistic process obtained by an "abstraction" from a deterministic discrete process satisfy the Markov property? Example #1) Suppose we have some recurrence, e.g., $a_t=a^2_{t-1}$, ... 2 answers, 783 views ### Nice references on Markov chains/processes? I am currently learning about Markov chains and Markov processes, as part of my study on stochastic processes. I feel there are so many properties of Markov chains, but the book that I have makes ... 5 answers, 1k views ### Good introductory book for Markov processes Which is a good introductory book for Markov chains and Markov processes? Thank you.
http://quant.stackexchange.com/questions/4247/constructing-a-minimum-variance-portfolio?answertab=votes
# constructing a minimum variance portfolio Assume a US-based company has sold something to a Norwegian company. It will receive 1M Norwegian Kroner in two months, and would like to hedge this future cash flow against currency exchange risk. It's not possible to hedge Kroner and Dollars directly, but the company can hedge with Euros, since Euros behave similarly to Kroner. The following data is given: exchange rates: $0.164$ \$/Kroner (denote this rate by $K$); $0.625$ \$/€ (denote this rate by $M$); $0.262$ €/Kroner. Correlation coefficient between $K$ and $M$: $\sigma_{K,M}=0.8$. Standard deviations: $\sigma_{M}= 3\%$, $\sigma_{K}=2.5\%$ per month. I am confused since we actually don't have two assets for which we have to find the optimal weights in the portfolio; we have only one asset, which is the futures contract between Euros and Dollars. Can someone show me how I work with this data, and explain the solution? - Okay thanks, but that is not part of the exercise! Let's assume there is no such future! -Marie – Marie. P. Oct 3 '12 at 13:03 ## 2 Answers Strictly speaking, this is a proxy hedging problem. You have to hedge one currency with another. The one period covariance matrix is assumed to be $$\Sigma_{1}=\left[\begin{array}{cc} 0.03 & 0\\ 0 & 0.025 \end{array}\right]\left[\begin{array}{cc} 1 & 0.8\\ 0.8 & 1 \end{array}\right]\left[\begin{array}{cc} 0.03 & 0\\ 0 & 0.025 \end{array}\right]$$ So you can calculate the two period covariance matrix as $$\Sigma_{2}=2\Sigma_{1}$$ By your assumptions, you have a portfolio that is effectively $-100\%$ short the Krona. You could set this up as an optimization problem to find $$w\equiv argmin\left\{ \left(w-w_{b}\right)'\Sigma_{2}\left(w-w_{b}\right)\right\}$$ where $w_{b}\equiv\left[\begin{array}{c} 0\\ -1 \end{array}\right]$ and constrain the weights so that $w$ makes no investment in the Krona. However, in this case, you can do it more simply. You can write out the portfolio variance (using values from $\Sigma_{2}$) as $$\sigma^{2}=w_{1}^{2}\sigma_{1}^{2}+w_{2}^{2}\sigma_{2}^{2}+2w_{1}w_{2}\sigma_{1,2}$$ and since $w_{2}$ is already known to equal $-100\%$ you can minimize that directly to find $$w_{1}=-w_{2}\frac{\sigma_{1,2}}{\sigma_{1}^{2}}$$ which works out to $w_{1}=\frac{2}{3}$. Then you would just convert that weight into an amount of euros to buy. - nice answer I knew I wasn't getting the portfolio weights right. – jeffery_the_wind Oct 3 '12 at 14:41 The reason for hedging the exchange risk is that the deal was made given the current exchange rate, and the company wants to make sure that they receive the same amount of dollars (or more) in 2 months that they agreed to now. Assuming that the exchange rates stay the same, when they get paid: `K 1M = $164,000 or €262,000` The variance that you want to minimize is the \$/K variance, which is the variance of our future cash flow to be paid in 2 months. We are saying that we can't just buy the NOK/USD futures, and since the Euro is highly correlated with Kroners, we can use Euros. Remember you also have dollars, so you actually have 2 assets. If you buy all Euros then you are exposed to the risk of the Dollar gaining on the Euro; if you just keep all dollars, you are exposed to the Euro (and Kroner) gaining on the dollar. So I think the minimum variance of your future cash flow will be attained by buying some combination of dollars and euros. -
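For readers who want to reproduce the numbers, here is a small numeric sketch of the accepted answer's calculation. The covariance construction and the hedge ratio follow the answer; the final conversion into euros is my own illustration of "convert that weight into an amount of euros to buy", not something spelled out above:

```python
import numpy as np

sigma_M, sigma_K, rho = 0.03, 0.025, 0.8      # monthly vols and correlation

# One-month covariance matrix, scaled to two months (independent months).
S = np.diag([sigma_M, sigma_K])
Sigma2 = 2.0 * S @ np.array([[1.0, rho], [rho, 1.0]]) @ S

w2 = -1.0                                      # 100% short the Krona exposure
w1 = -w2 * Sigma2[0, 1] / Sigma2[0, 0]         # minimum-variance hedge weight
print(w1)                                      # 2/3

# Illustrative conversion: hedge 2/3 of the $164,000 exposure with euros.
usd_exposure = 1_000_000 * 0.164
print(w1 * usd_exposure / 0.625)               # about 174,933 euros
```

The ratio $\sigma_{1,2}/\sigma_1^2 = (0.8\cdot0.03\cdot0.025)/0.03^2 = 2/3$ is unchanged by the two-month scaling, since the factor of 2 cancels.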
http://motls.blogspot.com/2013/02/a-pro-firewall-paper.html?m=1
# The Reference Frame Our stringy Universe from a conservative viewpoint ## Monday, February 25, 2013 ### A pro-firewall paper The black hole firewall saga has continued with several new papers. Since the last blog entry about the topic, three Japanese authors proposed something that they call a self-consistent model of the black hole evaporation, probably without any firewalls. Because a starting point is semiclassical gravity, the paper can't be self-consistent, however. Rodolfo Gambini and Jorge Pullin propose that loop quantum gravity "solves" the firewall problem by producing some new degrees of freedom. They extend the LQG algebra to a Lie algebra. I guess other LQG proponents won't like such a heretical modification but one must realize that in LQG, one may add, modify, or erase any degrees of freedom and any terms in the constraints and equations of motion because they're completely ill-defined and arbitrary and they don't change the quality of the theory because of the GIGO principle (garbage in, garbage out). These adjectives must be considered on top of the fact that regardless of the choices, LQG is inconsistent and amazingly dumb, too. At any rate, an attempt to find new degrees of freedom in a theory is very modern – and I would say stringy. Kudos to the authors for that. Today, there's a new pro-firewall paper. Steven G. Avery and Borun D. Chowdhury explicitly try to disprove some recent anti-firewall papers, especially the excellent Papadodimas-Raju paper (on firewall considerations and doubled operators in the context of AdS/CFT) as well as the Harlow-Hayden claim that speed limitations on quantum computers hold and are essential to save us from firewall paradoxes. Firewalls in AdS/CFT I was trying to read the paper but it just doesn't click. They're trying to offer lots of ambitious claims but I can't see any evidence backing these claims so far. Can you help me? The basic claim of these two authors is that AMPS is right and firewalls have to exist because the AdS/CFT dual of a thermal state in the CFT is a firewall. That's great and understandable. However, I was trying to find out why they think so and I just can't see anything rational in that paper that would explain their opinion. These authors must be thanked for noticing that a basic criticism of the firewall meme is that $${\mathcal A}={\mathcal C}$$, approximately: the degrees of freedom in the radiation and in the black hole interior aren't "two wives" that would violate the monogamy rule for quantum entanglement because they're – partly or entirely – the same woman! However, Avery and Chowdhury don't like this observation – which has been at the heart of the black hole complementarity from the beginning. Nevertheless, they seem to misunderstand or overlook all the other important insights that have been unmasked in the context of the AMPS research and, equally importantly, all the arguments meant to support their conclusion that there has to be a firewall at the horizon of the bulk dual of a thermal state seem vague, dull, and emotional to me. I just don't see any real arguments. Instead, what I see are numerous repetitions of the word "bizarre". So they argue against the resolutions in the following way: Taking the next logical step, we let an arbitrary system play the role of the early radiation. In other words, we imagine coupling a source/sink to the CFT, allowing them to equilibrate and thereby become entangled, and then decouple them. Next, we couple a source to the CFT to create an infalling observer.
The $${\mathcal A}={\mathcal C}$$ argument in this case would mean that the degrees of freedom of the source/sink that purifies the CFT are available to the infalling observer, allowing her free infall. Since the systems are decoupled, this seems to be a bizarre state of affairs given that we are talking about arbitrary (decoupled) systems giving universal free infall. We thus conclude that the dual to a thermal state in the CFT is a firewall! Well, the individual parts of the black hole spacetime never quite decouple and never become quite independent so the toy model that they study is self-evidently not equivalent to the case of a real black hole – one that is formed, one that adopts infalling observers, and that later evaporates. Whatever they derive about this system can't be trusted to be a valid conclusion about a full-fledged, genuine black hole. To assure you that the word "bizarre" really appears thrice in the paper, here are the remaining two copies: We simply do not have the other CFT's degrees of freedom that are necessary for free infall. The equivalent of Papadodimas-Raju and Harlow-Hayden argument would be that the degrees of freedom of $${\mathcal H}_S$$ (which is the equivalent of $${\mathcal H}_A$$ for the evaporating branes) nevertheless come into play. However, given that we have decoupled the source/sink from the CFT this seems rather bizarre. Furthermore, the CFT could have been thermalized by an arbitrary system which may not be described by a CFT at all. It seems rather bizarre that an observer falling into the CFT system would still be able to access the degrees of freedom of HS universally, irrespective of the properties of the latter system. [Said differently...] And so on. One question is whether firewalls are forced upon us by a valid argument. I think that the answer is No because the black hole interior may always be viewed as a "dead end extrapolation" of the degrees of freedom outside the black hole and whatever the observers measure inside a black hole will never get out so these measurements simply can't lead to any contradictions, whatever their results are. But even if I imagined myself to be agnostic about the existence of firewalls, I think that I would still find the logic of the text above incomprehensible – a polite word for "fallacious". They study a toy model in which they manually replace the CFT by something entirely different and then they claim that it's bizarre because the CFT is replaced by something general. But everything they call "bizarre" was created by themselves so why do they complain about it? In the case of AdS/CFT black holes, all the dynamics anywhere in the spacetime is encoded in a CFT. If someone claims that string theory, i.e., quantum gravity in an AdS space, reconciles all the requirements nicely without any firewalls, he is only making a statement about the way the degrees of freedom in a CFT may be picked, evolved, and interpreted. Such a defender of the consistency of quantum gravity without firewalls clearly makes no statement about non-quantum-gravity theories that have something else instead of a CFT. In fact, he will probably agree that the consistency of quantum gravity is such a delicate and sensitive feature that if you modify almost anything about it, the whole structure will become inconsistent. So if you find a bizarre feature of such "mutated" theories, it's your personal problem, surely not a problem of quantum gravity or defenders of its consistency (without firewalls)!
Also, these authors try to demonize nonlocality of any kind. It seems obvious – and I think that all sensible experts as of 2013 agree – that some nonlocality is present when black holes evaporate. This is needed to avoid Hawking's semiclassical conclusion that the information simply can't get out so the evolution of the initial star to the final Hawking radiation has to be non-unitary. In reality, the nonlocality encountered in these situations is tiny and exponentially tiny nonlocal effects are enough to reconcile all the principles. However, these folks – and I think it's not just Avery-Chowdhury but also the authors of AMPS and probably others – seem to work in some "yes/no" dogmatic way. When some nonlocality is needed, they say "everything has gone awry" and they immediately make huge claims. What they forget is, among other things, that the existence of a firewall requires a huge violation of locality, too. In fact, the violation of locality and causality needed to produce a firewall at the place of the event horizon is much larger than the nonlocality believed by Raju-Papadodimas, by me, and by many others. The claim that the nonlocality imposed upon us is tiny is the very main point of the Raju-Papadodimas paper and many others! The firewall proponents seem to use perfect locality of some sort as their main motivation or argument (that's also why Avery and Chowdhury think that the "completely decoupled heat bath" is a valid model of a black hole, after all; in the real world, the interior's non-decoupling from the exterior and from the radiation is a key principle and the essence of the black hole complementarity expressed in different words) but then they derive, using their assumptions, that the locality is actually violated brutally (by the existence of the firewalls) and they don't seem to care that their conclusions are inconsistent with their assumptions which means that their "whole theoretical framework" is inconsistent gibberish. Again, the firewall defenders seem to (correctly) conclude that the black holes can't preserve all the quantum-information principles with perfect locality – so they immediately make the jump and conclude that the nonlocality must be so huge that it doesn't allow you to enter the black hole interior at all (at least not if you want to stay alive). But they actually never prove anything of the sort. In fact, they never really define what a "firewall" is and they never quantify how strong its effects are actually supposed to be. They're entering some bizarre "fundamentalist", Yes/No discourse. I am just not getting it. When it comes to firewalls, they don't seem to be thinking as physicists at all. Let me copy the rest of a paragraph I have already posted: [...of the latter system.] Said differently the evolution of a perturbation created with support on the CFT beyond the horizon depends not only on how the CFT is entangled with some other system but also on the Hamiltonian of the combined system. We thus conclude that for generic System 2 the infalling observer hits a firewall. This is shown in Figure 6. We learn that "We thus conclude [something]". That's nice that they "conclude" something except that "something" doesn't follow from the previous sentences and it isn't even well-defined.
In principle, the evolution of any degrees of freedom in a black hole spacetime may depend on any other degrees of freedom – there is room for some nonlocality – except that we must always ask how strong the nonlocality is, how it can be parameterized, whether it may be operationally measured, what variables may parametrically suppress it, and so on. Apologies for this analogy but their approach to the "firewalls are real" claim is analogous to the climate alarmists' approach to the "global warming is real" meme. The content of the sentence isn't well-defined. It apparently doesn't even have to be well-defined and whatever its meaning is, the claim doesn't have to be rationally justified, anyway. It's just not science I can understand – in my understanding, science is composed of propositions about things that are in principle observable by a well-defined protocol and links between such propositions justified by flawless logic. Their convergence towards the sentence "We conclude that there are firewalls" looks pretty much isomorphic to this well-known cartoon: "Step one. Step two: here a miracle happens. Now, we conclude that there are firewalls." Very nice but could you please be more explicit in Step two? ;-) I have my doubts about certain aspects of the Papadodimas-Raju claims and constructions as well but I feel that it's due to some localized misunderstandings of mine – and perhaps less likely, localized mistakes by them – and these problems could be "perturbatively fixed" (which may or may not change the major conclusions). However, when I read the paper by Avery-Chowdhury, I just don't recognize it as rational thinking of the type I know. There's no place for me to start. I understand what they want to conclude but I can't find any calculations or analyses of anything that could possibly be related to these things. #### 3 comments: 1. Jose Frugone Proud to see a paper from my former promotor (Gambini) discussed in many forums. Don't be so hard on their approach. Their result is interesting even if they don't use the approach you prefer. He is from Uruguay and Pullin from Argentina. 2. Hi Jose, I know very well, in the case of Jorge who is somewhat regularly sending me some funny internet banalities, usually new Rube Goldberg machines. ;-) 3. anony fwiw...
http://www.physicsforums.com/showthread.php?t=618122
Physics Forums ## Find Max Compression of a Spring A 0.50 kg block is pushed against a 400 N/m spring, compressing it 22 cm. When the block is released, it moves along a frictionless horizontal surface and then up an incline (which has friction). The angle of the incline is 37 degrees and the coefficient of kinetic friction is 0.25. Use the conservation of energy law to find the maximum compression of the spring when the block returns to it. Wother = ΔKE + ΔUg + ΔUel KE = 0.5 * m * v^2 Ug = m*g*y Uel = -0.5 * k * compression^2 This is actually a multi-step problem and it is the last part that I am stuck on. I have found the speed of the block just after it leaves the spring (6.2 m/s), the distance up the ramp that the block travels (2.47 m), and the height of the ramp where the block stops and begins to slide down again (1.48 m). To find the maximum compression of the spring when it slides back down the ramp, I am trying to find the compression variable from the Uel equation, correct? I can solve for ΔUg, plug in k for the ΔUel equation and leave the compression as the variable I am trying to find. But for the ΔKE I don't know how to find the velocity. It is not the same as I found for when the block leaves the spring initially, right? Recognitions: Gold Member Homework Help Science Advisor Right. So what is the initial KE on its way down the slope, and what is its final KE when it compresses the spring and comes to a temporary stop? If the velocity is 0, then the KE is 0. So the initial KE when the block is on its way down the slope is 0 because at the top of the slope the block has stopped to change direction. Is the final KE also 0 because it is temporarily stopped there? Recognitions: Gold Member Homework Help Science Advisor ## Find Max Compression of a Spring Quote by Twiggy92 If the velocity is 0, then the KE is 0. So the initial KE when the block is on its way down the slope is 0 because at the top of the slope the block has stopped to change direction. Is the final KE also 0 because it is temporarily stopped there? Yes, correct. On the way up: KE_up = mgh + F_f x. On the way down: KE_down = mgh - F_f x. Recognitions: Gold Member Homework Help Twiggy92, It's ok to work with the KE at the bottom of the slope. But you don't have to. To see the full power of energy concepts, you can skip the KE. Let point A be the initial point where the mass is compressed against the spring a distance $x_o$. Let point B be the point where the mass reaches its highest point on the incline, and let point C be the final point where the mass is compressed against the spring a distance $x_f$. Use Wother = ΔKE + ΔUg + ΔUel with point A as initial point and point B as final point. Note that KE is zero at both of these points. See if you can get the equation into the form $\frac{1}{2}kx_o^2 = mgd\sin\theta + \mu_k mgd\cos\theta$ where $d$ is the distance traveled along the slope. Then apply Wother = ΔKE + ΔUg + ΔUel using B as the initial point and C as the final point. See if you can get $\frac{1}{2}kx_f^2 = mgd\sin\theta - \mu_k mgd\cos\theta$ If you then divide the second equation by the first, you can get an expression for $x_f^2 / x_o^2$. The result can be simplified by cancelling out common factors of $m$, $g$, and $d$.
However, if you feel more comfortable breaking up the problem into more parts by finding the KE at the bottom of the ramp going up and then going down, then go with that method. I have the same problem for my physics class, must be using the same book or something. For this problem I got a different value for how far it goes up the slope. I got 0.885 m; however, I'm not sure this is right. I went through my logic multiple times and I still come up with my answer instead of yours. As for the velocity I got the same result. The compression when the object comes back down the slope is where I get lost. I'm not sure how to set up the conservation of energy theorem. Recognitions: Gold Member Homework Help Setting up the energy equation for coming back down is very similar to setting it up for going up the slope. See azizlwl's post. It would help if you could give us more detail on how you are setting up your equations. I think Twiggy92's result of 2.47 m for the distance traveled up along the slope is correct.
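Following TSny's two energy equations, the whole thread can be checked numerically. A short sketch (taking g = 9.8 m/s² as an assumption; the printed values reproduce the numbers quoted above, plus the final compression):

```python
import numpy as np

k, x0, m = 400.0, 0.22, 0.5                  # N/m, m, kg
theta, mu, g = np.radians(37.0), 0.25, 9.8

# Way up:   (1/2) k x0^2 = m g d sin(theta) + mu m g d cos(theta)
d = 0.5 * k * x0**2 / (m * g * (np.sin(theta) + mu * np.cos(theta)))

# Way down: (1/2) k xf^2 = m g d sin(theta) - mu m g d cos(theta)
xf = np.sqrt(2.0 * m * g * d * (np.sin(theta) - mu * np.cos(theta)) / k)

v = np.sqrt(k / m) * x0                      # speed just after leaving the spring
print(v, d, d * np.sin(theta), xf)           # ~6.2 m/s, ~2.47 m, ~1.48 m, ~0.156 m
```

So the maximum compression on the return is about 0.156 m, i.e., $x_f = x_o\sqrt{(\sin\theta-\mu_k\cos\theta)/(\sin\theta+\mu_k\cos\theta)}$, exactly the ratio TSny's division of the two equations produces.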
http://physics.stackexchange.com/questions/51240/is-all-matter-made-of-virtual-particles?answertab=active
# Is all matter made of virtual particles? This article in New Scientist says that all matter is actually virtual particles popping in and out of existence and nothing more. Is this correct? - 3 It is much, much more complicated than stated in the article. So much that the main article line is completely wrong. – Misha Jan 15 at 5:16 4 A disturbingly large proportion of that article is either wrong or grossly misleading, from the title, to the closing sentence "So if the LHC confirms that the Higgs exists, it will mean all reality is virtual" – twistor59 Jan 15 at 7:35 Can I get this right then: in quantum field theory, electrons and quarks are not completely virtual particles? – lee hudson Jan 15 at 12:11 1 There is no sharp dividing line between a virtual particle and a real particle; it's not true that every particle is clearly one or the other. However, calling the electrons and nuclei that make up a physical object virtual is really very misleading. – Peter Shor Jan 15 at 16:56 But Peter, we are told that it's unknown if virtual particles exist? If they don't, that means normal particles aka field quanta don't exist because virtual and not virtual are the same? – lee hudson Jan 15 at 17:57 ## 1 Answer The terminology "virtual particle" comes from quantum field theory. Note the third word in QFT, theory. Theory means that it is a mathematical model for calculations which will, if the theory is valid, describe concrete measurements and behaviors of physical reality. The basic building block of QFT is the Feynman diagram: a mathematical prescription that will give integrals which will describe the cross sections and angular distributions and phases in experiments with elementary "particles". By "particles" we refer to the standard model particles assumed to be elementary. Some of the lines that you see in this image are called virtual particles and some are called on mass shell particles. The two electron lines are on mass shell; the gamma, the quarks and the gluon are virtual, in this diagram. It is incomplete because although it has on mass shell particles as input, the right side has only virtual ones. The following decay of a Kaon shows real input and real output particles, i.e. particles we can measure with our instruments and get their lifetimes, their cross sections etc. and check against the theory as exemplified by the Feynman diagrams. We observe that the input is a pair of quarks and the output is three pairs of quarks. It is the pairs that are real. The quarks, the gluon, the W exchanged are virtual, i.e. off mass shell particles: with the quantum numbers from the table of SM particles but an off shell mass. Now one can say that the K meson is composed of virtual particles if one makes a one to one identification of the guess, a good guess, of how the Kaon is built up, but the Kaon itself is real, as are the pions which are the end products. Virtual particles are a very useful mathematical tool that simplifies our view of particle interactions, that is all. In some sense asking "is everything virtual" is like entering a clock shop with its hundreds of clocks and saying "are all clocks gears?". Clocks are made up of gears but are much more than gears. Thus protons are made up of quarks and a sea of gluons, all virtual particles, but a proton is much more than the virtual particles. Edit: I will incorporate the content of my reply to the comments here.
Electrons and all leptons are real particles when on the mass shell, i.e. on external Feynman lines. The same is true for photons and Z and W bosons when on mass shell. Quarks and gluons can never be free and measured individually, because of the nature of the strong force. So quarks are always virtual and have to appear in pairs (mesons) or triplets (protons) in the external legs of the diagrams, and it is the pairs and triplets that can be on the mass shell. Gluons cannot be on their mass shell either, and their only externally measured evidence is in hadronic jets. The nucleus is a more complicated grouping: it contains off the mass shell protons, neutrons, gluons, photons, but all together, the nucleus is on the mass shell, when isolated. When surrounded by electrons it becomes an atom; the atom then is the one that is on the mass shell and contains electrons + nucleus off the mass shell. When observing a molecule, the molecule is on the mass shell and contains off mass shell atoms. When observing a crystal, the whole crystal is on the mass shell and the molecules are off mass shell. All microscopic quantum mechanically defined items can be sometimes virtual and sometimes real (except quarks and gluons which are always virtual). Virtual is the contained level and real is the total group that is under measurement and observation. There is always a real level riding on virtual particles, nuclei, atoms, molecules depending on the magnification one looks at in the microcosm. Our reality rests on nested levels of virtuality and reality asserts itself on the level we are examining. In the classical world we live and move in, the underlying levels of virtuality are immaterial, because in addition to the nesting process described above there is also something called "decoherence": decoherence is the mechanism by which the classical limit emerges out of a quantum starting point and it determines the location of the quantum-classical boundary. Decoherence occurs when a system interacts with its environment in a thermodynamically irreversible way. This prevents different elements in the quantum superposition of the system+environment's wavefunction from interfering with each other. It statistically transforms all quantum mechanical substructures to the level we measure and describe with classical physics and certainly call real. - so anna as you put it, all quarks and electrons, those which make up the atom, are virtual particles? – lee hudson Jan 15 at 22:52 see my edit above – anna v Jan 16 at 7:21 Hello Anna, so interesting. I was doing a little research about the use of the term "virtual particle" and I'm astonished by your answer. What is the definition of "virtual particle" you are using? It seems that you call each component or interacting particle (elementary or not) of a bound state (hadrons, nucleus, atoms, etc.) a "virtual particle", while the whole system is a "real particle" (first time I see this). But at the same time you identify virtual particles with "those" that aren't on-shell (this is closer to the definition I'm familiar with). – drake May 12 at 4:02 You also say that quarks are always virtual (because of confinement and your first definition). However, when you compute, say, the tree-level cross section $\sigma (e^-\, e^+ \rightarrow q\, \bar q)$ at centre of mass energies much higher than 1 GeV the quarks are on-shell, they are final states belonging to the Hilbert space.
In your opinion/definition, is there any relation between non-virtual (i.e., real) particles and states in a Hilbert space? Are you identifying free particles with real particles and interacting particles with virtual particles? – drake May 12 at 4:13
http://crypto.stackexchange.com/questions/6259/rsa-primes-vs-largest-known-primes/6265
# RSA primes vs. largest known primes In the context of a new largest (Mersenne) prime number being found this week - The largest known prime number is now `2^57,885,161 − 1`, and it took 5 years to find it since the last largest prime was found. But we know that various asymmetric encryption algorithms require ridiculously large primes which are used as the infamous `p` and `q` factors. For example, 1024-bit RSA would require two 512-bit primes. But this article, if correct, claims we are able to enumerate no more than `~1.7M` prime numbers. How does this square with cryptographic prime number generation? - Why can't you use some of the largest primes that fit in a 512 bit integer? – Cole Johnson Feb 8 at 17:24 ## 3 Answers The article claims: "A Mersenne prime is a prime number that can be written in the form $M_p = 2^n-1$, and they're extremely rare finds. Of all the numbers between 0 and $2^{25,964,951}-1$ there are 1,622,441 that are prime, but only 42 are Mersenne primes." The second sentence is wrong. What they meant to say is that there are 1,622,441 numbers of the form they mentioned in the first sentence below $2^{25,964,951}-1$. There are 1,622,441 primes below 25,964,951 and thus 1,622,441 numbers of the form $2^p-1$ (with prime $p$) below $2^{25,964,951}-1$. See Wolfram Alpha. There are over $2^{500}$ primes with exactly 512 bits, so there are plenty to choose from for our RSA keys. Take a look at the Prime number theorem which tells you that there are approximately $\frac{n}{\ln(n)}$ primes below $n$. - 1 Note that ArsTechnica has since tried to correct the article (and made it wrong in a different way). It now reads "Of all the numbers between 0 and 25,964,951 there are 1,622,441 that are prime, but only 42 are Mersenne primes". – poncho Feb 7 at 20:43 In addition to RSA key generation often using a probabilistic mechanism instead of proving p and q are prime, there are several other requirements; for instance, p and q should not be too close. In the context of Mersenne primes, it is worth noting that usually it is not preferable for either p or q to be a Mersenne prime, i.e., for RSA key generation using a mechanism which proves the primality of p or q, mechanisms that are only suitable for Mersenne primes shall be avoided. The problem with Mersenne primes is that there are relatively few Mersenne primes, and thus factoring p or q becomes trivial. - the problem with a Mersenne prime $p$ is that $p+1$ only has small factors, and this allows for faster factoring. Same holds when $p-1$ has lots of small factors as well. So in RSA key generation, such primes are not used, in the standard libraries. – Henno Brandsma Feb 8 at 22:27 It might also be worth noting that particular RSA implementations usually use some sort of sieve and primality test to get their primes. The steps usually are: 1. Generate as candidate a random odd number n of appropriate size. 2. Test n for primality. 3. If n is composite, return to the first step. The second step can be done with "true" or "probabilistic" tests. An interesting read is chapter 4 of the Handbook of Applied Cryptography: http://cacr.uwaterloo.ca/hac/ - It is also possible to select an integer in such a way that deterministically proving its primality is efficient (general-purpose deterministic tests are kind of slow, but for instance knowing the factorization of $p - 1$ helps a lot) should you need that.
Most cryptographic applications don't need to unconditionally guarantee the integer is prime (so they just pick at random) but sometimes a primality certificate may be desirable. – Thomas Feb 6 at 14:47
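To make the generate-and-test loop from the last answer concrete, here is a minimal sketch in Python (the function names and the Miller-Rabin round count are my own illustrative choices, not taken from any particular library; real implementations follow standards such as FIPS 186). By the prime number theorem quoted above, roughly one in every $512 \ln 2 \approx 355$ numbers near $2^{512}$ is prime, so drawing odd candidates at random succeeds after a few hundred tries on average.

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test ("step 2")."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p          # quick trial division by small primes
    d, s = n - 1, 0
    while d % 2 == 0:              # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2   # random witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # definitely composite
    return True                    # prime with overwhelming probability

def random_prime(bits=512):
    """Steps 1-3 from the answer: draw odd candidates until one passes."""
    while True:
        n = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # exact size, odd
        if is_probable_prime(n):
            return n

print(random_prime(512))
```

Note that this sketch omits the extra RSA-specific checks mentioned above (p and q not too close, avoiding p ± 1 with only small factors, and so on).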
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429044723510742, "perplexity_flag": "middle"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G05/g05ccc.html
# NAG Library Function Document

nag_random_init_nonrepeatable (g05ccc)

## 1  Purpose

nag_random_init_nonrepeatable (g05ccc) sets the seed used by the basic generator in Chapter g05 to a non-repeatable initial value.

## 2  Specification

#include <nag.h>
#include <nagg05.h>

void nag_random_init_nonrepeatable ()

## 3  Description

nag_random_init_nonrepeatable (g05ccc) sets the internal seed used by the basic generator nag_random_continuous_uniform (g05cac) to a value $n_0$ calculated from the setting of the real-time clock. It then generates the value $n_1$ and discards it, i.e., the first available value is $n_2$. This function will yield different subsequent sequences of random numbers in different runs of the calling program. It should be noted that there is no guarantee of statistical properties between sequences, only within sequences.

## 4  References

None.

## 5  Arguments

None.

## 6  Error Indicators and Warnings

None.

## 7  Accuracy

Not applicable.

## 8  Further Comments

None.

## 9  Example

The example program prints the first five pseudorandom real numbers from a uniform distribution between 0 and 1, generated by nag_random_continuous_uniform (g05cac) after initialization by nag_random_init_nonrepeatable (g05ccc). The program should give different results each time it is run.

### 9.1  Program Text

Program Text (g05ccce.c)

### 9.2  Program Data

None.

### 9.3  Program Results

Program Results (g05ccce.r)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.618812084197998, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/22944/how-do-levers-amplify-forces/22992
# How do levers amplify forces?

This has been bothering me for a long time, because the math is easy to do, but it's still unintuitive for me. I understand the "law of the lever" and I can do the math and use the torques, or conservation of energy, or whatever... And I can see that a lever can amplify a force you apply to it if you apply a force on the longer side of the beam. If I were to look at the lever at the molecular level and see what actually happens when I push on the lever, and I give acceleration to the molecules, how does it actually happen that more force is transmitted to the other side? Thank you all.

P.S. I'm looking just for an explanation in terms of forces and acceleration; it's clear to me how to do this in terms of energy or torques.

- Here is a mental model that might help. Think of a series of balls connected in a straight line by very rigid springs floating in space with no external forces acting on it. When it is in its 'rest state' there are no stresses on it. Now give the ball at one end of the assembly a push at right angles from the line. When you do that, the spring connecting it to the next ball bends a bit - transmitting the force to the next ball as it tries to straighten out the line. From there to the next ball, and so on. – Benjamin Franz Mar 29 '12 at 1:26 There is not "more force" transmitted to the other side... regardless of looking at molecules, it is the torque that causes acceleration, not force. – Chris Gerig Mar 29 '12 at 5:55 2 @Chris: No. Emphatically no. It is always force which causes acceleration, and there is more force, just acting over a shorter distance (thereby conserving energy à la $W = \int \vec{F} \cdot d\vec{s}$). It is, however, the torque equation which shows you what the coefficient is. – dmckee♦ Mar 29 '12 at 15:41 Ah... I see. Your question comes down to "What is the origin of the forces that let the bar (or indeed any solid) maintain its shape?", which means that @BenjaminFranz's comment is the core of a good answer. – dmckee♦ Mar 29 '12 at 17:18 @BenjaminFranz so it's the electromagnetic forces between molecules that actually generate the extra force? i.e. these molecular bonds do not allow the bar to bend and thus create an extra force? – fiftyeight Mar 29 '12 at 17:24

## 4 Answers

I agree with Benjamin Franz that the ball-and-spring model of a solid is helpful and that when a solid exerts a contact force the bonds between the atoms are distorted in that region. If you take a beam, clamp down its ends, and then apply a force to it off-center, the bonds on the short side are distorted more than the bonds on the long side. Therefore, more force is exerted on the clamp that is closer to the applied force. The diagram below illustrates this:

-

There are two fairly straightforward ways to understand this:

• As a problem in "statics" involving forces and torques on the lever.
• In terms of conservation of energy between the work done by the person operating the lever and on the load lifted.

## Setup

We will, for simplicity, consider the situation where the lever is essentially horizontal (showing that the results hold at other angles is left as an exercise), and will treat the lever as a straight bar of length $l = l_1 + l_2$. Three forces act on the bar: the applied force $F_a$ acts downward at distance 0, the fulcrum force $F_f$ acts upward at distance $l_1$, and the load $F_l$ acts downward at distance $l$.
Note that so far I have not said anything about the ratio $l_1/l_2$.

## Statics

We require that $\sum F_i = 0$ and $\sum \tau_i = 0$ (the sum of the forces and the sum of the torques acting on the bar are zero). I'll measure the torques around the fulcrum. $$-F_a + F_f - F_l = 0$$ $$F_a \cdot l_1 + F_f \cdot 0 - F_l \cdot l_2 = 0$$ Immediately we can see that the system is underconstrained and we have one free parameter: the weight of the load, so we'll express $F_a$ and $F_f$ in terms of $F_l$. From the torque equation we get $F_a = \frac{l_2}{l_1} F_l$, and plugging that into the forces equation we get $F_f = (1 + \frac{l_2}{l_1}) F_l$.

## Energy concerns

The best case is that the machine wastes no energy; we assume this case. While the bar moves through a small angle $\alpha$ near the horizontal, the applied force moves through a distance $-\alpha \cdot l_1$, and the loaded end through a distance $\alpha \cdot l_2$; computing the work done by each end we get $$W_a = -F_a \alpha l_1$$ $$W_l = F_l \alpha l_2$$ By assumption these must add to zero, so $$F_a = \frac{l_2}{l_1} F_l$$ as before.

## Conclusions

If the load is on the short end then $l_2 < l_1$ and $\frac{l_2}{l_1} < 1$ and you require less force to lift the load, but the load moves a shorter distance. If the load is on the long end then $l_2 > l_1$ and $\frac{l_2}{l_1} > 1$ and you require more force to lift the load, but the load moves a longer distance.

- 1 This is a good answer, especially the statics part, but the thing that's mostly bothering me is what actually creates the extra force that raises the object and "amplifies" the force I'm applying. Is it the forces between molecules that keep the shape of the bar that actually amplify the force? What helped me in your answer is putting the fulcrum itself into the picture, because it creates some constraint for the system that got me thinking about the molecules that keep the shape of the bar. – fiftyeight Mar 29 '12 at 17:23 it is the forces between molecules, which are limited only by the breaking strength of the material (which isn't infinite --- you can't move the whole Earth). The transmission of forces conserves the energy, not the force. Force is not a conserved quantity. – Ron Maimon Mar 29 '12 at 17:39 @fiftyeight: Forces need not be conserved. No one needs to create an extra force. Similar things happen in hydraulics -- you can amplify a force in one piston by connecting it to a smaller piston. – Manishearth♦ Mar 30 '12 at 3:12 The presence of the earth is significant. @Ron One can move the whole earth by jumping -- just not very much. – Peter Morgan Mar 30 '12 at 14:10 @Peter: The presence of something to push against is significant -- that is why you have to include the fulcrum force for a complete analysis. In any case, we have both answered the wrong question. – dmckee♦ Mar 30 '12 at 15:34

It is all relative to the pivot point in the lever, and to the energy expended, not the force applied. If the pivot point is one quarter of the lever's length from the bottom of the lever, and you apply a force F to the top of the lever to move the top through a distance D, the result will be that the bottom of the lever will move through a third of the distance of the top (i.e., the 3/4 arm length divided by the 1/4 arm length about the pivot point). The energy expended at the top of the lever is F×D. Since energy in equals energy out, and the bottom of the lever moves only 1/3 of D, the force that is exerted at the bottom of the lever is 3F
(i.e., 3 times the force applied at the top of the lever), but it has been exerted over a shorter distance. Hope that this is what you are looking for, and hope I have made it clear. It is 60 years since I was taught this.

- See my answer; a lever can be understood either in terms of energy or in terms of forces. – dmckee♦ Mar 29 '12 at 17:06

I take dmckee's answer to be flawed because it doesn't mention the earth. At the coarsest level, the earth accelerates down while the large object accelerates up. At the level of Newtonian mechanics, every action has an equal and opposite reaction. In more detail, the center of mass of the earth, the fulcrum, the lever, the person who pushes, and the large object, taken together as a single composite system, stays motionless (or, rather, at constant velocity), but the positions and velocities of the five internal components relative to each other are changed by the actions of the contact forces (which we can take ultimately to be non-contact gravitational, electromagnetic and nuclear forces, and an understanding of the constitution of matter does ultimately require QM) that act between them. At this level of modeling, the earth's acceleration (in the model) will be slightly different (and the same as the acceleration of the fulcrum), because part of the person who pushes is also accelerating downwards, and the acceleration of the various parts of the lever would have to be taken into account. At increasing levels of detail, each of the five components is also composite. I can bend my arm to exert a downward force because I can adjust the internal geometry of my arm relative to another part using chemical energy (which again we can take to be ultimately electromagnetic and nuclear energy, and QM). Although you tagged this QM, it can be understood moderately well in terms of classical mechanics and EM. The constitution of matter was a concern for late 19C Natural Philosophers, but everything was well enough under control that they barely noticed that they were sweeping troubles under the carpet until Planck.

- Thank you; the quantum-mechanics tag is a mistake, I didn't mean to put it there. I was mainly interested in the EM and classical mechanics parts. – fiftyeight Mar 30 '12 at 17:39
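To make the statics answer above concrete, here is a quick numeric check of its two equilibrium equations (the numbers are arbitrary illustrations, not taken from the thread):

```python
def lever_forces(l1, l2, F_load):
    """Solve the statics equations from the answer above:
    force balance : -F_a + F_f - F_l = 0
    torque balance:  F_a * l1 - F_l * l2 = 0   (torques about the fulcrum)
    """
    F_applied = (l2 / l1) * F_load
    F_fulcrum = (1 + l2 / l1) * F_load
    return F_applied, F_fulcrum

# Load of 80 N on the short arm (l2 = 0.5 m), effort on the long arm (l1 = 2 m)
F_a, F_f = lever_forces(l1=2.0, l2=0.5, F_load=80.0)
print(F_a, F_f)                          # 20.0 100.0
assert -F_a + F_f - 80.0 == 0.0          # net force on the bar vanishes
assert F_a * 2.0 - 80.0 * 0.5 == 0.0     # net torque about the fulcrum vanishes
```

A 20 N push holds an 80 N load in balance, while the fulcrum carries the full 100 N sum, which is exactly the "force that is not conserved" point made in the comments.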
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442768692970276, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/90230-fing-range-alpha-equation-have-solution.html
# Thread:

1. ## Find the range of alpha for the equation to have a solution?

Find the range of the real number $\alpha$ for which the equation $z + \alpha|z - 1| + 2i = 0$ ; $z = x + iy$, has no solution. Also find the solution.

2. Originally Posted by fardeen_gen

Find the range of the real number $\alpha$ for which the equation $z + \alpha|z - 1| + 2i = 0$ ; $z = x + iy$, has no solution. Also find the solution.

If $\alpha$ is real then $z+2i = -\alpha|z-1|$, which is real. So the imaginary part of z must be $-2$. Write $z = x-2i$. Then $x = -\alpha|(x-1)-2i|$ and so $x^2 = \alpha^2\bigl((x-1)^2+4\bigr)$. The condition for that quadratic to have real roots is $\alpha^2\leqslant 5/4$. So the original equation has no solution for z if $|\alpha|>\sqrt5/2$.

3. Originally Posted by Opalg

If $\alpha$ is real then $z+2i = -\alpha|z-1|$, which is real. So the imaginary part of z must be $-2$. Write $z = x-2i$. Then $x = -\alpha|(x-1)-2i|$ and so $x^2 = \alpha^2\bigl((x-1)^2+4\bigr)$. The condition for that quadratic to have real roots is $\alpha^2\leqslant 4/5$. So the original equation has no solution for z if $|\alpha|>2/\sqrt5$.

Am I correct if I say that, for the equation to have a solution, $-\frac{\sqrt{5}}{2}\leq \alpha \leq \frac{\sqrt{5}}{2}$ and the solution in that case is: $Z = \frac{2\alpha \pm \alpha\sqrt{5 - 4\alpha^2}}{\alpha^2 - 1} - 2i,\ \alpha\neq \pm 1;\ Z = \frac{5}{2} - 2i, \alpha = \pm 1$

4. Originally Posted by fardeen_gen

Am I correct if I say that, for the equation to have a solution, $-\frac{\sqrt{5}}{2}\leq \alpha \leq \frac{\sqrt{5}}{2}$ and the solution in that case is: $Z = \frac{2\alpha \pm \alpha\sqrt{5 - 4\alpha^2}}{\alpha^2 - 1} - 2i,\ \alpha\neq \pm 1;\ Z = \frac{5}{2} - 2i, \alpha = \pm 1$

Yes, the condition for a solution to exist is $-\frac{\sqrt{5}}{2}\leq \alpha \leq \frac{\sqrt{5}}{2}$ (I carelessly wrote $2/\sqrt5$ in my previous comment—now corrected—where it should have been $\sqrt5/2$.) But you should check your answer to the quadratic equation again. I get $z = \frac{\alpha^2 \pm \alpha\sqrt{5-4\alpha^2}}{\alpha^2-1}$.
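To double-check the cutoff numerically (a throwaway script, not part of the original thread): expanding $x^2 = \alpha^2\bigl((x-1)^2+4\bigr)$ gives $(1-\alpha^2)x^2 + 2\alpha^2 x - 5\alpha^2 = 0$, whose discriminant is $4\alpha^2(5-4\alpha^2)$.

```python
def has_real_solution(alpha):
    """Does (1 - a^2) x^2 + 2 a^2 x - 5 a^2 = 0 have a real root x?"""
    a2 = alpha * alpha
    if abs(a2 - 1.0) < 1e-12:           # degenerate linear case: 2x - 5 = 0
        return True
    return 4 * a2 * (5 - 4 * a2) >= 0   # sign of the quadratic discriminant

for alpha in (0.0, 1.0, 1.1, 1.2):
    print(alpha, has_real_solution(alpha))
# 0.0 True / 1.0 True / 1.1 True / 1.2 False
# i.e. a solution exists exactly when |alpha| <= sqrt(5)/2 ~ 1.118
```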
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.949458122253418, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/129332-equivalence-class-question.html
# Thread:

1. ## Equivalence Class Question

Hey all, I was wondering if anyone could shed some light on this question. Let X= {1,2,3,4,5} and Y={3,4} b) What is the equivalence class of {1,2}? I understand equivalence relations but can't seem to grasp the concept of equivalence classes. :| Any explanations would be appreciated. Thanks in advance!

2. Originally Posted by swarley

Hey all, I was wondering if anyone could shed some light on this question. Let X= {1,2,3,4,5} and Y={3,4} b) What is the equivalence class of {1,2}? I understand equivalence relations but can't seem to grasp the concept of equivalence classes. :| Any explanations would be appreciated.

That bit of a question makes no sense whatsoever. Please post the entire question with its exact wording.

3. That bit of a question makes no sense whatsoever. Please post the entire question with its exact wording.

Apologies. Let X = {1,2,3,4,5} and Y = {3,4}. Define a relation R on the power set P(X) of X by A R B iff A U Y = B U Y. a) Prove that R is an equivalence relation. b) What is the equivalence class of {1,2}?

4. Originally Posted by swarley

Apologies. Let X = {1,2,3,4,5} and Y = {3,4}. Define a relation R on the power set P(X) of X by A R B iff A U Y = B U Y. a) Prove that R is an equivalence relation. b) What is the equivalence class of {1,2}?

Is this the answer to part b): $\left\{ \{1,2\}, \{1,2,3\}, \{1,2,4\}, \{1,2,3,4\} \right\}$ Can you explain why or why not?

5. Originally Posted by Plato

Is this the answer to part b): $\left\{ \{1,2\}, \{1,2,3\}, \{1,2,4\}, \{1,2,3,4\} \right\}$ Can you explain why or why not?

Yes, this is the answer but I don't understand why. For a start, why leave out 5?

6. Originally Posted by swarley

Apologies. Let X = {1,2,3,4,5} and Y = {3,4}. Define a relation R on the power set P(X) of X by A R B iff A U Y = B U Y. a) Prove that R is an equivalence relation. b) What is the equivalence class of {1,2}?

Firstly, $a\cup Y = a\cup Y$, so $aRa$ for all $a\in\wp(X)$. Suppose $aRb$ and $bRc$. Then $a\cup Y=b\cup Y=c\cup Y$. So, by the symmetry and transitivity of ordinary set equality, $bRa$ and $aRc$. So it's an equivalence relation.

For b), the equivalence class of $A=\{1,2\}$ is the set of all things that are R-related to A. If B is R-related to A, then $A\cup Y=B\cup Y$. Then clearly, as 1 and 2 aren't members of Y, we can conclude that $\{1,2\}\subseteq B$ (and B can contain nothing outside $\{1,2,3,4\}$: an element such as 5 would survive the union with Y and break the equality); furthermore, 3 or 4 could be in B (that wouldn't affect anything, since they would be in there after being unioned with Y anyway). So its equivalence class is $\{\{1,2\},\{1,2,3\},\{1,2,4\},\{1,2,3,4\}\}$

7. Originally Posted by swarley

Yes, this is the answer but I don't understand why. For a start, why leave out 5?

There is a simple answer. $\{ 1,2\} \cup \{ 3,4\} \ne \{ 1,2,5\} \cup \{ 3,4\}$

8. Originally Posted by wgunther

Firstly, $a\cup Y = a\cup Y$, so $aRa$ for all $a\in\wp(X)$. Suppose $aRb$ and $bRc$. Then $a\cup Y=b\cup Y=c\cup Y$. So, by the symmetry and transitivity of ordinary set equality, $bRa$ and $aRc$. So it's an equivalence relation. For b), the equivalence class of $A=\{1,2\}$ is the set of all things that are R-related to A. If B is R-related to A, then $A\cup Y=B\cup Y$. Then clearly, as 1 and 2 aren't members of Y, we can conclude that $\{1,2\}\subseteq B$ (and B can contain nothing outside $\{1,2,3,4\}$); furthermore, 3 or 4 could be in B (that wouldn't affect anything, since they would be in there after being unioned with Y anyway). So its equivalence class is $\{\{1,2\},\{1,2,3\},\{1,2,4\},\{1,2,3,4\}\}$

Oh, I see!
Thanks very much. Plato, thank you also.
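The answer can also be confirmed mechanically by brute force over the power set (a throwaway check, not part of the thread):

```python
from itertools import combinations

X = {1, 2, 3, 4, 5}
Y = {3, 4}
A = {1, 2}

# every subset of X, i.e. the power set P(X)
power_set = [set(c) for r in range(len(X) + 1)
             for c in combinations(sorted(X), r)]

# B is R-related to A  iff  A | Y == B | Y
eq_class = [B for B in power_set if A | Y == B | Y]
print(eq_class)
# [{1, 2}, {1, 2, 3}, {1, 2, 4}, {1, 2, 3, 4}]
```

Any subset containing 5 fails the test, which is exactly Plato's point in post 7.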
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9615804553031921, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/181967-proof-using-binomial-theorem.html
# Thread:

1. ## Proof using Binomial Theorem

Hey all, I am stuck on trying to prove $2^n = \sum_{r=0}^r \begin{pmatrix} n\\ r \end{pmatrix}$ using the Binomial Theorem. Any ideas? Thanks

2. Originally Posted by Oiler

Hey all, I am stuck on trying to prove $2^n = \sum_{r=0}^r \begin{pmatrix} n\\ r \end{pmatrix}$ using the Binomial Theorem. Any ideas? Thanks

Hint: 2=1+1 (most of the time)

3. Originally Posted by Oiler

$2^n = \sum_{r=0}^r \begin{pmatrix} n\\ r \end{pmatrix}$ using the Binomial Theorem

This is a standard theorem if we know that $\left( {a + b} \right)^n = \sum\limits_{k = 0}^n {\binom{n}{k}a^k b^{n - k} }$. Now let $a=1~\&~b=1$.

NOTE: you have a mistake. The upper limit should be $n$.
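For a quick empirical check of the identity with the corrected upper limit $n$ (a throwaway script, not part of the thread):

```python
from math import comb

for n in range(10):
    assert sum(comb(n, r) for r in range(n + 1)) == 2 ** n
print("2^n = sum_{r=0}^{n} C(n, r) verified for n = 0..9")
```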
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7976679801940918, "perplexity_flag": "middle"}
http://chemistry.stackexchange.com/questions/710/fundamental-forces-behind-covalent-bonding/716
# Fundamental forces behind covalent bonding

I understand that covalent bonding is an equilibrium state between attractive and repulsive forces, but which one of the fundamental forces actually causes atoms to attract each other? Also, am I right to think that "repulsion occurs when atoms are too close together" comes from electrostatic interaction?

-

## 3 Answers

I understand that covalent bonding is an equilibrium state between attractive and repulsive forces, but which one of the fundamental forces actually causes atoms to attract each other?

**The role of Pauli exclusion in bonding**

It is an unfortunate accident of history that because chemistry has a very convenient and predictive set of approximations for understanding bonding, some of the details of why those bonds exist can become a bit hard to discern. It's not that they aren't there -- they most emphatically are! -- but you often have to dig a bit deeper to find them. They are found in physics, in particular in the concept of Pauli exclusion.

**Chemistry as avoiding black holes**

Let's take your attraction question first. What causes that? Well, in one sense that question is easy: it's electrostatic attraction, the interplay of pulls between positively charged nuclei and negatively charged electrons. But even in saying that, something is wrong. Here's the question that points that out: If nothing else were involved except electrostatic attraction, what would be the most stable configuration of two or more atoms with a mix of positive and negative charges? The answer to that is a bit surprising. If the charges are balanced, the only stable, non-decaying answer for conventional (classical) particles is always the same: "a very, very small black hole." Of course you could modify that a bit by assuming that the strong force is for some reason stable, in which case the answer becomes "a bigger atomic nucleus," one with no electrons around it.

**Or maybe atoms as Get Fuzzy?**

At this point some of you reading this should be thinking loudly "Now wait a minute! Electrons don't behave like point particles in atoms, because quantum uncertainty makes them 'fuzz out' as they get close to the nucleus." And that is exactly correct -- I'm fond of quoting that point myself in other contexts! However, the issue here is a bit different, since even "fuzzed out" electrons provide a poor barrier for keeping other electrons away by electrostatic repulsion alone, precisely because their charge is so diffuse. The case of electrons that lack Pauli exclusion is nicely captured by Richard Feynman in his Lectures on Physics, in Volume III, Chapter 4, page 4-13, Figure 4-11 at the top of the page. The outcome Feynman describes is pretty boring, since atoms would remain simple, smoothly spherical, and about the same size as more and more protons and electrons get added in. While Feynman does not get into how such atoms would interact, there's a problem there too. Because the electron charges would be so diffuse in comparison to the nuclei, the atoms would pose no real barrier to each other until the nuclei themselves begin to repel each other. The result would be a very dense material that would have more in common with [neutronium](http://en.wikipedia.org/wiki/Neutronium) than with conventional matter. For now I'll just forge ahead with a more classical description, and capture the idea of the electron cloud simply by asserting that each electron is selfish and likes to capture as much "address space" (see below) as possible.

**Charge-only is boring!**
So, while you can finagle with funny configurations of charges that might prevent the inevitable for a while by pitting positive against positive and negative against negative, positively charged nuclei and negatively charged electrons with nothing much else in play will always wind up in the same bad spot: either as very puny black holes, or as tiny boring atoms that lack anything resembling chemistry. A universe full of nothing but various sizes of black holes or simple homogeneous neutronium is not very interesting!

**Preventing the collapse**

So, to understand atomic electrostatic attraction properly, you must start with the inverse issue: What in the world is keeping these things from simply collapsing down to zero size -- that is, where is the repulsion coming from? And that is your next question:

Also, am I right to think that "repulsion occurs when atoms are too close together" comes from electrostatic interaction?

No; that is simply wrong. In the absence of "something else," the charges will wiggle about and radiate until any temporary barrier posed by identical charges simply becomes irrelevant... meaning that once again you will wind up with those puny black holes. What keeps atoms, bonds, and molecules stable is always something else entirely, a "force" that is not traditionally thought of as being a force at all, even though it is unbelievably powerful and can prevent even two nearby opposite electrical charges from merging. The electrostatic force is enormously powerful at the tiny separation distances within atoms, so anything that can stop charged particles from merging is impressive! The "repulsive force that is not a force" is the Pauli exclusion I mentioned earlier. A simple way to think of Pauli exclusion is that identical material particles (electrons, protons, and neutrons in particular) all insist on having completely unique "addresses" to tell them apart from other particles of the same type. For an electron this address includes: where the electron is located in space, how fast and in what direction it is moving (momentum), and one last item called spin, which can only have one of two values that are usually called "up" or "down." You can force such material particles (called fermions) into nearby addresses, but with the exception of that up-down spin part of the address, doing so always increases the energy of at least one of the electrons. That required increase in energy, in a nutshell, is why material objects push back when you try to squeeze them. Squeezing them requires minutely reducing the available space of many of the electrons in the object, and those electrons respond by capturing the energy of the squeeze and using it to push right back at you.

Now, take that thought and bring it back to the question about where repulsion comes from when two atoms bond at a certain distance, but no closer. They are the same mechanism! That is, two atoms can "touch" (move so close, but no closer) only because they both have a lot of electrons that require separate space, velocity, and spin addresses. Push them together and they start hissing like cats from two households who have suddenly been forced to share the same house. (If you own multiple cats, you'll know exactly what I mean by that.) So, what happens is that the overall set of plus-and-minus forces of the two atoms is trying really hard to crush all of the charges down into a single very tiny black hole -- not into some stable state!
It is only the hissing and spitting of the overcrowded and very unhappy electrons that keeps this event from happening.

**Orbitals as juggling acts**

But just how does that work? It's sort of a juggling act, frankly. Electrons are allowed to "sort of" occupy many different spots, speeds, and spins (mnemonic $s^3$, and no, that is not standard, I'm just using it for convenience in this answer only) at the same time, due to quantum uncertainty. However, it's not necessary to get into that here beyond recognizing that every electron tries to occupy as much of its local $s^3$ address space as possible. Juggling between spots and speeds requires energy. So, since only so much energy is available, this is the part of the juggling act that gives atoms size and shapes. When all the jockeying around wraps up, the lowest energy situations keep the electrons stationed in various ways around the nucleus, not quite touching each other. We call those special solutions to the crowding problem orbitals, and they are very convenient for understanding and estimating how atoms and molecules will combine.

**Orbitals as specialized solutions**

However, it's still a good idea to keep in mind that orbitals are not exactly fundamental concepts, but rather outcomes of the much deeper interplay of Pauli exclusion with the unique masses, charges, and configurations of nuclei and electrons. So, if you toss in some weird electron-like particle such as a muon or positron, standard orbital models have to be modified significantly and applied only with great care. Standard orbitals can also get pretty weird just from having unusual geometries of fully conventional atomic nuclei, with the unusual dual hydrogen bonding found in boron hydrides such as diborane probably being the best example. Such bonding is odd if viewed in terms of conventional hydrogen bonds, but less so if viewed simply as the best possible "electron juggle" for these compact cases.

**"Jake! The bond!"**

Now on to the part that I find delightful, something that underlies the whole concept of chemical bonding. Recall that it takes energy to squeeze electrons together in terms of the main two parts of their "addresses," the spots (locations) and speeds (momenta)? I also mentioned that spin is different in this way: the only energy cost for adding two electrons with different spin addresses is that of conventional electrostatic repulsion. That is, there is no "forcing them closer" Pauli exclusion cost like you get for locations and velocities. Now you might think "but electrostatic repulsion is huge!", and you would be exactly correct. However, compared to the Pauli exclusion "non-force force" cost, the energy cost of this electrostatic repulsion is actually quite small -- so small that it can usually be ignored for small atoms. So when I say that Pauli exclusion is powerful, I mean it, since it even makes the enormous repulsion of two electrons stuck inside the same tiny sector of a single atom look so insignificant that you can usually ignore its impact! But that's secondary, because the real point is this: When two atoms approach each other closely, the electrons start fighting fierce energy-escalation battles that keep both atoms from collapsing all the way down into a black hole. But there is one exception to that energetic infighting: spin! For spin and spin alone, it becomes possible to get significantly closer to that final point-like collapse that all the charges are driving toward.
Spin thus becomes a major "hole" -- the only such major hole -- in the ferocious armor of repulsion produced by Pauli exclusion. If you interpret atomic repulsion due to Pauli exclusion as the norm, then spin-pairing two electrons becomes another example of a "force that is not a force," or a pseudo force. In this case, however, the result is a net attraction. That is, spin-pairing allows two atoms (or an atom and an electron) to approach each other more closely than Pauli exclusion would otherwise permit. The result is a significant release of electrostatic attraction energy. That release of energy in turn creates a stable bond, since it cannot be broken unless that same energy is returned.

**Sharing (and stealing) is cheaper**

So, if two atoms (e.g. two hydrogen atoms) each have an outer orbital that contains only one electron, those two electrons can sort of look each other over and say, "you know, if you spin downwards and I spin upwards, we could both share this space for almost no energy cost at all!" And so they do, with a net release of energy, producing a covalent bond if the resulting spin-pair cancels out positive nuclear charges equally on both atoms. However, in some cases the "attractive force" of spin-pairing is so overwhelmingly greater for one of the two atoms that it can pretty much fully overcome (!) the powerful electrostatic attraction of the other atom for its own electron. When that happens, the electron is simply ripped away from the other atom. We call that an ionic bond, and we act as if it's no big deal. But it is truly an amazing thing, one that is possible only because of the pseudo force of spin-pairing.

**Bottom line: Pseudo forces are important!**

My apologies for having given such a long answer, but you happened to ask a question that cannot be answered correctly without adding in some version of Pauli "repulsion" and spin-pair "attraction." For that matter, the size of an atom, the shape of its orbitals, and its ability to form bonds similarly all depend on pseudo forces.

-

Thinking about this like a physicist, there are four fundamental forces: the strong nuclear force, the weak nuclear force, the electromagnetic force, and gravity. The strong nuclear force holds the protons and neutrons in the nucleus together. The weak nuclear force causes beta decay. Those two might be considered chemistry; I don't consider them so, but some people do. Gravity is much too weak to have any effect on chemistry. So that leaves the electromagnetic force to control nearly all of chemistry. On a simple conceptual level, that's all there is. The nuclei are both positively charged, so they repel each other. The electrons are negatively charged, so they are attracted to their respective nuclei. When the electron clouds get close enough to interact with both nuclei, then they begin to pull the nuclei together. The deeper explanation requires quantum mechanics. When the atoms are separated, you can use the Schrödinger equation with the electric potential from the nucleus. That gives you the electron orbitals for an atom all by itself. When the two atoms get close together, you use the electric potential for both nuclei in the Schrödinger equation. The solution is then the molecular orbital rather than the atomic orbitals. Because the Schrödinger equation is impossible to solve exactly for a molecule, chemists need an approximation. The usual approximation is to build the molecular orbital out of the atomic orbitals by adding and subtracting the atomic orbitals.
This is where the ideas of $sp$-, $sp^2$-, and $sp^3$-hybridization, and $\pi$- and $\sigma$-bonding come from. For further information, most introductory college-level general chemistry texts should discuss this. As an example, I pulled most of the above explanation from Zumdahl's Chemistry. In the 5th edition, this is in chapters 8 and 9 (the current edition appears to be the 7th). This is a much more important idea in organic chemistry, so those textbooks usually review it in the first one or two chapters. The organic chemistry book I have in front of me at the moment is McMurry's Organic Chemistry. This is discussed in chapter 1 of the 3rd edition of that book (current edition is the 8th).

-

Both the attraction and repulsion are the result of the electromagnetic interaction. At long distances, two atoms attract each other because of dipole-dipole interactions. When they get close enough together, the electrostatic repulsion of the nuclei takes over (as well as the exchange interaction acting on the non-valence electrons of the atoms and forcing them into higher energy states). This makes the atoms repel each other at short distances.

-
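As a concrete illustration of the "adding and subtracting the atomic orbitals" construction described in the second answer above, the standard textbook case is the simplest molecule, $\mathrm{H}_2^+$: one electron shared between two protons $A$ and $B$. Combining the two $1s$ atomic orbitals $\phi_A$ and $\phi_B$ gives the bonding and antibonding molecular orbitals

$$\psi_\pm = \frac{1}{\sqrt{2(1\pm S)}}\,(\phi_A \pm \phi_B), \qquad S = \int \phi_A^*(\mathbf{r})\,\phi_B(\mathbf{r})\,d^3r,$$

where $S$ is the overlap integral. The symmetric combination $\psi_+$ piles up electron density between the nuclei and lies lower in energy (the $\sigma$ bond); the antisymmetric $\psi_-$ has a node between the nuclei and lies higher.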
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951590895652771, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/20511/is-time-fundamentally-different-from-space?answertab=votes
# Is time fundamentally different from space?

Note: This is a rewrite of the original question, which was titled What would time be for 2D beings?

In my current, non-physicist's understanding, every instant of our three‑dimensional world is just another 'slice' of a four‑dimensional body. I don't mean that as an analogy, but quite literally... Obviously, it would not be a straight 'slice'; it would still be bent and curved by gravity, speed and other relativistic factors. Is this wrong?

Also, both 'spatial' and 'temporal' dimensions are — in my mind — fundamentally the same thing, given different names because we experience them differently because of our nature. I've had people explicitly say in the comments that this is wrong and that time and space are not the same 'type' of dimension. I'd like to understand what the fundamental differences are.

In my original question I used these two assumptions of mine (that space is a slice of time and that all dimensions are fundamentally the same) to make an analogy. I noted that taking two-dimensional slices of a three-dimensional body — just as a slice of four‑dimensional time is three‑dimensional space — and displaying them in rapid succession looks like a bunch of matter that is changing over time (like in this brain scan below). Assuming all dimensions, temporal or spatial, are fundamentally the same, would that mean that for a hypothetical two‑dimensional being time would be the third dimension, not the fourth? The same question in a more general form: for any $n$‑dimensional being, would time for it be the $(n+1)$-th dimension? In particular, what would time be for a being living in a hypothetical fifth dimension?

- – Paul Manta Feb 4 '12 at 10:27 The theoretical physics board was designed to be research level. They wouldn't be happy about your question. – Nick Kidman Feb 4 '12 at 15:02 – Warrick Feb 4 '12 at 18:00 You can perform rotations in spatial coordinates, but not in temporal directions. Or can you ... ? – ja72 Aug 29 '12 at 14:15

## 5 Answers

I think you are correct for this hypothetical 2D being. However, you should be aware that time is not just a dimension. In space dimensions, you can in principle move freely forward and backward, while in time, your motion is fixed. With respect to the brain scan: this way of visualisation is chosen for simplicity. A regular 3D image, where you can look at any depth you want, will give a clearer image of what is happening in this third dimension. Some information is a bit lost for the observer: you clearly see structures in the x,y-plane, but for the vertical coordinate, it is not that obvious. Some slightly off-topic reading material: http://en.wikipedia.org/wiki/Flatland

- "you can in principle move freely forward and backward, while in time, your motion is fixed" -- But is that a characteristic of the dimension, or a limitation of our technology and/or dimensional nature? Afaik, there's no evidence that time travel to the past is impossible, so there's nothing that can be said for certain about that for now. – Paul Manta Feb 4 '12 at 9:30 "With respect to the brain scan: this way of visualisation is chosen for simplicity" -- I know, but that is beside the point for this question. :) I picked that image because it illustrates how 2D scans in rapid succession look like a substance that evolves over time. – Paul Manta Feb 4 '12 at 9:30 +1 "Flatland" seems very interesting. Thanks for pointing it out.
– Paul Manta Feb 4 '12 at 9:33 You may be right, but moving backwards in time will not be as convenient as moving backwards in space in the foreseeable future. I assumed the same constraints for the 2D being as we perceive in daily life. – Bernhard Feb 4 '12 at 9:37 @PaulManta: A space dimension is not the same as a time dimension. It has a special significance in GR. A 2D person would experience time just as we experience it: a monotonically increasing function. The only difference would be that there won't be the "thickness" factor; as far as time is concerned, it will be the same as ours. – Vineet Menon Feb 4 '12 at 13:09

My take is that time is indeed fundamentally different from space, as, for example, time enters the invariant interval with a different sign: $(ds)^2=(dt)^2-(dx)^2-(dy)^2-(dz)^2$.

- That is a good argument, and one that I was hoping I would get. But what does the generalized form of that equation look like? Or, at least, how does it look if you introduce another dimension? – Paul Manta Feb 4 '12 at 16:51 @PaulManta: The way I read that formula is that the time dimension is just like any spatial dimension, provided you measure it in imaginary units (multiplied by $i$). As far as adding dimensions, you can modify the formula in any way that seems like it might help you understand. – Mike Dunlavey Feb 4 '12 at 17:07 @PaulManta: generalized in what way? If you add another spatial dimension $w$, just add $-(dw)^2$ to the expression above. – akhmeteli Feb 4 '12 at 17:16 @PaulManta: For example, if two events are separated by $dx=5$ light seconds and $dt=4i$ seconds, then the distance between them is $\sqrt{dx^2+dt^2}=\sqrt{25-16}=\sqrt{9}=3$ light seconds. – Mike Dunlavey Feb 4 '12 at 17:28 I'm not saying this answer is useless, but it only points out how time is modelled differently from space in relativity theory and is not saying anything about the sense in which they are different. It's a statement about how the abstractions 'space' and 'time' and our observations get translated into a theory, and is therefore itself a statement like 'we seem to always experience only one time'. – Nick Kidman Feb 4 '12 at 18:01

This is related in several ways to time traveler paradoxes. What if you were able to go back in time and kill your grandfather before he had a child? But this is a silly and undisciplined way to think about the problem. A much better way is to use the principles that working physicists use in their descriptions of reality. Now, one of the fundamental principles is the conservation of matter. Matter is neither created nor destroyed; it is simply changed from one form to another while undergoing some kind of work. A time traveler going back in time to kill his grandfather is changing the material content of the universe he is going back to. In accepting the paradox as a possibility, we are forced into a dubious proposition: give up this conservation principle as a way of describing objective reality. If we allow that we can get something from nothing, then perpetual motion machines follow on as a consequence. Perpetual motion machines break the sense of proportionality which all the laws of physics as we know them depend upon.

-
Time and space are a way of splitting the set of all space-time events into two kinds of sets: Space is the family of sets of space-time events simultaneous with one another, with each element of a set parameterised by three real numbers called the space coordinates, and each set parameterised by a real number t called time. Alternatively, time is the family of sets of space-time events that aren't simultaneous with one another, each element of a set parameterised by a real number t, and each family parameterised by three real numbers called the space coordinates. On the one hand space and time are identical in both being partitioning functions; on the other, they're different partitioning functions.

- 1 Isn't it a circular definition to simply say that "time" is the dimension along which spacetime events aren't "simultaneous"? Also, "x" is a family of sets of spacetime events whose x coordinates are not the same. That still doesn't really say anything about what makes t different from x. – Larry Gritz Feb 8 '12 at 19:40

The sign pattern is actually the metric tensor $g_{\mu\nu}$. It takes the values $-1,1,1,1$ only in Minkowski space-time. Here you may find what you need: http://en.wikipedia.org/wiki/Metric_tensor

-
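The sign difference in the invariant interval discussed above can be illustrated in two lines of arithmetic (a throwaway sketch, not part of the original thread; units with $c = 1$, matching Mike Dunlavey's 3-4-5 example):

```python
def interval_squared(dt, dx, dy=0.0, dz=0.0):
    """Invariant interval (ds)^2 = (dt)^2 - (dx)^2 - (dy)^2 - (dz)^2 (c = 1)."""
    return dt**2 - dx**2 - dy**2 - dz**2

print(interval_squared(dt=4.0, dx=5.0))   # -9.0 : spacelike separation
print(interval_squared(dt=5.0, dx=4.0))   #  9.0 : timelike separation
# Swapping any two *spatial* axes leaves (ds)^2 unchanged; swapping time with
# a space axis flips the sign. That asymmetry is what singles out time.
```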
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9506722688674927, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/random-variables?sort=unanswered&pagesize=15
# Tagged Questions Questions about maps from a probability space to a measure space which are measurable. 0answers 63 views ### What is the distribution of $\sqrt{X^2+Y^2}$ when $X$ and $Y$ are Gaussian but correlated? If $Z = \sqrt{X^2+Y^2}$, and $X$ and $Y$ are zero-mean i.i.d. normally-distributed random variables, then $Z$ is Rayleigh distributed. What is the distribution of $Z$ if $X$ and $Y$ are correlated ... 0answers 119 views ### Existence of iid random variables In probability theory we often used the existence of a sequence $(X_n)_n$ of independent and identically distributed random variables. This was already discussed here. One of the answers says: As ... 0answers 243 views ### Characteristic functions of random variables (Poisson, Gamma, etc.) My self-study in measure and probability theory as finally brought me to the subject of characteristic functions, and I have not handled these in the past with any rigor at all, so all of this is ... 0answers 30 views ### Random variables related through nonlinear system of equations Lets assume two groups of random variables X and Y (the dimensionality of them is not important). I know probability distribution of X, but not of Y. I also know that Y is a function of X and they are ... 0answers 58 views ### Relation between factor graph and conditional probability distribution First, I'm from computer science. I don't know how to say this problem in a mathematical way. So please bear with me. The question Let say I have a factor graph illustrated in the figure. The ... 0answers 49 views ### Convergence In $L^{1}$ in the Strong Law of Large Numbers I'm trying to prove that if $(X_n)_{n\geq 1}$ is uniformly integrable, then $X_n$ almost surely converging to $X$ implies $X_n$ converges to $X$ in $L^{1}$. How is this done? Generally speaking: ... 0answers 59 views ### Expected value with a kronecker product and Gaussian distributional assumption What is the expected value, $\mathbb{E}\left[ I \otimes \left( \operatorname{diag}(ZZ^T\mathbf{1}) - ZZ^T\right)\right]$ where $Z \sim N(0, \sigma^2I)$ is a random variable? The kronecker product ... 0answers 18 views ### Fast fourier transforms of random binary data I am a physicist who is trying to make sense of FFTs and binary data. Say I have a series of random binary data, which is measured with a repetition rate of 400Hz (interval time of 0.0025s). I have a ... 0answers 28 views ### convergence of discrete random variables with finite entropy Let $Z$ be the set of discrete random variables on some probability space. Define the quantity $d(X_1,X_2)=h(X_1 \mid X_2)+h(X_2 \mid X_1)$ between two random variables $X_1, X_2 \in Z$. For $X \in Z$ ... 0answers 41 views ### About Strict Stationary of AR(1) Sequence The usual Auto regressive process considers the time t from negative infinity and positive infinity, but what if we restrict our time to strict positive space, do we still have our stationary result? ... 0answers 16 views ### Strong Law of Large Numbers a.s. sense implies $L^1$ sense? I have to show that the strong law of large numbers in the almost sure sense implies the strong law of large numbers in the $L^1$ sense. I'm not sure what's being asked, can anyone give me a hint or ... 0answers 19 views ### Generating constrained random numbers I need to find a way to generate random vectors $v \in \mathbb{R}^{n_v}$ and $w \in \mathcal{R}^{n_w}$ such that they satisfy the condition $$R_1 v + R_2 w = c,$$ where \$R_1 \in \mathbb{R}^{n_c \times ... 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 130, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.910038411617279, "perplexity_flag": "head"}
http://en.wikibooks.org/wiki/Haskell/Denotational_semantics
# Haskell/Denotational semantics

New readers: Please report stumbling blocks! While the material on this page is intended to explain clearly, there are always mental traps that innocent readers new to the subject fall into but that the authors are not aware of. Please report any tricky passages to the Talk page or the #haskell IRC channel so that the style of exposition can be improved.

Denotational semantics (Solutions)

## Contents

Wider Theory:
- Denotational semantics
- Equational reasoning
- Program derivation
- Category theory
- The Curry-Howard isomorphism
- fix and recursion

## Introduction

This chapter explains how to formalize the meaning of Haskell programs: their denotational semantics. It may seem to be nit-picking to formally specify that the program `square x = x*x` means the same as the mathematical square function that maps each number to its square, but what about the meaning of a program like `f x = f (x+1)` that loops forever? In the following, we will exemplify the approach first taken by Scott and Strachey to this question and obtain a foundation to reason about the correctness of functional programs in general and recursive definitions in particular. Of course, we will concentrate on those topics needed to understand Haskell programs.[1]

Another aim of this chapter is to illustrate the notions strict and lazy that capture the idea that a function needs or needs not to evaluate its argument. This is a basic ingredient for predicting the course of evaluation of Haskell programs and hence of primary interest to the programmer. Interestingly, these notions can be formulated concisely with denotational semantics alone; no reference to an execution model is necessary. They will be put to good use in Graph Reduction, but it is this chapter that will familiarize the reader with the denotational definition and the notions involved, such as ⊥ ("Bottom"). The reader only interested in strictness may wish to poke around in section Bottom and Partial Functions and quickly head over to Strict and Non-Strict Semantics.

### What are Denotational Semantics and what are they for?

What does a Haskell program mean? This question is answered by the denotational semantics of Haskell. In general, the denotational semantics of a programming language map each of its programs to a mathematical object (its denotation) that represents the meaning of the program in question. As an example, the mathematical object for the Haskell programs `10`, `9+1`, `2*5` and `sum [1..4]` can be represented by the integer 10. We say that all those programs denote the integer 10. The collection of such mathematical objects is called the semantic domain.

The mapping from program code to a semantic domain is commonly written down with double square brackets ("Oxford brackets") around program code. For example,

$[\![\texttt{2*5}]\!] = 10.$

Denotations are compositional, i.e. the meaning of a program like `1+9` only depends on the meaning of its constituents:

$[\![\texttt{a+b}]\!] = [\![\texttt{a}]\!]+[\![\texttt{b}]\!].$

The same notation is used for types, i.e.

$[\![\texttt{Integer}]\!]=\mathbb{Z}.$

For simplicity however, we will silently identify expressions with their semantic objects in subsequent chapters and use this notation only when clarification is needed.
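To make compositionality concrete, here is a toy sketch (not from the original text; `Expr` and `denote` are names made up for this illustration) of how a denotation can be computed from the denotations of the constituents alone:

```
-- A tiny expression language and a denotation function for it.
data Expr = Lit Integer | Add Expr Expr | Mul Expr Expr

-- The denotation of a compound expression depends only on the
-- denotations of its parts, mirroring [[a+b]] = [[a]] + [[b]].
denote :: Expr -> Integer
denote (Lit n)   = n
denote (Add a b) = denote a + denote b
denote (Mul a b) = denote a * denote b
```

For example, `denote (Add (Lit 9) (Lit 1))`, `denote (Mul (Lit 2) (Lit 5))` and `denote (Lit 10)` all yield `10`, reflecting that `9+1`, `2*5` and `10` denote the same integer.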
It is one of the key properties of purely functional languages like Haskell that a direct mathematical interpretation like "`1+9` denotes 10" carries over to functions, too: in essence, the denotation of a program of type `Integer -> Integer` is a mathematical function $\mathbb{Z}\to\mathbb{Z}$ between integers. While we will see that this view needs refinement (namely, to include non-termination), the situation for imperative languages is clearly worse: a procedure with that type denotes something that changes the state of a machine in possibly unintended ways. Imperative languages are tightly tied to operational semantics which describes their way of execution on a machine. It is possible to define a denotational semantics for imperative programs and to use it to reason about such programs, but the semantics often has an operational nature and sometimes must be extended in comparison to the denotational semantics for functional languages.[2] In contrast, the meaning of purely functional languages is by default completely independent from their way of execution. The Haskell98 standard even goes so far as to specify only Haskell's non-strict denotational semantics, leaving open how to implement them.

In the end, denotational semantics enables us to develop formal proofs that programs indeed do what we want them to do mathematically. Ironically, for proving program properties in day-to-day Haskell, one can use Equational reasoning, which transforms programs into equivalent ones without seeing much of the underlying mathematical objects we are concentrating on in this chapter. But the denotational semantics actually show up whenever we have to reason about non-terminating programs, for instance in Infinite Lists.

Of course, because they only state what a program is, denotational semantics cannot answer questions about how long a program takes or how much memory it eats; this is governed by the evaluation strategy which dictates how the computer calculates the normal form of an expression. On the other hand, the implementation has to respect the semantics, and to a certain extent, it is the semantics that determine how Haskell programs must be evaluated on a machine. We will elaborate on this in Strict and Non-Strict Semantics.

### What to choose as Semantic Domain?

We are now looking for suitable mathematical objects that we can attribute to every Haskell program. In the case of the examples `10`, `2*5` and `sum [1..4]`, it is clear that all expressions should denote the integer 10. Generalizing, every value `x` of type `Integer` is likely to be an element of the set $\mathbb{Z}$. The same can be done with values of type `Bool`. For functions like `f :: Integer -> Integer`, we can appeal to the mathematical definition of "function" as a set of (argument, value) pairs, its graph.

But interpreting functions as their graph was too quick, because it does not work well with recursive definitions. Consider the definition

```
shaves :: Integer -> Integer -> Bool
1 `shaves` 1 = True
2 `shaves` 2 = False
0 `shaves` x = not (x `shaves` x)
_ `shaves` _ = False
```

We can think of `0`, `1` and `2` as being male persons with long beards and the question is who shaves whom. Person `1` shaves himself, but `2` gets shaved by the barber `0` because evaluating the third equation yields `0 `shaves` 2 == True`. In general, the third line says that the barber `0` shaves all persons that do not shave themselves. What about the barber himself: is `0 `shaves` 0` true or not?
If it is, then the third equation says that it is not. If it is not, then the third equation says that it is. Puzzled, we see that we just cannot attribute `True` or `False` to `0 `shaves` 0`; the graph we use as interpretation for the function `shaves` must have an empty spot. We realize that our semantic objects must be able to incorporate partial functions, functions that are undefined for some arguments.

It is well known that this famous example (the barber paradox, a popular form of Russell's paradox) gave rise to serious foundational problems in set theory. It's an example of an impredicative definition, a definition which uses itself, a logical circle. Unfortunately for recursive definitions, the circle is not the problem but the feature.

## Bottom and Partial Functions

### ⊥ Bottom

To define partial functions, we introduce a special value ⊥, named bottom and commonly written `_|_` in typewriter font. We say that ⊥ is the completely "undefined" value or function. Every basic data type like `Integer` or `()` contains one ⊥ besides its usual elements. So the possible values of type `Integer` are

$\bot, 0, \pm 1, \pm 2, \pm 3, \dots$

Adding ⊥ to the set of values is also called lifting. This is often depicted by a subscript like in $\mathbb{Z}_\bot$. While this is the correct notation for the mathematical set "lifted integers", we prefer to talk about "values of type `Integer`". We do this because $\mathbb{Z}_\bot$ suggests that there are "real" integers $\mathbb{Z}$, but inside Haskell, the "integers" are `Integer`. As another example, the type `()` with only one element actually has two inhabitants:

$\bot, ()$

For now, we will stick to programming with `Integer`s. Arbitrary algebraic data types will be treated in section Algebraic Data Types, as strict and non-strict languages diverge on how these include ⊥.

In Haskell, the expression `undefined` denotes ⊥. With its help, one can indeed verify some semantic properties of actual Haskell programs. `undefined` has the polymorphic type `forall a . a` which of course can be specialized to `undefined :: Integer`, `undefined :: ()`, `undefined :: Integer -> Integer` and so on. In the Haskell Prelude, it is defined as

```
undefined = error "Prelude.undefined"
```

As a side note, it follows from the Curry-Howard isomorphism that any value of the polymorphic type `forall a . a` must denote ⊥.

### Partial Functions and the Semantic Approximation Order

Now, $\bot$ gives us the possibility to denote partial functions:

$f(n) = \begin{cases} 1 & \mbox{ if } n \mbox{ is } 0 \\ -2 & \mbox{ if } n \mbox{ is } 1 \\ \bot & \mbox{ else } \end{cases}$

Here, $f(n)$ yields well defined values for $n=0$ and $n=1$ but gives $\bot$ for all other $n$. Note that $\bot$ inhabits every type; in particular, there is a completely undefined function $\bot$ `:: Integer -> Integer` given by $\bot(n) = \bot$ for all $n$, where the $\bot$ on the right hand side denotes a value of type `Integer`.

To formalize: partial functions, say of type `Integer -> Integer`, are at least mathematical mappings from the lifted integers $\mathbb{Z}_\bot=\{\bot, 0, \pm 1, \pm 2, \pm 3, \dots\}$ to the lifted integers. But this is not enough, since it does not acknowledge the special role of $\bot$. For example, the definition

$g(n) = \begin{cases} 1 & \mbox{ if } n \mbox{ is } \bot \\ \bot & \mbox{ else } \end{cases}$

looks counterintuitive, and, in fact, is wrong. Why does $g(\bot)$ yield a defined value whereas $g(1)$ is undefined? The intuition is that every partial function $g$ should yield more defined answers for more defined arguments.
To formalize, we can say that every concrete number is more defined than $\bot$:

$\bot\sqsubset 1\ ,\ \bot\sqsubset 2\ , \dots$

Here, $a\sqsubset b$ denotes that $b$ is more defined than $a$. Likewise, $a\sqsubseteq b$ will denote that either $b$ is more defined than $a$ or both are equal (and so have the same definedness). $\sqsubset$ is also called the semantic approximation order because we can approximate defined values by less defined ones, thus interpreting "more defined" as "approximating better". Of course, $\bot$ is designed to be the least element of a data type; we always have $\bot\sqsubset x$ for all $x$, except the case when $x$ happens to denote $\bot$ itself:

$\forall x\neq\bot\ \ \ \bot\sqsubset x$

As no number is more defined than another, the mathematical relation $\sqsubset$ is false for any pair of numbers:

$1 \sqsubset 1$ does not hold.

Neither $1 \sqsubset 2$ nor $2 \sqsubset 1$ holds.

This is contrasted to the ordinary order predicate $\le$, which can compare any two numbers. A quick way to remember this is the sentence: "$1$ and $2$ are different in terms of information content but are equal in terms of information quantity". That's another reason why we use a different symbol: $\sqsubseteq$.

Neither $1 \sqsubseteq 2$ nor $2 \sqsubseteq 1$ holds, but $1 \sqsubseteq 1$ holds.

One says that $\sqsubseteq$ specifies a partial order and that the values of type `Integer` form a partially ordered set (poset for short). A partial order is characterized by the following three laws:

• Reflexivity, everything is just as defined as itself: $x \sqsubseteq x$ for all $x$
• Transitivity: if $x \sqsubseteq y$ and $y \sqsubseteq z$, then $x \sqsubseteq z$
• Antisymmetry: if both $x \sqsubseteq y$ and $y \sqsubseteq x$ hold, then $x$ and $y$ must be equal: $x=y$.

Exercises

Do the integers form a poset with respect to the order $\le$?

We can depict the order $\sqsubseteq$ on the values of type `Integer` as a graph in which every link between two nodes means that the upper node is more defined than the lower one: $\bot$ sits alone at the bottom, and all the ordinary integers sit side by side one level above it, each linked only to $\bot$. Because there is only one level (excluding $\bot$), one says that `Integer` is a flat domain. The picture also explains the name of $\bot$: it's called bottom because it always sits at the bottom.

### Monotonicity

Our intuition about partial functions can now be formulated as follows: every partial function $f$ is a monotone mapping between partially ordered sets. More defined arguments will yield more defined values:

$x\sqsubseteq y \Longrightarrow f(x)\sqsubseteq f(y)$

In particular, a function $h$ with $h(\bot)=1$ is constant: $h(n)=1$ for all $n$. Note that here it is crucial that $1 \sqsubseteq 2$ etc. don't hold.

Translated to Haskell, monotonicity means that we cannot use $\bot$ as a condition, i.e. we cannot pattern match on $\bot$, or its equivalent `undefined`. Otherwise, the example $g$ from above could be expressed as a Haskell program. As we shall see later, $\bot$ also denotes non-terminating programs, so that the inability to observe $\bot$ inside Haskell is related to the halting problem.

Of course, the notion of more defined than can be extended to partial functions by saying that a function is more defined than another if it is so at every possible argument:

$f \sqsubseteq g \mbox{ if } \forall x. f(x) \sqsubseteq g(x)$

Thus, the partial functions also form a poset, with the undefined function $\bot(x)=\bot$ being the least element.
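The impossibility of observing ⊥ can be seen directly in GHCi. The following sketch (not from the original text; the session output is what one would expect from standard GHC) contrasts a function that ignores its argument with one that scrutinizes it:

```
-- h ignores its argument, so h ⊥ = 1; by monotonicity and the
-- flatness of Integer, h must then be the constant function 1.
h :: Integer -> Integer
h _ = 1

-- scrutinize forces its argument via (==), so scrutinize ⊥ = ⊥,
-- even though its result never depends on the argument.
scrutinize :: Integer -> Integer
scrutinize n = if n == 0 then 1 else 1

-- Expected GHCi session:
--   > h undefined
--   1
--   > scrutinize undefined
--   *** Exception: Prelude.undefined
```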
## Recursive Definitions as Fixed Point Iterations

### Approximations of the Factorial Function

Now that we have a means to describe partial functions, we can give an interpretation to recursive definitions. Let's take the prominent example of the factorial function $f(n)=n!$ whose recursive definition is

$f(n) = \mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n \cdot f(n-1)$

Although we saw that interpreting this directly as a set description may lead to problems, we intuitively know that in order to calculate $f(n)$ for every given $n$ we have to iterate the right hand side. This iteration can be formalized as follows: we calculate a sequence of functions $f_k$ with the property that each one consists of the right hand side applied to the previous one, that is

$f_{k+1}(n) = \mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n \cdot f_k(n-1)$

We start with the undefined function $f_0(n) = \bot$, and the resulting sequence of partial functions reads:

$f_1(n) = \begin{cases} 1 & \mbox{ if } n \mbox{ is } 0 \\ \bot & \mbox{ else } \end{cases} \ ,\ f_2(n) = \begin{cases} 1 & \mbox{ if } n \mbox{ is } 0 \\ 1 & \mbox{ if } n \mbox{ is } 1 \\ \bot & \mbox{ else } \end{cases} \ ,\ f_3(n) = \begin{cases} 1 & \mbox{ if } n \mbox{ is } 0 \\ 1 & \mbox{ if } n \mbox{ is } 1 \\ 2 & \mbox{ if } n \mbox{ is } 2 \\ \bot & \mbox{ else } \end{cases}$

and so on. Clearly,

$\bot=f_0 \sqsubseteq f_1 \sqsubseteq f_2 \sqsubseteq \dots$

and we expect that the sequence converges to the factorial function.

The iteration follows the well known scheme of a fixed point iteration

$x_0, g(x_0), g(g(x_0)), g(g(g(x_0))), \dots$

In our case, $x_0$ is a function and $g$ is a functional, a mapping between functions. We have

$x_0 = \bot$ and $g(x) = n\mapsto\mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n*x(n-1) \,$

If we start with $x_0 = \bot$, the iteration will yield increasingly defined approximations to the factorial function

$\bot\sqsubseteq g(\bot)\sqsubseteq g(g(\bot))\sqsubseteq g(g(g(\bot)))\sqsubseteq \dots$

(Proof that the sequence increases: The first inequality $\bot\sqsubseteq g(\bot)$ follows from the fact that $\bot$ is less defined than anything else. The second inequality follows from the first one by applying $g$ to both sides and noting that $g$ is monotone. The third follows from the second in the same fashion and so on.)

It is very illustrative to formulate this iteration scheme in Haskell. As functionals are just ordinary higher order functions, we have

```
g :: (Integer -> Integer) -> (Integer -> Integer)
g x = \n -> if n == 0 then 1 else n * x (n-1)

x0 :: Integer -> Integer
x0 = undefined

(f0:f1:f2:f3:f4:fs) = iterate g x0
```

We can now evaluate the functions `f0,f1,...` at sample arguments and see whether they yield `undefined` or not:

```
> f3 0
1
> f3 1
1
> f3 2
2
> f3 5
*** Exception: Prelude.undefined
> map f3 [0..]
[1,1,2,*** Exception: Prelude.undefined
> map f4 [0..]
[1,1,2,6,*** Exception: Prelude.undefined
> map f1 [0..]
[1,*** Exception: Prelude.undefined
```

Of course, we cannot use this to check whether `f4` is really undefined for all arguments.

### Convergence

To the mathematician, the question whether this sequence of approximations converges is still to be answered. For that, we say that a poset is a directed complete partial order (dcpo) iff every monotone sequence $x_0\sqsubseteq x_1\sqsubseteq \dots$ (also called a chain) has a least upper bound (supremum)

$\sup_{\sqsubseteq} \{x_0\sqsubseteq x_1\sqsubseteq \dots\} = x$.
If that's the case for the semantic approximation order, we can be sure that the monotone sequence of functions approximating the factorial function indeed has a limit. For our denotational semantics, we will only meet dcpos which have a least element $\bot$; these are called complete partial orders (cpo).

The `Integer`s clearly form a (d)cpo, because the monotone sequences consisting of more than one element must be of the form

$\bot\sqsubseteq\dots\sqsubseteq\ \bot\sqsubseteq n\sqsubseteq n\sqsubseteq \dots\sqsubseteq n$

where $n$ is an ordinary number. Thus, $n$ is already the least upper bound.

For functions `Integer -> Integer`, this argument fails because monotone sequences may be of infinite length. But because `Integer` is a (d)cpo, we know that for every point $n$, there is a least upper bound

$\sup_{\sqsubseteq} \{\bot=f_0(n) \sqsubseteq f_1(n) \sqsubseteq f_2(n) \sqsubseteq \dots\} =: f(n)$.

As the semantic approximation order is defined point-wise, the function $f$ is the supremum we looked for.

This completes our aim of transforming the impredicative definition of the factorial function into a well defined construction. Of course, it remains to be shown that $f(n)$ actually yields a defined value for every $n$, but this is not hard and far more reasonable than a completely ill-formed definition.

### Bottom includes Non-Termination

It is instructive to try our newly gained insight into recursive definitions on an example that does not terminate:

$f(n) = f(n+1)$

The approximating sequence reads

$f_0 = \bot, f_1 = \bot, \dots$

and consists only of $\bot$. Clearly, the resulting limit is $\bot$ again. From an operational point of view, a machine executing this program will loop indefinitely. We thus see that $\bot$ may also denote a non-terminating function or value. Hence, given the halting problem, pattern matching on $\bot$ in Haskell is impossible.

### Interpretation as Least Fixed Point

Earlier, we called the approximating sequence an example of the well known "fixed point iteration" scheme. And of course, the definition of the factorial function $f$ can also be thought of as the specification of a fixed point of the functional $g$:

$f = g(f) = n\mapsto\mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n\cdot f(n-1)$

However, there might be multiple fixed points. For instance, there are several $f$ which fulfill the specification

$f = n\mapsto\mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } f(n+1)$.

Of course, when executing such a program, the machine will loop forever on $f(1)$ or $f(2)$ and thus not produce any valuable information about the value of $f(1)$. This corresponds to choosing the least defined fixed point as semantic object $f$, and this is indeed a canonical choice. Thus, we say that

$f=g(f)$

defines the least fixed point $f$ of $g$. Clearly, least is with respect to our semantic approximation order $\sqsubseteq$.

The existence of a least fixed point is guaranteed by our iterative construction if we add the condition that $g$ must be continuous (sometimes also called "chain continuous").
That simply means that $g$ respects suprema of monotone sequences: $\sup_{\sqsubseteq}\{g(x_0)\sqsubseteq g(x_1) \sqsubseteq\dots\} = g\left(\sup_{\sqsubseteq}\{x_0\sqsubseteq x_1\sqsubseteq\dots\}\right)$ We can then argue that with $f=\sup_{\sqsubseteq}\{x_0\sqsubseteq g(x_0)\sqsubseteq g(g(x_0))\sqsubseteq\dots\}$ we have $\begin{array}{lcl} g(f) &=& g\left(\sup_{\sqsubseteq}\{x_0\sqsubseteq g(x_0)\sqsubseteq g(g(x_0))\sqsubseteq\dots\}\right)\\ &=& \sup_{\sqsubseteq}\{g(x_0)\sqsubseteq g(g(x_0))\sqsubseteq\dots\}\\ &=& \sup_{\sqsubseteq}\{x_0 \sqsubseteq g(x_0)\sqsubseteq g(g(x_0))\sqsubseteq\dots\}\\ &=& f \end{array}$ and the iteration limit is indeed a fixed point of $g$. You may also want to convince yourself that the fixed point iteration yields the least fixed point possible. Exercises Prove that the fixed point obtained by fixed point iteration starting with $x_0=\bot$ is also the least one, that it is smaller than any other fixed point. (Hint: $\bot$ is the least element of our cpo and $g$ is monotone) By the way, how do we know that each Haskell function we write down indeed is continuous? Just as with monotonicity, this has to be enforced by the programming language. Admittedly, these properties can somewhat be enforced or broken at will, so the question feels a bit void. But intuitively, monotonicity is guaranteed by not allowing pattern matches on $\bot$. For continuity, we note that for an arbitrary type `a`, every simple function `a -> Integer` is automatically continuous because the monotone sequences of `Integer`s are of finite length. Any infinite chain of values of type `a` gets mapped to a finite chain of `Integer`s and respect for suprema becomes a consequence of monotonicity. Thus, all functions of the special case `Integer -> Integer` must be continuous. For functionals like $g$`::(Integer -> Integer) -> (Integer -> Integer)`, the continuity then materializes due to currying, as the type is isomorphic to `::((Integer -> Integer), Integer) -> Integer` and we can take `a=((Integer -> Integer), Integer)`. In Haskell, the fixed interpretation of the factorial function can be coded as `factorial = fix g` with the help of the fixed point combinator `fix :: (a -> a) -> a`. We can define it by `fix f = let x = f x in x` which leaves us somewhat puzzled because when expanding $factorial$, the result is not anything different from how we would have defined the factorial function in Haskell in the first place. But of course, the construction this whole section was about is not at all present when running a real Haskell program. It's just a means to put the mathematical interpretation of Haskell programs on a firm ground. Yet it is very nice that we can explore these semantics in Haskell itself with the help of `undefined`. ## Strict and Non-Strict Semantics After having elaborated on the denotational semantics of Haskell programs, we will drop the mathematical function notation $f(n)$ for semantic objects in favor of their now equivalent Haskell notation `f n`. ### Strict Functions A function `f` with one argument is called strict, if and only if `f ⊥ = ⊥`. Here are some examples of strict functions ```id x = x succ x = x + 1 power2 0 = 1 power2 n = 2 * power2 (n-1) ``` and there is nothing unexpected about them. But why are they strict? It is instructive to prove that these functions are indeed strict. For `id`, this follows from the definition. For `succ`, we have to ponder whether `⊥ + 1` is `⊥` or not. 
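For instance, assembling the definitions from this section, we can check in GHCi that the least fixed point really computes factorials (a sketch of an assumed session; the definitions are the ones given above):

```
fix :: (a -> a) -> a
fix f = let x = f x in x

g :: (Integer -> Integer) -> (Integer -> Integer)
g x = \n -> if n == 0 then 1 else n * x (n - 1)

factorial :: Integer -> Integer
factorial = fix g

-- Expected GHCi session:
--   > map factorial [0..5]
--   [1,1,2,6,24,120]
```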
## Strict and Non-Strict Semantics

After having elaborated on the denotational semantics of Haskell programs, we will drop the mathematical function notation $f(n)$ for semantic objects in favor of their now equivalent Haskell notation `f n`.

### Strict Functions

A function `f` with one argument is called strict if and only if `f ⊥ = ⊥`. Here are some examples of strict functions

```
id x = x

succ x = x + 1

power2 0 = 1
power2 n = 2 * power2 (n-1)
```

and there is nothing unexpected about them. But why are they strict? It is instructive to prove that these functions are indeed strict. For `id`, this follows from the definition. For `succ`, we have to ponder whether `⊥ + 1` is `⊥` or not. If it were not, then we should for example have `⊥ + 1 = 2` or, more generally, `⊥ + 1 = k` for some concrete number `k`. We remember that every function is monotone, so we should have for example `2 = ⊥ + 1 ⊑ 4 + 1 = 5` as `⊥ ⊑ 4`. But neither `2 ⊑ 5`, `2 = 5` nor `2 ⊒ 5` is valid, so `k` cannot be 2. In general, we obtain the contradiction `k = ⊥ + 1 ⊑ k + 1`, which cannot hold because the different numbers `k` and `k + 1` are incomparable. Thus the only possible choice is

```
succ ⊥ = ⊥ + 1 = ⊥
```

and `succ` is strict.

Exercises

Prove that `power2` is strict. While one can base the proof on the "obvious" fact that `power2 n` is $2^n$, the latter is preferably proven using fixed point iteration.

### Non-Strict and Strict Languages

Searching for non-strict functions, it turns out that there is only one prototype of a non-strict function of type `Integer -> Integer`:

```
one x = 1
```

Its variants are `constk x = k` for every concrete number `k`. Why are these the only ones possible? Remember that `one n` can be no less defined than `one ⊥`. As `Integer` is a flat domain, both must be equal.

Why is `one` non-strict? To see that it is, we use a Haskell interpreter and try

```
> one (undefined :: Integer)
1
```

which is not ⊥. This is reasonable as `one` completely ignores its argument. When interpreting ⊥ in an operational sense as "non-termination", one may say that the non-strictness of `one` means that it does not force its argument to be evaluated and therefore avoids the infinite loop when evaluating the argument ⊥. But one might as well say that every function must evaluate its arguments before computing the result, which would mean that `one ⊥` should be ⊥, too. That is, if the program computing the argument does not halt, `one` should not halt as well.[3] It turns out that one can freely choose one or the other design for a functional programming language. One says that the language is strict or non-strict depending on whether functions are strict or non-strict by default. The choice for Haskell is non-strict. In contrast, the functional languages ML and Lisp choose strict semantics.

### Functions with several Arguments

The notion of strictness extends to functions with several variables. For example, a function `f` of two arguments is strict in the second argument if and only if `f x ⊥ = ⊥` for every `x`. But for multiple arguments, mixed forms where the strictness depends on the given value of the other arguments are much more common. An example is the conditional

```
cond b x y = if b then x else y
```

We see that it is strict in `y` depending on whether the test `b` is `True` or `False`:

```
cond True  x ⊥ = x
cond False x ⊥ = ⊥
```

and likewise for `x`. Apparently, `cond` is certainly ⊥ if both `x` and `y` are, but not necessarily when at least one of them is defined. This behavior is called joint strictness.

Clearly, `cond` behaves like the if-then-else statement where it is crucial not to evaluate both the `then` and the `else` branches:

```
if null xs then 'a' else head xs
if n == 0  then 1   else 5 / n
```

Here, the else part is ⊥ when the condition is met. Thus, in a non-strict language, we have the possibility to wrap primitive control statements such as if-then-else into functions like `cond`. This way, we can define our own control operators. In a strict language, this is not possible as both branches will be evaluated when calling `cond`, which makes it rather useless. This is a glimpse of the general observation that non-strictness offers more flexibility for code reuse than strictness. See the chapter Laziness[4] for more on this subject.
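As a small illustration of wrapping control flow into an ordinary function (a sketch, not from the original text; `safeDiv` is a made-up name), `cond` can be applied to a branch that denotes ⊥ without harming the result:

```
cond :: Bool -> a -> a -> a
cond b x y = if b then x else y

-- The unused branch may be ⊥:
--   cond True  1 undefined  ==>  1
--   cond False undefined 2  ==>  2

-- A self-made control operator built from cond; the division is
-- never evaluated when the divisor is zero:
safeDiv :: Integer -> Integer -> Integer
safeDiv n d = cond (d == 0) 0 (n `div` d)
```

In a strict language, `safeDiv` written this way would evaluate `n `div` d` before calling `cond` and hence fail for `d == 0`; non-strictness is exactly what makes such user-defined control operators work.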
## Algebraic Data Types

After treating the motivating case of partial functions between `Integer`s, we now want to extend the scope of denotational semantics to arbitrary algebraic data types in Haskell.

A word about nomenclature: the collection of semantic objects for a particular type is usually called a domain. This term is more a generic name than a particular definition, and we decide that our domains are cpos (complete partial orders), that is sets of values together with a relation "more defined than" that obeys some conditions to allow fixed point iteration. Usually, one adds additional conditions to the cpos that ensure that the values of our domains can be represented in some finite way on a computer, thereby avoiding having to ponder the twisted ways of uncountably infinite sets. But as we are not going to prove general domain theoretic theorems, the conditions will just happen to hold by construction.

### Constructors

Let's take the example types

```
data Bool    = True | False
data Maybe a = Just a | Nothing
```

Here, `True`, `False` and `Nothing` are nullary constructors whereas `Just` is a unary constructor. The inhabitants of `Bool` form a domain with ⊥ at the bottom and the two incomparable values `True` and `False` one level above it. Remember that ⊥ is added as least element to the set of values `True` and `False`; we say that the type is lifted[5]. A domain whose poset diagram consists of only one level is called a flat domain. We already know that `Integer` is a flat domain as well; it's just that the level above ⊥ has an infinite number of elements.

What are the possible inhabitants of `Maybe Bool`? They are

```
⊥, Nothing, Just ⊥, Just True, Just False
```

So the general rule is to insert all possible values into the unary (binary, ternary, ...) constructors as usual but without forgetting ⊥. Concerning the partial order, we remember the condition that the constructors should be monotone just as any other functions. Hence, in the resulting partial order, `Nothing` and `Just ⊥` sit directly above ⊥, and `Just True` and `Just False` sit above `Just ⊥`.

But there is something to ponder: why isn't `Just ⊥ = ⊥`? I mean "Just undefined" is as undefined as "undefined"! The answer is that this depends on whether the language is strict or non-strict. In a strict language, all constructors are strict by default, i.e. `Just ⊥ = ⊥`, and the diagram would reduce to a flat one with `Nothing`, `Just True` and `Just False` sitting directly above ⊥. As a consequence, all domains of a strict language are flat.

But in a non-strict language like Haskell, constructors are non-strict by default and `Just ⊥` is a new element different from ⊥, because we can write a function that reacts differently to them:

```
f (Just _) = 4
f Nothing  = 7
```

As `f` ignores the contents of the `Just` constructor, `f (Just ⊥)` is `4` but `f ⊥` is `⊥` (intuitively, if `f` is passed ⊥, it will not be possible to tell whether to take the `Just` branch or the `Nothing` branch, and so ⊥ will be returned). This gives rise to non-flat domains as described above.

What should these be of use for? In the context of Graph Reduction, we may also think of ⊥ as an unevaluated expression. Thus, a value `x = Just ⊥` may tell us that a computation (say a lookup) succeeded and is not `Nothing`, but that the true value has not been evaluated yet. If we are only interested in whether `x` succeeded or not, this actually saves us the unnecessary work of calculating whether `x` is `Just True` or `Just False`, as would be the case in a flat domain. The full impact of non-flat domains will be explored in the chapter Laziness, but one prominent example is infinite lists, treated in section Recursive Data Types and Infinite Lists.
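The distinction between ⊥ and `Just ⊥` can be observed directly in GHCi with the function `f` from above (a sketch of an assumed session):

```
f :: Maybe a -> Integer
f (Just _) = 4
f Nothing  = 7

-- Expected GHCi session:
--   > f (Just undefined)
--   4
--   > f undefined
--   *** Exception: Prelude.undefined
```

Since `f` maps `Just ⊥` and ⊥ to different results, the two cannot be the same element of the domain.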
### Pattern Matching

In the section Strict Functions, we proved that some functions are strict by inspecting their results on different inputs and insisting on monotonicity. However, in the light of algebraic data types, there can only be one source of strictness in real life Haskell: pattern matching, i.e. `case` expressions. The general rule is that pattern matching on a constructor of a `data`-type will force the function to be strict, i.e. matching ⊥ against a constructor always gives ⊥. For illustration, consider

```
const1 _ = 1
```

```
const1' True  = 1
const1' False = 1
```

The first function `const1` is non-strict whereas `const1'` is strict because it decides whether the argument is `True` or `False` although its result doesn't depend on that. Pattern matching in function arguments is equivalent to `case`-expressions

```
const1' x = case x of
   True  -> 1
   False -> 1
```

which similarly impose strictness on `x`: if the argument to the `case` expression denotes ⊥, the whole `case` will denote ⊥, too. However, the argument for case expressions may be more involved, as in

```
foo k table = case lookup ("Foo." ++ k) table of
  Nothing -> ...
  Just x  -> ...
```

and it can be difficult to track what this means for the strictness of `foo`.

An example of multiple pattern matches in the equational style is the logical `or`:

```
or True _    = True
or _    True = True
or _    _    = False
```

Note that equations are matched from top to bottom. The first equation for `or` matches the first argument against `True`, so `or` is strict in its first argument. The same equation also tells us that `or True x` is non-strict in `x`. If the first argument is `False`, then the second will be matched against `True` and `or False x` is strict in `x`. Note that while wildcards are a general sign of non-strictness, this depends on their position with respect to the pattern matches against constructors.

Exercises
1. Give an equivalent discussion for the logical `and`
2. Can the logical "excluded or" (`xor`) be non-strict in one of its arguments if we know the other?

There is another form of pattern matching, namely irrefutable patterns marked with a tilde `~`. Their use is demonstrated by

```
f ~(Just x) = 1
f Nothing   = 2
```

An irrefutable pattern always succeeds (hence the name), resulting in `f ⊥ = 1`. But when changing the definition of `f` to

```
f ~(Just x) = x + 1
f Nothing   = 2    -- this line may as well be left out
```

we have

```
f ⊥        = ⊥ + 1 = ⊥
f (Just 1) = 1 + 1 = 2
```

If the argument matches the pattern, `x` will be bound to the corresponding value. Otherwise, any variable like `x` will be bound to ⊥.

By default, `let` and `where` bindings are non-strict, too:

```
foo key map = let Just x = lookup key map in ...
```

is equivalent to

```
foo key map = case (lookup key map) of ~(Just x) -> ...
```

Exercises
1. The Haskell language definition gives the detailed semantics of pattern matching and you should now be able to understand it. So go on and have a look!
2. Consider a function `or` of two `Bool`ean arguments with the following properties:
```
or ⊥     ⊥     = ⊥
or True  ⊥     = True
or ⊥     True  = True
or False y     = y
or x     False = x
```
This function is another example of joint strictness, but a much sharper one: the result is only ⊥ if both arguments are (at least when we restrict the arguments to `True` and ⊥). Can such a function be implemented in Haskell?
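The strictness claims made above about `or` and about irrefutable patterns can be checked directly with `undefined` (a sketch of an assumed GHCi session; `or` is renamed `or'` here to avoid the clash with the Prelude, and `g` and `h` are made-up names):

```
-- or, as defined above, renamed to avoid the Prelude clash:
or' :: Bool -> Bool -> Bool
or' True _    = True
or' _    True = True
or' _    _    = False

-- Irrefutable versus ordinary pattern match:
g :: Maybe Integer -> Integer
g ~(Just x) = 1

h :: Maybe Integer -> Integer
h (Just x) = 1
h Nothing  = 2

-- Expected GHCi session:
--   > or' True undefined
--   True
--   > or' undefined True
--   *** Exception: Prelude.undefined
--   > g undefined
--   1
--   > h undefined
--   *** Exception: Prelude.undefined
```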
### Recursive Data Types and Infinite Lists

The case of recursive data structures is not very different from the base case. Consider a list of unit values

```
data List = [] | () : List
```

Though this seems like a simple type, there is a surprising number of ways you can fit $\bot$ in here and there, and therefore the corresponding graph is complicated. Its bottom part consists of ⊥, with `[]` and `():⊥` directly above it; above `():⊥` sit `():[]` and `():():⊥`, and so on. The finite lists such as `[]` and `():[]` are in normal form and end their chains, while in the other direction the graph continues indefinitely.

But now, there are also chains of infinite length like

```
⊥ ⊑ ():⊥ ⊑ ():():⊥ ⊑ ...
```

This causes us some trouble, as we noted in section Convergence that every monotone sequence must have a least upper bound. This is only possible if we allow for infinite lists. Infinite lists (sometimes also called streams) turn out to be very useful and their manifold use cases are treated in full detail in chapter Laziness. Here, we will show what their denotational semantics should be and how to reason about them. Note that while the following discussion is restricted to lists only, it easily generalizes to arbitrary recursive data structures like trees.

In the following, we will switch back to the standard list type

```
data [a] = [] | a : [a]
```

to close the syntactic gap to practical programming with infinite lists in Haskell.

Exercises
1. Draw the non-flat domain corresponding to `[Bool]`.
2. How is the graphic to be changed for `[Integer]`?

Calculating with infinite lists is best shown by example. For that, we need an infinite list

```
ones :: [Integer]
ones = 1 : ones
```

When applying the fixed point iteration to this recursive definition, we see that `ones` ought to be the supremum of

```
⊥ ⊑ 1:⊥ ⊑ 1:1:⊥ ⊑ 1:1:1:⊥ ⊑ ...
```

that is, an infinite list of `1`s. Let's try to understand what `take 2 ones` should be. With the definition of `take`

```
take 0 _      = []
take n (x:xs) = x : take (n-1) xs
take n []     = []
```

we can apply `take` to elements of the approximating sequence of `ones`:

```
take 2 ⊥       ==>  ⊥
take 2 (1:⊥)   ==>  1 : take 1 ⊥      ==>  1 : ⊥
take 2 (1:1:⊥) ==>  1 : take 1 (1:⊥)  ==>  1 : 1 : take 0 ⊥  ==>  1 : 1 : []
```

We see that `take 2 (1:1:1:⊥)` and so on must be the same as `take 2 (1:1:⊥) = 1:1:[]` because `1:1:[]` is fully defined. Taking the supremum on both the sequence of input lists and the resulting sequence of output lists, we can conclude

```
take 2 ones = 1:1:[]
```

Thus, taking the first two elements of `ones` behaves exactly as expected.

Generalizing from the example, we see that reasoning about infinite lists involves considering the approximating sequence and passing to the supremum, the truly infinite list. Still, we did not give it a firm ground. The solution is to identify the infinite list with the whole chain itself and to formally add it as a new element to our domain: the infinite list is the sequence of its approximations. Of course, any infinite list like `ones` can be compactly depicted as

```
ones = 1 : 1 : 1 : 1 : ...
```

which simply means that

```
ones = (⊥ ⊑ 1:⊥ ⊑ 1:1:⊥ ⊑ ...)
```

Exercises
1. Of course, there are more interesting infinite lists than `ones`. Can you write recursive definitions in Haskell for
   1. the natural numbers `nats = 1:2:3:4:...`
   2. a cycle like `cycle123 = 1:2:3: 1:2:3 : ...`
2. Look at the Prelude functions `repeat` and `iterate` and try to solve the previous exercise with their help.
3. Use the example from the text to find the value the expression `drop 3 nats` denotes.
4. Assume that we work in a strict setting, i.e. that the domain of `[Integer]` is flat. What does the domain look like? What about infinite lists? What value does `ones` denote?
What about the puzzle of how a computer can calculate with infinite lists? It takes an infinite amount of time, after all. Well, this is true. But the trick is that the computer may well finish in a finite amount of time if it only considers a finite part of the infinite list. So, infinite lists should be thought of as potentially infinite lists. In general, intermediate results take the form of infinite lists whereas the final value is finite. It is one of the benefits of denotational semantics that one may treat the intermediate infinite data structures as truly infinite when reasoning about program correctness.

Exercises
1. To demonstrate the use of infinite lists as intermediate results, show that
```
take 3 (map (+1) nats) = take 3 (tail nats)
```
by first calculating the infinite sequence corresponding to `map (+1) nats`.
2. Of course, we should give an example where the final result indeed takes an infinite time. So, what does
```
filter (< 5) nats
```
denote?
3. Sometimes, one can replace `filter` with `takeWhile` in the previous exercise. Why only sometimes and what happens if one does?

As a last note, the construction of a recursive domain can be done by a fixed point iteration similar to the recursive definitions for functions. Yet, the problem of infinite chains has to be tackled explicitly. See the literature in External Links for a formal construction.

### Haskell specialities: Strictness Annotations and Newtypes

Haskell offers a way to change the default non-strict behavior of data type constructors by strictness annotations. In a data declaration like

```
data Maybe' a = Just' !a | Nothing'
```

an exclamation point `!` before an argument of the constructor specifies that the constructor should be strict in this argument. Hence we have `Just' ⊥ = ⊥` in our example. Further information may be found in chapter Strictness.

In some cases, one wants to rename a data type, like in

```
data Couldbe a = Couldbe (Maybe a)
```

However, `Couldbe a` contains both the elements `⊥` and `Couldbe ⊥`. With the help of a `newtype` definition

```
newtype Couldbe a = Couldbe (Maybe a)
```

we can arrange that `Couldbe a` is semantically equal to `Maybe a`, but different during type checking. In particular, the constructor `Couldbe` is strict. Yet, this definition is subtly different from

```
data Couldbe' a = Couldbe' !(Maybe a)
```

To explain how, consider the functions

```
f  (Couldbe  m) = 42
f' (Couldbe' m) = 42
```

Here, `f' ⊥` will cause the pattern match on the constructor `Couldbe'` to fail, with the effect that `f' ⊥ = ⊥`. But for the newtype, the match on `Couldbe` will never fail, and we get `f ⊥ = 42`. In a sense, the difference can be stated as:

• for the strict case, `Couldbe' ⊥` is a synonym for ⊥
• for the newtype, ⊥ is a synonym for `Couldbe ⊥`

with the agreement that a pattern match on ⊥ fails and that a match on `Constructor ⊥` does not.

Newtypes may also be used to define recursive types. An example is the alternate definition of the list type `[a]`

```
newtype List a = In (Maybe (a, List a))
```

Again, the point is that the constructor `In` does not introduce an additional lifting with ⊥.
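The difference between the `newtype` and the strict `data` variant can again be observed with `undefined`, using the definitions from this section (a sketch of an assumed GHCi session):

```
newtype Couldbe  a = Couldbe  (Maybe a)
data    Couldbe' a = Couldbe' !(Maybe a)

f :: Couldbe a -> Integer
f (Couldbe m) = 42

f' :: Couldbe' a -> Integer
f' (Couldbe' m) = 42

-- Expected GHCi session:
--   > f undefined
--   42                 -- the newtype match is never performed
--   > f' undefined
--   *** Exception: Prelude.undefined
```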
## Other Selected Topics

### Abstract Interpretation and Strictness Analysis

As lazy evaluation means a constant computational overhead, a Haskell compiler may want to discover where inherent non-strictness is not needed at all, which allows it to drop the overhead at those particular places. To that end, the compiler performs strictness analysis, just like we proved some functions to be strict in section Strict Functions. Of course, details of strictness depending on the exact values of arguments, like in our example `cond`, are out of scope (this is in general undecidable). But the compiler may try to find approximate strictness information, and this works in many common cases like `power2`.

Now, abstract interpretation is a formidable idea for reasoning about strictness: ...

To do: Complete section

### Interpretation as Powersets

So far, we have introduced ⊥ and the semantic approximation order $\sqsubseteq$ abstractly by specifying their properties. However, both, as well as any inhabitants of a data type like `Just ⊥`, can be interpreted as ordinary sets. This is called the powerset construction. NOTE: I'm not sure whether this is really true. Someone who knows, please correct this.

The idea is to think of ⊥ as the set of all possible values and that a computation retrieves more information by choosing a subset. In a sense, the denotation of a value starts its life as the set of all values, which will be reduced by computations until there remains a set with a single element only.

As an example, consider `Bool` where the domain looks like

```
{True}   {False}
    \     /
     \   /
⊥ = {True, False}
```

The values `True` and `False` are encoded as the singleton sets `{True}` and `{False}` and ⊥ is the set of all possible values.

Another example is `Maybe Bool`:

```
          {Just True}  {Just False}
                 \       /
                  \     /
{Nothing}  {Just True, Just False}
       \        /
        \      /
⊥ = {Nothing, Just True, Just False}
```

We see that the semantic approximation order is equivalent to set inclusion, but with arguments switched:

$x\sqsubseteq y \iff x \supseteq y$

This approach can be used to give a semantics to exceptions in Haskell[6].

### Naïve Sets are unsuited for Recursive Data Types

In section Naïve Sets are unsuited for Recursive Definitions, we argued that taking simple sets as denotation for types doesn't work well with partial functions. In the light of recursive data types, things become even worse, as John C. Reynolds showed in his paper Polymorphism is not set-theoretic[7].

Reynolds actually considers the recursive type

```
newtype U = In ((U -> Bool) -> Bool)
```

Interpreting `Bool` as the set `{True,False}` and the function type `A -> B` as the set of functions from `A` to `B`, the type `U` cannot denote a set. This is because `(A -> Bool)` is the set of subsets (powerset) of `A` which, due to a diagonal argument analogous to Cantor's argument that there are "more" real numbers than natural ones, always has a bigger cardinality than `A`. Thus, `(U -> Bool) -> Bool` has an even bigger cardinality than `U` and there is no way for it to be isomorphic to `U`. Hence, the set `U` cannot exist, a contradiction.

In our world of partial functions, this argument fails. Here, an element of `U` is given by a sequence of approximations taken from the sequence of domains

```
⊥, (⊥ -> Bool) -> Bool, (((⊥ -> Bool) -> Bool) -> Bool) -> Bool
```

and so on, where ⊥ denotes the domain with the single inhabitant ⊥. While the author of this text admittedly has no clue on what such a thing should mean, the constructor gives a perfectly well defined object for `U`.
We see that the type `(U -> Bool) -> Bool` merely consists of shifted approximating sequences, which means that it is isomorphic to `U`.

As a last note, Reynolds actually constructs an equivalent of `U` in the second order polymorphic lambda calculus. There, it happens that all terms have a normal form, i.e. there are only total functions, when we do not include a built-in recursion operator `fix :: (a -> a) -> a`. Thus, there is no true need for partial functions and ⊥, yet a naïve set theoretic semantics fails. We can only speculate that this has to do with the fact that not every mathematical function is computable. In particular, the set of computable functions `A -> Bool` should not have a bigger cardinality than `A`.

## Notes

1. In fact, there are no written down and complete denotational semantics of Haskell. This would be a tedious task void of additional insight and we happily embrace the folklore and common sense semantics.
4. The term Laziness comes from the fact that the prevalent implementation technique for non-strict languages is called lazy evaluation.
5. The term lifted is somewhat overloaded, see also Unboxed Types.
7. John C. Reynolds. Polymorphism is not set-theoretic. INRIA Rapports de Recherche No. 296. May 1984.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 162, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9233648180961609, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/39415-how-heck-do-i-integrate.html
# Thread:

1. ## How the heck do I integrate this?

I just got done taking a midterm. Anyway, I managed to at least start most of the problems but this one just blew me away. I tried using some identities, etc. to make it look less evil but was not able to. (The integral in question, reconstructed from the reply below: $\int x\csc^{2}(2x)\,dx$.)

2. One way is to use parts. Let

$u=x, \;\ dv=\csc^{2}(2x)\,dx, \;\ du=dx, \;\ v=-\frac{1}{2}\cot(2x)$

$-\frac{1}{2}x\cot(2x)+\frac{1}{2}\int \cot(2x)\,dx$

Take note that $\int \cot(u)\,du=\ln|\sin(u)|+C$
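Carrying the hint through (this closing step is not in the original thread): substituting $u=2x$ gives

$\frac{1}{2}\int \cot(2x)\,dx = \frac{1}{4}\ln|\sin(2x)|+C,$

so altogether

$\int x\csc^{2}(2x)\,dx = -\frac{x}{2}\cot(2x)+\frac{1}{4}\ln|\sin(2x)|+C.$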
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9615324139595032, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/260891/chromatic-polynomials-for-graphs/261027
# Chromatic Polynomials for Graphs

The chromatic polynomial of a graph $G$ is the polynomial $C_G(k)$ computed recursively using the theorem of Birkhoff and Lewis. The theorem of Birkhoff and Lewis states:

$c_G(k) = c_{G-e}(k) - c_{G/e}(k)$

where $e$ is any edge from $G$, and

• $G - e$ is the graph obtained from $G$ by removing edge $e$.
• $G/e$ is the graph obtained from $G$ by removing $e$, identifying the end vertices of $e$, and leaving only one copy of any resulting multiple edges.

Given the graphs $K_{1,3}$, $K_{1,5}$, $C_4$, $C_5$ and $K_4-e$, find the chromatic polynomials and determine how many $5$-colorings exist.

Appreciate any help and answers.

-

## 1 Answer

You also need some base cases, which can be deduced from the definition of a chromatic polynomial.

• The chromatic polynomial for $K_n$ (the complete graph on $n$ vertices) is $k(k-1)\cdots(k-n+1)$.
• The chromatic polynomial for $\overline{K_n}$ (the null graph on $n$ vertices; i.e. no edges) is $k^n$.

The general idea is to delete/contract edges until you are left with only these base cases (or some other case you have already computed).

Here's an example of performing the deletion-contraction method on the graph $K_{1,2}$, the path on three vertices (the pictures from the original answer are lost, so the graphs are described in words here). Pick an edge $e$ and perform deletion/contraction on it: deleting $e$ leaves a single edge together with an isolated vertex, while contracting $e$ yields the complete graph $K_2$. We separately compute the chromatic polynomials of these two smaller graphs. Using the base cases, we know the deleted graph has chromatic polynomial $k \cdot k(k-1) = k^3-k^2$, which we substitute into the first equation to give the chromatic polynomial:

$c_{K_{1,2}}(k) = (k^3-k^2)-k(k-1)=k(k-1)^2.$

If you want to know the number of ways of $5$-colouring $K_{1,2}$, we simply substitute $5$ into its chromatic polynomial. It turns out there are $80$ ways to $5$-colour $K_{1,2}$.

Note: in addition to the "deletion-contraction" relation, there is also an "addition-identification" relation which, in some cases, will be significantly faster.

-

But if I am to draw $K_{1,3}$ like you drew, how will that look like? Your graphs look easier to use when it comes to computing polynomials, or I might be wrong. – Dream Box Dec 18 '12 at 9:39

It's essentially the same thing. You just pick an edge e, and find the two graphs formed (a) when you delete the edge and (b) when you contract the edge. If you're not at a "base case" then repeat for the graphs obtained in (a) and/or (b). – Douglas S. Stones Dec 18 '12 at 21:37

So what you've done there is.. in the first graph, you show the graph without the edge - the edge+vertices that have been "removed". Then in the second graph you take the graph above and take that other edge left and remove it. Right? Or am I not getting something right here. Though why in the second graph there are only 2 dots and not with an edge as well? – Dream Box Dec 19 '12 at 11:25
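Addendum (this worked computation is not part of the original thread): the same method handles one of the asked-for graphs, $C_4$. Deleting an edge of $C_4$ gives the path $P_4$, whose chromatic polynomial is $k(k-1)^3$ (the first vertex has $k$ choices and each successive vertex $k-1$), while contracting the edge gives $C_3 = K_3$. Hence

$c_{C_4}(k) = c_{P_4}(k) - c_{K_3}(k) = k(k-1)^3 - k(k-1)(k-2) = (k-1)^4 + (k-1).$

Substituting $k=5$ gives $4^4 + 4 = 260$ five-colourings of $C_4$.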
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238671660423279, "perplexity_flag": "head"}
http://mathoverflow.net/questions/2083/stein-manifolds-and-affine-varieties
## Stein Manifolds and Affine Varieties

When is a Stein manifold a complex affine variety? I had thought that there was a theorem saying that if a variety is Stein and has a finitely generated ring of regular functions, then it is affine, but in the comments to my answer here, Serre's counterexample was brought up. I'm guessing that the answer is that the ring of regular functions must be nontrivial somehow, like it must separate points, but I'm curious about what the exact condition is.

-

I have to ask: what is "gaga"? Google did not turn up anything useful. – Darsh Ranjan Oct 26 2009 at 5:41

GAGA refers to Serre's paper "Géométrie algébrique et géométrie analytique" and more generally, the philosophy that it embodies that there is a correspondence between complex analytic geometry and complex algebraic geometry. Serre's paper, I believe, focuses on proving that for any variety and sheaf, there is an analytic space and sheaf defined naturally to be the analytifications of what you started with, and that cohomology doesn't change. That is, you can compute cohomology of coherent sheaves in the complex topology. – Charles Siegel Oct 26 2009 at 11:52

## 1 Answer

Charlie, it is funny answering this way but here it is. The criterion you are thinking about is a criterion that is relative to an embedding. It says that if $X$ is a quasi-affine complex normal variety whose associated analytic space $X^{an}$ is Stein, then $X$ is affine if (and only if) the algebra $\Gamma(X,\mathcal{O}_{X})$ is finitely generated. This is a theorem of Neeman.

You can reformulate the requirement of $X$ being quasi-affine as a separation of points property: for any point $x \in X$ consider the subset $S_{x} \subset X$ defined as the set of all points $y \in X$ such that all regular functions on $X$ have equal values at $x$ and $y$. Then by an old theorem of Goodman and Hartshorne $X$ is quasi-affine if $S_{x}$ is finite for all $x$. So you can say that $X$ is affine if it satisfies:

1) $X^{an}$ is Stein;
2) $S_{x}$ is finite for all $x \in X$;
3) $\Gamma(X,\mathcal{O}_{X})$ is finitely generated.

-

5 Answering this way (rather than in person) means more people get to see it! – Allen Knutson Feb 17 2010 at 19:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427351355552673, "perplexity_flag": "head"}
http://mathoverflow.net/questions/123135/modern-developments-in-finite-dimensional-linear-algebra
## Modern developments in finite-dimensional linear algebra

Are there any major fundamental results in finite-dimensional linear algebra discovered after the early XX century? Fundamental in the sense of non-numerical (numerical results, of course, are still interesting and important); and major in the sense of something on the scale of SVD or the Jordan normal form.

(EDIT) As several commenters observed, using the Jordan normal form as a benchmark sets the bar way too high. Let's try lowering it to Weyl's inequality.

- 6 Does the computational complexity of matrix multiplication (for arbitrary fields) count as "numerical"? If not, the critical exponent has moved as recently as last year. – Felipe Voloch Feb 27 at 20:13

1 @Felipe, does a small movement in the critical exponent count as "major"? Jordan normal form is setting the bar rather high.... – Gerry Myerson Feb 27 at 23:25

@Gerry: major enough to get published in JAMS. – Abdelmalek Abdesselam Feb 27 at 23:28

10 I'm fond of Weyl's question from 1912: given the spectra $\lambda,\mu$ of two Hermitian matrices, what can you say about the spectrum $\nu$ of the sum? Weyl gave the first inequalities on this. The full list of inequalities was conjectured in the 1960s, proven in the late 1990s, and only cut down to the minimal list this century. I won't put this as an "answer" because seriously, Jordan normal form! – Allen Knutson Feb 28 at 2:06

3 Here's our survey article on that result: arxiv.org/abs/math/0009048 – Allen Knutson Feb 28 at 4:08

## 7 Answers

Definitely, some items at the top of my list are:

1. Random matrix theory, both asymptotic and non-asymptotic, including things like the semicircular law, the circular law, and so on. Check out Terry Tao's blog for very nice summaries. (A small numerical illustration of the semicircular law follows this answer.)
2. The resolution of Horn's conjecture (see this nice summary article by R. Bhatia, which also mentions several other nice connections)
3. Randomised linear algebra and progress on fast solutions to linear systems (see e.g., the very readable summary in N. Vishnoi's web book)
4. Advances in quantum information theory? Though I don't know how much of that I would push into just linear algebra
5. Not advances in linear algebra itself, but the gigantic success of basic linear algebra in new areas (machine learning, information retrieval, etc., e.g., Google's PageRank method).

- While I admit that 'non-numerical' is a bit of a vague criterion, I would still think that an almost-linear-time solution is not 'non-numerical', in the sense I believe was intended. – quid Feb 28 at 14:40
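To make item 1 of the list above concrete, here is a small numpy sketch (mine, not the answerer's) of the semicircular law: the rescaled eigenvalues of a large random real symmetric matrix fill out the density $\sqrt{4-x^2}/(2\pi)$ on $[-2,2]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2)           # symmetric; off-diagonal entries ~ N(0, 1)
eig = np.linalg.eigvalsh(h) / np.sqrt(n)

hist, edges = np.histogram(eig, bins=10, range=(-2, 2), density=True)
mids = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.maximum(4 - mids ** 2, 0)) / (2 * np.pi)
for x, emp, thy in zip(mids, hist, semicircle):
    print(f"x={x:+.1f}  empirical={emp:.3f}  semicircle={thy:.3f}")
```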
I would say the theory of quivers, and in particular Gabriel's theorem on finite representation type and its extensions to tame type. Representations of quivers are essentially linear algebra problems in a different language. For instance, the Jordan canonical form is the description of indecomposable reps of a quiver with one vertex and a loop. In general, things like the classification of two endomorphisms of vector spaces, matrix pencils and the n-subspace problem are all problems in the rep theory of quivers. The intro to the book of Gabriel-Roiter says more.

Added. A quiver is a directed multigraph, often assumed finite in this context. A representation of a quiver Q is an assignment of a vector space to each vertex and a linear transformation to each edge, from the vector space at its source to the vector space at its target. Isomorphisms are isomorphisms of vertex spaces making commuting squares with the edge linear transformations. There is a fairly straightforward notion of direct sum and hence of indecomposable rep. Finite rep type means finitely many isoclasses of indecomposables; tame type essentially means indecomposables come in 1-parameter families (plus finitely many exceptions) if you fix the dimensions of the vertex spaces. Wild means its representation theory contains that of all finite dimensional (and hence all finitely generated) algebras; in particular the first order theory is undecidable. Only finite, tame and wild occur.

- Hmm. Are there any applications of quivers in linear algebra other than Jordan's form? After brief googling, it seems to me that quivers are used in all branches of mathematics, except for LA. – Timur Feb 28 at 3:39

1 Subspace problems of the form 'classify all ways to embed n subspaces in a vector space' can be studied using quivers. The four subspace problem is studied in a nice paper of Gelfand and Ponomarev, 'Problems of linear algebra and classification of quadruples of subspaces...'. – George Melvin Feb 28 at 3:53

15 Timur, the representation of quivers is linear algebra. – Mariano Suárez-Alvarez Feb 28 at 4:29

Just putting in the references asked for by Timur:

• J. M. Landsberg, "The border rank of the multiplication of $2\times 2$ matrices is seven", J. American Math. Soc. 19 (2006), 447-459.
• J. M. Landsberg and G. Ottaviani, "New lower bounds for the border rank of matrix multiplication", 2011 preprint.

-

Since you lowered the level to Weyl's inequalities (1912), it is worth mentioning the improvements of these inequalities made by Ky Fan, Lidskii and others. They culminated in a much involved conjecture by A. Horn (1961), eventually proved by Knutson & Tao at the turn of the century.

-

This is a borderline suggestion, both in terms of how "major" it is and timing (does 1931 count as "early" 20th century?), but there is the Gershgorin circle theorem.

-

Also a borderline suggestion, since it is multilinear rather than just linear: recent progress on low rank tensor approximation for all kinds of different applications within mathematics. A list of applications from this preprint includes

• approximation of multidimensional integrals
• electronic structure calculations
• solving stochastic or parameter dependent PDEs
• approximating Green's functions in high dimensions
• solving Boltzmann-type equations or high-dimensional Schrödinger equations
• rational approximation problems
• computational finance
• multivariate regression and machine learning.

-

From Wikipedia: "The linear-programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems."

- 1 Both the ellipsoid and the interior point method look to me (note: I'm by no means an expert on this) like analysis-flavored algorithms built specifically for $\mathbb R$ rather than the setting of a general ordered field (or even real closed field); I wouldn't necessarily call them linear algebra for these reasons... – darij grinberg Feb 28 at 5:03

I would not say that the ellipsoid method is "built for $\mathbf R$". It has a definitely arithmetic flavor. Indeed, to get a lower bound for the volume of ellipsoids, one crucially uses the (trivial but overwhelmingly important) fact that the absolute value of a nonzero integer is at least $1$. – ACL Feb 28 at 13:24

While I admit that 'non-numerical' is a bit of a vague criterion, I would still think that this is not 'non-numerical', in the sense I believe was intended. – quid Feb 28 at 14:39
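Since Weyl's inequalities ended up as the question's benchmark, here is a quick numerical check (an illustrative numpy sketch of mine, not part of the thread): with eigenvalues sorted in decreasing order, $\lambda_{i+j-1}(A+B) \le \lambda_i(A) + \lambda_j(B)$ for Hermitian $A$, $B$ whenever $i+j-1 \le n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def rand_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

A, B = rand_hermitian(n), rand_hermitian(n)
la = np.sort(np.linalg.eigvalsh(A))[::-1]      # decreasing order
lb = np.sort(np.linalg.eigvalsh(B))[::-1]
ls = np.sort(np.linalg.eigvalsh(A + B))[::-1]

ok = all(ls[i + j - 2] <= la[i - 1] + lb[j - 1] + 1e-10   # 1-based indices
         for i in range(1, n + 1) for j in range(1, n + 1)
         if i + j - 1 <= n)
print("Weyl's inequalities hold:", ok)
```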
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9194716215133667, "perplexity_flag": "middle"}
http://www.absoluteastronomy.com/topics/Characteristic_function_(probability_theory)
# Characteristic function (probability theory)

In probability theory and statistics, the characteristic function of any random variable completely defines its probability distribution. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables. In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can even be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.
## Introduction

The characteristic function provides an alternative way of describing a random variable. Similarly to the cumulative distribution function

$F_X(x) = \operatorname{E}[\,\mathbf{1}_{\{X\leq x\}}\,]$

(where $\mathbf{1}_{\{X\leq x\}}$ is the indicator function: it is equal to 1 when $X\leq x$, and zero otherwise), which completely determines the behavior and properties of the probability distribution of the random variable $X$, the characteristic function

$\varphi_X(t) = \operatorname{E}[\,e^{itX}\,]$

also completely determines the behavior and properties of the probability distribution of $X$. The two approaches are equivalent in the sense that knowing one of the functions it is always possible to find the other, yet they provide different insights into the features of the random variable. However, in particular cases, there can be differences in whether these functions can be represented as expressions involving simple standard functions.

If a random variable admits a density function, then the characteristic function is its dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function $M_X(t)$, then the domain of the characteristic function can be extended to the complex plane, and

$\varphi_X(-it) = M_X(t).$

Note, however, that the characteristic function of a distribution always exists, even when the probability density function or the moment-generating function do not.

The characteristic function approach is particularly useful in the analysis of linear combinations of independent random variables. Another important application is to the theory of the decomposability of random variables.

## Definition

For a scalar random variable $X$ the characteristic function is defined as the expected value of $e^{itX}$, where $i$ is the imaginary unit and $t\in\mathbb{R}$ is the argument of the characteristic function:

$\varphi_X(t) = \operatorname{E}\bigl[e^{itX}\bigr] = \int_{\mathbb{R}} e^{itx}\,dF_X(x) \;\Bigl(= \int_{\mathbb{R}} e^{itx} f_X(x)\,dx\Bigr).$

Here $F_X$ is the cumulative distribution function of $X$, and the integral is of the Riemann–Stieltjes kind. If the random variable $X$ has a probability density function $f_X$, then the characteristic function is its Fourier transform, and the last formula in parentheses is valid.

Note, though, that this convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform. For example, some authors define $\hat\varphi_X(t) = \operatorname{E}[e^{2\pi itX}]$, which is essentially a change of parameter. Other notation may be encountered in the literature: $\hat p$ as the characteristic function for a probability measure $p$, or $\hat f$ as the characteristic function corresponding to a density $f$.

The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable $X$ takes values. For common cases such definitions are listed below:

• If $X$ is a $k$-dimensional random vector, then for $t\in\mathbb{R}^k$: $\varphi_X(t) = \operatorname{E}\bigl[e^{\,it^{\mathsf T}X}\bigr]$
• If $X$ is a $k\times p$-dimensional random matrix, then for $t\in\mathbb{R}^{k\times p}$: $\varphi_X(t) = \operatorname{E}\bigl[e^{\,i\operatorname{tr}(t^{\mathsf T}X)}\bigr]$
• If $X$ is a complex random variable, then for $t\in\mathbb{C}$: $\varphi_X(t) = \operatorname{E}\bigl[e^{\,i\operatorname{Re}(\overline{t}X)}\bigr]$
• If $X$ is a $k$-dimensional complex random vector, then for $t\in\mathbb{C}^k$: $\varphi_X(t) = \operatorname{E}\bigl[e^{\,i\operatorname{Re}(t^{*}X)}\bigr]$
• If $X(s)$ is a stochastic process, then for all functions $t(s)$ such that the integral $\int_{\mathbb{R}} t(s)X(s)\,ds$ converges for almost all realizations of $X$: $\varphi_X(t) = \operatorname{E}\bigl[e^{\,i\int_{\mathbb{R}} t(s)X(s)\,ds}\bigr]$

Here $t^{\mathsf T}$ denotes the matrix transpose, $\operatorname{tr}(\cdot)$ the matrix trace operator, $\operatorname{Re}(\cdot)$ the real part of a complex number, $\overline{z}$ the complex conjugate, and $*$ the conjugate transpose (that is, $z^* = \overline{z}^{\,\mathsf T}$).

## Examples

| Distribution | Characteristic function $\varphi(t)$ |
| --- | --- |
| Degenerate $\delta_a$ | $e^{ita}$ |
| Bernoulli Bern($p$) | $1-p+pe^{it}$ |
| Binomial B($n$, $p$) | $(1-p+pe^{it})^n$ |
| Negative binomial NB($r$, $p$) | $\bigl(\frac{1-p}{1-pe^{it}}\bigr)^r$ |
| Poisson Pois($\lambda$) | $e^{\lambda(e^{it}-1)}$ |
| Uniform U($a$, $b$) | $\frac{e^{itb}-e^{ita}}{it(b-a)}$ |
| Laplace L($\mu$, $b$) | $\frac{e^{it\mu}}{1+b^2t^2}$ |
| Normal N($\mu$, $\sigma^2$) | $e^{it\mu-\frac{1}{2}\sigma^2t^2}$ |
| Chi-squared $\chi^2_k$ | $(1-2it)^{-k/2}$ |
| Cauchy Cauchy($\mu$, $\theta$) | $e^{it\mu-\theta\lvert t\rvert}$ |
| Gamma $\Gamma(k,\theta)$ | $(1-it\theta)^{-k}$ |
| Exponential Exp($\lambda$) | $(1-it\lambda^{-1})^{-1}$ |
| Multivariate normal N($\mu$, $\Sigma$) | $e^{it^{\mathsf T}\mu-\frac{1}{2}t^{\mathsf T}\Sigma t}$ |

Oberhettinger (1973) provides extensive tables of characteristic functions.

## Properties

• The characteristic function of a random variable always exists, since it is an integral of a bounded continuous function over a space whose measure is finite.
• A characteristic function is uniformly continuous on the entire space.
• It is non-vanishing in a region around zero: $\varphi(0)=1$.
• It is bounded: $|\varphi(t)|\leq 1$.
• It is Hermitian: $\varphi(-t) = \overline{\varphi(t)}$. In particular, the characteristic function of a symmetric (around the origin) random variable is real-valued and even.
• There is a bijection between distribution functions and characteristic functions. That is, two random variables $X_1$, $X_2$ have the same probability distribution if and only if $\varphi_{X_1}=\varphi_{X_2}$.
• If a random variable $X$ has moments up to $k$-th order, then the characteristic function $\varphi_X$ is $k$ times continuously differentiable on the entire real line. In this case $\operatorname{E}[X^k] = i^{-k}\varphi_X^{(k)}(0)$.
• If a characteristic function $\varphi_X$ has a $k$-th derivative at zero, then the random variable $X$ has all moments up to $k$ if $k$ is even, but only up to $k-1$ if $k$ is odd.
• If $X_1$, …, $X_n$ are independent random variables, and $a_1$, …, $a_n$ are some constants, then the characteristic function of the linear combination of the $X_i$'s is $\varphi_{a_1X_1+\cdots+a_nX_n}(t) = \varphi_{X_1}(a_1t)\cdots\varphi_{X_n}(a_nt)$. One specific case is the sum of two independent random variables $X_1$ and $X_2$, in which case $\varphi_{X_1+X_2}(t)=\varphi_{X_1}(t)\,\varphi_{X_2}(t)$.
• The tail behavior of the characteristic function determines the smoothness of the corresponding density function.
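As a quick numerical illustration of the definition and of the normal entry in the table above (a sketch of mine, assuming numpy is available): the sample average of $e^{itX}$ over draws of $X \sim N(0,1)$ should approach $e^{-t^2/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)          # samples of X ~ N(0, 1)
t = np.linspace(-3.0, 3.0, 7)

# empirical characteristic function: average of exp(i t X) over the sample
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)
exact = np.exp(-t ** 2 / 2)               # phi(t) of the standard normal

for ti, emp, ex in zip(t, ecf, exact):
    print(f"t={ti:+.1f}  empirical={emp.real:+.4f}{emp.imag:+.4f}j  exact={ex:.4f}")
```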
### Continuity

The bijection stated above between probability distributions and characteristic functions is continuous. That is, whenever a sequence of distribution functions $\{F_j(x)\}$ converges (weakly) to some distribution $F(x)$, the corresponding sequence of characteristic functions $\{\varphi_j(t)\}$ will also converge, and the limit $\varphi(t)$ will correspond to the characteristic function of the law $F$. More formally, this is stated as

Lévy's continuity theorem: A sequence $\{X_j\}$ of $n$-variate random variables converges in distribution to a random variable $X$ if and only if the sequence $\{\varphi_{X_j}\}$ converges pointwise to a function $\varphi$ which is continuous at the origin. Then $\varphi$ is the characteristic function of $X$.

This theorem is frequently used to prove the law of large numbers and the central limit theorem.

### Inversion formulas

Since there is a one-to-one correspondence between cumulative distribution functions and characteristic functions, it is always possible to find one of these functions if we know the other one. The formula in the definition of the characteristic function allows us to compute $\varphi$ when we know the distribution function $F$ (or the density $f$). If, on the other hand, we know the characteristic function $\varphi$ and want to find the corresponding distribution function, then one of the following inversion theorems can be used.

Theorem. If the characteristic function $\varphi_X$ is integrable, then $F_X$ is absolutely continuous, and therefore $X$ has a probability density function given by

$f_X(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt$

when $X$ is scalar; in the multivariate case the pdf is understood as the Radon–Nikodym derivative of the distribution $\mu_X$ with respect to the Lebesgue measure $\lambda$:

$f_X(x) = \frac{d\mu_X}{d\lambda}(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} e^{-i\,t^{\mathsf T}x}\,\varphi_X(t)\,\lambda(dt).$

Theorem (Lévy). If $\varphi_X$ is the characteristic function of the distribution function $F_X$, and two points $a<b$ are such that $\{x \mid a < x < b\}$ is a continuity set of $\mu_X$ (in the univariate case this condition is equivalent to continuity of $F_X$ at the points $a$ and $b$), then

$F_X(b) - F_X(a) = \lim_{T\to\infty}\frac{1}{2\pi}\int_{-T}^{T}\frac{e^{-ita}-e^{-itb}}{it}\,\varphi_X(t)\,dt$

if $X$ is scalar, with the analogous formula (the kernel being the product $\prod_{j}\frac{e^{-it_ja_j}-e^{-it_jb_j}}{it_j}$) if $X$ is a vector random variable.

Theorem. If $a$ is (possibly) an atom of $X$ (in the univariate case this means a point of discontinuity of $F_X$), then

$\operatorname{P}[X=a] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}e^{-ita}\,\varphi_X(t)\,dt$

when $X$ is a scalar random variable, and similarly (averaging over the cube $[-T,T]^n$) when $X$ is a vector random variable.

Theorem (Gil-Pelaez). For a univariate random variable $X$, if $x$ is a continuity point of $F_X$, then

$F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^{\infty}\frac{\operatorname{Im}\bigl[e^{-itx}\varphi_X(t)\bigr]}{t}\,dt.$

Inversion formulas for multivariate distributions are also available.
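The Gil-Pelaez formula is easy to use numerically. The sketch below (mine, assuming scipy is installed) recovers the standard normal CDF from its characteristic function $e^{-t^2/2}$.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def gil_pelaez_cdf(x, cf, upper=50.0):
    # F(x) = 1/2 - (1/pi) * integral_0^inf Im[exp(-itx) cf(t)] / t dt,
    # truncated at `upper` since the integrand decays rapidly here
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * cf(t)) / t
    val, _ = integrate.quad(integrand, 1e-10, upper, limit=200)
    return 0.5 - val / np.pi

cf_normal = lambda t: np.exp(-t ** 2 / 2)       # phi(t) of N(0, 1)
for x in (-1.0, 0.0, 1.5):
    print(x, gil_pelaez_cdf(x, cf_normal), norm.cdf(x))
```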
### Criteria for characteristic functions

It is well known that any non-decreasing càdlàg function $F$ with limits $F(-\infty) = 0$, $F(+\infty) = 1$ corresponds to a cumulative distribution function of some random variable. There is also interest in finding similar simple criteria for when a given function $\varphi$ could be the characteristic function of some random variable. The central result here is Bochner's theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine's, Mathias's, or Cramér's, although their application is just as difficult. Pólya's theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.

• Bochner's theorem. An arbitrary function $\varphi \colon \mathbb{R}^n \to \mathbb{C}$ is the characteristic function of some random variable if and only if $\varphi$ is positive definite, continuous at the origin, and $\varphi(0) = 1$.
• Khinchine's criterion. An absolutely continuous complex-valued function $\varphi$ equal to 1 at the origin is a characteristic function if and only if it admits the representation $\varphi(t) = \int_{\mathbb{R}} g(t+\theta)\,\overline{g(\theta)}\,d\theta$.
• Mathias' theorem. A real, even, continuous, absolutely integrable function $\varphi$ equal to 1 at the origin is a characteristic function if and only if $(-1)^n \int_{\mathbb{R}} \varphi(pt)\,e^{-t^2/2}\,H_{2n}(t)\,dt \geq 0$ for $n = 0, 1, 2, \dots$, and all $p > 0$. Here $H_{2n}$ denotes the Hermite polynomial of degree $2n$.
• Pólya's theorem. If $\varphi$ is a real-valued continuous function which satisfies the conditions
  1. $\varphi(0) = 1$,
  2. $\varphi$ is even,
  3. $\varphi$ is convex for $t>0$,
  4. $\varphi(\infty) = 0$,
  then $\varphi(t)$ is the characteristic function of an absolutely continuous symmetric distribution.
• A convex linear combination $\sum_n a_n\varphi_n(t)$ (with $a_n \geq 0$, $\sum_n a_n = 1$) of a finite or a countable number of characteristic functions is also a characteristic function.
• The product of a finite number of characteristic functions is also a characteristic function. The same holds for an infinite product provided that it converges to a function continuous at the origin.
• If $\varphi$ is a characteristic function and $\alpha$ is a real number, then $\overline{\varphi}$, $\operatorname{Re}[\varphi]$, $|\varphi|^2$, and $\varphi(\alpha t)$ are also characteristic functions.

## Uses

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main trick involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.

### Basic manipulations of distributions

Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if $X_1, X_2, \dots, X_n$ is a sequence of independent (and not necessarily identically distributed) random variables, and

$S_n = \sum_{i=1}^n a_i X_i,$

where the $a_i$ are constants, then the characteristic function for $S_n$ is given by

$\varphi_{S_n}(t) = \varphi_{X_1}(a_1t)\,\varphi_{X_2}(a_2t)\cdots\varphi_{X_n}(a_nt).$

In particular, $\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t)$. To see this, write out the definition of the characteristic function:

$\varphi_{X+Y}(t) = \operatorname{E}\bigl[e^{it(X+Y)}\bigr] = \operatorname{E}\bigl[e^{itX}e^{itY}\bigr] = \operatorname{E}\bigl[e^{itX}\bigr]\operatorname{E}\bigl[e^{itY}\bigr] = \varphi_X(t)\,\varphi_Y(t).$

Observe that the independence of $X$ and $Y$ is required to establish the equality of the third and fourth expressions.

Another special case of interest is when $a_i = 1/n$, and then $S_n$ is the sample mean. In this case, writing $\overline{X}$ for the mean,

$\varphi_{\overline{X}}(t) = \bigl(\varphi_X(t/n)\bigr)^n.$

### Moments

Characteristic functions can also be used to find moments of a random variable. Provided that the $n$-th moment exists, the characteristic function can be differentiated $n$ times and

$\operatorname{E}[X^n] = i^{-n}\,\varphi_X^{(n)}(0).$

For example, suppose $X$ has a standard Cauchy distribution. Then $\varphi_X(t) = e^{-|t|}$. This is not differentiable at $t = 0$, showing that the Cauchy distribution has no expectation. Also, the sample mean $\overline{X}$ of $n$ independent observations has characteristic function $\varphi_{\overline{X}}(t) = \bigl(e^{-|t|/n}\bigr)^n = e^{-|t|}$, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; note that some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.
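A small illustration of the moment formula (again a sketch of mine, not from the article): differentiating the Exp(1) characteristic function $\varphi(t)=(1-it)^{-1}$ from the table numerically at zero recovers $\operatorname{E}[X]=1$ and $\operatorname{E}[X^2]=2$.

```python
phi = lambda t: 1.0 / (1.0 - 1j * t)    # cf of Exp(1)

h = 1e-5
d1 = (phi(h) - phi(-h)) / (2 * h)                   # phi'(0)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h ** 2     # phi''(0)

print("E[X]   ~", (d1 / 1j).real)        # phi'(0) / i    -> 1
print("E[X^2] ~", (d2 / 1j ** 2).real)   # phi''(0) / i^2 -> 2
```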
### Data analysis

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distribution, since closed form expressions for the density are not available, which makes implementation of maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical.

### Example

The gamma distribution with scale parameter $\theta$ and shape parameter $k$ has the characteristic function

$\varphi(t) = (1 - it\theta)^{-k}.$

Now suppose that we have $X \sim \Gamma(k_1,\theta)$ and $Y \sim \Gamma(k_2,\theta)$, with $X$ and $Y$ independent of each other, and we wish to know what the distribution of $X + Y$ is. The characteristic functions are

$\varphi_X(t) = (1-it\theta)^{-k_1}, \qquad \varphi_Y(t) = (1-it\theta)^{-k_2},$

which by independence and the basic properties of characteristic functions leads to

$\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) = (1-it\theta)^{-k_1}(1-it\theta)^{-k_2} = (1-it\theta)^{-(k_1+k_2)}.$

This is the characteristic function of the gamma distribution with scale parameter $\theta$ and shape parameter $k_1+k_2$, and we therefore conclude

$X + Y \sim \Gamma(k_1+k_2,\,\theta).$

The result can be expanded to $n$ independent gamma distributed random variables with the same scale parameter, and we get

$X_i \sim \Gamma(k_i,\theta) \quad (i=1,\dots,n) \implies \sum_{i=1}^n X_i \sim \Gamma\Bigl(\sum_{i=1}^n k_i,\,\theta\Bigr).$
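The gamma example can be checked by simulation. This sketch (mine; scipy assumed) draws independent $\Gamma(k_1,\theta)$ and $\Gamma(k_2,\theta)$ samples and tests their sum against $\Gamma(k_1+k_2,\theta)$ with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k1, k2, theta = 2.0, 3.5, 1.7
x = rng.gamma(k1, theta, 100_000)     # numpy's gamma takes (shape, scale)
y = rng.gamma(k2, theta, 100_000)

# Kolmogorov-Smirnov test of X + Y against Gamma(k1 + k2, scale=theta):
# a large p-value is consistent with the characteristic-function argument
print(stats.kstest(x + y, stats.gamma(a=k1 + k2, scale=theta).cdf))
```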
## Entire characteristic functions

As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.

## Related concepts

Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions; this is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function $p(x)$ is the complex conjugate of the continuous Fourier transform of $p(x)$ (according to the usual convention; see continuous Fourier transform – other conventions):

$\varphi_X(t) = \overline{P(t)} = \int_{\mathbb{R}} e^{itx}\,p(x)\,dx,$

where $P(t)$ denotes the continuous Fourier transform of the probability density function $p(x)$. Likewise, $p(x)$ may be recovered from $\varphi_X(t)$ through the inverse Fourier transform:

$p(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt.$

Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.

## See also

• Subindependence, a weaker condition than independence, that is defined in terms of characteristic functions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8668076395988464, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/97109-factoring-sum-cubes.html
# Thread:

1. ## Factoring the sum of cubes

Hi MHF,

Usually I can factor, but this particular problem is just killing me today. I have been trying to figure this out for over an hour...

$128r^3+686z^3$

I know two things... I know I can immediately factor out the 2 and take it down to $2(64r^3+343z^3)$. And I know the answer is going to be of the form $2(\text{binomial})(\text{trinomial})$. But for the life of me I cannot seem to find the proper middle terms to make it work.

Thanks in advance.

2. Consider $4^3=64$ and $7^3 = 343$, and now put your problem into the form $2(a^3+b^3) = 2(a+b)(a^2-ab+b^2)$.

3. Originally Posted by pickslides

Consider $4^3=64$ and $7^3 = 343$, and now put your problem into the form $2(a^3+b^3) = 2(a+b)(a^2-ab+b^2)$.

$2(4r+7z)(16r^2-28rz+49z^2)$

Yeah, that should have been obvious. After doing 3 hours of math, even the obvious things become so complicated. I think it's time to take a break and brew some coffee! Thank you, Pickslides.
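For what it's worth, factorizations like this are easy to double-check by machine; a tiny SymPy snippet (assuming SymPy is installed) confirms the answer above.

```python
from sympy import symbols, factor, expand

r, z = symbols('r z')
expr = 128 * r**3 + 686 * z**3
print(factor(expr))   # 2*(4*r + 7*z)*(16*r**2 - 28*r*z + 49*z**2)

# sanity check: expanding the factored form recovers the original
assert expand(2 * (4*r + 7*z) * (16*r**2 - 28*r*z + 49*z**2)) == expr
```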
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9702922701835632, "perplexity_flag": "middle"}
http://cms.math.ca/cjm/v65/
Canadian Mathematical Society, www.cms.math.ca

# Canadian Journal of Mathematics (CJM), Volume 65 (2013), up to issue number 3

Contents, by starting page:

p. 3. Barto, Libor: We show that every finite, finitely related algebra in a congruence distributive variety has a near unanimity term operation. As a consequence we solve the near unanimity problem for relational structures: it is decidable whether a given finite set of relations on a finite set admits a compatible near unanimity operation. This consequence also implies that it is decidable whether a given finite constraint language defines a constraint satisfaction problem of bounded strict width.

p. 22. Blomer, Valentin; Brumley, Farrell: We prove a non-vanishing result for families of $\operatorname{GL}_n\times\operatorname{GL}_n$ Rankin-Selberg $L$-functions in the critical strip, as one factor runs over twists by Hecke characters. As an application, we simplify the proof, due to Luo, Rudnick, and Sarnak, of the best known bounds towards the Generalized Ramanujan Conjecture at the infinite places for cusp forms on $\operatorname{GL}_n$. A key ingredient is the regularization of the units in residue classes by the use of an Arakelov ray class group.

p. 52. Christensen, Erik; Sinclair, Allan M.; Smith, Roger R.; White, Stuart: In this paper we consider near inclusions $A\subseteq_\gamma B$ of C$^*$-algebras. We show that if $B$ is a separable type $\mathrm{I}$ C$^*$-algebra and $A$ satisfies Kadison's similarity problem, then $A$ is also type $\mathrm{I}$, and we use this to obtain an embedding of $A$ into $B$.

p. 66. Deng, Shaoqiang; Hu, Zhiguang: In this paper we give an explicit formula for the flag curvature of homogeneous Randers spaces of Douglas type and apply this formula to obtain some interesting results. We first deduce an explicit formula for the flag curvature of an arbitrary left invariant Randers metric on a two-step nilpotent Lie group. Then we obtain a classification of negatively curved homogeneous Randers spaces of Douglas type. This yields, in particular, many examples of homogeneous non-Riemannian Finsler spaces with negative flag curvature. Finally, we prove a rigidity result: a homogeneous Randers space of Berwald type whose flag curvature is everywhere nonzero must be Riemannian.

p. 82. Félix, Yves; Halperin, Steve; Thomas, Jean-Claude: Let $X$ be an $n$-dimensional, finite, simply connected CW complex and set $\alpha_X =\limsup_i \frac{\log\operatorname{rank}\,\pi_i(X)}{i}$. When $0\lt \alpha_X\lt \infty$, we give upper and lower bounds for $\sum_{i=k+2}^{k+n} \operatorname{rank}\,\pi_i(X)$ for $k$ sufficiently large. We also show, for any $r$, that $\alpha_X$ can be estimated from the integers $\operatorname{rank}\,\pi_i(X)$, $i\leq nr$, with an error bound depending explicitly on $r$.

p. 120. Francois, Georges; Hampe, Simon: We introduce the notion of families of $n$-marked smooth rational tropical curves over smooth tropical varieties and establish a one-to-one correspondence between (equivalence classes of) these families and morphisms from smooth tropical varieties into the moduli space of $n$-marked abstract rational tropical curves $\mathcal{M}_{n}$.

p. 149. Kellendonk, Johannes; Lenz, Daniel: We characterize equicontinuous Delone dynamical systems as those coming from Delone sets with strongly almost periodic Dirac combs. Within the class of systems with finite local complexity, the only equicontinuous systems are then shown to be the crystallographic ones. On the other hand, within the class without finite local complexity, we exhibit examples of equicontinuous minimal Delone dynamical systems that are not crystallographic. Our results solve the problem posed by Lagarias as to whether a Delone set whose Dirac comb is strongly almost periodic must be crystallographic.

p. 171. Lyall, Neil; Magyar, Ákos: Let $P\in\mathbb Z[n]$ with $P(0)=0$ and $\varepsilon\gt 0$. We show, using Fourier analytic techniques, that if $N\geq \exp\exp(C\varepsilon^{-1}\log\varepsilon^{-1})$ and $A\subseteq\{1,\dots,N\}$, then there must exist $n\in\mathbb N$ such that $\frac{|A\cap (A+P(n))|}{N}\gt \left(\frac{|A|}{N}\right)^2-\varepsilon.$ In addition, we show, using the same Fourier analytic methods, that if $A\subseteq\mathbb N$, then the set of $\varepsilon$-optimal return times $R(A,P,\varepsilon)=\left\{n\in \mathbb N \,:\,\delta(A\cap(A+P(n)))\gt \delta(A)^2-\varepsilon\right\}$ is syndetic for every $\varepsilon\gt 0$. Moreover, we show that $R(A,P,\varepsilon)$ is dense in every sufficiently long interval; in particular, there exists an $L=L(\varepsilon,P,A)$ such that $\left|R(A,P,\varepsilon)\cap I\right| \geq c(\varepsilon,P)|I|$ for all intervals $I$ of natural numbers with $|I|\geq L$ and $c(\varepsilon,P)=\exp\exp(-C\,\varepsilon^{-1}\log\varepsilon^{-1})$.

p. 195. Penegini, Matteo; Polizzi, Francesco: We classify minimal surfaces of general type with $p_g=q=2$ and $K^2=6$ whose Albanese map is a generically finite double cover. We show that the corresponding moduli space is the disjoint union of three generically smooth irreducible components $\mathcal{M}_{Ia}$, $\mathcal{M}_{Ib}$, $\mathcal{M}_{II}$ of dimension $4$, $4$, $3$, respectively.

p. 222. Sauer, N. W.: A metric space $\mathrm{M}=(M;\operatorname{d})$ is *homogeneous* if for every isometry $f$ of a finite subspace of $\mathrm{M}$ to a subspace of $\mathrm{M}$ there exists an isometry of $\mathrm{M}$ onto $\mathrm{M}$ extending $f$. The space $\mathrm{M}$ is *universal* if it isometrically embeds every finite metric space $\mathrm{F}$ with $\operatorname{dist}(\mathrm{F})\subseteq \operatorname{dist}(\mathrm{M})$. (With $\operatorname{dist}(\mathrm{M})$ being the set of distances between points in $\mathrm{M}$.) A metric space $\boldsymbol{U}$ is an *Urysohn* metric space if it is homogeneous, universal, separable and complete. (It is not difficult to deduce that an Urysohn metric space $\boldsymbol{U}$ isometrically embeds every separable metric space $\mathrm{M}$ with $\operatorname{dist}(\mathrm{M})\subseteq \operatorname{dist}(\boldsymbol{U})$.) The main results are: (1) A characterization of the sets $\operatorname{dist}(\boldsymbol{U})$ for Urysohn metric spaces $\boldsymbol{U}$. (2) If $R$ is the distance set of an Urysohn metric space and $\mathrm{M}$ and $\mathrm{N}$ are two metric spaces, of any cardinality, with distances in $R$, then they amalgamate disjointly to a metric space with distances in $R$. (3) The completion of every homogeneous, universal, separable metric space $\mathrm{M}$ is homogeneous.
p. 241. Aguiar, Marcelo; Lauve, Aaron: Following Radford's proof of Lagrange's theorem for pointed Hopf algebras, we prove Lagrange's theorem for Hopf monoids in the category of connected species. As a corollary, we obtain necessary conditions for a given subspecies $\mathbf k$ of a Hopf monoid $\mathbf h$ to be a Hopf submonoid: the quotient of any one of the generating series of $\mathbf h$ by the corresponding generating series of $\mathbf k$ must have nonnegative coefficients. Other corollaries include a necessary condition for a sequence of nonnegative integers to be the dimension sequence of a Hopf monoid, in the form of certain polynomial inequalities, and of a set-theoretic Hopf monoid, in the form of certain linear inequalities. The latter express that the binomial transform of the sequence must be nonnegative.

p. 266. Bérard, Vincent: On a Riemann surface, the energy of a map with values in a Riemannian manifold is a conformally invariant functional whose critical points are the harmonic maps. We propose here a higher-dimensional analogue, constructing a conformally invariant functional for maps between two Riemannian manifolds whose source manifold has even dimension $n$. Its critical points satisfy a nonlinear elliptic PDE of order $n$ which is conformally covariant with respect to the source manifold; we call them conformal-harmonic maps. In the case of functions, one recovers the GJMS operator, whose leading term is a power $n/2$ of the Laplacian. When $n$ is odd, the same ideas show that the constant term in the asymptotic expansion of the energy of an asymptotically harmonic map on an AHE manifold is independent of the choice of the representative of the conformal infinity.

p. 299. Grafakos, Loukas; Miyachi, Akihiko; Tomita, Naohito: In this paper, we prove certain $L^2$-estimates for multilinear Fourier multiplier operators with multipliers of limited smoothness. As a result, we extend the result of Calderón and Torchinsky in the linear theory to the multilinear case. The sharpness of our results and some related estimates in Hardy spaces are also discussed.

p. 331. Kadets, Vladimir; Martín, Miguel; Merí, Javier; Werner, Dirk: We show that for spaces with 1-unconditional bases lushness, the alternative Daugavet property and numerical index 1 are equivalent. In the class of rearrangement invariant (r.i.) sequence spaces the only examples of spaces with these properties are $c_0$, $\ell_1$ and $\ell_\infty$. The only lush r.i. separable function space on $[0,1]$ is $L_1[0,1]$; the same space is the only r.i. separable function space on $[0,1]$ with the Daugavet property over the reals.

p. 349. Müller, Peter; Richard, Christoph: We provide a framework for studying randomly coloured point sets in a locally compact, second-countable space on which a metrisable unimodular group acts continuously and properly. We first construct and describe an appropriate dynamical system for uniformly discrete uncoloured point sets. For point sets of finite local complexity, we characterise ergodicity geometrically in terms of pattern frequencies. The general framework allows one to incorporate a random colouring of the point sets. We derive an ergodic theorem for randomly coloured point sets with finite-range dependencies. Special attention is paid to the exclusion of exceptional instances for uniquely ergodic systems. The setup allows for a straightforward application to randomly coloured graphs.
403 Van Order, Jeanine We construct a bipartite Euler system in the sense of Howard for Hilbert modular eigenforms of parallel weight two over totally real fields, generalizing works of Bertolini-Darmon, Longo, Nekovar, Pollack-Weston and others. The construction has direct applications to Iwasawa main conjectures. For instance, it implies in many cases one divisibility of the associated dihedral or anticyclotomic main conjecture, at the same time reducing the other divisibility to a certain nonvanishing criterion for the associated $p$-adic $L$-functions. It also has applications to cyclotomic main conjectures for Hilbert modular forms over CM fields via the technique of Skinner and Urban. 467 Wilson, Glen; Woodward, Christopher T. We show that quasimap Floer cohomology for varying symplectic quotients resolves several puzzles regarding displaceability of toric moment fibers. For example, we (i) present a compact Hamiltonian torus action containing an open subset of non-displaceable orbits and a codimension four singular set, partly answering a question of McDuff, and (ii) determine displaceability for most of the moment fibers of a symplectic ellipsoid. 481 Ara, Pere; Dykema, Kenneth J.; Rørdam, Mikael The proofs of Theorem 2.2 of K. J. Dykema and M. Rørdam, Purely infinite simple $C^*$-algebras arising from free product constructions}, Canad. J. Math. 50 (1998), 323--341 and of Theorem 3.1 of K. J. Dykema, Purely infinite simple $C^*$-algebras arising from free product constructions, II, Math. Scand. 90 (2002), 73--86 are corrected. 485 Bice, Tristan Matthew In this paper we analyze states on C*-algebras and their relationship to filter-like structures of projections and positive elements in the unit ball. After developing the basic theory we use this to investigate the Kadison-Singer conjecture, proving its equivalence to an apparently quite weak paving conjecture and the existence of unique maximal centred extensions of projections coming from ultrafilters on the natural numbers. We then prove that Reid's positive answer to this for q-points in fact also holds for rapid p-points, and that maximal centred filters are obtained in this case. We then show that consistently such maximal centred filters do not exist at all meaning that, for every pure state on the Calkin algebra, there exists a pair of projections on which the state is 1, even though the state is bounded strictly below 1 for projections below this pair. Lastly we investigate towers, using cardinal invariant equalities to construct towers on the natural numbers that do and do not remain towers when canonically embedded into the Calkin algebra. Finally we show that consistently all towers on the natural numbers remain towers under this embedding. 510 Blasco de la Cruz, Oscar; Villarroya Alvarez, Paco We prove restriction and extension of multipliers between weighted Lebesgue spaces with two different weights, which belong to a class more general than periodic weights, and two different exponents of integrability which can be below one. We also develop some ad-hoc methods which apply to weights defined by the product of periodic weights with functions of power type. Our vector-valued approach allow us to extend results to transference of maximal multipliers and provide transference of Littlewood-Paley inequalities. 544 Deitmar, Anton; Horozov, Ivan We show that higher order invariants of smooth functions can be written as linear combinations of full invariants times iterated integrals. 
The non-uniqueness of such a presentation is captured in the kernel of the ensuing map from the tensor product. This kernel is computed explicitly. As a consequence, it turns out that the higher order invariants form a free module over the algebra of full invariants.

553 Godinho, Leonor; Sousa-Dias, M. E.
This paper provides an addendum and erratum to L. Godinho and M. E. Sousa-Dias, "The Fundamental Group of $S^1$-manifolds", Canad. J. Math. 62 (2010), no. 5, 1082--1098.

559 Helemskii, A. Ya.
We define and study the so-called extreme version of the notion of a projective normed module. The relevant definition takes into account the exact value of the norm of the module in question, in contrast with the standard known definition that is formulated in terms of norm topology. After the discussion of the case where our normed algebra $A$ is just $\mathbb{C}$, we concentrate on the case of the next degree of complication, where $A$ is a sequence algebra satisfying some natural conditions. The main results give a full characterization of extremely projective objects within the subcategory of the category of non-degenerate normed $A$-modules consisting of the so-called homogeneous modules. We consider two cases, "non-complete" and "complete", and the respective answers turn out to be essentially different. In particular, all Banach non-degenerate homogeneous modules consisting of sequences are extremely projective within the category of Banach non-degenerate homogeneous modules. However, none of them, provided it is infinite-dimensional, is extremely projective within the category of all normed non-degenerate homogeneous modules. On the other hand, submodules of these modules consisting of finite sequences are extremely projective within the latter category.

575 Kallel, Sadok; Taamallah, Walid
Permutation products and their various "fat diagonal" subspaces are studied from the topological and geometric point of view. We describe in detail the stabilizer and orbit stratifications related to the permutation action, producing a sharp upper bound for its depth and then paying particular attention to the geometry of the diagonal stratum. We write down an expression for the fundamental group of any permutation product of a connected space $X$ having the homotopy type of a CW complex in terms of $\pi_1(X)$ and $H_1(X;\mathbb{Z})$. We then prove that the fundamental group of the configuration space of $n$ points on $X$, whose multiplicities do not exceed $n/2$, coincides with $H_1(X;\mathbb{Z})$. Further results consist in giving conditions for when fat diagonal subspaces of manifolds can be manifolds again. Various examples and homological calculations are included.

600 Kroó, A.; Lubinsky, D. S.
We establish asymptotics for Christoffel functions associated with multivariate orthogonal polynomials. The underlying measures are assumed to be regular on a suitable domain - in particular this is true if they are positive a.e. on a compact set that admits analytic parametrization. As a consequence, we obtain asymptotics for Christoffel functions for measures on the ball and simplex, under far more general conditions than previously known. As another consequence, we establish universality type limits in the bulk in a variety of settings.

621 Lee, Paul W. Y.
In this paper, we introduce two notions on a surface in a contact manifold. The first one is called degree of transversality (DOT), which measures the transversality between the tangent spaces of a surface and the contact planes.
The second quantity, called curvature of transversality (COT), is designed to give a comparison principle for DOT along characteristic curves under bounds on COT. In particular, this gives estimates on lengths of characteristic curves assuming COT is bounded below by a positive constant.

We show that surfaces with constant COT exist and we classify all graphs in the Heisenberg group with vanishing COT. This is accomplished by showing that the equation for graphs with zero COT can be decomposed into two first order PDEs, one of which is the backward inviscid Burgers' equation. Finally we show that the p-minimal graph equation in the Heisenberg group also has such a decomposition. Moreover, we can use this decomposition to write down an explicit formula for a solution near a regular point.

634 Mezzetti, Emilia; Miró-Roig, Rosa M.; Ottaviani, Giorgio
We prove that $r$ independent homogeneous polynomials of the same degree $d$ become dependent when restricted to any hyperplane if and only if their inverse system parameterizes a variety whose $(d-1)$-osculating spaces have dimension smaller than expected. This gives an equivalence between an algebraic notion (called the Weak Lefschetz Property) and a differential geometric notion concerning varieties which satisfy certain Laplace equations. In the toric case, some relevant examples are classified and as a byproduct we provide counterexamples to Ilardi's conjecture.

655 Shemyakova, E.
Darboux Wronskian formulas allow one to construct Darboux transformations, but Laplace transformations, which are Darboux transformations of order one, cannot be represented this way. It has been a long-standing problem to determine what the other exceptions are. In our previous work we proved that among transformations of total order one there are no other exceptions. Here we prove that for transformations of total order two there are no exceptions at all. We also obtain a simple explicit invariant description of all possible Darboux transformations of total order two.

675 Strungaru, Nicolae
Meyer sets have a relatively dense set of Bragg peaks and for this reason they may be considered as basic mathematical examples of (aperiodic) crystals. In this paper we investigate the pure point part of the diffraction of Meyer sets in more detail. The results are of two kinds. First we show that given a Meyer set and any positive intensity $a$ less than the maximum intensity of its Bragg peaks, the set of Bragg peaks whose intensity exceeds $a$ is itself a Meyer set (in the Fourier space). Second we show that if a Meyer set is modified by addition and removal of points in such a way that its density is not altered too much (the allowable amount being given explicitly as a proportion of the original density) then the newly obtained set still has a relatively dense set of Bragg peaks.

702 Taylor, Michael
We analyze the regularity of standing wave solutions to nonlinear Schrödinger equations of power type on bounded domains, concentrating on Lipschitz domains. We establish optimal regularity results in this setting, in Besov spaces and in Hölder spaces.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 112, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8349848985671997, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/37729/is-there-a-finite-amount-of-mass-in-the-universe/37733
# Is there a finite amount of mass in the universe? So, I'm not too physics savvy but I am curious to ask. Is there a finite amount of mass in the universe? Or is there more and more being created from somewhere or something? If the universe is infinite, and there's a finite amount of mass, that just seems kinda weird I guess. Hopefully this isn't too dumb a question... - – Qmechanic♦ Feb 7 at 2:44 ## 3 Answers To start with, we don't know whether the universe is infinite or finite but unbounded (i.e. closed on some length scale much larger than the visible universe). If the universe is infinite the amount of matter is probably also infinite, while if the universe is finite the amount of matter is definitely finite. Also, when you say "matter", you need to bear in mind that matter is being converted into energy in stars, so it would be better to ask about combined matter and energy, where we treat the two as the same and link them with Einstein's famous equation $E = mc^2$. Having made these two points, with one exception we know of no mechanism for matter/energy to appear from nothing, i.e. the amount of matter/energy is conserved. In fact this conservation law follows from a fundamental symmetry: that the laws of physics are time invariant. So whether the universe is infinite or finite, the amount of matter/energy in it is constant with time. I did say there was one exception: you've probably heard of dark energy, which is a fundamental property of spacetime. The thing is that because the universe is expanding the amount of spacetime is increasing and therefore the total amount of dark energy is increasing. Since we don't know much about dark energy, it's hard to comment on the significance of this. - John, could you elaborate on this: "...because the universe is expanding, the amount of spacetime is increasing..." Increasing in what sense? – William Dec 24 '12 at 8:46 1 Hmm, rereading my answer I think that phrase was a bit careless. Dark energy is an energy per unit volume, so if you define a volume bounded by e.g. some galaxies too far away to be gravitationally bound, the amount of dark energy in the region bounded by those galaxies increases as the Hubble expansion moves the galaxies apart. You could say that the amount of spacetime bounded by our selected galaxies is increasing, but I wouldn't attach any great physical significance to the statement. – John Rennie Dec 24 '12 at 9:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9572974443435669, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/169495-maximal-ideals.html
# Thread:

1. Maximal Ideals

Prove <x^2+1> is maximal in R[x]

R[x] = {all polynomials with real coefficients}
<x^2+1> = {f(x)*(x^2+1) ; f(x) is from R[x]}

Could you guys give me some hints? I am not really sure how to even start this. This exact problem is actually an example in my book but I could not even follow it! (Gallian) Thank you

2. One way (there is an immediate characterization in terms of irreducible polynomials): $<x^2+1>\;\textrm{maximal}\;\Leftrightarrow \mathbb{R}[x]/<x^2+1>\;\textrm{is a field}$, and if $I\subset \mathbb{R}[x]$ is an ideal, then $\mathbb{R}[x]/I$ is a commutative ring. So, you only have to prove that $ax+b+<x^2+1>\;\;(ax+b\neq 0)$ is a unit in $\mathbb{R}[x]/<x^2+1>$. Fernando Revilla

3. Thanks. I was actually trying to show that R[x]/<x^2+1> is a field by proving <x^2+1> is maximal, but I guess showing that all its nonzero elements are units would be easier.

4. Yes, that is the easiest way to show that something is maximal. Think about what happens when you take the quotient R[x]/(x^2+1). Remember, when you mod out by an ideal you partition the ring into cosets: a+(x^2+1) for each a in R[x]. Something gets "squashed" to 0 iff it is in the ideal (x^2+1). So what type of elements are left? What can you decide this quotient ring is isomorphic to? Think about this for a bit; you should be able to show it's a field without too much trouble.
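To make Fernando's hint concrete, here is the computation it points to (my own write-up, not from the thread): for $ax+b \neq 0$ one can exhibit an explicit inverse modulo $x^2+1$. Since $x^2 \equiv -1$,

$$(ax+b)\cdot\frac{b-ax}{a^2+b^2} = \frac{b^2-a^2x^2}{a^2+b^2} \equiv \frac{b^2+a^2}{a^2+b^2} = 1 \pmod{x^2+1},$$

and $a^2+b^2 > 0$ because $ax+b \neq 0$. So every nonzero coset is a unit, the quotient is a field, and the ideal is maximal. (The same computation shows the quotient is isomorphic to $\mathbb{C}$, with the coset of $x$ playing the role of $i$.)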
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528362154960632, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2896/maximization-of-cara-utility-function-unique-solution-with-an-unbounded-paramet/2901
# Maximization of CARA utility function: unique solution with an unbounded parameter? An investor at time $t_0$ can invest his wealth $w_0$ in a risky asset $x$ for an amount $a$ and the remaining part, $w_0-a$, in the riskless asset. At the end of the period, $t_1$, the investor will obtain the wealth $w$: $$w = a(1+x)+(w_0-a)(1+r_f) =a(x-r_f)+w_0(1+r_f).$$ Using a CARA (exponential negative) utility function we have $$U(w)=-e^{-\lambda w}=-e^{-\lambda\left(a(x-r_f)+w_0(1+r_f)\right)}$$ where $\lambda$ is an exogenous parameter for the risk coefficient aversion. Then taking its expectation, it is clear that its maximization does not depend on $w_0$, which is a fixed quantity, but on $a$: $$\max_a\ E[U(w)]=E[-e^{-\lambda a(x-r_f)}\times e^{-\lambda w_0(1+r_f)}]$$ where $e^{-\lambda w_0(1+r_f)}$ is a fixed quantity $\tilde{q}$, $$\max_a\ E[U(w)]=E[-e^{-\lambda a(x-r_f)}\times \tilde{q}]$$ Here is my problem: $$\max_a\ E[U(w)]=E[-e^{-\lambda a(x-r_f)}]$$ If $a$ is unbounded this expected function has no point of maximum. It goes to infinity. So $a$ must be bounded and depend on the budget constraint of the initial wealth $w_0$. The professor told me by mail that this is a well-known result in finance and that $a$ exists as a unique optimal solution. Where am I wrong? - – Alexey Kalmykov Feb 3 '12 at 12:14 2 By the way, my previous comment was not intended to encourage cross posting. – Alexey Kalmykov Feb 3 '12 at 14:16 ## 3 Answers This is the canonical Arrow-Pratt "portfolio" model. Couple of points on terminology: 1. For a function $u$, we define the risk aversion function by $r_u(x):=-\frac{u''(x)}{u'(x)}$. In your utility function, $r_u(x) = \lambda$; hence, it is a constant absolute risk aversion utility and $\lambda$ is the "coefficient of risk aversion", not the "risk coefficient aversion". 2. The two points in time, $t_0,t_1$, can be seen as "beginning of period" and "end of period", where "period" is here the time interval $[t_0,t_1]$. This may be important: you don't need a dynamic approach as was suggested by some people. The guy in your problem allocates $a$ to the risky asset and $w_0-a$ to the riskless, over a time interval included between $t_0$ and $t_1$. 3. Your problem is the basic, canonical portfolio choice model with utility over final wealth. The guy in your problem just consumes what he has at the end of the time period. This is also important to bear in mind. 4. It's "negative exponential", not "exponential negative". 5. Write $w(a)$ for final wealth (end of period, or at $t_1$); it depends on $a$, i.e. the part of $w_0$ that is invested in the risky asset. Your problem is: $$\max_a \;E[U(w(a))] = \max_a \;E[-e^{-\lambda a(x-r_f)}]$$ Let $\chi = x-r_f$, i.e. the excess return of the risky asset (relative to the risk-free). Denote its distribution function by $dF(\chi)$ and hence $$\max_a \;E[-e^{-\lambda a\chi}] = \max_a \;\int -e^{-\lambda a z}\, dF(z)$$ Let $a^* = \arg\max_a \;E[U(w(a))]$. The following condition should hold in order for the (interior) optimum $a^*$ of this function to be bounded (note the redundancy in what I wrote just now): Assumption (I) The values of the excess return random variable $\chi = x - r_f$ alternate in sign, i.e. $\chi$ takes values $\underline{\chi}\leq 0 \leq \overline{\chi}$ with positive probability. If $\chi$ were positive almost surely, then $a$ is unbounded precisely because the objective is unbounded, as you very well understood from the beginning. Hence, Assumption (I) should be retained. Addendum: if $a^*\rightarrow \infty$, i.e.
if the optimal solution is unbounded, then the derivative of the expected utility evaluated at the optimal solution is zero - and since this doesn't make any sense, you have to rephrase it as $$\lim_{a\rightarrow\infty} E\left[\frac{d}{da}U(w(a)) \right] = 0$$ Now $U$ is concave. Hence, in order for $a^* \rightarrow \infty$ not to be a critical point, you have to have $$\lim_{a\rightarrow\infty} E\left[\frac{d}{da}U(w(a)) \right] <0$$ and not positive. Replace the parametric form, take the derivative, and you will find a (strict) inequality relating the distribution function and the marginal utility at the limits. And since this is supposed to be a hint and not a homework helpdesk, I have to stop here :) Already, you were right in your original answer, but you have to prove it as well. - This looks like a general equilibrium model in Economics. It should be described in most microeconomics textbooks (e.g. this). Yes, you need a budget constraint here for $a$, otherwise your optimization problem makes no sense. Moreover, the household prefers consumption today to consumption tomorrow and, hence, you may want to enhance your model by discounting next period's utility. If you want to derive an equilibrium (along with the household consumption problem) you also need to consider the firm's profit maximization problem. Edit: Can you point out a reference where you got this model from? In your framework the investor is trying to maximize his wealth. Naturally he invests everything (using leverage) to get maximum return. If you want to run unconstrained optimization, I think that your target function should look a bit different, as you are solving a multi-period optimization problem and the investor wants to maximize his total utility. Usually in economic theory they consider a 2-period optimization problem like $\mathop {\arg \max }\limits_a E[U({C_0}) + \frac{1}{{1 + DF }}U({C_1})]$, where $C$ is his wealth/consumption and $DF$ is the discount factor (the investor prefers wealth today rather than tomorrow). - 1 They assume that the investor has a wealth of $w_0$ and invests $a$ in the risky asset and, therefore, $w_0-a$ in the risk-free asset. As shown above the maximization of the expected utility depends on $a$ and not on $w_0$. So, many authors set $w_0 = 0$ for convenience, so what are the implications for $a$? I don't get this point; it seems as if $a$ exists and is unique without any assumption. If we bound $a$, the maximum of the expected utility will be on the bound of $a$. – Marco Feb 3 '12 at 13:47 @Marco I extended my answer. If you want to run unconstrained optimization, I think you need to consider another target function for optimization (see my edits). – Alexey Kalmykov Feb 3 '12 at 14:09 Probably I looked at it from a wrong perspective. $E(x)$ is the expected value under the pdf of the returns. So we have to evaluate this: $E[U(w)] = E[-e^{-\lambda a(x-r_f)}]=\max_a\int_{-\infty}^\infty -e^{-\lambda a(x-r_f)}f(x)\,dx$ where $f(x)$ is the pdf of the returns. In this case we should have a unique optimal solution for $a$. – Marco Feb 3 '12 at 15:18 @AlexeyKalmykov "you need a budget constraint here for a, otherwise your optimization problem makes no sense" - if $a>w_0$ you can think of it as if the consumer borrows $a-w_0$ at the $r_f$ rate, i.e. it can make sense. – Max Li Feb 4 '12 at 16:43 @MaxLi Yes, thanks. I've already noted this in my last edit, i.e. "he invests everything (using leverage) to get maximum return" – Alexey Kalmykov Feb 4 '12 at 18:01 First, your statement that your utility function goes to infinity is wrong.
It's minus an exponential. You can think of it as minus $e^{f(x)}$, where $e^{f(x)}$ is bounded below by zero whatever $f(x)$ is. In other words, your utility function is bounded above by 0. Second, to maximize an expected value, you need to calculate it before deploying maximization techniques. As an example, assume that $x-r_f$ is distributed as $N(1,1)$. Then $y:=-\lambda a (x-r_f)$ is distributed as $N(-a \lambda,a^2\lambda^2)$. Then $e^y$ follows the lognormal distribution, whose mean we can look up, i.e. $E(e^y)=e^{-a\lambda+\frac{a^2\lambda^2}{2}}$. Thus, we can rewrite the initial maximization program as $\min_a \; [e^{-a\lambda+\frac{a^2\lambda^2}{2}}]$. The minimum is achieved at $a=\frac{1}{\lambda}$. If you have trouble calculating it, write it in the comments and I'll write down the steps. - 1 Sure, it tends asymptotically to zero but in fact we have no maximum. $a^*$ strongly depends on your pdf assumption; in the case of a normal you have that $a^*$ is approximately the Sharpe ratio. – Marco Feb 4 '12 at 16:40 Of course the exact solution depends on the pdf assumption, but if we assume that the pdf is continuous, then the utility function will also be continuous. Then we can use an analogue of the extreme value theorem (continuous, bounded from above and closed from the right), which states that a maximum does exist. – Max Li Feb 4 '12 at 16:56
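For completeness, here is the same computation for a general normal assumption (my own addition, following the method of the last answer): if $\chi = x-r_f \sim N(\mu,\sigma^2)$ with $\sigma>0$, the moment generating function of the normal gives $$E[-e^{-\lambda a\chi}] = -e^{-\lambda a\mu+\frac{1}{2}\lambda^2 a^2\sigma^2},$$ so maximizing expected utility amounts to minimizing the exponent $g(a) = -\lambda a\mu+\frac{1}{2}\lambda^2 a^2\sigma^2$. Setting $g'(a) = -\lambda\mu+\lambda^2\sigma^2 a = 0$ yields the unique finite optimum $$a^* = \frac{\mu}{\lambda\sigma^2},$$ consistent with Marco's remark that $a^*$ is governed by a Sharpe-ratio-like quantity, and with Assumption (I) above, since a normal excess return takes both signs with positive probability.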
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 75, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395867586135864, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/181670/find-the-number-of-different-sets-c-such-that-b-subset-c-subset-a
# Find the number of different sets C such that $B\subset C\subset A$ Assume that $A$ consists of $n$ elements and $B\subset A$ consists of $k$ elements. Find the number of different sets $C$ such that $B\subset C\subset A$. I am reading the A. Shen and N. K. Vereshchagin book, Basic Set Theory. I tried to use combinatorics to solve the question but I still cannot find a solution for the general case. Many thanks. - Hint: $C$ must contain all the elements of $B$. So there are only $(n-k)$ elements left to choose from, because those $k$ are mandatory. – FrenzY DT. Aug 12 '12 at 13:48 ## 4 Answers So one must have $k \leq n$. $A - B$ is a set of $n - k$ elements. If $P \subset A - B$, then $B \subset B \cup P \subset A$. On the other hand if $B \subset C \subset A$, then $P = C - B$ is a subset of $A - B$. So all such $C$ are in correspondence with subsets of $A - B$. Since $A - B$ is a set of size $n - k$, there are $2^{n -k}$ possibilities for $C$. - 1 Not leaving much for OP to do. – Gerry Myerson Aug 12 '12 at 13:52 1 @GerryMyerson It is not tagged as homework and did not ask for a hint, so I assumed the person just wanted the answer. – William Aug 12 '12 at 13:54 @William Whether this was tagged as (homework) or not, the problem is: Should one provide a full answer to such a question? What do you think? – Did Aug 12 '12 at 14:02 2 @did Personally, I just answer homework questions. People who are morally opposed to it, think the asker did not put enough effort into it, or think they are taking advantage of the community, or whatever other reason, should just refrain from answering. Getting the answer is useful for learning if you take the time to really understand the answer. If there were only value in figuring everything out yourself, you would never accomplish anything. – William Aug 12 '12 at 14:17 1 William, you assumed OP just wanted the answer; I assumed OP would enjoy working out the details after being pointed in the right direction. Only OP knows for sure. But if you're right, then OP can always ask for more help, if a hint isn't what OP wants, whereas, if I'm right, then a full answer rather spoils things for OP. So I'll continue to give pointers (mostly), and I'll continue to comment when others don't leave any work for OPs. – Gerry Myerson Aug 12 '12 at 23:21 $C$ must have all the elements that are in $B$, and none of the elements that aren't in $A$, so the only wiggle room is in the elements that are in $A$ but not in $B$ - each of those elements could either be in $C$, or not be in $C$. So, how many elements are there that are in $A$ but not in $B$? And how many ways to choose the ones to put in $C$? - Many thanks to all – Hernan Aug 12 '12 at 14:04 Choose any subset of the $n-k$ elements that are in $A$ but not in $B$. The number of possible such subsets is $2^{n-k}$. - $2^{n-k} - 2$, because $C$ must contain all the $k$ elements. From the other $n-k$ elements, there are $2^{n-k}$ possible different subsets, but if $\subset$ is read as strict inclusion, two cases (namely $C=B$ and $C=A$) must be excluded. -
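Since the thread never writes it down, here is a quick brute-force check of the $2^{n-k}$ count (my own sketch, reading $\subset$ as $\subseteq$ the way the accepted answer does):

````
from itertools import combinations

def count_between(n, k):
    """Count sets C with B <= C <= A, where |A| = n and |B| = k."""
    A = set(range(n))
    B = set(range(k))          # any fixed k-element subset of A
    rest = sorted(A - B)       # the n-k elements we are free to add
    count = 0
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            C = B | set(extra) # B <= C <= A by construction
            count += 1
    return count

for n in range(7):
    for k in range(n + 1):
        assert count_between(n, k) == 2 ** (n - k)
print("matches 2^(n-k) for all tested n and k")
````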
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547662734985352, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/16214/plotting-chebyshevs-theta-function-varthetax/16216
# Plotting Chebyshev's theta function $\vartheta(x)$ The function I would like to plot is defined as $\sum\limits_{p\leq x}\log p.$ The following gives me, I think, a plot of the points of interest, but the function is defined for all $x > 0$ and so it's basically a step function between primes. Is there a way to augment the following to include the "steps"? ````a8 = Table[Sum[Log[Prime[i]], {i, 1, j}], {j, 1, 10}];
a7 = Table[Prime[i], {i, 1, 10}];
a9 = Table[{a7[[i]], a8[[i]]}, {i, 1, 10}];
ListPlot[a9]
```` Thanks for any suggestions. - ## 2 Answers Here's Eric Weisstein's implementation from MathWorld: ````Primorial[0] := 1;
Primorial[1] := 2;
Primorial[n_] := Primorial[n] = Prime[n] Primorial[n - 1];
ChebyshevTheta[n_] := Log[Primorial[PrimePi[n]]]
Plot[ChebyshevTheta[x], {x, 0, 100}]
```` Many MathWorld pages have attached notebooks linked near the top of the page. That's where this came from. - Thank you, I had seen this but did not realize there was code to be found. – daniel Dec 12 '12 at 21:48 @daniel Yes, the code is the best part of the answer! – Mark McClure Dec 12 '12 at 21:49 You might need some more `PlotPoints` in there to get rid of that wee glitch in the middle. – wxffles Dec 13 '12 at 0:52 It might be more prudent to just add logarithms directly instead of multiplying a bunch of primes before taking the logarithm. – J. M.♦ Apr 19 at 16:39 Trying to stay as close to your definitions as possible, one may actually want to avoid `Plot` because it will have trouble when the number of discontinuities is too large to be properly resolved for a given choice of `PlotPoints`. Instead, as you already started out doing, one can use a list plot. But to get lines, you should use `ListLinePlot` and define the corners of the step functions: ````n = 50;
a8 = Table[Sum[Log[Prime[i]], {i, 1, j}], {j, 1, n}];
a7 = Table[Prime[i], {i, 1, n}];
a9 = Transpose[{a7, a8}];
a10 = Transpose[{Rest[a7], Most[a8]}];
ListLinePlot[Riffle[a9, a10]]
```` This guarantees all steps to be nicely rectangular. Here, the number of jumps is given by `n = 50`. What I did is to simplify your definition of `a9` without changing it, and then add a list `a10` where all the `x`-values are shifted to the right in order to define the right side of each horizontal segment. These two lists `a9` and `a10` are then combined with `Riffle` so that elements from each list alternate, giving the desired line. You can increase `n` without having to worry about the `PlotPoints` option in `Plot`. - Thank you, I hadn't seen "Riffle" before. This is a nice idea, and I did notice for large n a lot of error messages before the Weisstein implementation actually displayed the graph. +1 – daniel Dec 13 '12 at 15:49
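For readers outside Mathematica, the same corner-duplication idea is built into many plotting libraries as a "post" step style; a rough Python equivalent (my own sketch, not from the thread) is below.

````
# Plot Chebyshev's theta(x) = sum of log p over primes p <= x
# as a step function that holds each value until the next prime.
from math import log

import matplotlib.pyplot as plt
from sympy import primerange

primes = list(primerange(2, 300))
theta, total = [], 0.0
for p in primes:
    total += log(p)
    theta.append(total)

# drawstyle="steps-post" draws the rectangular corners automatically,
# playing the role of the Riffle construction in the second answer.
plt.plot(primes, theta, drawstyle="steps-post")
plt.xlabel("x")
plt.ylabel(r"$\vartheta(x)$")
plt.show()
````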
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8941857218742371, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/161362/proof-that-something-is-undefined/161374
# Proof that something is undefined? How can one tell the difference between a result that is undefined and one where math just doesn't know how to provide a value for that particular equation (the value still exists, however)? For example, how could one prove that by definition division by zero is undefined; it's not that math doesn't know the value, the value just doesn't exist. - 1 What does it mean for “math” to know a value? Anyway, you can’t prove something is undefined. For something to be undefined just means that we haven’t defined what it should be. In the case of division by zero, if we consider the set $\mathbb{R}$ of real numbers, there is no “natural” value in $\mathbb{R}$ that we could define $1/0$ to be. – Adeel Jun 21 '12 at 21:16 ## 3 Answers Whether something is defined or not is a matter of, well, definition. Division by zero is undefined because we explicitly exclude it from the definition of division. The reasons we exclude it from the definition are varied, of course, but it's not a matter of lack of knowledge. It is not quite clear what you mean by "provide a value"; there are numbers which we can prove cannot be explicitly described in terms of a terminating algorithm (that is, there is no Turing machine that will produce the number). But does that mean we do not provide a value? We cannot write down exactly a number that solves the equation $x^2-2=0$. We cheat when we say the solutions are $\sqrt{2}$ and $-\sqrt{2}$ because... what does "$\sqrt{2}$" mean? It means "the positive real number that is a solution to $x^2-2=0$". Does that mean we "don't know how to provide a value"? On the other hand, there are equations for which we may genuinely not know whether they have solutions of a special kind or not. For a long time, it was unknown whether there were any positive integers $a$, $b$, and $c$, and a positive integer $n\gt 2$, such that $a^n+b^n=c^n$. Now we know there are none. - We don't "prove" that such statements are undefined, we "choose" not to define them, because we believe that in some contexts it doesn't make sense to do so. For instance, over the real numbers, we choose not to divide by zero because there is no explicit interest in doing so. But over the complex numbers, it makes great sense to say that $1/0 = \infty$, and in some contexts it is very important to understand what it means. For instance, over the real numbers, if we tried to define $1/0$ by continuity, we would suggest $1/0 = \lim_{x \to 0} \frac 1x$, but this limit doesn't exist. However, if we take the same definition over $\overline{\mathbb C}$, it works! The limit exists and equals $\infty$ in the compactification of the complex plane. We don't prove that something is undefined, we just don't define it when we don't want to. That's the big idea. Hope that helps, - In the real numbers we can prove that division by zero is not only undefined but cannot be defined at all. Why? Because we would like the real numbers to have certain properties which are not consistent with the idea of division by zero. We can indeed prove the "undefinability" by showing that if division by zero were possible to define we could derive a contradiction (e.g. $1=2$), and contradictions are bad. So we avoid things which prove contradictions. To see that this is indeed the case, suppose that $\frac10$ were defined; then $0\times\frac10=0$, since everything times zero is zero; on the other hand $\frac10\times0=1$, because we multiply a number by its inverse. Therefore $0=1$... contradiction!
On the other hand, we can prove that a continuous function $f$ which satisfies $$\lim_{x\to\infty} f(x)=\infty,\text{ and }\lim_{x\to-\infty}f(x)=-\infty$$ has at least one root, that is, there exists some $c\in\mathbb R$ such that $f(c)=0$, even though we don't know what $f$ is or how to find this $c$. Why can we prove that? If we assume that this is not the case then we can once again derive a contradiction in one form or another. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9603813886642456, "perplexity_flag": "head"}
http://mathoverflow.net/questions/22523/implicit-derivative/26271
## Implicit derivative? If we have a function $y=L(x_1,x_2,x_3,...,x_n)$ and a function $z=R(x_1,x_2,x_3,...,x_n)$, how do we compute the derivative $\frac{dy}{dz}$? Shall I do $\frac{dy}{dz} = \sup_{g\in \Re^n}\frac{\bigtriangledown_x L \cdot g}{\bigtriangledown_x R \cdot g}$? Is there any mathematical term associated with this kind of derivative? - ## 2 Answers There is no reason to expect that such a thing as $dy/dz$ exists. If you rephrase everything in terms of differentials, you have $$dy=\sum_{i=1}^n\frac{\partial L}{\partial x_i}dx_i,\quad dz=\sum_{i=1}^n\frac{\partial R}{\partial x_i}dx_i$$ where $dx_1,\ldots,dx_n$ are linearly independent, and so you find $dy=a\, dz$ if and only if $\nabla L=a\nabla R$. But normally, the two gradients won't be parallel, so you cannot define $dy/dz$. You can do so in any given direction, though: your fraction $(\nabla L\cdot g)/(\nabla R\cdot g)$ is a perfectly good expression for $dy/dz$ as measured in the direction given by $g$. - Agree. The sup must lead to infinity if the two gradients are not parallel. Thanks. – pacificmoth Apr 25 2010 at 19:34 In general you cannot even define $y(z)$ given generic functions $R$ and $L$. If instead there exists $f$ such that $y=f(z)$ is satisfied for any point $\mathbf{x}=(x_1,...,x_n)$ then you have $$L(\mathbf{x})=y=f(z)=f(R(\mathbf{x}))$$ which is a nontrivial relation between the functions $R$ and $L$. This relation in particular implies $$\nabla L(\mathbf{x})=\nabla R(\mathbf{x}) f'(R(\mathbf{x}))$$ which means the gradients are parallel at every point. This relation is in fact an array of relations which uniquely define $f'=\frac{dy}{dz}$ given $L$ and $R$. -
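A tiny example (my own, not from the thread) makes the failure concrete: take $n=2$, $L(x_1,x_2)=x_1$ and $R(x_1,x_2)=x_2$, so $\nabla L=(1,0)$ and $\nabla R=(0,1)$ are nowhere parallel. The directional ratio is $$\frac{\nabla L\cdot g}{\nabla R\cdot g}=\frac{g_1}{g_2},$$ which takes every real value as the direction $g=(g_1,g_2)$ varies, so no direction-free $dy/dz$ can exist; and the supremum over $g$ proposed in the question is $+\infty$, exactly as pacificmoth's comment observes.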
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293389916419983, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/24158/list
Let me make a rather crude remark about the easiest case of the rings, namely the ring around zero. Or even better, the one around infinity.

John Baez mentions the above picture is about integer polynomials of height 1 with degree less than 25, where by the height of a polynomial I mean the maximum absolute value of the coefficients. The simplest phenomenon we're seeing in the picture expresses the relation between the height and the Mahler measure. The Mahler measure of a polynomial is the product of the absolute values of the roots that lie outside the unit circle (times the absolute value of the leading coefficient), so in particular it bounds the absolute value of every root. And there is an elementary bound $M(f) \leq H(f)\sqrt{d+1}$ where $H$ is the height of the polynomial $f$, $M$ the Mahler measure and $d$ the degree. In the picture $H$ is always 1 and $d$ is at most 24, so $M(f)\leq 5$: the crudest thing we are seeing is that there are no roots of norm more than 5. Replacing $x$ by $\frac{1}{x}$ we see that by the same token there can be no root with norm smaller than 1/5 either. So we see a ring of roots, all with modulus between 5 and 1/5.

I suppose one can explain the other rings in a similar way by modifying the polynomial a bit. For example the ring around 1. If $f(x)$ has a root $r$ that is close to one, then $g(x) = f(x+1)$ has a root $r-1$ very close to zero. So $|r-1| \geq \frac{1}{5}\frac{1}{H(g)}$. The height of $f$ was 1 but the height went up due to the substitution, so $H(g)$ is big and we see a smaller gap around 1.

In terms of Mahler measure, things also get more interesting when one asks for polynomials with small Mahler measure, just a tad above 1. Lehmer's conjecture says the minimal Mahler measure is attained at a very specific polynomial, which happens to be the Alexander polynomial of the (-2,3,7) pretzel knot!
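For anyone who wants to see the rings themselves, a quick experiment along these lines is easy to run; the sketch below (my own, restricted for brevity to the subfamily with all coefficients $\pm 1$, a subset of the height-1 polynomials discussed above) plots the roots for all degrees up to 11. All roots land in an annulus around the unit circle, consistent with the height bound above.

````
# Scatter-plot the complex roots of every polynomial whose
# coefficients are all +1 or -1, up to the chosen degree.
import itertools

import matplotlib.pyplot as plt
import numpy as np

degree = 11
xs, ys = [], []
for coeffs in itertools.product((-1.0, 1.0), repeat=degree + 1):
    for root in np.roots(coeffs):   # coefficients listed highest degree first
        xs.append(root.real)
        ys.append(root.imag)

plt.scatter(xs, ys, s=0.2)
plt.gca().set_aspect("equal")
plt.title("Roots of polynomials with coefficients in {-1, +1}, degree <= 11")
plt.show()
````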
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356953501701355, "perplexity_flag": "head"}
http://mathoverflow.net/questions/11669/what-is-the-difference-between-matrix-theory-and-linear-algebra/11679
## What is the difference between matrix theory and linear algebra? Hi! Currently, I'm taking matrix theory, and our textbook is Strang's Linear Algebra. Besides matrix theory, which all engineers must take, there exist linear algebra I and II for math majors. What is the difference, if any, between matrix theory and linear algebra? Thanks! - 16 Likely the version of the course called "linear algebra" is proof-based and gets deeper into the conceptual content, whereas "matrix theory" probably focuses on applications. It's a matter of emphasis, really. – Qiaochu Yuan Jan 13 2010 at 17:20 The other difference I've seen is that matrix theory usually concentrates on the theory of real and complex matrices. Linear algebra cares about those, but also rational canonical forms, etc... – Pace Nielsen Apr 1 2010 at 14:52 ## 8 Answers Let me elaborate a little on what Steve Huntsman is talking about. A matrix is just a list of numbers, and you're allowed to add and multiply matrices by combining those numbers in a certain way. When you talk about matrices, you're allowed to talk about things like the entry in the 3rd row and 4th column, and so forth. In this setting, matrices are useful for representing things like transition probabilities in a Markov chain, where each entry indicates the probability of transitioning from one state to another. You can do lots of interesting numerical things with matrices, and these interesting numerical things are very important because matrices show up a lot in engineering and the sciences. In linear algebra, however, you instead talk about linear transformations, which are not (I cannot emphasize this enough) a list of numbers, although sometimes it is convenient to use a particular matrix to write down a linear transformation. The difference between a linear transformation and a matrix is not easy to grasp the first time you see it, and most people would be fine with conflating the two points of view. However, when you're given a linear transformation, you're not allowed to ask for things like the entry in its 3rd row and 4th column because questions like these depend on a choice of basis. Instead, you're only allowed to ask for things that don't depend on the basis, such as the rank, the trace, the determinant, or the set of eigenvalues. This point of view may seem unnecessarily restrictive, but it is fundamental to a deeper understanding of pure mathematics. - 8 While it is true that people doing "Matrix Theory" often spend a lot of time with a choice of basis, it's important to note that this is frequently in pursuit of quantities that are invariant of choice of basis. – Dan Piponi Jan 13 2010 at 23:44 1 An even more basic question along the same lines is "What is the difference between a vector and a row (column) matrix?" Vectors are mathematical objects living in a linear space or vector space (which satisfies certain properties). Choosing a special set of vectors called a basis, we can decompose every vector in the vector space into a kind of sum of vectors in this basis. Thus every vector gets a code, and this is the row (column) matrix. The next step is to look at the homomorphisms (maps) between linear spaces. Choosing bases of the domain and the range, we can represent the homomorphism by a matrix – Tran Chieu Minh Jan 14 2010 at 15:00 2 Even worse, matrices depend on a choice of an ordered basis.
– Harry Gindi Mar 30 2010 at 22:30 3 Belated comment: Depends on what you call a matrix, Harry. If $X$ and $Y$ are sets and $K[X]$ and $K[Y]$ are their free $K$-vector spaces, then a linear map $K[X] \to K[Y]$ is the same as a map of sets $X \to K^Y$, equivalently a map $X \times Y \to K$. I'd argue this is what a matrix really is and that ordering is an artifact of trying to write something in linear order on a piece of paper. – Per Vognsen Aug 4 2010 at 5:36 A counter-quotation to the one from Dieudonné: We share a philosophy about linear algebra: we think basis-free, we write basis-free, but when the chips are down we close the office door and compute with matrices like fury. (Irving Kaplansky, writing of himself and Paul Halmos) - 2 I totally agree with this one. Thanks for sharing. – XX Apr 2 2010 at 2:27 +1! I confess that I like this quote much more than the other one (Dieudonné's), which, at least to me, appears a little arrogant. In my opinion, 'abstract' is not automatically 'better.' There are cases when one needs a concrete and efficient computation [Or, are all the algorithms implemented in Matlab just not smart enough because they use matrices? :) ] – unknown (google) Apr 2 2010 at 7:48 To echo the comment above: Kaplansky's quotation is that much more appropriate for people who code in low-level or numerical languages. It's possible to do a heck of a lot of symbolic calculation in such settings through the judicious use of integral matrices (here "integral" should be considered broadly). – Steve Huntsman Apr 23 2010 at 16:26 The difference is that in matrix theory you have chosen a particular basis. - Let me quote without further comment from Dieudonné's "Foundations of Modern Analysis, Vol. 1". There is hardly any theory which is more elementary [than linear algebra], in spite of the fact that generations of professors and textbook writers have obscured its simplicity by preposterous calculations with matrices. - 3 You should add this to the great quotes in mathematics thread. – Harry Gindi Mar 30 2010 at 22:30 6 It is ironic that a textbook on analysis would make such an outrageous claim on the triviality of another field: the analytic parts of linear algebra are truly deep and quite actively researched. See, for example, Loewner's classification of matrix-monotone functions, or most any paper in quantum Shannon theory. Additionally, the entire field of quantum information theory (QIT) is essentially the study of unitary and self-adjoint operators on tensor products of Hilbert spaces, and a large majority of the interesting questions in QIT retain 99% of their interest in the finite-dimensional case. – Jon Apr 4 2010 at 18:06 10 I don't think that there are many who can claim to have a better understanding of these things than Dieudonné. So instead of trying so hard to misunderstand him, try to find a meaning in his comment. – Tilemachos Vassias Apr 4 2010 at 18:53 3 It should also be pointed out that the "analytic parts of linear algebra" are more properly thought of as linear analysis, or in the case of operator monotone functions and calculations with the c.b. norm, even as non-linear analysis. I think castigating Dieudonné for this quote is taking unnecessary umbrage.
– Yemon Choi Apr 4 2010 at 19:25 4 Of course Dieudonné meant "elementary" as in "simple and foundational", here not using "simple" to mean easy, but simple in the sense of structural complexity. It's not an arrogant statement about how easy he thinks linear algebra is, but rather a castigation of those "generations of professors and textbook writers" who turned an elegant subject into a jumbled mess. – Harry Gindi Jun 10 2010 at 8:45 show 5 more comments Although some years ago I would have agreed with the above comments about the relationship between Linear Algebra and Matrix Theory, I DO NOT agree any more! See, for example, Bhatia's "Matrix Analysis" GTM book. For example, doubly-(sub)stochastic matrices arise naturally in the classification of unitarily-invariant norms. They also naturally appear in the study of quantum entanglement, which really has nothing to do with a basis. (In both instances, all sorts of NONarbitrary bases come into play, mainly after the spectral theorem gets applied.) Doubly-stochastic matrices turn out to be useful to give concise proofs of basis-independent inequalities, such as the non-commutative Hölder inequality: $\operatorname{tr} |AB| \le \|A\|_p \|B\|_q$ with $1/p+1/q=1$, $|A|=(A^*A)^{1/2}$, and $\|A\|_p = (\operatorname{tr} |A|^p)^{1/p}$. - Doubly-stochastic matrices (in one interpretation, anyway) describe transition probabilities of some Markov chain where all the transitions are reversible. The relevant vector space is the free vector space over the states of the chain. Maybe this interpretation isn't directly relevant to the application you're thinking of, but there should be some connection. – Qiaochu Yuan Apr 4 2010 at 18:25 In the application to the Hölder inequality, one uses the fact that if U is a unitary operator, then replacing the matrix elements of U by the squares of their absolute values yields a doubly-stochastic matrix. – Jon Apr 7 2010 at 1:33 Matrix theory is the specialization of linear algebra to the case of finite dimensional vector spaces and doing explicit manipulations after fixing a basis. More precisely: The algebra of $n \times n$ matrices with coefficients in a field $F$ is isomorphic to the algebra of $F$-linear homomorphisms from an $n$-dimensional vector space $V$ over $F$ to itself. And the choice of such an isomorphism is precisely the choice of a basis for $V$. Sometimes you need concrete computations, for which you use the matrix viewpoint. But for conceptual understanding, application to wider contexts and overall mathematical elegance, the abstract approach of vector spaces and linear transformations is better. In this second approach you can take linear algebra over to more general settings such as modules over rings (PIDs for instance), functional analysis, homological algebra, representation theory, etc. All these topics have linear algebra at their heart, or, rather, really are linear algebra. - My opinion: matrix theory mostly deals with matrices of a particular kind, or a few relevant ones. But linear algebra cares about the general, underlying structure. - I'm with Jon. Matrices don't always appear as linear transformations. Yes, you can look at them as linear transformations, but there are times when it's better not to and to study them in their own right. Jon already gave one example. Another example is the theory of positive (semi)definite matrices. They appear naturally as covariance matrices of random vectors. Notions like Schur complements appear naturally in a course in matrix theory, but probably not in linear algebra.
- 2 Covariance matrices are essentially inner products, aren't they? That's just thinking of matrices as tensors of type (0, 2) instead of as tensors of type (1, 1). I think the theory of linear algebra is really good at clarifying the distinction between this type of matrix and the "usual" type of matrix; for example it gets to the heart of when similarity is relevant vs. when conjugation is relevant. So I don't think this is a good example. – Qiaochu Yuan Apr 4 2010 at 18:20
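Qiaochu's "same map, different matrices" point is easy to see numerically; here is a small illustration (my own toy example, not from the thread) of how the entries change under a change of basis while the basis-independent quantities do not:

````
# One linear map written in two different bases: conjugating by an
# invertible change-of-basis matrix P changes the entries, but not
# the trace, determinant or eigenvalues.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])               # the map in the standard basis
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])               # columns of P form the new basis
B = np.linalg.inv(P) @ A @ P             # the same map in the new basis

print(B)                                  # entries differ from A, yet:
print(np.trace(A), np.trace(B))           # equal traces
print(np.linalg.det(A), np.linalg.det(B)) # equal determinants
print(np.sort(np.linalg.eigvals(A)),      # equal spectra
      np.sort(np.linalg.eigvals(B)))
````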
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320604205131531, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/262362/what-is-the-greatest-amount-of-postage-you-would-not-be-able-to-pay
# What is the greatest amount of postage you would not be able to pay…

What is the greatest amount of postage you would not be able to pay using only a combination of seven cent and seventeen cent stamps? I have done a similar problem and got it correct, but I am just wondering if there are other ways to do this. Please help me out, thanks.

How can we know whether there are "other ways" when we don't know what your way was? – Gerry Myerson Dec 20 '12 at 4:42

## 1 Answer

Suppose we can make $n$ cents. If we use two or more $17$s, we can replace two $17$s with five $7$s to make $n+1$. If we use twelve or more $7$s, we can replace twelve $7$s with five $17$s to make $n+1$. If we don't use two $17$s, and we don't use twelve $7$s, how large can $n$ be?
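For a quick sanity check of the hint (this snippet is mine, not the answerer's), a brute-force scan in Python agrees with the Chicken McNugget bound $7 \cdot 17 - 7 - 17 = 95$ for coprime denominations:

```
# An amount n is payable iff n = 7a + 17b for some nonnegative a, b.
def payable(n):
    """Can n cents be paid with 7- and 17-cent stamps?"""
    return any((n - 17 * b) % 7 == 0 for b in range(n // 17 + 1))

# For coprime 7 and 17 the Chicken McNugget theorem caps the answer at
# 7 * 17 - 7 - 17 = 95, so scanning up to 200 is more than enough.
print(max(n for n in range(1, 200) if not payable(n)))   # 95
```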
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9529072046279907, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/3798/why-rsa-cant-handle-numbers-above-76?answertab=oldest
# Why RSA can't handle numbers above 76?

I'm going to encrypt the characters `Zhu`, and decrypt them using RSA. I'm using the public key $\{e, n\}$ and private key $\{d, n\}$. The values of $e$, $d$ and $n$ I get from my textbook: $e = 17$, $d = 53$, $n = 77$. It works fine when the number is 76 or smaller, but in my example below, it doesn't work:

````
z = 90   # the three character codes to encrypt
h = 104
u = 107
e = 17   # public exponent
d = 53   # private exponent
n = 77   # modulus, 7 * 11

print z, h, u

c_z = z ** e % n   # encrypt: c = m^e mod n
c_h = h ** e % n
c_u = u ** e % n

print c_z, c_h, c_u

p_z = c_z ** d % n   # decrypt: m = c^d mod n
p_h = c_h ** d % n
p_u = c_u ** d % n

print p_z, p_h, p_u
````

The outputs are:

````
90 104 107
62 69 46
13 27 30
````

Do you have any idea what's going wrong?

Interesting question. I'm guessing the answer is something to do with the number of bits you can encrypt with such low p and q (or e and d in your case) values, but I'm unsure as to how the modulus affects this. This question would be better on Crypto SE though, so I've flagged for a moderator to move it. – Polynomial Sep 14 '12 at 13:12

@Polynomial thanks for your quick reply, it starts to make sense that in the textbook it says p, q must be primes of 1,024 bits or more – yozloy Sep 14 '12 at 13:19

@yozloy that's more for security, the larger the primes the harder it is to factorize; technically you could use single digit primes, but it'll be trivial to break the keys. Something is still wrong with the math here: the result of % n can never be >= n. The only thing I can think of causing an issue is an order of operations error somewhere, possibly due to optimization or something. Which programming language is it (I can think of several which have close enough syntax)? – ewanm89 Sep 14 '12 at 13:26

@ewanm89 it's python, and if I change the values of `z` `h` `u` to numbers equal to or smaller than 76, then it works. I just change the numbers and run the program again, and the result is the same as expected ```16 77 55 25 0 55 16 0 55 ``` – yozloy Sep 14 '12 at 13:30

## 2 Answers

By definition you cannot encrypt values greater than the modulus in RSA, because the plaintext is immediately reduced modulo $n$, which loses information. This is because textbook RSA works in the $\mathbb{Z}/n\mathbb{Z}$ congruence ring, so from RSA's point of view, as long as two values have the same remainder modulo $n$, they are effectively equivalent. So with your $n = 77$ example, RSA will not distinguish plaintexts of $10$ and $87$ since they both leave a remainder of $10$ when divided by $77$, i.e. they are equal in $\mathbb{Z}/n\mathbb{Z}$. They will produce the same ciphertext, and hence decrypt to the same value ($10$). Put differently, RSA does not care what your plaintext is; all it cares about is its remainder when divided by your modulus. Anything else is irrelevant, all that matters is the remainder (hence everything being done modulo $n$). You will notice "z" (90) decrypts to 13, which just so happens to be the remainder of 90 when divided by 77, as $90 = 1 \times 77 + 13$. Similarly, "h" (104) decrypts to 27, and sure enough, $104 = 1 \times 77 + 27$. And again, "u" (107) decrypts to 30... you guessed it, $107 = 1 \times 77 + 30$. Coincidence? Not at all, it is a direct consequence of the first paragraph. Another way to think of it is that since RSA can only output values between $0$ and $n - 1$ (because the ciphertext is taken modulo $n$), then there can only be $n$ possible plaintext inputs (pigeonhole principle). It then becomes clear encrypting values above $n - 1$ is not useful.
You'll need to use a bigger modulus, or work with a different charset, e.g. a = 0, b = 1, etc., instead of using the ASCII one, since in ASCII most small values are unfortunately taken up by control characters. If you want all basic characters to work properly, try this keypair: $\left ( ~ p = 13, ~ q = 29, ~ n = 377, ~ e = 17, ~ d = 257 ~ \right )$ This should work, but do remember this is only for learning purposes. Fun fact: this small RSA keypair is very interesting in that its encryption and decryption exponents are both Fermat primes ($17 = 2^4+1$ and $257 = 2^8+1$); this is amazingly rare and quite surprising. Just thought I'd mention it.

To be technically correct, it works fine, except there are multiple possible decryptions for each character and there is no way to tell which one was meant. – ewanm89 Sep 14 '12 at 20:20

Your plaintext messages (you actually have three different plaintext messages: z, h and u) are all greater than the modulus n, which simply doesn't work with RSA. You will notice that the p_x values are just the original x values modulo n = 77 in this case (13 = 90 mod 77, etc.). For RSA to work (that is, for the plaintext to be recovered after encryption then decryption) the message being raised by the exponent must be smaller than the modulus (for this to happen securely, a secure padding scheme must also be used, but that's another matter).
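As a quick check (mine, not part of either answer), the suggested keypair can be verified and exercised in a few lines of Python; `pow(m, e, n)` keeps the modular exponentiation cheap:

```
p, q = 13, 29
n, phi = p * q, (p - 1) * (q - 1)      # n = 377, phi = 336
e, d = 17, 257
assert (e * d) % phi == 1              # 17 * 257 = 4369 = 13 * 336 + 1

for m in (ord(c) for c in "Zhu"):      # 90, 104, 117 -- all below n = 377
    c = pow(m, e, n)                   # encrypt: c = m^e mod n
    assert pow(c, d, n) == m           # decryption recovers m exactly
print("round trip OK")
```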
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282827973365784, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/59069?sort=newest
## Can a complex non-skew Hermitian matrix have purely imaginary eigenvalues?

I am trying to determine if a certain matrix can have purely imaginary eigenvalues. My question in its most general form is whether a complex matrix that is not skew-Hermitian and irreducible can contain eigenvalues on the imaginary axis. My question, however, arises from a more particular instance. I am trying to determine if there can be eigenvalues on the imaginary axis of the matrix $(j\omega I + L)_{(kl)}$. Here, $\omega$ is some real number, $A_{(kl)}$ denotes the sub-matrix of $A$ obtained by deleting the $k$-th row and $l$-th column, and $L$ is the combinatorial Laplacian of a connected graph. The Matrix-Tree theorem tells us that the determinant of any such sub-matrix of $L$ is, up to sign, the number of spanning trees in the graph. This, of course, implies that $L_{(k,l)}$ is invertible. I would like to know whether $(j\omega I + L)_{(kl)}$ inherits that property, i.e. whether it is invertible for any choice of $\omega \in \mathbb{R}$. If, for example, $L_{(kl)}$ does contain a purely imaginary eigenvalue, then there exists an $\omega$ that makes the matrix singular.

The matrix $\left(\begin{matrix} i & i \\ 0 & i\end{matrix}\right)$ has $i$ as its only eigenvalue, and is not skew-Hermitian (not even diagonalizable). So...? – Stefan Waldmann Mar 21 2011 at 15:54

Yes, that provides a simple example. I've modified my question to further require that the matrix is irreducible, which I neglected to mention originally. – dan Mar 21 2011 at 16:48

If you multiply by $i$, then the question is: is a complex matrix having only real eigenvalues necessarily Hermitian? The answer is "no". For example $\left(\begin{array} {cc} 5&3\\2&4 \end{array}\right)$. – Peter Shor Mar 21 2011 at 18:58

To prove the theorem you want (if it's true), I suspect you would have to use the fact that $L$ has all positive entries. – Peter Shor Mar 21 2011 at 19:00

Is the $j$ in $j\omega I$ the imaginary unit or the column index as in $L_{(i,j)}$? – Federico Poloni Mar 21 2011 at 21:01

## 1 Answer

Take $$L = \left( \begin{array} {rrrr} 7&-2&-2&-3\\ -2 & 4 & 0 & -2 \\ -2 & 0 & 4 & -2 \\ -3 & -2 & -2 & 7 \end{array} \right).$$ Remove the last row and first column. The remaining matrix has two purely imaginary eigenvalues. Does this answer your question?

UPDATE: Can I point out that, even though this matrix has imaginary eigenvalues, there is no value of $\omega$ such that $(j\omega I + L)_{(k,\ell)}$ has determinant zero, where $j = \sqrt{-1}$. This is because the operations of adding the identity and removing row $k$ and column $\ell$ do not commute. You may want to rethink your question.

This is a counterexample to the general question, thank you. However, for the question related to the Laplacian, I am only considering unweighted graphs. – dan Mar 22 2011 at 6:38

In fact, it's a counterexample to the Laplacian question for weighted graphs, which makes me suspect that there is probably a counterexample for unweighted graphs. – Peter Shor Mar 22 2011 at 11:18

That would be very interesting. May I ask how you constructed the example for weighted graphs? As I mentioned, I have yet to find an example for unweighted graphs. – dan Mar 22 2011 at 12:05

@dan: I stuck a lot of parameters in (with the symmetry you see) and did a small-scale computer search. I don't know how well that would work for unweighted graphs, especially if you've been testing them.
– Peter Shor Mar 22 2011 at 14:57
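A numerical check of Peter Shor's example (the script is mine, not from the thread):

```
import numpy as np

L = np.array([[ 7, -2, -2, -3],
              [-2,  4,  0, -2],
              [-2,  0,  4, -2],
              [-3, -2, -2,  7]])

M = L[:-1, 1:]                       # delete the last row and first column
print(np.linalg.eigvals(M))          # about -4, +4.4721j, -4.4721j

# The characteristic polynomial factors as (x + 4)(x^2 + 20), so the two
# nonreal eigenvalues are exactly +/- 2*sqrt(5) i: purely imaginary.
```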
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.925144612789154, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/17938/point-charge-potential-sign-problem?answertab=active
# Point charge potential (sign problem)

I'm a bit embarrassed, but I'm not able to compute, with the right sign, the electric potential at a point $P$ (at a distance $R$ from the origin) generated by a positive unit point charge at the origin. I simply use the definition $V(P) = -\int_\infty^P \vec E\cdot d\vec l$, forgetting the constant and choosing a straight line to integrate from infinity (so in the direction of $d\vec l=-\frac{\vec r}{r}$): $$V(P) = -\int_\infty^P \frac{\vec r}{r^3}\cdot d\vec l = -\int_\infty^P \frac{\vec r}{r^3}\cdot d\left(-\frac{\vec r}{r}\right) = \int_\infty^R \frac{1}{r^2}dr = -\frac{1}{R}$$

It's not clear to me what exactly you're asking here. What exactly do you mean by "potential of a point charge"? Are you asking about the potential caused by a point charge at some other point? Or about the potential required to move a point charge from one place to another? Are you wondering why, if it's possible to compute the potential of a sphere of charge, you can't do so for a point? I think the question could use a little more explanation. – David Zaslavsky♦ Dec 7 '11 at 0:58

The way I understand the question, it asks about the potential that a point charge in the origin would cause at some point $P$ a distance $R$ from the origin, and apparently the sign is wrong – Lagerbaer Dec 7 '11 at 2:15

The definition is wrong. The right definition is $V(P)-V(Q) = -\int_P^Q E\, dl$, that is, the change in potential is the line integral. If you integrate from P to infinity, it's the opposite sign as integrating from infinity to P. – Ron Maimon Dec 7 '11 at 6:03

@Ron: his definition (except for the P-R typo) is the one given by Wikipedia, and it seems correct. The line integral is the amount of work done by the field on the hypothetical particle, so it is the negative of the change in potential. – Harry Johnston Dec 7 '11 at 8:00

@Harry: it is correct if used properly, but the OP is confused because of orders of limits, things we do by physical intuition. He wants the formalism to get it right, which is reasonable. In this case you need the integration range to go from R to infinity, and to justify this, you need to explain the minus sign. This comes from the fact that one integration gives V(A)-V(B), while integrating the other direction gives V(B)-V(A). I am not saying that Wikipedia is wrong, but for OP, it is confusing. In order for a formal person to not get confused, it is better to give the difference form. – Ron Maimon Dec 7 '11 at 9:30

## 2 Answers

You're calculating the line integral incorrectly; the direction of $\vec l$ is determined by the parametrization you use to describe the curve you want to integrate over, not by the direction of the curve itself. Because you're integrating backwards over $r$, you have to use $\vec l = \frac{\vec r}{r}$, not $\vec l = - \frac{\vec r}{r}$. The easiest way to avoid this problem is to reverse the direction of the integration. I won't write out this calculation since I see someone already beat me to it. :-) Another way is to use a more explicit parametrization, e.g., $\vec r(t) = (\frac{R}{t}, 0, 0)$ with $t$ running from 0 to 1. $$V(R) = -\int_\infty^R \frac{\vec r}{r^3}\cdot d\vec l = -\int_0^1 \frac{\vec r}{r^3}\cdot \vec r'\, dt$$ $$= -\int_0^1 ((\frac{R}{t})^{-2}, 0, 0) \cdot (\frac{-R}{t^2}, 0, 0)\, dt$$ $$= \int_0^1 \frac{1}{R}\, dt = \frac{1}{R}$$ For extra credit (and to really see what's going on) try $\vec r(t) = (\frac{-R}{t}, 0, 0)$ with $t$ running from 0 to -1.
The reversal of the direction of $\vec r'$ cancels out the fact that the integration is backwards.

EDIT: If I understand correctly, you want to understand why the intuitive approach doesn't work. Here's another way of looking at it. Conceptually, what you are trying to do is to add up the infinitesimal changes in potential $$\delta V = - \vec E \cdot d \vec l$$ over the curve from infinity to R. (I say "add up" rather than "integrate" deliberately, you'll see why in a moment.) On this curve, going in that direction, $\vec E$ and $\vec l$ are in opposite directions so the dot product is negative, making $\delta V$ positive. If all the $\delta V$s are positive, then of course so is the sum. So if the sum is positive, why is the integral negative? Because you've silently switched from doing a line integral (now over a scalar field) to doing an ordinary integral. By convention, for an ordinary integral, if $a < b$ then $$\int_b^a = -\int_a^b$$ But for a line integral over a scalar field, $$\int_b^a = \int_a^b$$ So since in this case you are integrating from $\infty$ to $R$, mistaking a line integral for a regular integral causes the sign to switch. The reason for the difference between a line integral and an ordinary integral is that the line integral represents (loosely speaking) the sum of the values along the curve, whereas the ordinary integral is defined as the inverse of differentiation. When summing up scalars, it doesn't matter which end you start at, the result is the same; but differentiation reverses sign when you change directions.

This is exactly what I did: use the definition of path integral to find a more correct way; the point is that it is not clear to me what the direction of $d\vec l$ is, and why – wiso Dec 7 '11 at 12:01

@wiso: I've added another answer. – Harry Johnston Dec 7 '11 at 19:17

Apart from a sign problem (which basically is caused by wrongly doing an integration in a direction opposite to the $\vec{E}$ field direction), there is also a problem (v3) with an apparent identification of (the change of) $\vec{\ell}$ (which has dimension of length) with (the change of) $\pm \frac{\vec r}{r}$ (which is dimensionless). Try to compare with the following reasoning. (Let us for simplicity assume that the charge at the origin satisfies $\frac{Q}{4\pi\varepsilon_0}=1$.) The electrostatic field $\vec{E}$ is $$-\vec{\nabla} V~=~ \vec{E}~=~\frac{\vec{r}}{r^3}.$$ Its length is $$E~=~ |\vec{E}|~=~\frac{1}{r^2}.$$ Then the potential is $$V(R)~=~ - \int_{r=\infty}^{r=R} \vec{E}\cdot {\rm d}\vec{r}~=~\int_{r=R}^{r=\infty} \vec{E}\cdot {\rm d}\vec{r}~=~\int_{r=R}^{r=\infty} E ~{\rm d}r$$ $$=~\int_{r=R}^{r=\infty} ~\frac{{\rm d}r}{r^2}~=~\left[\frac{-1}{r}\right]_{r=R}^{r=\infty} ~=~\frac{1}{R}.$$

perfect, but I don't understand what's wrong in my computation – wiso Dec 7 '11 at 11:53

sorry, I repeat: I perfectly understand your answer, but my question is: "Where is my computation wrong?", not "Give me another way to compute the correct result" – wiso Dec 7 '11 at 12:12

My hope was that OP by comparing the two methods step-by-step could figure out where he goes wrong. OP is basically forgetting a factor $\cos (180^\circ)$ when he evaluates a dot product. – Qmechanic♦ Dec 7 '11 at 12:33

@Qmechanic: no, his dot product was fine, he just had the direction of dl wrong. – Harry Johnston Dec 7 '11 at 18:43

Well, one can argue that $\vec{r}\cdot d\vec{r}<0$ should be negative in one interpretation of OP's flawed calculation. – Qmechanic♦ Dec 7 '11 at 19:24
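Harry Johnston's explicit parametrization is also easy to verify symbolically; this SymPy sketch is mine, not part of the thread:

```
import sympy as sp

t, R = sp.symbols("t R", positive=True)

r = sp.Matrix([R / t, 0, 0])                # runs from infinity (t -> 0) to R (t = 1)
E = r / (r.dot(r)) ** sp.Rational(3, 2)     # unit point charge field r / |r|^3
integrand = -E.dot(sp.diff(r, t))           # -E . r'(t)

print(sp.integrate(integrand, (t, 0, 1)))   # 1/R, with the correct sign
```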
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490554928779602, "perplexity_flag": "head"}
http://nrich.maths.org/1015/solution
# Forgot the Numbers

##### Stage: 2 Challenge Level:

We had nearly $100$ solutions sent in for this challenge. From Forest Lake State School in Australia we had contributions from $3$ pupils, Long, Daniel and Connor; this is Connor's:

At first I did this:
$46$ divided by $15=3.0666666$
$44$ divided by $14=3.1428571$
$43$ divided by $14=3.0714285$
$42$ divided by $14=3$
$41$ divided by $13=3.1538461$
$39$ divided by $12=3.25$
$38$ divided by $12=3.1666666$
$36$ divided by $12=3$
$29$ divided by $9=3.2222222$
$28$ divided by $9=3.1111111$
$27$ divided by $9=3$
$26$ divided by $8=3.25$
$25$ divided by $8=3.125$

The last division sum is the correct answer. At first I did $46$ divided by $15=3.0666666$, so I knew the two numbers had to be lower. When I got down to $25$, I knew that the dividing number had to be reasonable. So I tried $8$. Hey presto! I got the answer right. Last algorithm: $25$ divided by $8=3.125$.

Isabella from Sharp school, together with Mellisa and Rebecca, had a different way of approaching it:

Firstly I wrote down the $3.125$ times table: $3.125, 6.25, 9.375, 12.5, 15.625, 18.75, 21.875, 25, 28.125, 31.25$ I saw the only whole number was $25$ ($8$ lots of $3.125$), so that means $25$ divided by $8$ is $3.125$.

A number of different approaches were shown by pupils from Rykneld School in the UK. The pupils were Alice, David, Jordan, Kieran, Daniel, Alicia and Alice, who suggested a further challenge:

After we had worked out the solution to the problem, we made our own! I divided two numbers and got the answer of $13.5$. I can't remember my two numbers but they are both under $75$ and are whole numbers. Can you work out what my numbers were?

From Huy at the Australian International School of Vietnam we had the following:

We call these two numbers X and Y. I know that $3.125$ equals the fraction $3\frac{1}{8}$. $X = 3\frac{1}{8} \times Y$ and $Y$ is a whole number, so $Y$ must be a multiple of $8$. Because X and Y are under $50$, I figure out $Y = 8$, $X = 25$.

From Varsity Acres in Canada we had the following message (they sent in a picture of their work but unfortunately we were not able to use it): the work, done in French, shows a student discovering the connection between $0.125$ and $125$: $8 \times 125 = 1000$, so $8 \times 0.125 = 1.0$. A fantastic leap, and she brought the class along.

Well done all of you. I'm sorry we cannot publish all the solutions!
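A brute-force search (my addition, not from the page) finds the same pair, and adapts immediately to the $13.5$ follow-up challenge by changing the two constants:

```
# Two whole numbers below 50 whose quotient is exactly 3.125
# (3.125 = 25/8 is exact in binary floating point, so == is safe here).
for y in range(1, 50):
    x = y * 3.125
    if x == int(x) and x < 50:
        print(int(x), "divided by", y, "=", x / y)   # 25 divided by 8 = 3.125
```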
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9710360765457153, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/47459/smith-normal-form-of-graded-modules-major-edit
# Smith normal form of graded modules. (Major edit)

Ok, this is a major rewriting of my previous entry which no one answered. Let us have two graded $F[t]$-modules $M$ and $N$ with bases $m_1, \ldots, m_m$ and $n_1, \ldots, n_n$, respectively, where $F$ is a field ($F[t]$ is a PID). Note that $m_i$ and $n_j$ are homogeneous elements. We are now interested in calculating the image of a certain 0-degree graded homomorphism from $M$ to $N$. Note that $$M \simeq F[t]\deg(m_1)\oplus\cdots\oplus F[t]\deg(m_m)$$ and similarly for $N$. The grading is the standard grading, i.e. $t(c_0, c_1, \ldots) = (0,c_0, c_1, \ldots)$. Now let $\partial$ be our homomorphism, and suppose the basis elements of $M$ and $N$ with degrees in () are given by: $$ab(1), bc(1), cd(2), ad(2), ac(3)$$ for $M$ and $$a(0), b(0), c(1), d(1)$$ for $N$. Then $\partial$ is defined by $$\partial(ab) = t^{\deg(ab)-\deg(b)}b - t^{\deg(ab)-\deg(a)}a = tb-ta$$ and exactly the same for the rest, $$\partial(bc) = c - tb$$ $$\partial(cd) = td - tc$$ $$\partial(ad) = td - t^2a$$ $$\partial(ac) = t^2c - t^3a$$ Now sorting the basis elements of $N$ in descending order $d,c,b,a$ we can represent $\partial$ by $$\begin{pmatrix} * & ab & bc & cd & ad & ac \\ d & 0 & 0 & t & t & 0 \\ c & 0 & 1 & t & 0 & t^2 \\ b & t & t & 0 & 0 & 0 \\ a & t & 0 & 0 & t^2 & t^3 \end{pmatrix}$$ Using column operations we can keep homogeneous bases and reduce the matrix to column echelon form $$\begin{pmatrix} * & cd & bc & ab & z_1 & z_2 \\ d & t & 0 & 0 & 0 & 0 \\ c & t & 1 & 0 & 0 & 0 \\ b & 0 & t & t & 0 & 0 \\ a & 0 & 0 & t & 0 & 0 \end{pmatrix}$$ where $z_1 = ad - cd - t\cdot bc - t\cdot ab$ and $z_2 = ac - t^2\cdot bc - t^2\cdot ab$ form a homogeneous basis for the kernel. Note that for each entry of the matrix, the degree of the entry plus the degree of the row basis element equals the degree of the column basis element.

Now the author of the paper argues: The pivots in column-echelon form are the same as the diagonal elements in Smith normal form. Moreover, the degree of the basis elements on pivot rows is the same in both forms. With proof: Because of our sort, the degree of row basis elements is monotonically decreasing from the top row down. Within each fixed column $j$ the degree of the column basis element is constant equal to $c$ and therefore $\deg\partial_{i,j} = c - \deg(\text{row}~i)$. Therefore, the degree of the elements in each column is monotonically increasing with row. We may eliminate non-zero elements below pivots using row operations that do not change the pivot elements or the degrees of the row basis elements. We then place the matrix in diagonal form with row and column swaps. $\square$

How is it possible to do row operations WITHOUT altering the degree and keeping a homogeneous basis element? Am I missing something obvious? Note that this proof should hold for any such $\partial$ where the degree of the row + degree of element is equal to the degree of the column. Another example would be $$\begin{pmatrix} * & ab \\ a & t \\ b & t^2 \end{pmatrix}$$ where $\deg(ab) = 3, \deg(a) = 2, \deg(b) = 1$. How would even that be possible...

What we are really interested in is the image of $\partial$. This becomes $$\deg(d)tF[t]\oplus \deg(c)F[t]\oplus \deg(b)tF[t]$$ according to the statement and matrix above. I do, however, believe that the result is true and I think I can give a proof for it: Assume that we have column-echelon form. Then the degree of the homogeneous elements along each column increases as we go top row down.
We may also assume that along each row the degree of the pivot element is greater than that of the other elements on the row; if not, use column operations to remove the element with greater degree. This gives us a matrix of the form (assuming just 2 elements for simplicity) $$\begin{pmatrix} * & m_1 & m_2 \\ n_1 & t^{\beta_1^1} & 0 \\ n_2 & t^{\beta_2^1} & t^{\beta_2^2} \end{pmatrix}$$ where $\beta_2^2 \geq \beta_2^1 \geq \beta_1^1$. Then use $n_1$ to remove the first coordinate of $n_2$. The image then becomes: $$m_1 \to t^{\beta_1^1}n_1\cdot f(t)$$ where $f(t) \in F[t]$. $$m_2 \to t^{\beta_2^2}(n_2 - n_1\cdot t^{\beta_2^1 - \beta_1^1})g(t)$$ where $g(t) \in F[t]$. Writing in terms of the basis elements of the codomain we have image equal to $$n_1(t^{\beta_1^1}f(t) - t^{\beta_2^2 +\beta_2^1-\beta_1^1}g(t))$$ $$n_2t^{\beta_2^2}$$ and since $\beta_2^2 \geq \beta_1^1$ we have that this is isomorphic to $$\deg(n_1)t^{\beta_1^1}F[t]\oplus \deg(n_2)t^{\beta_2^2}F[t]$$ and the proof generalizes in an obvious way to higher dimensions. Non-pivot rows are skipped.

Source: http://comptop.stanford.edu/preprints/persistence1.pdf Chapter 4.1

Ok, the problem should be more accessible now. – M.B. Jun 26 '11 at 5:26
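For what it's worth, the column reduction above can be scripted. The sketch below is mine, not from the question or the paper: it runs a persistence-style reduction (cancel each column's lowest nonzero entry against an earlier pivot column) on the example matrix over $F = \mathbb{Z}/2$, where the dropped signs are harmless; homogeneity forces every entry to remain a monomial in $t$:

```
import sympy as sp

t = sp.symbols("t")
P = lambda e: sp.Poly(e, t, modulus=2)     # polynomials over F = Z/2

# columns ab, bc, cd, ad, ac; rows d, c, b, a (as in the question's matrix)
cols = [[P(0), P(0), P(t), P(t)],
        [P(0), P(1), P(t), P(0)],
        [P(t), P(t), P(0), P(0)],
        [P(t), P(0), P(0), P(t**2)],
        [P(0), P(t**2), P(0), P(t**3)]]

low = {}                                   # pivot row -> index of pivot column
for j, col in enumerate(cols):
    while any(not c.is_zero for c in col):
        i = max(k for k, c in enumerate(col) if not c.is_zero)
        if i not in low:                   # fresh pivot: column j is done
            low[i] = j
            break
        piv = cols[low[i]]
        quo = col[i].div(piv[i])[0]        # monomial quotient t^(degree gap)
        for k in range(4):
            col[k] = col[k] - quo * piv[k]

print(low)   # {3: 0, 2: 1, 1: 2}; columns 3 and 4 reduce to zero (the kernel)
```

The two zeroed columns are exactly the kernel combinations $z_1$ and $z_2$ from the question, up to the signs that vanish mod 2.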
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9225676655769348, "perplexity_flag": "head"}
http://sumidiot.wordpress.com/2009/06/06/hardy-and-wright-chapter-11-part-1/
# ∑idiot's Blog

The math fork of sumidiot.blogspot.com

## Hardy and Wright, Chapter 11 (part 1)

Now that we've got continued fractions under our belt, from chapter 10, we can go on and start looking at "Approximation of Irrationals by Rationals", chapter 11. One of the (many) cool things about continued fractions is that they provide "best" rational approximations. We decided, yet again, to split the chapter into two weeks. In our meeting today, discussing the content of 11.1-11.9, we spent most of our time trying to sort out some typos and see how a few of the inequalities came about. In particular, a typo on page 211, in the theorem that at least one in three consecutive convergents is particularly close to a starting irrational, took us quite a while to sort out.

Eric brought up a comment from the chapter notes that is quite fascinating. The first several sections talk about "the order of an approximation". Given an irrational $\xi$, is there a constant $K$ (depending on $\xi$) so that there are infinitely many approximations with $|p/q-\xi|<K/q^n$? This would be an order $n$ approximation. In theorem 191, they show that an algebraic number of degree $n$ (solution to a polynomial of that degree) is not approximable to any order greater than $n$ (which seems to be a slightly weaker (by 1) statement than Liouville's Approximation Theorem). The note Eric pointed out was about Roth's theorem, which states that, in fact, no algebraic number can be approximated to order greater than 2. According to the Mathworld page, this earned Roth a Fields medal.

This reminded me about some things I had seen about the irrationality measure of a number. Roth's theorem, reworded, says something like: every algebraic number has irrationality measure 1 (in which case it is rational) or 2. So if a number has irrationality measure larger than 2, you know it is transcendental. Apparently, finding the irrationality measure of a particular value is quite a trick. According to the Mathworld page, $e$ has irrationality measure 2, so you can't use that to decide about it being transcendental. The whole thing is interesting, as pointed out in H&W, because you think of algebraic numbers as sort of nice (it doesn't get much nicer than polynomials), but, in terms of rational approximations, they are the worst.

Tags: continued fraction, hardy and wright, irrationality measure, liouville, number theory, roth, transcendental

### 15 Responses to "Hardy and Wright, Chapter 11 (part 1)"

2. Tom Joad Says: December 15, 2009 at 7:03 pm | Reply Do you know any specific examples, or even just an existence proof, of a number that has irrationality measure that is greater than 2 but still finite? (It's easy enough to construct numbers that have infinite irrationality measure, or measure that is greater than 2 but it's not clear if the measure is finite. Some well-known numbers like pi have finite irrationality measure, but it might be =2.)

• sumidiot Says: December 15, 2009 at 7:40 pm | Reply Hmm, good question.
Mathworld’s Irrationality Measure page only indicates upper bounds for some numbers… That’s the best I know off the top of my head. I’ll think about it some more though. • Tom Joad Says: December 17, 2009 at 12:02 pm Thanks – at least now I know it wasn’t a dumb question! Here’s a candidate: x=.100100001…. This is defined as following. Take the sequence of integers n(1)=1, n(2)=4, … n(k)=3*n(k-1)+1 then x=sum(10^(-n(k)). The partial sums of the series are of the form p/q where q=10^(-n(k)). And 0<abs(x-p/q)=3. But I haven’t shown that the irrationality measure is =3, or even that it is finite. • Tom Joad Says: December 17, 2009 at 12:06 pm Sorry, something went wrong and part of that last comment got lost. It should have read “And 0<abs(x-p/q)=3. But I haven’t shown that the irrationality measure is =3, or even that it is finite.” • Tom Joad Says: December 17, 2009 at 12:19 pm Sumidiot, I apologize – twice now, the text that occurred between a “less than” and a “greater than” has disappeared. The text editor must think it is a pair of brackets and disappeared it. I was trying to point out that the partial sums of that series x=sum(10^(-n(k)) are themselves approximations showing that x has irrationality measure at least 3, I’ll try one more time: “And 0<abs(x-p/q)<1/q^3 . So the irrationality measure of x is at least 3. But I haven’t shown that the irrationality measure is equal to 3, or even that it is finite." Hope that text comes through OK. Again, I apologize for the multiple postings, • Tom Joad Says: January 10, 2010 at 5:23 pm | Reply Well, as we used to say when I was growing up in Arkansas, “if it had been a snake it would have bit me”. We were looking at the number x=sum(10^(-n(k))), where n(1)=1 and n(k)=3*n(k-1)+1. In other words , x=0.1001000000001…(26 zeros)…1…. The partial sums of this series are of the form s(k)/t(k) where t(1)=10 and t(k+1)=10t(k)^3. Our intuition is that the only convergents of the simple continued fraction for x, that satisfy the inequality abs(x-p/q)<1/q^3, are exactly the partial sums of the series x=sum(10^(-n(k))). This intuition turns out to be correct. Consider the inequality 1/(2q(n)q(n+1))<abs(x-p(n)/q(n))<1/(q(n)q(n+1)). This inequality holds for all terms p(n)/q(n) in the sequence of convergents to x, where x is irrational. (it is Theorem 3.8 in C.D. Olds' book.) We use the inequality a couple of times in what follows. Now let p(n)/q(n)) be a convergent which satisfies abs(x-p(n)/q(n))<1/q(n)^3. Then we have 1/(2q(n)q(n+1))<1/q(n)^3, and therefore (q(n)^2)/2<q(n+1). "If p(n)/q(n) is a 'very good' approximation, then q(n+1) becomes 'very large'." We can now see (by contradiction) that none of the convergents that lie between the partial sums s(k)/t(k) and s(k+1)/t(k+1) are "as good as" the partial sums. For let p(m)/q(m) be a convergent, where t(k)<q(m)<t(k+1). And suppose for purposes of contradiction that p(m)/q(m) is "very good", i.e. that abs(x-p(m)/q(m))<1/q(m)^3. Then q(m) will be "too large". We would have (t(k)^2)/2<q(m) and (q(m)^2)/2<q(m+1)≤t(k+1). Combining these two inequalities, we get [(t(k)^2)/2]^2]/2≤t(k+1). That yields (t(k)^4)/8≤10t(k)^3. This yields the contradiction (except in the trivial case t(1)=10). Now we can see that the irrationality measure of x is 3. As we have seen, only the partial sums s(k)/t(k) satisfy the inequality abs(x-s(k)/t(k))<1/t(k)^3. And these partial sums actually satisfy abs(x-s(k)/t(k))=(0.1+e)/t(k)^3 where e approaches 0 as k become large. 
So we cannot substitute anything larger than 3 for the exponent in the inequality. Thanks very much for helping me with this. I think I should apologize for luring you into it. I have time to mess around with something like this, but you don't. Happy New Year, and let 2010 be the year of "the dissertation, the dissertation, and nothing but the dissertation!"

• sumidiot Says: January 13, 2010 at 11:18 pm Nice! And no need to apologize. It's good for me. Surely. I'll see what I can do about that dissertation. Hopefully I can occasionally find some other fun things to post about here for your amusement.

3. sumidiot Says: December 17, 2009 at 9:16 pm | Reply @Tom Joad, I haven't wrestled with the inequalities enough, but it looks like your idea is sound. Constructing $x$ as you do seems to give those lower bounds for the irrationality measure, as you point out, and it feels like there is reason for some optimism that 3 might be an upper bound as well, in your example. Somewhat similarly, I was thinking about constructing $x$ using the continued fraction expansion, which might make checking upper bounds easier as well. If you've got a copy of Khinchin's (delightful) "Continued Fractions" book, equation (34) is (in my edition anyway) $\frac{1}{q_k^2(a_{k+1}+2)}<\left| x-\frac{p_k}{q_k}\right|\leq \frac{1}{q_k^2a_{k+1}}$. I think, then, choosing $a_{k+1}=q_k^c$, where $c=n-2$, might just show that $x$ has irrationality measure $n$. I could, of course, be wrong. These are my first (maybe we're up to second now) guesses. A limited preview of Khinchin's book is on Google Books, if you don't have a copy. Here's the page with the inequality I mentioned.

4. Tom Joad Says: December 26, 2009 at 10:07 pm | Reply Sumidiot, thanks for the hint. I saw that you suggested looking at the a(k), and went to my copy of Continued Fractions by C D Olds (which I've had for 45 years) and worked out the answer – then came back and saw that it was close to what you had suggested to begin with. Here's my version of your answer: There are recursive equations for the convergents of a simple continued fraction – p(k)=p(k-2)+a(k)*p(k-1) q(k)=q(k-2)+a(k)*q(k-1) Furthermore, according to wolfram.com on irrationality measure, the irrationality measure m is given by m=1+lim_sup{log(q(k)/log(q(k-1)}. We can start out (for instance) with p(1)/q(1)=0/1 p(2)/q(2)=1/2 Then recursively set a(k)=q(k-1) for k>2 Then we get p(k)=q(k-1)*p(k-1)+p(k-2); q(k)=q(k-2)+q(k-1)^2. It's pretty easy to see that for this recursively defined number with convergents c(k)=p(k)/q(k) the irrationality measure is exactly 3. The specific transcendental number that we get with this definition isn't particularly pretty – it starts out 0.592643049…. I used some of the same ideas to look further at our original candidate x=sum(10^(-n(k))). The partial sums of the series are themselves convergents because they satisfy the inequality abs(x-p/q)<1/(2*q^2). But unfortunately they are not sequential convergents. The best I could do, using that lim_sup formula from wolfram.com, was to see that the irrationality measure of x is no greater than 4. So the measure is somewhere in the interval [3,4] – and almost certainly equal to 3. At least x is a simple example of a number with a finite irrationality measure greater than 2. I'm not certain of all this – I'm a retired engineer, not a mathematician.
• Tom Joad Says: December 27, 2009 at 12:26 pm | Reply When I said "It's pretty easy to see … the irrationality measure is exactly 3", I was referring to that formula m=1+lim_sup{log(q(k))/log(q(k-1))}. (Sorry, I left out some parentheses in the last post.) In this case, we have m=1+lim_sup{log(q(k-2)+q(k-1)^2)/log(q(k-1))}. The quotient of logs approaches 2 monotonically from above quite rapidly. So m=3. For the original candidate x=sum(10^(-n(k))), if x(n)/y(n) and x(n+1)/y(n+1) are two consecutive partial sums, then 1+log(y(n+1))/log(y(n)) approaches 4 in the limit. Also, log(y(n+1))/log(y(n)) is strictly greater than log(q(k+1))/log(q(k)) for all convergents p(k)/q(k) with q(k)<y(n). So the irrationality measure of x is no greater than 4. (We already know it is at least 3.) Hope this is correct….

• Tom Joad Says: December 28, 2009 at 9:44 pm Oops, careless. Regarding x=sum(10^(-n(k))), should have said "Also, log(y(n+1))/log(y(n)) is greater than or equal to log(q(k+1))/log(q(k)) for all convergents p(k)/q(k) with y(n)<=q(k)<y(n+1). So the irrationality measure of x is no greater than 4." Oh well, as I said, I'm not a mathematician.

5. sumidiot Says: December 29, 2009 at 8:30 pm | Reply Sorry it has taken me so long to get back to these comments. It's all quite fun, so I wanted to give it an honest look. First up, the lim_sup you point out. I don't know how many times I've looked at that wolfram page on irrationality measures, but that lim_sup never stuck in my mind. So thanks for bringing it up. Also, I think the second lim_sup, about ln(a(k+1))/ln(q(k)), is even easier to calculate in the continued fraction example above. I think I agree with the things you are saying above. I don't think I've thought quite enough about your very last comment, about $y(n)\leq q(k)<y(n+1)$, but I still agree that the irrationality measure of x is no greater than 4. Here's my thought process: As you say, the partial sums are convergents to $x$. As they form an increasing sequence of convergents to $x$, they must be convergents of what Khinchin calls even-order (the 0th, 2nd, 4th, 6th, etc., convergents). This is because the even-order convergents are all less than $x$, while odd-order convergents are all bigger than $x$. My gut reaction is that the partial sums are consecutive even-order convergents, but I could be wrong. Either way, as you point out, if the partial sums have values $s(n)/t(n)$ (I'll switch away from $x(n)$ and $y(n)$ so I don't confuse myself), then the limit of $\ln(t(n+1))/\ln(t(n))$ is 3. Suppose $s(n)/t(n)$ is the $m$-th convergent, $p(m)/q(m)$, and $s(n+1)/t(n+1)$ is the $M$-th convergent (as they are even, we could write $m=2k$ and $M=2K$ for some $k,K$, but I won't use that). Then $\ln(t(n+1))/\ln(t(n))=\ln(q(M))/\ln(q(m))$ which we could write as $\dfrac{\ln q(M)}{\ln q(M-1)}\dfrac{\ln q(M-1)}{\ln q(M-2)}\cdots\dfrac{\ln q(m+1)}{\ln q(m)}$. This product goes to 3 (from above) in the limit, putting an upper bound of 3 on the lim_sup that Mathworld suggests we calculate. I don't see how to go any further down this road, actually explicitly determining the irrationality measure, without figuring out more about the convergents to $x$. Namely, are the partial sums actually consecutive even-order convergents? And what are the missing convergents?

6. sumidiot Says: December 29, 2009 at 8:49 pm | Reply My gut instinct was wrong. I no longer believe that the partial sums are consecutive even-order convergents (though they are still even-order).
I decided this based on some playing with Wolfram|Alpha, for example convergents of the 4th partial sum. Some more playing with some computer tools, getting convergents for bigger partial sums, could likely point out the pattern for all of the convergents. 7. Tom Says: February 13, 2010 at 4:50 am | Reply Just thought I’d point you to the following paper by Brisebarre, in which he mentions (halfway down page 2) a way to construct a family of reals with any given irrationality measure.
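The construction in comment 4 is easy to run. This small script (mine, not from the thread) iterates $p(k)=p(k-2)+a(k)p(k-1)$, $q(k)=q(k-2)+a(k)q(k-1)$ with $a(k)=q(k-1)$, starting from $0/1$ and $1/2$, and prints the Mathworld estimate $1+\log q(k)/\log q(k-1)$, which falls toward $3$:

```
from math import log

p0, q0, p1, q1 = 0, 1, 1, 2
for _ in range(6):
    a = q1                              # next partial quotient a(k) = q(k-1)
    p0, q0, p1, q1 = p1, q1, p0 + a * p1, q0 + a * q1
    print(p1 / q1, 1 + log(q1) / log(q0))

# The second column decreases 3.32, 3.05, 3.002, ... toward 3,
# matching an irrationality measure of exactly 3.
```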
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 35, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388957023620605, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Parametric_feature_based_modeler
# Solid modeling

*The geometry in solid modeling is fully described in 3‑D space; objects can be viewed from any angle.*

Solid modeling (or modelling) is a consistent set of principles for mathematical and computer modeling of three-dimensional solids. Solid modeling is distinguished from related areas of geometric modeling and computer graphics by its emphasis on physical fidelity.[1] Together, the principles of geometric and solid modeling form the foundation of computer-aided design and in general support the creation, exchange, visualization, animation, interrogation, and annotation of digital models of physical objects.

## Overview

The use of solid modeling techniques allows for the automation of several difficult engineering calculations that are carried out as a part of the design process. Simulation, planning, and verification of processes such as machining and assembly were among the main catalysts for the development of solid modeling. More recently, the range of supported manufacturing applications has been greatly expanded to include sheet metal manufacturing, injection molding, welding, pipe routing, etc. Beyond traditional manufacturing, solid modeling techniques serve as the foundation for rapid prototyping, digital data archival and reverse engineering by reconstructing solids from sampled points on physical objects, mechanical analysis using finite elements, motion planning and NC path verification, kinematic and dynamic analysis of mechanisms, and so on. A central problem in all these applications is the ability to effectively represent and manipulate three-dimensional geometry in a fashion that is consistent with the physical behavior of real artifacts. Solid modeling research and development has effectively addressed many of these issues, and continues to be a central focus of computer-aided engineering.

## Mathematical foundations

The notion of solid modeling as practiced today relies on the specific need for informational completeness in mechanical geometric modeling systems, in the sense that any computer model should support all geometric queries that may be asked of its corresponding physical object. The requirement implicitly recognizes the possibility of several computer representations of the same physical object as long as any two such representations are consistent. It is impossible to computationally verify informational completeness of a representation unless the notion of a physical object is defined in terms of computable mathematical properties and independent of any particular representation. Such reasoning led to the development of the modeling paradigm that has shaped the field of solid modeling as we know it today.[2]

All manufactured components have finite size and well-behaved boundaries, so initially the focus was on mathematically modeling rigid parts made of homogeneous isotropic material that could be added or removed. These postulated properties can be translated into properties of subsets of three-dimensional Euclidean space. The two common approaches to define solidity rely on point-set topology and algebraic topology respectively. Both models specify how solids can be built from simple pieces or cells.

*Figure: regularization of a 2-d set by taking the closure of its interior.*

According to the continuum point-set model of solidity, all the points of any X ⊂ ℝ³ can be classified according to their neighborhoods with respect to X as interior, exterior, or boundary points.
Assuming ℝ³ is endowed with the typical Euclidean metric, a neighborhood of a point p ∈ X takes the form of an open ball. For X to be considered solid, every neighborhood of any p ∈ X must be consistently three-dimensional; points with lower-dimensional neighborhoods indicate a lack of solidity. Dimensional homogeneity of neighborhoods is guaranteed for the class of closed regular sets, defined as sets equal to the closure of their interior. Any X ⊂ ℝ³ can be turned into a closed regular set, or regularized, by taking the closure of its interior, and thus the modeling space of solids is mathematically defined to be the space of closed regular subsets of ℝ³ (by the Heine-Borel theorem it is implied that all solids are compact sets). In addition, solids are required to be closed under the Boolean operations of set union, intersection, and difference (to guarantee solidity after material addition and removal). Applying the standard Boolean operations to closed regular sets may not produce a closed regular set, but this problem can be solved by regularizing the result of applying the standard Boolean operations.[3] The regularized set operations are denoted ∪∗, ∩∗, and −∗.

The combinatorial characterization of a set X ⊂ ℝ³ as a solid involves representing X as an orientable cell complex so that the cells provide finite spatial addresses for points in an otherwise innumerable continuum.[1] The class of semi-analytic bounded subsets of Euclidean space is closed under Boolean operations (standard and regularized) and exhibits the additional property that every semi-analytic set can be stratified into a collection of disjoint cells of dimensions 0, 1, 2, 3. A triangulation of a semi-analytic set into a collection of points, line segments, triangular faces, and tetrahedral elements is an example of a stratification that is commonly used. The combinatorial model of solidity is then summarized by saying that in addition to being semi-analytic bounded subsets, solids are three-dimensional topological polyhedra, specifically three-dimensional orientable manifolds with boundary.[4] In particular this implies the Euler characteristic of the combinatorial boundary[5] of the polyhedron is 2. The combinatorial manifold model of solidity also guarantees the boundary of a solid separates space into exactly two components as a consequence of the Jordan-Brouwer theorem, thus eliminating sets with non-manifold neighborhoods that are deemed impossible to manufacture.

The point-set and combinatorial models of solids are entirely consistent with each other, can be used interchangeably, relying on continuum or combinatorial properties as needed, and can be extended to n dimensions. The key property that facilitates this consistency is that the class of closed regular subsets of ℝⁿ coincides precisely with homogeneously n-dimensional topological polyhedra. Therefore every n-dimensional solid may be unambiguously represented by its boundary, and the boundary has the combinatorial structure of an (n−1)-dimensional polyhedron having homogeneously (n−1)-dimensional neighborhoods.

## Solid representation schemes

Based on assumed mathematical properties, any scheme of representing solids is a method for capturing information about the class of semi-analytic subsets of Euclidean space. This means all representations are different ways of organizing the same geometric and topological data in the form of a data structure. All representation schemes are organized in terms of a finite number of operations on a set of primitives.
Therefore the modeling space of any particular representation is finite, and any single representation scheme may not completely suffice to represent all types of solids. For example, solids defined via combinations of regularized boolean operations cannot necessarily be represented as the sweep of a primitive moving according to a space trajectory, except in very simple cases. This forces modern geometric modeling systems to maintain several representation schemes of solids and also facilitate efficient conversion between representation schemes. Below is a list of common techniques used to create or represent solid models.[4] Modern modeling software may use a combination of these schemes to represent a solid. ### Parameterized primitive instancing This scheme is based on the notion of families of objects, each member of a family distinguishable from the other by a few parameters. Each object family is called a generic primitive, and individual objects within a family are called primitive instances. For example a family of bolts is a generic primitive, and a single bolt specified by a particular set of parameters is a primitive instance. The distinguishing characteristic of pure parameterized instancing schemes is the lack of means for combining instances to create new structures which represent new and more complex objects. The other main drawback of this scheme is the difficulty of writing algorithms for computing properties of represented solids. A considerable amount of family-specific information must be built into the algorithms and therefore each generic primitive must be treated as a special case, allowing no uniform overall treatment. ### Spatial occupancy enumeration This scheme is essentially a list of spatial cells occupied by the solid. The cells, also called voxels are cubes of a fixed size and are arranged in a fixed spatial grid (other polyhedral arrangements are also possible but cubes are the simplest). Each cell may be represented by the coordinates of a single point, such as the cell's centroid. Usually a specific scanning order is imposed and the corresponding ordered set of coordinates is called a spatial array. Spatial arrays are unambiguous and unique solid representations but are too verbose for use as 'master' or definitional representations. They can, however, represent coarse approximations of parts and can be used to improve the performance of geometric algorithms, especially when used in conjunction with other representations such as constructive solid geometry. ### Cell decomposition This scheme follows from the combinatoric (algebraic topological) descriptions of solids detailed above. A solid can be represented by its decomposition into several cells. Spatial occupancy enumeration schemes are a particular case of cell decompositions where all the cells are cubical and lie in a regular grid. Cell decompositions provide convenient ways for computing certain topological properties of solids such as its connectedness (number of pieces) and genus (number of holes). Cell decompositions in the form of triangulations are the representations used in 3d finite elements for the numerical solution of partial differential equations. Other cell decompositions such as a Whitney regular stratification or Morse decompositions may be used for applications in robot motion planning.[6] ### Boundary representation Main article: Boundary representation In this scheme a solid is represented by the cellular decomposition of its boundary. 
Since the boundaries of solids have the distinguishing property that they separate space into regions defined by the interior of the solid and the complementary exterior according to the Jordan-Brouwer theorem discussed above, every point in space can unambiguously be tested against the solid by testing the point against the boundary of the solid. Recall that the ability to test every point against the solid provides a guarantee of solidity. Using ray casting it is possible to count the number of intersections of a cast ray against the boundary of the solid. An even number of intersections corresponds to an exterior point, and an odd number of intersections corresponds to an interior point. The assumption of boundaries as manifold cell complexes forces any boundary representation to obey disjointedness of distinct primitives, i.e. there are no self-intersections that cause non-manifold points. In particular, the manifoldness condition implies all pairs of vertices are disjoint, pairs of edges are either disjoint or intersect at one vertex, and pairs of faces are disjoint or intersect at a common edge. Several data structures that are combinatorial maps have been developed to store boundary representations of solids. In addition to planar faces, modern systems provide the ability to store quadrics and NURBS surfaces as a part of the boundary representation. Boundary representations have evolved into a ubiquitous representation scheme of solids in most commercial geometric modelers because of their flexibility in representing solids exhibiting a high level of geometric complexity.

### Constructive solid geometry

Main article: Constructive Solid Geometry

Constructive solid geometry (CSG) connotes a family of schemes for representing rigid solids as Boolean constructions or combinations of primitives via the regularized set operations discussed above. CSG and boundary representations are currently the most important representation schemes for solids. CSG representations take the form of ordered binary trees where non-terminal nodes represent either rigid transformations (orientation-preserving isometries) or regularized set operations. Terminal nodes are primitive leaves that represent closed regular sets. The semantics of CSG representations is clear. Each subtree represents a set resulting from applying the indicated transformations/regularized set operations on the set represented by the primitive leaves of the subtree. CSG representations are particularly useful for capturing design intent in the form of features corresponding to material addition or removal (bosses, holes, pockets, etc.). The attractive properties of CSG include conciseness, guaranteed validity of solids, computationally convenient Boolean algebraic properties, and natural control of a solid's shape in terms of high-level parameters defining the solid's primitives and their positions and orientations. The relatively simple data structure and elegant recursive algorithms have further contributed to the popularity of CSG.
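The binary-tree semantics can be made concrete in a few lines. The toy sketch below (mine, not from the article) represents primitives as point-membership predicates and set operations as nodes combining them; regularization is deliberately ignored, so this is point classification only:

```
# Toy CSG by point membership (regularization deliberately ignored).
def sphere(cx, cy, cz, r):
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def box(lo, hi):
    return lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi))

def union(a, b):      return lambda p: a(p) or b(p)
def intersect(a, b):  return lambda p: a(p) and b(p)
def subtract(a, b):   return lambda p: a(p) and not b(p)

# a cube with a spherical pocket, as a two-level tree of closures
solid = subtract(box((0, 0, 0), (2, 2, 2)), sphere(1, 1, 1, 0.8))
print(solid((0.1, 0.1, 0.1)), solid((1.0, 1.0, 1.0)))   # True False
```

Ray casting against a boundary representation, as described above, is the other standard way to implement the same membership test.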
Such a representation is important in the context of applications such as detecting the material removed from a cutter as it moves along a specified trajectory, computing the dynamic interference of two solids undergoing relative motion, motion planning, and even in computer graphics applications such as tracing the motions of a brush moved on a canvas. Most commercial CAD systems provide (limited) functionality for constructing swept solids, mostly in the form of a two-dimensional cross section moving on a space trajectory transversal to the section. However, recent research has produced several approximations for three-dimensional shapes moving along one-parameter, and even multi-parameter, motions.

### Implicit representation

Main article: Function representation

A very general method of defining a set of points X is to specify a predicate that can be evaluated at any point in space. In other words, X is defined implicitly to consist of all the points that satisfy the condition specified by the predicate. The simplest form of a predicate is a condition on the sign of a real-valued function, resulting in the familiar representation of sets by equalities and inequalities. For example, if $f= ax + by + cz + d$, the conditions $f(p) =0$, $f(p) > 0$, and $f(p) < 0$ represent respectively a plane and two open linear halfspaces. More complex functional primitives may be defined by boolean combinations of simpler predicates. Furthermore, the theory of R-functions allows the conversion of such representations into a single function inequality for any closed semi-analytic set. Such a representation can be converted to a boundary representation using polygonization algorithms, for example the marching cubes algorithm.

### Parametric and feature-based modeling

Features are defined to be parametric shapes associated with attributes such as intrinsic geometric parameters (length, width, depth etc.), position and orientation, geometric tolerances, material properties, and references to other features.[7] Features also provide access to related production processes and resource models. Thus, features are of a semantically higher level than primitive closed regular sets. Features are generally expected to form a basis for linking CAD with downstream manufacturing applications, and also for organizing databases for design-data reuse.

## History of solid modelers

The historical development of solid modelers has to be seen in the context of the whole history of CAD, the key milestones being the development of the research system BUILD followed by its commercial spin-off Romulus, which went on to influence the development of Parasolid, ACIS and Solid Modeling Solutions. Other contributions came from Mäntylä, with his GWB, and from the GPM project, which contributed, among other things, hybrid modeling techniques at the beginning of the 1980s. This is also when the Programming Language of Solid Modeling PLaSM was conceived at the University of Rome.

## Computer-aided design

Main article: Computer-aided design

The modeling of solids is only the minimum requirement of a CAD system's capabilities. Solid modelers have become commonplace in engineering departments in the last ten years due to faster computers and competitive software pricing.
Solid modeling software creates a virtual 3D representation of components for machine design and analysis.[8] A typical graphical user interface includes programmable macros, keyboard shortcuts and dynamic model manipulation. The ability to dynamically re-orient the model, in real-time shaded 3-D, is emphasized and helps the designer maintain a mental 3-D image. A solid part model generally consists of a group of features, added one at a time, until the model is complete. Engineering solid models are built mostly with sketcher-based features: 2-D sketches that are swept along a path to become 3-D. These may be cuts or extrusions, for example. Design work on components is usually done within the context of the whole product using assembly modeling methods. An assembly model incorporates references to individual part models that comprise the product.[9] Another type of modeling technique is 'surfacing' (freeform surface modeling). Here, surfaces are defined, trimmed and merged, and filled to make a solid. The surfaces are usually defined with datum curves in space and a variety of complex commands. Surfacing is more difficult, but better applicable to some manufacturing techniques, like injection molding. Solid models for injection molded parts usually have both surfacing and sketcher-based features. Engineering drawings can be created semi-automatically and reference the solid models.

### Parametric modeling

Parametric modeling uses parameters to define a model (dimensions, for example). Examples of parameters are: dimensions used to create model features, material density, formulas to describe swept features, and imported data (that describe a reference surface, for example). A parameter may be modified later, and the model will update to reflect the modification. Typically, there is a relationship between parts, assemblies, and drawings. A part consists of multiple features, and an assembly consists of multiple parts. Drawings can be made from either parts or assemblies. Example: a shaft is created by extruding a circle 100 mm. A hub is assembled to the end of the shaft. Later, the shaft is modified to be 200 mm long (click on the shaft, select the length dimension, modify to 200). When the model is updated, the shaft will be 200 mm long, the hub will relocate to the end of the shaft to which it was assembled, and the engineering drawings and mass properties will reflect all changes automatically. Related to parameters, but slightly different, are constraints. Constraints are relationships between entities that make up a particular shape. For a window, the sides might be defined as being parallel and of the same length. Parametric modeling is obvious and intuitive, but for the first three decades of CAD this was not the case. Modification meant re-drawing, or adding a new cut or protrusion on top of old ones. Dimensions on engineering drawings were created, instead of shown. Parametric modeling is very powerful, but requires more skill in model creation. A complicated model for an injection molded part may have a thousand features, and modifying an early feature may cause later features to fail. Skillfully created parametric models are easier to maintain and modify. Parametric modeling also lends itself to data reuse. A whole family of capscrews can be contained in one model, for example.
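The shaft-and-hub example above can be sketched in a few lines. This toy model (all class, parameter, and method names are invented for illustration; real CAD systems maintain a full feature history and a constraint solver, not a single formula) shows the essential mechanism: dependent quantities are derived from parameters on demand rather than stored, so one edit propagates everywhere:

```python
import math

class ParametricShaft:
    """Toy parametric part/assembly: the hub position and the mass
    properties are recomputed from the driving parameters each time
    they are queried."""

    def __init__(self, length=100.0, diameter=20.0):
        self.params = {"length": length, "diameter": diameter}  # mm

    def set_param(self, name, value):
        self.params[name] = value          # the single source of truth

    def hub_position(self):
        # Assembly constraint: the hub mates to the end of the shaft.
        return self.params["length"]

    def mass(self, density=7.85e-6):       # kg/mm^3, roughly steel
        r = self.params["diameter"] / 2.0
        return density * math.pi * r * r * self.params["length"]

shaft = ParametricShaft()
print(shaft.hub_position(), round(shaft.mass(), 3))   # 100.0  0.247
shaft.set_param("length", 200.0)                      # the one modification
print(shaft.hub_position(), round(shaft.mass(), 3))   # hub follows: 200.0  0.493
```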
### Medical solid modeling

Modern computed axial tomography and magnetic resonance imaging scanners can be used to create solid models of internal body features, so-called volume rendering. Optical 3D scanners can be used to create point clouds or polygon mesh models of external body features. Uses of medical solid modeling:

• Visualization
• Visualization of specific body tissues (just blood vessels and tumor, for example)
• Designing prosthetics, orthotics, and other medical and dental devices (this is sometimes called mass customization)
• Creating polygon mesh models for rapid prototyping (to aid surgeons preparing for difficult surgeries, for example)
• Combining polygon mesh models with CAD solid modeling (design of hip replacement parts, for example)
• Computational analysis of complex biological processes, e.g. air flow, blood flow
• Computational simulation of new medical devices and implants in vivo

If the use goes beyond visualization of the scan data, processes like image segmentation and image-based meshing will be necessary to generate an accurate and realistic geometrical description of the scan data.

### Engineering

[Figure: Mass properties window of a model in Cobalt]

Because CAD programs running on computers “understand” the true geometry comprising complex shapes, many attributes of a 3‑D solid, such as its center of gravity, volume, and mass, can be quickly calculated. For instance, the cube shown at the top of this article measures 8.4 mm from flat to flat. Despite its many radii and the shallow pyramid on each of its six faces, its properties are readily calculated for the designer, as shown in the screenshot at right.

## See also

• PLaSM - Programming Language of Solid Modeling
• Computer graphics
• Computational geometry
• Euler boundary representation
• Engineering drawing
• Technical drawing
• List of CAD companies

## References

1. ^ a b Shapiro, Vadim (2001). Solid Modeling. Elsevier. Retrieved 20 April 2010.
2. Requicha, A.A.G. and Voelcker, H. (1983). Solid Modeling: Current Status and Research Directions. IEEE Computer Graphics. Retrieved 20 April 2010.
3. Tilove, R.B. and Requicha, A.A.G. (1980). Closure of Boolean operations on geometric entities. Computer Aided Design. Retrieved 20 April 2010.
4. ^ a b Requicha, A.A.G. (1980). Representations for Rigid Solids: Theory, Methods, and Systems. ACM Computing Surveys. Retrieved 20 April 2010.
5. Hatcher, A. (2002). Algebraic Topology. Cambridge University Press. Retrieved 20 April 2010.
6. Canny, John F. (1987). The Complexity of Robot Motion Planning. MIT Press, ACM doctoral dissertation award. Retrieved 20 April 2010.
7. Mäntylä, M., Nau, D., and Shah, J. (1996). Challenges in feature based manufacturing research. Communications of the ACM. Retrieved 20 April 2010.
8. LaCourse, Donald (1995). "2". Handbook of Solid Modeling. McGraw Hill. p. 2.5. ISBN 0-07-035788-9.
9. LaCourse, Donald (1995). "11". Handbook of Solid Modeling. McGraw Hill. p. 111.2. ISBN 0-07-035788-9.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9119256734848022, "perplexity_flag": "middle"}
http://cms.math.ca/Reunions/hiver10/abs/cst.html
CMS Winter Meeting 2010
Coast Plaza Hotel and Suites, Vancouver, December 4 - 6, 2010
www.smc.math.ca//Reunions/hiver10

Compressed Sensing: Theory, Algorithms and Application
Org: Michael Friedlander, Felix Herrmann and Ozgur Yilmaz (UBC) [PDF]

LORNE APPLEBAUM, Princeton University
Multiuser Detection in Asynchronous On--Off Random Access Channels Using Lasso [PDF]

We consider on--off random access channels where users transmit either a one or a zero to a base station. Such channels represent an abstraction of control channels used for scheduling requests in third-generation cellular systems and uplinks in wireless sensor networks deployed for target detection. A key characteristic of these systems is their asynchronous nature. We will introduce a Lasso-based scheme for multiuser detection in asynchronous on--off random access channels that does not require knowledge of the delays or the instantaneous received signal-to-noise ratios of the individual users at the base station. For any fixed maximum delay in the system, the proposed scheme allows an exponential number of total users with respect to code length---achieving almost the same problem dimension scaling relationships as that required in the ideal case of fully synchronous channels. Further, the computational complexity of the proposed scheme differs from that of a similar oracle-based scheme with perfect knowledge of the user delays by at most a log factor. The results presented here are non-asymptotic, in contrast to previous work for synchronous channels that only guarantees that the probability of error at the base station goes to zero asymptotically with the number of users. Finally, we give a deterministic code construction suitable for the delay agnostic system. The code construction is based on a cyclic code in which equivalence classes are assigned to users. The code's low coherence permits recovery guarantees with Lasso.

STEPHEN BECKER, California Institute of Technology
First-order methods for constrained linear inverse problems [PDF]

Many algorithms have recently been proposed to solve the unconstrained forms of linear inverse problems, but few algorithms solve the constrained form. We show a general framework for solving constrained problems that applies to all problems of interest to compressed sensing. The technique is based on smoothing and solving the dual formulation. Using this method, it is possible to solve problems such as the Dantzig Selector, or composite problems such as minimizing a combination of the TV norm and weighted $\ell_1$ norm. Additionally, we discuss recent results about exact regularization and about accelerated continuation. http://arxiv.org/abs/1009.2065

JIM BURKE, University of Washington
Sparsity and Nonconvex Nonsmooth Optimization [PDF]

Sparsity (or parsimonious) optimization is a framework for examining the trade-off between optimality and the number of independent variables to optimize over. Much of this framework was first developed for application to a range of problems in statistics where the goal is to explain as much of the data as possible with the fewest number of explanatory variables. Within the past few years, the general methodology of sparsity optimization has found its way into a wide range of applications. In this talk we consider a general sparsity optimization framework for arbitrary extended real-valued objective functions that are continuous on their essential domains.
General approximation properties for the associated sparsity optimization problem are given, and a fixed point iteration for the subdifferential inclusion identifying approximate stationarity is developed. Convergence results and numerical experiments are presented.

MIKE DAVIES, University of Edinburgh
Rank Aware Algorithms for Joint Sparse Recovery [PDF]

This talk will revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has links with work on direction of arrival estimation in the field of array signal processing. Inspired by these links, we consider a new family of MMV algorithms based on the well-known MUSIC method in array processing, highlighting the role of the rank of the unknown signal matrix in determining the difficulty of the recovery problem. We will show that the recovery conditions for our new algorithms take into account the observed rank of the signal matrix. This is in contrast to previously proposed techniques such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques, which we show to be effectively blind to the rank information. Numerical simulations demonstrate that our new rank aware techniques are significantly better than existing methods in dealing with multiple measurements.

LAURENT DEMANET, MIT
Fitting matrices from applications to random vectors [PDF]

What can be determined about the inverse $A^{-1}$ of a matrix $A$ from one application of $A$ to a vector of random entries? If the n-by-n inverse $A^{-1}$ belongs to a specified linear subspace of dimension $p$, then come to the talk to hear which assumptions on this subspace, $p$, and $n$, guarantee an accurate recovery of $A^{-1}$ with high probability. This randomized fitting method provides a compelling preconditioner for the wave-equation Hessian (normal operator) in seismic imaging. Joint work with Pierre-David Letourneau (Stanford) and Jiawei Chiu (MIT).

FELIX HERRMANN, the University of British Columbia
Compressive Sensing and Sparse Recovery in Exploration Seismology [PDF]

During this presentation, I will talk about how recent results from compressive sensing and curvelet-based sparse recovery can be used to solve problems in exploration seismology where incomplete sampling is ubiquitous. I will also talk about how these ideas apply to dimensionality reduction of full-waveform inversion by randomly phase-encoded sources. In this second application, compressive sensing allows us to reduce the number of PDE's needed to solve this parameter-estimation problem.

GITTA KUTYNIOK, University of Osnabrueck
Geometric Separation by Single-Pass Alternating Thresholding [PDF]

Modern data is customarily of multimodal nature, and analysis tasks typically require separation into the single components such as, for instance, in neurobiological imaging a separation into spines (pointlike structures) and dendrites (curvilinear structures). Although a highly ill-posed problem, inspiring empirical results show that the morphological difference of these components sometimes allows a very precise separation.
In this talk we will present a theoretical study of the separation of a distributional model situation of point- and curvilinear singularities exploiting a surprisingly simple single-pass alternating thresholding method applied to wavelets and shearlets as two complementary frames. Utilizing the fact that the coefficients are clustered geometrically in the chosen frames, we prove that at sufficiently fine scales arbitrarily precise separation is possible. Surprisingly, it turns out that the thresholding index sets even converge to the wavefront sets of the point- and curvilinear singularities in phase space and that those wavefront sets are perfectly separated by the thresholding procedure. Main ingredients of our analysis are the novel notion of cluster coherence and a microlocal analysis viewpoint. This is joint work with David Donoho (Stanford U.).

ZHAOSONG LU, Simon Fraser University
Penalty Decomposition Methods for Rank and $l_0$-Norm Minimization [PDF]

In the first part of this talk, we consider general rank minimization problems. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition (PD) methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by our method when applied to the rank constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. In the second part, we consider general $l_0$-norm minimization problems. We first reformulate the $l_0$-norm constrained problem as an equivalent rank minimization problem and then apply the above PD method to solve the latter problem. Further, by utilizing the special structures, we obtain a PD method that only involves vector operations. Under some suitable assumptions, we establish that any accumulation point of the sequence generated by the PD method satisfies a first-order optimality condition that is generally stronger than one natural optimality condition. Finally, we test the performance of our PD methods on matrix completion, nearest low-rank correlation matrix, sparse logistic regression and sparse inverse covariance selection problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.

HASSAN MANSOUR, University of British Columbia
Recovery of Compressively Sampled Signals Using Partial Support Information [PDF]

In this talk, we discuss the recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least $50\%$ of the (partial) support information is accurate, then weighted $\ell_1$ minimization is stable and robust under weaker conditions than the analogous conditions for standard $\ell_1$ minimization. Moreover, weighted $\ell_1$ minimization provides better bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic as well as audio and video signals.
ALI PEZESHKI, Colorado State University
Sense and Sensitivity: Model Mismatch in Compressed Sensing [PDF]

We study the sensitivity of compressed sensing to mismatch between the assumed and the actual models for sparsity. We start by analyzing the effect of model mismatch on the best $k$-term approximation error, which is central to providing exact sparse recovery guarantees. We establish achievable bounds for the $\ell_1$ error of the best $k$-term approximation and show that these bounds grow linearly with the signal dimension and the mismatch level between the assumed and actual models for sparsity. We then derive bounds, with similar growth behavior, for the basis pursuit $\ell_1$ recovery error, indicating that the sparse recovery may suffer large errors in the presence of model mismatch. Although we present our results in the context of basis pursuit, our analysis applies to any sparse recovery principle that relies on the accuracy of best $k$-term approximations for its performance guarantees. We particularly highlight the problematic nature of model mismatch in Fourier imaging, where spillage from off-grid DFT components turns a sparse representation into an incompressible one. We substantiate our mathematical analysis by numerical examples that demonstrate a considerable performance degradation for image inversion from compressed sensing measurements in the presence of model mismatch.

This is joint work with Yuejie Chi, Louis Scharf, and Robert Calderbank.

HOLGER RAUHUT, Hausdorff Center for Mathematics, University of Bonn
Recovery of functions in high dimensions via compressive sensing [PDF]

Compressive sensing predicts that sparse vectors can be recovered efficiently from highly undersampled measurements. It is known in particular that multivariate sparse trigonometric polynomials can be recovered from a small number of random samples. Classical methods for recovering functions in high spatial dimensions usually suffer from the curse of dimensionality, that is, the number of samples scales exponentially in the dimension (the number of variables of the function). We introduce a new model of functions in high dimensions that uses "sparsity with respect to dimensions". More precisely, we assume that the function is very smooth in most of the variables, and is allowed to be rather rough in only a small but unknown set of variables. This translates into a certain sparsity model on the Fourier coefficients. Using techniques from compressive sensing, we are able to recover functions in this model class efficiently from a small number of samples. In particular, this number scales only logarithmically in the spatial dimension - in contrast to the exponential scaling in classical methods.

BENJAMIN RECHT, University of Wisconsin-Madison
The Convex Geometry of Inverse Problems [PDF]

Building on the success of generalizing compressed sensing to matrix completion, this talk discusses progress on further extending the catalog of objects and structures that can be recovered from partial information. I will focus on a suite of data analysis algorithms designed to decompose signals into sums of atomic signals from a simple but not necessarily discrete set. These algorithms are derived in a convex optimization framework that encompasses previous methods based on $\ell_1$-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices.
I will discuss general recovery guarantees and implementation schemes for this suite of algorithms and will describe several example classes of atoms and applications.

JUSTIN ROMBERG, Georgia Institute of Technology
Random coding for forward modeling [PDF]

Compressed sensing has shown us that sparse signal acquisition can be made efficient by injecting randomness into the measurement process. In this talk, we will show how these same ideas can be used to dramatically reduce the computation required for two types of simulation problems in acoustics. In the first, we show how all of the channels in a multiple-input multiple-output system can be acquired jointly by simultaneously exciting all of the inputs with different random waveforms, and give an immediate application to seismic forward modeling. In the second, we consider the problem of acoustic source localization in a complicated channel. We show that the amount of computation to perform "matched field processing" (matched filtering) can be reduced by precomputing the response of the channel to a small number of dense configurations of random sources.

RAYAN SAAB, The University of British Columbia
Sobolev Duals of Random Frames and Sigma-Delta Quantization for Compressed Sensing [PDF]

Compressed sensing, as a signal acquisition technique, has been shown to be highly effective for dimensionality reduction. On the other hand, quantization of compressed sensing measurements has been a relatively under-addressed topic. In particular, the results of Candes, Romberg and Tao, and of Donoho guarantee that if a uniform quantizer of step size $\delta$ is used to quantize $m$ measurements $y = \Phi x$ of a $k$-sparse signal $x \in \mathbb{R}^N$, where $\Phi$ satisfies the restricted isometry property, then the reconstruction error via $\ell_1$-minimization is $O(\delta)$. This is the simplest and most commonly assumed approach for quantization of compressed sensing measurements. On the other hand, in this talk we show that if instead of uniform quantization, an $r$th order $\Sigma\Delta$ quantization scheme with the same output alphabet is used to quantize $y$, then there is an alternative recovery method via Sobolev dual frames which guarantees a reduction of the approximation error by a factor of $(m/k)^{(r-1/2)\alpha}$ for any $0 < \alpha < 1$, if $m \gtrsim_r k (\log N)^{1/(1-\alpha)}$. The result holds with high probability on the initial draw of the measurement matrix $\Phi$ from the Gaussian distribution, and uniformly for all $k$-sparse signals $x$ that satisfy a mild size condition on their supports.

THOMAS STROHMER, University of California, Davis
How sparsity can enrich wireless communications [PDF]

We demonstrate how concepts from compressive sensing and random matrix theory can combat some challenging problems of next-generation wireless communication systems.

EWOUT VAN DEN BERG, Stanford University
Spot - A linear operator toolbox for Matlab [PDF]

Spot is a Matlab toolbox for the construction, application, and manipulation of linear operators. One of the main achievements of the package is that it provides operators in such a way that they are as easy to work with as explicit matrices, while represented implicitly. This combines the benefits of explicit matrices with the scalability of implicit representations, thus clearing the way for fast prototyping with complex and potentially large linear operators. (This is joint work with Michael Friedlander.)
RACHEL WARD, Courant Institute, NYU
New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property [PDF]

The Johnson-Lindenstrauss (JL) Lemma states that any set of $p$ points in high dimensional Euclidean space can be embedded into $O(\delta^{-2} \log(p))$ dimensions, without distorting the distance between any two points by more than a factor between $1 - \delta$ and $1 + \delta$. We establish a "near-equivalence" between the JL Lemma and the Restricted Isometry Property (RIP), a well-known concept in the theory of sparse recovery often used for showing the success of $\ell_1$-minimization. In particular, we show that matrices satisfying the Restricted Isometry Property of optimal order, with randomized column signs, provide optimal Johnson-Lindenstrauss embeddings up to a logarithmic factor in $N$. Our results have implications for dimensionality reduction and sparse recovery: on the one hand, we arrive at the best known bounds on the necessary JL embedding dimension for a wide class of structured random matrices; on the other hand, our results expose several new families of universal encoding strategies in compressed sensing. This is joint work with Felix Krahmer.

REBECCA WILLET, Duke University
Compressed Sensing with Poisson Noise [PDF]

Compressed sensing has profound implications for the design of new imaging and network systems, particularly when physical and economic limitations require that these systems be as small and inexpensive as possible. However, several aspects of compressed sensing theory are inapplicable to real-world systems in which noise is signal-dependent and unbounded. In this work we discuss some of the key theoretical challenges associated with the application of compressed sensing to practical hardware systems and develop performance bounds for compressed sensing in the presence of Poisson noise. We develop two novel sensing paradigms, based on either pseudo-random dense sensing matrices or expander graphs, which satisfy physical feasibility constraints. In these settings, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8950313329696655, "perplexity_flag": "head"}
http://crypto.stackexchange.com/tags/chosen-plaintext-attack/hot?filter=all
# Tag Info

## Hot answers tagged chosen-plaintext-attack

13

### What is the difference between known-plaintext attack and chosen-plaintext attack?

It's the difference between an active and a passive attacker: Known plaintext attack: The attacker knows at least one sample of both the plaintext and the ciphertext. In most cases, this is recorded real communication. If the XOR cipher is used for example, this will reveal the key as plaintext xor ciphertext. Chosen plaintext attack: The attacker can ...

9

### What is the smallest plaintext/ciphertext size for an algorithm like?

"Known plaintext" means that the attacker has knowledge of some data and its encrypted counterpart, but he did not choose either (it is "chosen plaintext" when the attacker chooses the plaintext and obtains the corresponding ciphertext, and "chosen ciphertext" when he chooses the ciphertext and obtains the corresponding plaintext). What is "plaintext" ...

6

### Message space in security definitions

In your formula, $n$ appears to relate to the key space, not the message space. The message space does not intervene in the definition of IND-CPA, and that's a good thing because practical message spaces consist of messages which "make sense" in a given context. There are situations where the attacker already guesses quite a lot of the attacked message, and ...

6

### Practical necessity of semantic security under chosen plain text attack (CPA) in CBC mode

Repeatedly encrypting the same message to the same ciphertext opens the door to practical attacks. Encryption is supposed to leak no information about the content of the message other than its length, and there are very real ways to exploit the information leakage you mention. Some of them have to do with the fact that plaintext domains are not always very large. ...

6

### Is using a predictable IV with CFB mode safe or not?

Thomas is correct; there's no attack on CFB mode if you can predict the IV; NIST is just being cautious. With CBC, the value of the first encrypted block is $C_0 = E_k( IV \oplus P_0)$, where $IV$ is the IV used for that packet, $P_0$ is the value of the first plaintext block, and $E_k$ is the evaluation of the block cipher. If an attacker can predict the ...

6

### For public-key encryption, why does COA resistance imply CPA resistance?

I was/am assuming that for public key encryption, COA means "other than the public key, ciphertext only". Otherwise, any secure symmetric cipher with the key published becomes a "COA resistant" PKE scheme. With that in mind, access to an encryption oracle cannot possibly help an attacker, since the attacker can already encrypt any plaintext using the ...

5

### Is using a predictable IV with CFB mode safe or not?

I found a little more info on Google, so let me provide a partial answer to my own question. In particular, I found a post by David Wagner to sci.crypt in 2004, titled "IND-CPA for CFB mode", which in turn led me to a paper titled "Practical symmetric on-line encryption", published in FSE 2003 by Fouque, Martinet and Poupard. In this paper, the authors ...

4

### Which categories of cipher are semantically secure under a chosen-plaintext attack?

Encryption using a block cypher such as AES by passing plaintext blocks directly to the encryption function is known as Electronic Code Book mode (ECB) and is not CPA secure as (as you say in your question) it is entirely deterministic and two identical plaintext blocks will result in two identical ciphertext blocks. To prevent this an initialisation ...
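To make the XOR example from the first excerpt above concrete, here is a minimal self-contained sketch (the key and messages are invented for illustration): one known plaintext/ciphertext pair recovers the key, which then decrypts any other message encrypted under it.

```python
# Known-plaintext attack on the XOR cipher: plaintext XOR ciphertext = key.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key        = b"\x13\x37\xc0\xff\xee\x42\xa5\x5a\x13\x37\xc0"
plaintext  = b"ATTACK DAWN"
ciphertext = xor_bytes(plaintext, key)       # what the attacker intercepts

recovered_key = xor_bytes(plaintext, ciphertext)
assert recovered_key == key

# Any other message under the same key is now readable:
other_ct = xor_bytes(b"RETREAT NOW", key)
print(xor_bytes(other_ct, recovered_key))    # b'RETREAT NOW'
```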
4

### How does a chosen plaintext attack on RSA work?

Slight revision based on Paulo's remark in the comments - in a public key system a chosen plaintext attack is pretty much part of the design - arbitrary plaintexts can be encrypted to produce ciphertexts at will - by design, however, these shouldn't give any information that will allow you to deduce the private key. A chosen ciphertext attack can be used ...

3

### Why is an Encrypt-and-MAC scheme with deterministic MAC not IND-CPA secure?

I'm a little bit confused by your notation (what's $1^n$ supposed to mean? based on context, it looks like a key or a passphrase, but I've never seen that notation before), but the exercise itself seems to just amount to proving that an Encrypt-and-MAC scheme, using a deterministic MAC of the plaintext which is sent in plain, cannot be IND-CPA secure. To ...

3

### Figuring out key in hill cipher (chosen-plaintext attack)

Sure. Assuming that you're using the encoding $A = 0$, $B = 1$, etc., just choose your plaintext messages to be the one-block strings: $$BA \dots A \\ AB \dots A \\ \vdots \\ AA \dots B$$ The encryptions of these strings will then directly give you the columns of your key matrix.

3

### CPA distinguisher for matrix multiplication in GF(256) with randomized padding

While Ilmari answered the specific question you asked (Chosen Plaintext Distinguishers), I would like to note that the attack can be sharpened into a Known Plaintext Key Recovery attack (where, by key recovery, I don't mean recovering the $E_k$ matrix, but instead allowing an attacker to reconstruct enough information to decrypt arbitrary texts). One ...

2

### Practical necessity of semantic security under chosen plain text attack (CPA) in CBC mode

Cryptography is not just about confidentiality of the message, but also confidentiality of information about the message. Given the ciphertext, an attacker should not be able to determine any information about a message without knowing the key. If you can tell that message A is equal to message B, that's a leak of information. This could be useful when ...

2

### CPA distinguisher for matrix multiplication in GF(256) with randomized padding

Split the matrix $E_k$ into four $x \times n$ blocks like this: $$E_k = \begin{bmatrix}P & Q \\ R & S\end{bmatrix}$$ Let $C = [A, B]$, where $A$ and $B$ are $n$ element vectors. If $M$ is an $n$ element null vector $[0, 0, \dotsc, 0]$, the $P$ and $R$ matrices won't affect the result, and we thus have $Q^{-1} A^T = Pad^T = S^{-1} B^T$. Collect ...

2

### What is the difference between known-plaintext attack and chosen-plaintext attack?

As others have pointed out, there are some ciphers that can be broken if all you have is a known plaintext and the ciphertext. In general, because of this, those ciphers are considered very vulnerable and are not used anywhere. Or I should say, where they are used, the keys are generated (pseudo-)randomly and only used once. However, if the attacker can ...

2

### Is the AES encryption scheme CPA secure?

You are mistaken about what an encryption scheme is. As CodeInChaos pointed out, AES is a primitive and we assume that it is a pseudorandom permutation. That is an assumption, since the way AES is built means that we won't be able to formally prove that it is one. With that PRP we try to build modes of encryption that might or might not be CPA-secure. I ...

2

### Security model for privacy-preserving aggregation scheme.
The standard approach is to break this problem into two pieces: What information is unavoidably leaked, merely by computing the desired function? In your case, the goal is to compute $\sum_i x_i$. This sum unavoidably leaks a little bit of information about the $x_i$'s. For instance, as you correctly state, if we somehow know that all $x_i$'s are ...

1

### Which categories of cipher are semantically secure under a chosen-plaintext attack?

To be secure against a chosen-plaintext attack, an encryption scheme must be non-deterministic — that is, its output must include a random element, so that e.g. encrypting the same plaintext twice will result in two different ciphertexts. Indeed, if that was not the case, an attacker could easily win the IND-CPA game just by using the encryption ...

1

### Security model for privacy-preserving aggregation scheme.

This is tricky and I don't know that there is a generic way to take care of all domain/auxiliary information. The way we typically do proofs in multi-party computations is by defining an ideal world and showing that the information generated in the ideal world (usually the encrypted inputs and the outputs) could be used to simulate the real world protocol ...

1

### How to construct a variable length IND-CPA cipher from a fixed length one?

The typical way to make an encryption scheme work for variable length messages is to use a mode of operation. Since you are starting with an already IND-CPA secure cipher, even the often despised ECB mode will work. That said, you will still need padding to make the plaintext length a multiple of the blocksize. If adding padding is out of the question, a ...

1

### What is the difference between known-plaintext attack and chosen-plaintext attack?

A known plaintext attack means that if you know any of the plaintext that has been encrypted and have the resulting encrypted file, with a flawed encryption algorithm you can use that to break the rest of the encryption. Example: We saw this with the old pkzip encryption method. In this case if you had any of the unencrypted files in the archive, you could ...

1

### For public-key encryption, why does COA resistance imply CPA resistance?

I have a definition of COA-security in my head but I cannot find this definition (applied to public key cryptography) in the literature or reference books. Under it, textbook RSA is an example of a public key cryptosystem that is COA-secure but not CPA-secure. To be CPA-secure, a necessary but not sufficient property is that encrypting the same message ...

1

### What is the smallest plaintext/ciphertext size for an algorithm like?

No. A known plaintext attack uses some real-life plaintext-ciphertext pair which the attacker somehow got to know (or guess, in the case of plaintext), or multiple such pairs known (or assumed) to be enciphered by the same key. As you normally don't use a block cipher as-is, but in a mode of operation, this means usually some sequence of blocks, together ...
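The Hill cipher trick described above translates almost line for line into code. A minimal sketch follows; the key matrix is an arbitrary invertible example mod 26, and the oracle is simulated locally:

```python
# Chosen-plaintext key recovery for the Hill cipher: encrypting the unit
# "basis" blocks BA...A, AB...A, ..., AA...B (with A=0, B=1, ...) returns
# the columns of the secret key matrix directly.
N = 2                                   # block size (illustrative)
KEY = [[3, 3], [2, 5]]                  # secret key matrix, invertible mod 26

def encrypt_block(block):               # c = K * p  (mod 26)
    return [sum(KEY[r][c] * block[c] for c in range(N)) % 26 for r in range(N)]

recovered = [[0] * N for _ in range(N)]
for j in range(N):
    basis = [1 if i == j else 0 for i in range(N)]   # the block A..B..A
    column = encrypt_block(basis)       # one oracle call per chosen plaintext
    for i in range(N):
        recovered[i][j] = column[i]

assert recovered == KEY
print(recovered)                        # [[3, 3], [2, 5]]
```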
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202675819396973, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2009/03/14/a-beautiful-garden-of-hypertrees/?like=1&source=post_flair&_wpnonce=8cb43e0c56
Gil Kalai’s blog

## A Beautiful Garden of Hypertrees

Posted on March 14, 2009

We had a series of posts (1,2,3,4) “from Helly to Cayley” on weighted enumeration of Q-acyclic simplicial complexes. The simplest case beyond Cayley’s theorem was that of Q-acyclic complexes with $n$ vertices, ${n \choose 2}$ edges, and ${{n-1} \choose {2}}$ triangles. One example is the six-vertex triangulation of the real projective plane. But here, as in many other places, we are short of examples. Nati Linial, Roy Meshulam and Mishael Rosenthal wrote a paper with very surprising examples of Q-acyclic simplicial complexes called “sum complexes”. The basic idea is very simple: the vertices are $\{1,2,\dots , n\}$. Next you pick three numbers $a,b$ and $c$ and consider all the triples $i,j,k$ so that $i+j+k$ is either $a$ or $b$ or $c$ (mod $n$). And let’s assume that $n$ is a prime. So how many triangles do we have? A fraction of $3/n$ of all possible triangles, which is exactly what we want (${{n-1} \choose {2}}$). If the three numbers form an arithmetic progression then the resulting simplicial complex is collapsible. In all other cases it is not collapsible. The proof that it is Q-acyclic uses a result of Chebotarëv on Fourier analysis. (So what does Fourier analysis have to do with computing homology? You will have to read the paper!) The paper considers the situation in all dimensions. What about such combinatorial constructions for Q-homology spheres?
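A quick enumeration makes the triangle count concrete. This is a sketch with vertices taken as residues mod $n$ (which gives the same counts as the labels $\{1,\dots,n\}$) and an arbitrary choice of the three residues $a, b, c$:

```python
from itertools import combinations
from math import comb

# Triangle count of the sum complex on n vertices (n prime), with sums
# taken mod n and residue set {a, b, c} = {0, 1, 3} chosen arbitrarily.
for n in (5, 7, 11, 13):
    triangles = [t for t in combinations(range(n), 3) if sum(t) % n in (0, 1, 3)]
    assert len(triangles) == comb(n - 1, 2)   # the (n-1 choose 2) of the post
    print(n, len(triangles))
```

The agreement is no accident: for $n$ prime (and $n \ne 3$) the translation $x \mapsto x + s$ shifts the sum of a triple by $3s$, so the 3-subsets are distributed equally over the $n$ residue classes, and three classes capture exactly a $3/n$ fraction of all $\binom{n}{3}$ triples.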
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.903745710849762, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Normal-gamma_distribution
# Normal-gamma distribution

| | |
| --- | --- |
| Parameters | $\mu\,$ location (real); $\lambda > 0\,$ (real); $\alpha \ge 1\,$ (real); $\beta \ge 0\,$ (real) |
| Support | $x \in (-\infty, \infty)\,\!, \; \tau \in (0,\infty)$ |
| PDF | $f(x,\tau|\mu,\lambda,\alpha,\beta) = \frac{\beta^\alpha \sqrt{\lambda}}{\Gamma(\alpha)\sqrt{2\pi}} \, \tau^{\alpha-\frac{1}{2}}\,e^{-\beta\tau}\,e^{ -\frac{ \lambda \tau (x- \mu)^2}{2}}$ [1] |
| Mean | $\operatorname{E}(X)=\mu\,\! ,\quad \operatorname{E}(T)= \alpha \beta^{-1}$ |
| Mode | $\left(\mu, \frac{\alpha - \frac12}{\beta}\right)$ [1] |
| Variance | $\operatorname{var}(X)= \frac{\beta}{\lambda (\alpha-1)} ,\quad \operatorname{var}(T)=\alpha \beta^{-2}$ |

In probability theory and statistics, the normal-gamma distribution (or Gaussian-gamma distribution) is a bivariate four-parameter family of continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and precision.[2]

## Definition

For a pair of random variables (X,T), suppose that the conditional distribution of X given T is given by

$X|T=\tau, \mu, \lambda \sim N(\mu,1 /(\lambda \tau)) \,\! ,$

meaning that the conditional distribution is a normal distribution with mean $\mu$ and precision $\lambda \tau$ — equivalently, with variance $1 / (\lambda \tau) .$

Suppose also that the marginal distribution of T is given by

$T |\alpha, \beta \sim \mathrm{Gamma}(\alpha,\beta) \! ,$

which means that T has a gamma distribution. Here λ, α and β are parameters of the joint distribution. Then (X,T) has a normal-gamma distribution, and this is denoted by

$(X,T) \sim \mathrm{NormalGamma}(\mu,\lambda,\alpha,\beta) \! .$

## Properties

### Probability density function

The joint probability density function of (X,T) is

$f(x,\tau|\mu,\lambda,\alpha,\beta) = \frac{\beta^\alpha \sqrt{\lambda}}{\Gamma(\alpha)\sqrt{2\pi}} \, \tau^{\alpha-\frac{1}{2}}\,e^{-\beta\tau}\,e^{ -\frac{ \lambda \tau (x- \mu)^2}{2}}$

### Marginal distributions

By construction, the marginal distribution over $\tau$ is a gamma distribution, and the conditional distribution over $x$ given $\tau$ is a Gaussian distribution. The marginal distribution over $x$ is a three-parameter Student's t-distribution with parameters $(\nu, \mu, \sigma^2)=(2\alpha, \mu, \beta/(\lambda\alpha))$.

### Exponential family

The normal-gamma distribution is a four-parameter exponential family with natural parameters $\alpha-1/2, -\beta-\lambda\mu^2/2, \lambda\mu, -\lambda/2$ and natural statistics $\ln\tau, \tau, \tau x, \tau x^2$.

### Moments of the natural statistics

The following moments can be computed using the moment generating function of the sufficient statistic:

$\operatorname{E}(\ln T)=\psi\left(\alpha\right) - \ln\beta$, where $\psi\left(\alpha\right)$ is the digamma function,

$\operatorname{E}(T)=\frac{\alpha}{\beta}$,

$\operatorname{E}(TX)=\mu \frac{\alpha}{\beta}$,

$\operatorname{E}(TX^2)=\frac{1}{\lambda} + \mu^2 \frac{\alpha}{\beta}$.

### Scaling

If $(X,T) \sim \mathrm{NormalGamma}(\mu,\lambda,\alpha,\beta),$ then for any $b > 0$, $(bX,bT)$ is distributed as ${\rm NormalGamma}(b\mu, \lambda b^{-3}, \alpha, \beta b^{-1})$: scaling $T$ by $b$ divides the rate of its gamma distribution by $b$, and given $bT = \tau'$ the conditional variance of $bX$ is $b^2 \cdot b/(\lambda \tau') = 1/\left((\lambda b^{-3}) \tau'\right)$.

## Posterior distribution of the parameters

Assume that x is distributed according to a normal distribution with unknown mean $\mu$ and precision $\tau$.
$x \sim \mathcal{N}(\mu, \tau^{-1})$

and that the prior distribution on $\mu$ and $\tau$, $(\mu,\tau)$, has a normal-gamma distribution

$(\mu,\tau) \sim \text{NormalGamma}(\mu_0,\lambda_0,\alpha_0,\beta_0) ,$

for which the density π satisfies

$\pi(\mu,\tau) \propto \tau^{\alpha_0-\frac{1}{2}}\,\exp[{-\beta_0\tau}]\,\exp[{ -\frac{\lambda_0\tau(\mu-\mu_0)^2}{2}}].$

Given a dataset $\mathbf{X}$, consisting of $n$ independent and identically distributed (i.i.d.) random variables $\{x_1,...,x_n\}$, the posterior distribution of $\mu$ and $\tau$ given this dataset can be analytically determined by Bayes' theorem. Explicitly,

$\mathbf{P}(\tau,\mu | \mathbf{X}) \propto \mathbf{L}(\mathbf{X} | \tau,\mu) \pi(\tau,\mu)$,

where $\mathbf{L}$ is the likelihood of the data given the parameters. Since the data are i.i.d., the likelihood of the entire dataset is equal to the product of the likelihoods of the individual data samples:

$\mathbf{L}(\mathbf{X} | \tau, \mu) = \prod_{i=1}^n \mathbf{L}(x_i | \tau, \mu) .$

This expression can be simplified as follows:

$\begin{align} \mathbf{L}(\mathbf{X} | \tau, \mu) & \propto \prod_{i=1}^n \tau^{1/2} \exp[\frac{-\tau}{2}(x_i-\mu)^2] \\ & \propto \tau^{n/2} \exp[\frac{-\tau}{2}\sum_{i=1}^n(x_i-\mu)^2] \\ & \propto \tau^{n/2} \exp[\frac{-\tau}{2}\sum_{i=1}^n(x_i-\bar{x} +\bar{x} -\mu)^2] \\ & \propto \tau^{n/2} \exp[\frac{-\tau}{2}\sum_{i=1}^n\left((x_i-\bar{x})^2 + (\bar{x} -\mu)^2\right)] \\ & \propto \tau^{n/2} \exp[\frac{-\tau}{2}\left(n s + n(\bar{x} -\mu)^2\right)] , \end{align}$

where the cross terms vanish in the fourth line because $\sum_{i=1}^n (x_i - \bar{x}) = 0$, and where $\bar{x}= \frac{1}{n}\sum_{i=1}^n x_i$ is the mean of the data samples and $s= \frac{1}{n} \sum_{i=1}^n(x_i-\bar{x})^2$ is the sample variance.

The posterior distribution of the parameters is proportional to the prior times the likelihood.

$\begin{align} \mathbf{P}(\tau, \mu | \mathbf{X}) &\propto \mathbf{L}(\mathbf{X} | \tau,\mu) \pi(\tau,\mu) \\ &\propto \tau^{n/2} \exp[\frac{-\tau}{2}\left(n s + n(\bar{x} -\mu)^2\right)] \tau^{\alpha_0-\frac{1}{2}}\,\exp[{-\beta_0\tau}]\,\exp[{ -\frac{\lambda_0\tau(\mu-\mu_0)^2}{2}}] \\ &\propto \tau^{\frac{n}{2} + \alpha_0 - \frac{1}{2}}\exp[-\tau \left( \frac{1}{2} n s + \beta_0 \right) ] \exp\left[- \frac{\tau}{2}\left(\lambda_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2\right)\right] \\ \end{align}$

The final exponential term is simplified by completing the square.
$\begin{align} \lambda_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2&=\lambda_0 \mu^2 - 2 \lambda_0 \mu \mu_0 + \lambda_0 \mu_0^2 + n \mu^2 - 2 n \bar{x} \mu + n \bar{x}^2 \\ &= (\lambda_0 + n) \mu^2 - 2(\lambda_0 \mu_0 + n \bar{x}) \mu + \lambda_0 \mu_0^2 +n \bar{x}^2 \\ &= (\lambda_0 + n)( \mu^2 - 2 \frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n} \mu ) + \lambda_0 \mu_0^2 +n \bar{x}^2 \\ &= (\lambda_0 + n)\left(\mu - \frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n} \right) ^2 + \lambda_0 \mu_0^2 +n \bar{x}^2 - \frac{(\lambda_0 \mu_0 + n \bar{x})^2}{\lambda_0 + n} \\ &= (\lambda_0 + n)\left(\mu - \frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n} \right) ^2 + \frac{\lambda_0 n (\bar{x} - \mu_0 )^2}{\lambda_0 +n} \end{align}$

On inserting this back into the expression above,

$\begin{align} \mathbf{P}(\tau, \mu | \mathbf{X}) & \propto \tau^{\frac{n}{2} + \alpha_0 - \frac{1}{2}} \exp \left[-\tau \left( \frac{1}{2} n s + \beta_0 \right) \right] \exp \left[- \frac{\tau}{2} \left( \left(\lambda_0 + n \right) \left(\mu- \frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n} \right)^2 + \frac{\lambda_0 n (\bar{x} - \mu_0 )^2}{\lambda_0 +n} \right) \right]\\ & \propto \tau^{\frac{n}{2} + \alpha_0 - \frac{1}{2}} \exp \left[-\tau \left( \frac{1}{2} n s + \beta_0 + \frac{\lambda_0 n (\bar{x} - \mu_0 )^2}{2(\lambda_0 +n)} \right) \right] \exp \left[- \frac{\tau}{2} \left(\lambda_0 + n \right) \left(\mu- \frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n} \right)^2 \right] \end{align}$

This final expression is in exactly the same form as a Normal-Gamma distribution, i.e.,

$\mathbf{P}(\tau, \mu | \mathbf{X}) = \text{NormalGamma}\left(\frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n}, \lambda_0 + n, \alpha_0+\frac{n}{2}, \beta_0+ \frac{1}{2}\left(n s + \frac{\lambda_0 n (\bar{x} - \mu_0 )^2}{\lambda_0 +n} \right) \right)$

### Interpretation of parameters

The interpretation of parameters in terms of pseudo-observations is as follows:

• The mean was estimated from $\lambda$ pseudo-observations with sample mean $\mu$.
• The precision was estimated from $2\alpha$ pseudo-observations (i.e. possibly a different number of pseudo-observations, to allow the variance of the mean and precision to be controlled separately) with sample mean $\mu$ and sample variance $\frac{\beta}{\alpha}$ (i.e. with sum of squared deviations $2\beta$).
• The posterior updates the number of pseudo-observations used for estimating the mean and precision simply by adding up the corresponding number of new (pseudo-)observations.
• The new mean of the pseudo-observations takes a weighted average of the old pseudo-mean and the observed mean, weighted by the number of associated (pseudo-)observations.
• The new sum of squared deviations is computed by adding the previous respective sums of squared deviations. However, a third "interaction term" is needed because the two sets of squared deviations were computed with respect to different means, and hence the sum of the two underestimates the actual total squared deviation.
As a consequence, if one has a prior mean of $\mu_0$ from $n_\mu$ samples and a prior precision of $\tau_0$ from $n_\tau$ samples, the prior distribution over $\mu$ and $\tau$ is

$\mathbf{P}(\tau,\mu) = \text{NormalGamma}\left(\mu_0, n_\mu ,\frac{n_\tau}{2}, \frac{n_\tau}{2\tau_0}\right)$

and after observing $n$ samples with mean $\mu$ and variance $s$, the posterior probability is

$\mathbf{P}(\tau,\mu | \mathbf{X}) = \text{NormalGamma}\left( \frac{n_\mu \mu_0 + n \mu}{n_\mu +n}, n_\mu +n ,\frac{1}{2}(n_\tau+n), \frac{1}{2}\left(\frac{n_\tau}{\tau_0} + n s + \frac{n_\mu n (\mu-\mu_0)^2}{n_\mu+n}\right) \right)$

Note that in some programming languages, such as Matlab, the gamma distribution is implemented with the inverse definition of $\beta$, so the fourth argument of the Normal-Gamma distribution is $2 \tau_0 /n_\tau$.

## Generating normal-gamma random variates

Generation of random variates is straightforward:

1. Sample $\tau$ from a gamma distribution with parameters $\alpha$ and $\beta$
2. Sample $x$ from a normal distribution with mean $\mu$ and variance $1/(\lambda \tau)$

## Related distributions

• The normal-inverse-gamma distribution is essentially the same distribution parameterized by variance rather than precision
• The normal-exponential-gamma distribution

## Notes

1. ^ a b Bernardo & Smith (1993, p. 434)
2. Bernardo & Smith (1993, pages 136, 268, 434)

## References

• Bernardo, J.M.; Smith, A.F.M. (1993) Bayesian Theory, Wiley. ISBN 0-471-49464-X
• Dearden et al. "Bayesian Q-learning", Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), July 26–30, 1998, Madison, Wisconsin, USA.
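As a numerical companion to the article, here is a minimal NumPy sketch of the conjugate update derived above and of the two-step sampler; the function names are illustrative, not a standard API:

```python
import numpy as np

def posterior(mu0, lam0, alpha0, beta0, x):
    """Conjugate update: prior NormalGamma(mu0, lam0, alpha0, beta0)
    plus i.i.d. normal data x -> posterior parameters."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    s = x.var()                 # (1/n) * sum (x_i - xbar)^2, as in the text
    mu_n    = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n   = lam0 + n
    alpha_n = alpha0 + n / 2
    beta_n  = beta0 + 0.5 * (n * s + lam0 * n * (xbar - mu0) ** 2 / (lam0 + n))
    return mu_n, lam_n, alpha_n, beta_n

def sample(mu, lam, alpha, beta, size, seed=0):
    """Two-step sampler from the section above: tau ~ Gamma(alpha, rate beta),
    then x ~ Normal(mu, 1/(lam*tau)). NumPy's gamma takes a scale = 1/beta."""
    rng = np.random.default_rng(seed)
    tau = rng.gamma(alpha, 1.0 / beta, size)
    x = rng.normal(mu, 1.0 / np.sqrt(lam * tau))
    return x, tau

data = np.random.default_rng(1).normal(3.0, 2.0, 500)   # true mu = 3
print(posterior(0.0, 1.0, 1.0, 1.0, data))              # mu_n close to 3
```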
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 78, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8339502811431885, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/35267/total-probability-problem/35269
# Total probability problem

One bag contains 5 white and 4 black balls. Another bag contains 7 white and 9 black balls. A ball is transferred from the first bag to the second, and then a ball is drawn from the second bag. What is the probability that the drawn ball is white?

## 1 Answer

It would be the following: $$P(W) = P(W \cap W_1)+P(W \cap B_1)$$ $$= P(W|W_1)P(W_1)+P(W|B_1)P(B_1)$$ $$= \frac{8}{17}\cdot\frac{5}{9}+ \frac{7}{17}\cdot\frac{4}{9} = \frac{68}{153} = \frac{4}{9},$$ where $W_1$ denotes the event that the first ball chosen is white, $B_1$ denotes the event that the first ball chosen is black, and $W$ denotes the event that the ball drawn from the second bag is white.

- Did you mean $P(W)=P(W|W_1)P(W_1)+P(W|B_1)P(B_1)$? – Henry Apr 26 '11 at 22:29
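As a sanity check on the value $4/9$, a quick Monte Carlo simulation of the two-stage experiment (an illustrative sketch):

```python
import random

# Transfer one ball from bag 1 (5 white, 4 black) to bag 2 (7 white,
# 9 black), then draw from bag 2; estimate P(drawn ball is white).
trials, hits = 200_000, 0
for _ in range(trials):
    transferred_white = random.random() < 5 / 9     # first-stage transfer
    whites = 7 + (1 if transferred_white else 0)    # bag 2 now holds 17 balls
    hits += random.random() < whites / 17           # second-stage draw
print(hits / trials, 68 / 153)                      # both close to 0.4444...
```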
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312624931335449, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/15261/convergence-tests-for-improper-multiple-integrals?answertab=active
# Convergence tests for improper multiple integrals For improper single integrals with positive integrands of the first, second or mixed type there are the comparison and the limit tests to determine their convergence or divergence. There is also the absolute convergence theorem. For multiple integrals I know that the comparison test can be used as well. Question: But can the limit or other tests be generalized to multiple integrals? Could you provide some references? Added: I have thought of Riemann integration only. Example: When we evaluate (Proof 1 of this and Apostol's article) this improper double integral $$\int_{0}^{1}\int_{0}^{1}\left(\dfrac{1}{1-xy}\right) \mathrm{d}x\mathrm{d}y,$$ we conclude that it is finite. Question 2: Which test should we apply to know in advance that it converges? - 1 – PEV Dec 23 '10 at 0:02 @Trevor: Thanks! – Américo Tavares Dec 23 '10 at 0:04 In what theory of integration? In Lebesgue integration you can take the same theorems and just apply it to a different measure. – Jonas Teuwen Dec 23 '10 at 0:43 @Jonas T: I have thought of Riemann integration only. – Américo Tavares Dec 23 '10 at 0:45 ## 1 Answer ### Introduction The following response attempts to address two aspects I perceive to be of interest in this question: one concerns whether there are some relatively rote or standard procedures for evaluating the convergence of certain kinds of improper multivariate integrals; the other is whether there is some simple intuition behind the subject. To keep the discussion from becoming too abstract, I will use the double integral (introduced in the question) as a running example. ### Synopsis Many integrals that are improper due to singular behavior of their integrand can be analyzed, both rigorously and intuitively, with a simple comparison. The idea is that any singularity of the integrand that doesn't blow up too fast compared to the codimension of the manifold of singular points can be "smoothed over" by the integral, provided the domain of integration doesn't contact the singular points too "tightly." It remains to make these ideas precise. ### Analysis When the domain of integration is relatively compact, as the one in the example is, the problems with convergence occur only at the possible singularities of the integrand within the closure of the domain, which in this case is the isolated point $(x,y) = (1,1)$. However, it is evident that if the domain of integration were to be expanded, any zero of the curve $1 - x y = 0$ could be a singularity. In general, many singularities occur this way: an integrand $f(\bf{x})$ is locally of the form $g(||h(\bf{x})||)$ where $h: \mathbb{R}^n \to \mathbb{R}^k$ and $g(r) = r^{-s} u(r)$ for some function $u$ that is bounded in some nonnegative neighborhood of $0$. In this case $h(\bf{x}) = 1 - x y$ and $g(r) = r^{-1}$, so $s=1$. When $h$ is differentiable at $\bf{0}$ with nonsingular derivative there, principles of Calculus invite us to linearize the situation and geometric ideas suggest a simple form for the linearization. Specifically, the Implicit Function Theorem guarantees that local coordinates can be found near such a singularity in which the derivative of $h$ is in the form $\left( \bf{1}_{k \times k} ; \bf{0}_{k \times n-k} \right)$. The singularity itself can be translated to the origin where, to first order, the zeros of $h$ locally correspond with the vector subspace generated by the last $n-k$ coordinate axes. 
The effect on the integral is to introduce a factor given by the determinant of the Jacobian, which is locally bounded and so does not affect the convergence. In this example we can explicitly make these changes of coordinates by translating the singularity to $\bf{0}$, computing the gradient of $h$ there, and rotating it to point along the first axis. This amounts to the change of variables $(u, v)$ = $(2 - x - y, y - x)$, in which $h(u, v) = u - \frac{1}{4}(u^2 - v^2)$, equal to $u$ to first order. Within a small neighborhood of the origin, the domain of integration becomes a 90-degree wedge of points $(u,v)$, with $u > 0$, for which $|v| \le u$, and the zeros of $h$ coincide with the $v$ axis to first order.

Let's consider the case where the domain of integration locally contains an isolated point of the singular set $H$ (the zeros of $h$), which we translate to $\bf{0}$. Estimate the integral near this singularity by adopting spherical coordinates there. To do this, we need a closer look at the domain of integration in spherical coordinates. Any $\epsilon \gt 0$ determines the set of all possible unit direction vectors from $\bf{0}$ toward elements of the domain of integration that lie within a distance $\epsilon$ of $\bf{0}$. In the example, this corresponds to the set of points on the unit circle with angles between $-\pi/4$ and $\pi/4$. Suppose that for sufficiently small $\epsilon$ the closure (in the unit sphere) of this set of direction vectors does not include any of the tangent vectors of $H$ at $\bf{0}$. Then an easy estimate shows that the angular part of the integral in spherical coordinates is bounded. This reduces the task of integration to estimating the radial part.

Because the volume element in spherical coordinates $(\rho, \Omega)$ is $\rho^{n-1} d\rho d\Omega$, the radial integral is proportional to $\rho^{n-1} g(\rho) d \rho$. This is the key calculation: it shows how integration in $n$ dimensions can "cancel" a factor of $\rho^{1-n}$ in the integrand. We're reduced to evaluating a 1D integral, improper at $0$, whose integrand behaves like $\rho^{n-1} g(\rho)$. By virtue of the assumptions about $g$, this is bounded above by a multiple of $\rho^{n-1-s}$. Provided $n-1-s \gt -1$, this converges (at a rate proportional to $\rho^{n-s}$). In the example, $n=2$, $s=1$, so convergence is assured.

### Generalizations

When $n-1-s \le -1$, the behavior of the original integral depends on exactly how the domain "pinches out" as it approaches the singularity, so more detailed analysis is needed. The spirit of the exercise doesn't change, though: with Calculus we linearize the situation (relying on simple estimates to take care of the second order remainder term), we use spherical coordinates centered at an isolated singularity in the domain (or more generally, cylindrical coordinates near a more complicated singularity) to perform the integration, and we estimate the rate of growth of the integral near the singularity in terms of a radial component and a contribution from a spherical angle. The behavior of the integral over the spherical angle as we approach the singularity is determined by the shape of the domain near the singularity, so often some careful (and interesting) geometric analysis is needed. This approach is characteristic of the proofs of many theorems in Several Complex Variables, such as the Edge of the Wedge Theorem (if I recall correctly--I'm reaching back to memories now a quarter century old). One good reference is Steven Krantz's book on Function Theory of SCV.
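To see the key radial calculation in action (an editorial sketch, not part of the answer), a computer algebra system can compare a convergent and a divergent exponent:

```python
import sympy as sp

rho, eps = sp.symbols('rho epsilon', positive=True)

def radial(n, s):
    # integral of rho**(n-1) * rho**(-s) from eps to 1, then eps -> 0+
    return sp.limit(sp.integrate(rho**(n - 1 - s), (rho, eps, 1)), eps, 0, '+')

print(radial(2, 1))  # 1   -- the running example (n=2, s=1): converges
print(radial(2, 2))  # oo  -- n-1-s = -1: the radial integral diverges
```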
### Summary

The behavior of the integral of $(1-x y)^{-1}$ near $(1,1)$ can be analyzed by adopting polar coordinates $(\rho, \theta)$ centered at $(1,1)$, observing that the domain here forms a wedge whose point contacts the singular set $1 - x y = 0$ "transversely," noticing that the radial behavior of the integrand is $\rho^{-1}$, remembering that the area element in polar coordinates has a $\rho^1 d \rho$ term, and noting that the integral of $\rho^{-1} \rho^1 d \rho$ converges at $0$. Indeed, what originally looked like a singularity has entirely disappeared. This procedure generalizes to multidimensional integrals subject to fairly mild restrictions on the nature of the singularities of their integrands and on the geometry of the domain of integration near those singularities. -
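As a direct numerical check on the running example (again an editorial addition), the integrals over the truncated squares $[0,1-\epsilon]^2$ approach $\pi^2/6$, the known value of this improper integral (Apostol), as $\epsilon \to 0$:

```python
import numpy as np
from scipy import integrate

f = lambda y, x: 1.0 / (1.0 - x * y)
for eps in (1e-1, 1e-2, 1e-3):
    val, _ = integrate.dblquad(f, 0, 1 - eps, lambda x: 0, lambda x: 1 - eps)
    print(eps, val)

print(np.pi**2 / 6)  # 1.6449..., the limiting value
```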
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9160912036895752, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/03/29/sets-measurable-by-an-outer-measure-i/
# The Unapologetic Mathematician

## Sets Measurable by an Outer Measure I

An outer measure $\mu^*$ on a hereditary $\sigma$-ring $\mathcal{H}$ is nice and all, but it’s not what we really want, which is a measure. In particular, it’s subadditive rather than additive. We want to fix this by restricting to a nice collection of sets within $\mathcal{H}$.

Every set $E$ splits every other set into two pieces: the part that’s in $E$ and the part that’s not. What we want to focus on are the sets that split every other set additively. That is, sets $E\in\mathcal{H}$ so that for every set $A\in\mathcal{H}$ we have

$\displaystyle\mu^*(A)=\mu^*(A\cap E)+\mu^*(A\cap E^c)$

We call such sets “$\mu^*$-measurable”. Actually, to show that $E$ is $\mu^*$-measurable we just need to show that $\mu^*(A)\geq\mu^*(A\cap E)+\mu^*(A\cap E^c)$ for every $A\in\mathcal{H}$, because the opposite inequality follows from the subadditivity of $\mu^*$.

This condition seems sort of contrived at first, and there’s really not much to justify it besides the foregoing handwaving. But we will soon see that this definition turns out to be useful. For one thing, the collection $\overline{\mathcal{S}}\subseteq\mathcal{H}$ of $\mu^*$-measurable sets is a ring! The proof of this fact is straightforward, but it feels like pulling a rabbit out of a hat, so follow closely.

Given $\mu^*$-measurable sets $E$ and $F$, we need to show that their union $E\cup F$ and difference $E\setminus F=E\cap F^c$ are both $\mu^*$-measurable as well. Saying that $E$ is $\mu^*$-measurable means that for every $A\in\mathcal{H}$ we have

$\displaystyle\mu^*(A)=\mu^*(A\cap E)+\mu^*(A\cap E^c)$

Saying that $F$ is $\mu^*$-measurable means that for every $A\in\mathcal{H}$ we have

$\displaystyle\begin{aligned}\mu^*(A\cap E)&=\mu^*(A\cap E\cap F)+\mu^*(A\cap E\cap F^c)\\\mu^*(A\cap E^c)&=\mu^*(A\cap E^c\cap F)+\mu^*(A\cap E^c\cap F^c)\end{aligned}$

We can take each of these and plug them into the first equation to find the key equation

$\displaystyle\begin{aligned}\mu^*(A)&=\mu^*(A\cap E\cap F)+\mu^*(A\cap E\cap F^c)\\&+\mu^*(A\cap E^c\cap F)+\mu^*(A\cap E^c\cap F^c)\end{aligned}$

Now this key equation works for $A\cap(E\cup F)$ as well as $A$. We know that $(E\cup F)\cap E=E$ and $(E\cup F)\cap F=F$, but $(E\cup F)\cap E^c\cap F^c=\emptyset$. So, sticking $A\cap(E\cup F)$ into the key equation we find

$\displaystyle\begin{aligned}\mu^*(A\cap(E\cup F))&=\mu^*(A\cap(E\cup F)\cap E\cap F)+\mu^*(A\cap(E\cup F)\cap E\cap F^c)\\&+\mu^*(A\cap(E\cup F)\cap E^c\cap F)+\mu^*(A\cap(E\cup F)\cap E^c\cap F^c)\\&=\mu^*(A\cap E\cap F)+\mu^*(A\cap E\cap F^c)+\mu^*(A\cap E^c\cap F)\end{aligned}$

But the three terms on the right are the first three terms in the key equation. And so we can replace them and write

$\displaystyle\begin{aligned}\mu^*(A)&=\mu^*(A\cap(E\cup F))+\mu^*(A\cap E^c\cap F^c)\\&=\mu^*(A\cap(E\cup F))+\mu^*(A\cap(E\cup F)^c)\end{aligned}$

which establishes that $E\cup F$ is $\mu^*$-measurable! Behold, the rabbit!

Let’s see if we can do it again. This time, we take $A\cap(E\setminus F)^c=A\cap(E^c\cup F)$ and stick it into the key equation.
We find

$\displaystyle\begin{aligned}\mu^*(A\cap(E\setminus F)^c)&=\mu^*(A\cap(E^c\cup F)\cap E\cap F)+\mu^*(A\cap(E^c\cup F)\cap E\cap F^c)\\&+\mu^*(A\cap(E^c\cup F)\cap E^c\cap F)+\mu^*(A\cap(E^c\cup F)\cap E^c\cap F^c)\\&=\mu^*(A\cap E\cap F)+\mu^*(A\cap E^c\cap F)+\mu^*(A\cap E^c\cap F^c)\end{aligned}$

Again we can find the three terms on the right of this equation on the right side of the key equation as well. Replacing them in the key equation, we find

$\displaystyle\mu^*(A)=\mu^*(A\cap(E\setminus F))+\mu^*(A\cap(E\setminus F)^c)$

which establishes that $E\setminus F$ is $\mu^*$-measurable as well!

Posted by John Armstrong | Analysis, Measure Theory

## 12 Comments »

1. That definition is sometimes called “Caratheodory’s condition”. Comment by mattiast | March 29, 2010 | Reply

2. It’s the approach used by Royden in his Real Analysis. Kind of weird, but powerful. Comment by | March 30, 2010 | Reply

3. Indeed, that’s where I first saw it, Zeno. For the moment, though, I’m lifting it from Halmos. Secretly, this is my excuse to read through Halmos, digest, and represent the material (with an algebraist’s eye) in this venue. Comment by | March 30, 2010 | Reply

4. [...] Measurable by an Outer Measure II Yesterday, we showed that — given an outer measure on a hereditary -ring — the collection of -measurable [...] Pingback by | March 30, 2010 | Reply

5. [...] -ring . And then we can restrict this outer measure to get an actual measure on the -ring of -measurable sets. And so we ask: how does the measure relate to the measure [...] Pingback by | March 31, 2010 | Reply

6. If you want to get rid of the magic trick, you could always just draw a Venn diagram. You’re only manipulating 3 sets, so it wouldn’t be terribly cluttered at all. Then the proof would just come down to tagging which of the subsets of the whole union are mu-* measurable Comment by | March 31, 2010 | Reply

7. The “magic” is more the way of plugging some equations back into each other in unexpected ways, until the definition we want pops out. Comment by | March 31, 2010 | Reply

8. [...] we’ve got an outer measure on a hereditary -ring — like . We can define the -ring of -measurable sets and restrict to a measure on . And then we can turn around and induce an outer measure on the [...] Pingback by | April 2, 2010 | Reply

9. [...] . But, of course, we actually found that we could restrict the outer measure to the -ring of -measurable sets, which may be larger than . Luckily, we can get this extra ground without having to go through the [...] Pingback by | April 6, 2010 | Reply

10. [...] an outer measure associated with Lebesgue measure , and then there is the collection of sets measurable by . These are the sets so that for any subset we [...] Pingback by | April 23, 2010 | Reply

11. [...] sets” of the measurable space. This is not to insinuate that is the collection of sets measurable by some outer measure , nor even that we can define a nontrivial measure on in the first place. [...] Pingback by | April 26, 2010 | Reply

12. Thanks! Comment by Nazia irshad | May 27, 2012 | Reply
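To make Carathéodory's condition concrete, here is a tiny brute-force check on a three-point set (an editorial illustration, not part of the post; the toy outer measure is invented for the example):

```python
from itertools import chain, combinations

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

X = frozenset({1, 2, 3})

def mu_star(a):
    # A toy outer measure: monotone and subadditive, but not additive.
    if not a:
        return 0
    return 2 if a == X else 1

def measurable(e):
    # Caratheodory's condition: e splits every test set additively.
    return all(mu_star(a) == mu_star(a & e) + mu_star(a - e)
               for a in powerset(X))

print([set(e) for e in powerset(X) if measurable(e)])
# [set(), {1, 2, 3}] -- only the trivial sets pass for this mu*
```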
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 47, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543811082839966, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/18734/is-a-proper-quotient-map-closed
Is a proper quotient map closed?

I am trying to produce closed quotient maps, as they allow a good way of creating saturated open sets (as in this question). A map $f:X\rightarrow Y$ is called proper iff preimages of compact sets are compact. It is called a quotient map iff a subset $V\subset Y$ is open if and only if its preimage $f^{-1}(V)$ is open. And it is called closed iff it maps closed sets to closed sets. So the question is whether a proper quotient map is already closed. Note that I am particularly interested in the world of non-Hausdorff spaces. - 1 By your definition, a quotient map does not have to be onto. Is this deliberate? (If so, the answer to your question is “no”.) – Harald Hanche-Olsen Mar 19 2010 at 13:19 Never mind that. See my answer below. – Harald Hanche-Olsen Mar 19 2010 at 13:29

## 2 Answers

No. Let X={1,2,3} and Y={1,2}. Let f map 1 to 1, 2 and 3 to 2. Let the topology on X be {∅,{2},{1,2},{2,3},{1,2,3}} and that on Y be {∅,{2},{1,2}}. f maps the closed set {3} onto the non-closed set {2}. - Answer removed, was same as Harald's -
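The counterexample can even be verified mechanically (an editorial sketch, not part of the thread):

```python
from itertools import chain, combinations

def powerset(s):
    s = sorted(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

X, Y = {1, 2, 3}, {1, 2}
f = {1: 1, 2: 2, 3: 2}
open_X = [set(), {2}, {1, 2}, {2, 3}, {1, 2, 3}]
open_Y = [set(), {2}, {1, 2}]

def preimage(v):
    return {x for x in X if f[x] in v}

# Quotient map: V is open in Y exactly when its preimage is open in X.
print(all((v in open_Y) == (preimage(v) in open_X) for v in powerset(Y)))  # True

# f is proper (finite spaces: every subset is compact), yet the closed
# set {3} maps onto {2}, whose complement {1} is not open in Y.
closed_Y = [Y - v for v in open_Y]
print({f[x] for x in {3}} in closed_Y)  # False
```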
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9409489631652832, "perplexity_flag": "middle"}
http://en.wikisource.org/wiki/Some_Emission_Theories_of_Light
# Some Emission Theories of Light

Richard Chace Tolman (1912). Physical Review, August 1912, 35 (2): 136-143.

THE Einstein theory of relativity assumes as its second postulate that the velocity of light is independent of the relative motion of the source of light and the observer. It has been suggested in a number of places that all the apparent paradoxes of the Einstein theory might be avoided, and at the same time the principle of the relativity of motion retained, if an alternative postulate were true, that the velocity of light and the velocity of the source are additive. Relativity theories based on such a postulate may well be called emission theories.

All emission theories agree in assuming that light from a moving source has a velocity equal to the vector sum of the velocity of light from a stationary source and the velocity of the source itself at the instant of emission. Thus a source in uniform motion would always remain at the center of the spherical disturbances which it emits, these disturbances moving relative to the source itself with the same velocity c as from a stationary source.[1]

Emission theories differ, however, in their assumptions as to the velocity of light after its reflection from a mirror. If an emission theory is accepted, it would seem most natural to assume that the excited portion of a reflecting mirror acts as a new source of light and that reflected light has the same velocity c with respect to the mirror as has original light with respect to its source. The possibility of such an assumption has already been suggested by the writer[2] and apparently disproved by an experiment on the velocity of light from the approaching and receding limbs of the sun. In the present article additional evidence disproving the possibility of the assumption will be presented.

According to an emission theory suggested by Stewart[3] light reflected from a mirror acquires a component of velocity equal to the velocity of the mirror image of the original source. Evidence disproving the possibility of such a principle will also be presented in this article.

A very complete emission theory of electromagnetism has been presented by Ritz.[4] According to this theory light retains throughout its whole path the component of velocity which it obtained from its original moving source, and after reflection light spreads out in spherical form around a center which moves with the same velocity as the original source. In this article an experiment will be suggested whose performance would permit a decision between the Ritz and Einstein theories of relativity.

## The First Emission Theory.

According to the first of the above emission theories, if a source of light is approaching an observer with the velocity v, the emitted light would have the velocity c+v and after reflection from a stationary mirror would have the velocity c. We shall now show that measurements of the Doppler effect (in canal rays) do not agree with this theory.[5] Consider measurements of the Doppler effect in light from a moving source made with a concave grating arranged as shown in Fig. 1.
Light from the source (canal rays) enters the slit and falls on the grating, which is so mounted that its center of curvature coincides with the position of the line of the spectrum to be photographed at D. Hence the paths BD and CD traversed after reflection by the two rays of light ABD and ACD are equal, and the only difference in length of path occurs before reflection, i.e., $AB=L_{1}>AC=L_{2}$.

Consider first a stationary source, and let τ be the period of the source which produces a bright line at D. For the production of such a line, it is evident that light impulses coming over the two paths ABD and ACD must arrive at D in the same phase. If Δt is the time interval between the departures from the source of two light impulses which arrive simultaneously at D, the condition necessary for their arrival in phase is evidently given by the equation

$i\tau=\Delta t=\frac{L_{1}-L_{2}}{c}$ (1)

where i is a whole number. (Note that $L_{1}>L_{2}$ with the apparatus as arranged.)

Consider now a source of light approaching the slit with the velocity v. If τ' is the period of the source which now produces a bright line at D and Δt' the time interval between departures from the source of two light impulses which now arrive simultaneously at D, we evidently have the relation $i\tau'=\Delta t'$. In order to obtain an expression for Δt' in terms of $L_1$ and $L_2$, we must note that the source moves toward the slit the distance vΔt' during the interval of time between the departures of the two light impulses, and hence the difference in path which was $L_{1}-L_{2}$ for a stationary source has now become $L_{1}-L_{2}+v\Delta t'$. Furthermore we must remember that according to the theory which we are investigating the light before reflection will have the velocity c+v,[6] and hence

$i\tau'=\Delta t'=\frac{L_{1}-L_{2}+v\Delta t'}{c+v},$

$i\tau'=\frac{L_{1}-L_{2}}{c},$ (2)

which by comparison with equation (1) gives us $\tau'=\tau$. In other words, if the first of the above emission theories of light is true, both before and after the source of light is set in motion, light produced by the same period of the source gives a bright line at the point D; that is, the expected Doppler effect or shifting of the lines does not occur.

In interpreting actual experimental results, it must be borne in mind that the adjustment of the grating was assumed to be such that the reflected light is parallel to the axis of the grating. (Such an adjustment is automatically obtained with the Rowland form of mounting.) If the adjustment of the grating should be such that the difference in path all occurs after reflection, it can easily be shown that the first theory would lead to a Doppler effect of the expected magnitude, and for intermediate adjustments to an effect of intermediate magnitude.

With regard to actual experimental results obtained with the reflected light parallel to the axis of the grating, the writer quotes from a letter received from Professor Stark. Professor Stark says (in translation): "In my observations with both the concave and the plane grating (Ann. d. Phys., 28, 974, 1909), the diffracted rays which produced the observed spectrogram were not parallel, or nearly parallel, to the axis of the grating. But Paschen (Ann. d. Phys., 23, 247, 1907), as far as I can see, observed the Doppler effect in canal rays near the axis (normal) of a concave grating; in doing so he let parallel light fall upon the grating with the aid of an objective.
No difference between Paschen's results and mine on the Doppler effect in canal rays has emerged, however. The two methods (incident light parallel to the grating axis, diffracted light parallel to it) thus yield, at equal dispersion, concordant Doppler-effect spectrograms."

We thus see that the first of the above emission theories does not seem to accord with experimental facts.

## The Stewart Theory.

By considering the same measurements of Doppler effect just described, it can also be shown that the Stewart theory does not agree with experimental facts. Suppose a concave grating, Fig. 2, arranged as before with the center of curvature coinciding with the position of the line of the spectrum to be photographed at D.

Consider first a stationary source and let τ be the period of the source which produces a bright line at D. If Δt is the time interval between the departures from the source of two light impulses which after traveling over the two paths ABD and ACD arrive simultaneously at D, it is evident, as in the previous discussion, that the condition necessary for their arrival in phase and hence for the production of a bright line is given by the equation

$i\tau=\Delta t=\frac{L_{1}-L_{2}}{c},$ (3)

where i is a whole number.

Consider now a source of light approaching the slit with the velocity v. If τ' is the period of the source which now produces a bright line at D and Δt' the time interval between departure from the source of two light impulses which now arrive simultaneously at D, we evidently have the relation

$i\tau'=\Delta t'=\frac{L_{1}-L_{2}+v\Delta t'}{c+v}+\frac{L_{3}}{c+v_{3}}-\frac{L_{4}}{c+v_{4}},$ (4)

where c+v[7] in accordance with the Stewart theory is the velocity of the light before reflection, and $v_{3}$ and $v_{4}$ are the components which must be added to c to give the velocity of light along the paths $BD=L_{3}$ and $CD=L_{4}$ after its reflection. According to the Stewart theory $v_{3}$ and $v_{4}$ will be equal to the components in the direction BD and CD of the velocities of the mirror images of the original source.

An idea of the size of these components is most easily obtained graphically. Considering, for example, the point of reflection C as a portion of a plane mirror EF which is tangent to the concave mirror at C, the position of the image $I_2$ can be found by the usual construction, the line $AI_2$ connecting source and image being perpendicular to EF and the distances AE and $EI_2$ equal. Both the original source and the image will evidently be moving towards the point F with the same velocity v. By a similar construction, which has been omitted to avoid confusion, the image $I_1$ produced by reflection from B is found to be located as shown, and moves also with the velocity v in the direction of the corresponding arrow. It can be seen from the construction that in the arrangement shown the motion of the image $I_1$ and the corresponding reflected ray BD are more nearly parallel than the motion of $I_2$ and the ray CD. Hence from the principle of Stewart the component $v_{3}$ is greater than $v_{4}$.
Referring once more to equation (4), since $L_{3}$ and $L_{4}$ are equal and $v_{3}$ is greater than $v_{4}$, we see that the negative term $L_{4}/\left(c+v_{4}\right)$ is numerically greater than $L_{3}/\left(c+v_{3}\right)$ and we may write the inequality

$\Delta t'<\frac{L_{1}-L_{2}+v\Delta t'}{c+v}.$

Neglecting second order terms this becomes

$\Delta t'\left(1-\frac{v}{c}\right)<\frac{L_{1}-L_{2}}{c}-\frac{L_{1}-L_{2}}{c}\frac{v}{c}$

and substituting from equation (3),

$\Delta t'\left(1-\frac{v}{c}\right)<\Delta t\left(1-\frac{v}{c}\right),$

$\Delta t'<\Delta t,$

$\tau'<\tau.$

Thus on the basis of the Stewart theory, with an approaching source, a shorter period would produce a bright line at the point D than with a stationary source. In other words, the actual bright lines would shift towards the red end of the spectrum when the source is set in motion towards the slit, in contradiction to the actually observed shift towards the violet end of the spectrum. We see that experimental facts do not agree with the Stewart theory.

## The Ritz Theory.

According to the Ritz theory of relativity, throughout its whole path, light retains the component of velocity v which it obtained from the original moving source. Thus all the phenomena of optics would occur as though light were propagated by an ether which is stationary with respect to the original source. Light coming from a terrestrial source would behave as though propagated by an ether stationary with respect to the earth, and light coming from the sun would behave as though propagated by an ether stationary with respect to the sun.

Now the Michelson-Morley experiment was devised for detecting the motion of the earth through the ether, and hence if this experiment should be reperformed using light from the sun instead of from a terrestrial source, a positive effect would be expected if the Ritz theory were true. On the other hand if the Einstein theory were true, no effect would be obtained, since according to this theory, all optical phenomena occur as though light were propagated by an ether stationary with respect to the observer.

To show in detail the divergence between the two theories consider the diagrammatic representation of a Michelson-Morley apparatus as shown in Fig. 3. Light from the sun, which is supposed to be moving relative to the apparatus in the direction AB with the velocity v, is thrown with the help of suitable reflectors on to the half-silvered mirror at A. The divided beams of light travel to the mirrors B and C and after reflection reunite at D to produce a system of interference fringes. According to the Einstein theory of relativity the velocity of light is the same in all directions with respect to all observers, and hence the velocity along the paths AB and AC would be independent of the orientation of the apparatus, and on the basis of this theory no change in the position of the interference fringes would be expected on rotation of the apparatus. According to the Ritz theory, however, the velocity of light in the directions AB and AC would be different and a change in the position of the fringes would be expected on rotating the apparatus through an angle of ninety degrees. It is easy to see that the Ritz theory would lead us to expect c+v for the velocity of light in the direction AB, c-v for the velocity in the opposite direction, and $\sqrt{c^{2}-v^{2}}$ for the velocity in either direction along AC.
Assuming for simplicity that $AB=AC=l$, we see that the time required for light to travel along the path ABAD will be longer than that along the path ACAD by the amount

$\frac{l}{c+v}+\frac{l}{c-v}-\frac{2l}{\sqrt{c^{2}-v^{2}}},$

which neglecting terms of higher orders reduces to $lv^{2}/c^{3}$. If the apparatus should be rotated through ninety degrees, it is evident that the longer time would now be required for the light to pass over the path ACAD and we should expect a shift in the position of the fringes corresponding to the time interval

$\frac{2lv^{2}}{c^{3}}$

Hence if the Ritz theory should be true, using the sun as source of light we should find on rotating the apparatus a shift in the fringes of the same magnitude as originally predicted for the Michelson-Morley apparatus where a terrestrial source was used. If the Einstein theory should be true, we should find no shift in the fringes using any source of light.

## Summary.

Experimental evidence has been considered in this article which is apparently sufficient to disprove two of the three emission theories of light which have been proposed, and an experiment has also been suggested for testing the truth of the third emission theory, that of Ritz. A definite experimental decision between the relativity theories of Ritz and Einstein is a matter of the highest importance.

The writer wishes to express his gratitude to Dr. P. Ehrenfest for valuable suggestions and criticisms, and to Professor Stark for information concerning the adjustment of his gratings in the measurement of the Stark effect in canal rays.

University of California

1. Optical theories in which the velocity of light is assumed to change during the path are not considered in this article. It might be very difficult to test theories in which the velocity of light is assumed to change on passing through narrow slits or near large masses in motion, or to suffer permanent change in velocity on passing through a lens.
3. Stewart, Phys. Rev., 32, 418 (1911).
4. Ritz, Ann. de chim. et phys., 13, 145 (1908); Arch. de Genève, 26, 232 (1908); Scientia, 5 (1909). See also Gesamm. Werke. The Ritz electromagnetic theory does not seem to have received the critical attention which it deserves. It was the earliest systematic attempt to explain the Michelson-Morley experiment on the basis of an emission theory and is the only emission theory which has been developed with any completeness.
5. In an earlier article (loc. cit.), the author showed that if an emission theory of light were true, there would be no change in the wave-length of light when the source is set in motion. This undisputed conclusion led the author to believe that with a suitable arrangement of grating no Doppler effect would be detected in light from moving sources if an emission theory should be true. It has been correctly pointed out by Stewart (loc. cit., p. 420), however, that the use of a grating to determine wave-lengths is based on a theory which assumes a stationary medium. Hence grating measurements of the Doppler effect do not afford a general method of testing all emission theories, but such measurements must be subjected to a more complete analysis. As shown in the sequel, however, such an analysis of existing measurements of the Doppler effect is apparently sufficient to disprove the Stewart emission theory. Such measurements are not suitable for deciding between the theories of Ritz and Einstein, however, since in general these two theories would only lead to the expectation of second order differences.
6. The slight difference in direction between the rays AB and AC and the motion of the source may be neglected.
7. See note 6, p. 138.
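As an editorial check (not part of Tolman's 1912 paper), a computer algebra system confirms the second-order expansion used in the Ritz section:

```python
import sympy as sp

l, c, v = sp.symbols('l c v', positive=True)
dt = l/(c + v) + l/(c - v) - 2*l/sp.sqrt(c**2 - v**2)
# Leading behaviour in v: the difference of arm times is l*v**2/c**3.
print(sp.series(dt, v, 0, 4))  # l*v**2/c**3 + O(v**4)
```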
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359127879142761, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/83732-how-do-you-graph-function.html
# Thread:

1. ## How do you graph this function?

$f(x)=x^2+6x$

2. Originally Posted by juliajank $f(x)=x^2+6x$ There are a lot of ways to do this; the simplest is to plot points. So pick some values of x, plug them in to get y values, and plot the points... here is one: let $x=1 \implies f(1)=1^2+6(1)=7 \implies (1,7)$

3. Thanks a lot for your answer, but I can't use the table of values anymore. Do you have to factor out the common x first to make it x(x+6) and then graph it as a normal line, six units left? Or are you supposed to graph it as a quadratic function? :S When you factor the x out, does that make whatever is in the brackets the only important thing now? I am so confused about what I have to do if there is more than one x!

4. Originally Posted by juliajank Thanks a lot for your answer, but I can't use the table of values anymore. Do you have to factor out the common x first to make it x(x+6) and then graph it as a normal line, six units left? Or are you supposed to graph it as a quadratic function? :S When you factor the x out, does that make whatever is in the brackets the only important thing now? I am so confused about what I have to do if there is more than one x! There are many methods. What method are you supposed to use? Here is one... We could first find the x-intercepts by setting y=0: $0=x^2+6x=x(x+6) \implies x=0 \text{ or } x=-6$ So the ordered pairs of the x-intercepts are (0,0) and (-6,0). From here we can find the vertex by using the fact that a parabola is symmetric about the vertical line through its vertex. That means the x-coordinate of the vertex is at the midpoint of the x-intercepts: $x=\frac{0+(-6)}{2}=-3$ Now we can complete the ordered pair by plugging this back into the equation: $y=(-3)^2+6(-3)=9-18=-9$ So the vertex is at (-3,-9), and if we plot these three points we can sketch the parabola. Another method would be to complete the square... Do any of these sound familiar...

5. Yes! Thank you. I would have to solve it by completing the square, which is not very hard. It's just figuring out what to do that's trickiest for me.

6. Originally Posted by juliajank It's just figuring out what to do that's trickiest for me. For worked examples of the general methodology, try here.
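For completeness, here is a small plotting sketch of the intercepts-plus-vertex approach (an editorial addition, not from the thread). Completing the square gives the same vertex: $x^2+6x=(x+3)^2-9$, so the vertex is $(-3,-9)$.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 2, 200)
plt.plot(x, x**2 + 6*x)               # the parabola y = x^2 + 6x
plt.scatter([0, -6, -3], [0, 0, -9])  # x-intercepts and vertex
plt.axhline(0, linewidth=0.5)
plt.axvline(0, linewidth=0.5)
plt.show()
```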
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559877514839172, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/261442/non-trivial-93rd-roots-of-unity-in-finite-fields
# Non-trivial 93rd roots of unity in finite fields [duplicate]

Possible Duplicate: Finding the values of $n$ for which $\mathbb{F}_{5^{n}}$, the finite field with $5^{n}$ elements, contains a non-trivial $93$rd root of unity

For which of the following values of $n$ does the finite field $\Bbb F$ with $5^n$ elements contain a non-trivial $93$rd root of unity?

1. 92
2. 30
3. 15
4. 6

- I totally forgot that this is a dup. Good job spotting that, guys. – Jyrki Lahtonen Dec 19 '12 at 8:09

## marked as duplicate by Dilip Sarwate, Henry T. Horton, John Wordsworth, Micah, rschwieb Dec 19 '12 at 1:31

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

## 1 Answer

The multiplicative group of a finite field $\Bbb F$ with $|\Bbb F|=q$ is cyclic and so, by the theory of cyclic groups, it contains a unique subgroup of order $d$ for each divisor $d$ of $q-1$. Thus, the question becomes: for which values of $n$ do we have that $93$ divides $5^n-1$? Now:

• $5^{92}\equiv67^{23}\equiv25\not\equiv1\bmod93$,
• $5^{30}\equiv(5^6)^5\equiv1\bmod93$,
• $5^{15}\equiv56^3\equiv32\not\equiv1\bmod93$,
• $5^6\equiv1\bmod93$.

So, of the $n$ listed, the answer is 6 and 30.

- 1 What's with the intermediate steps? Why those particular factorisations? – Ben Millwood Dec 18 '12 at 14:00 thanks for your precious advice. – Alka Goyal Dec 18 '12 at 16:15
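The check is mechanical in code (an editorial sketch, not part of the answer): $93$ divides $5^n-1$ exactly when $5^n \equiv 1 \pmod{93}$, and the multiplicative order of $5$ mod $93$ is $6$:

```python
def mult_order(a, m):
    # Smallest k >= 1 with a**k == 1 (mod m); requires gcd(a, m) == 1.
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

print(mult_order(5, 93))          # 6
for n in (92, 30, 15, 6):
    print(n, pow(5, n, 93) == 1)  # only n = 30 and n = 6 give True
```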
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9146751761436462, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Parametric_Equations&diff=7934&oldid=7442
# Parametric Equations

### From Math Images

Butterfly Curve. Field: Calculus. Image created by: Direct Imaging.

The Butterfly Curve is one of many beautiful images generated using parametric equations.

# Basic Description

We often graph functions by letting one coordinate be dependent on another. For example, graphing the function $f(x) = y = x^2$ gives y values that depend upon x values. However, some complex functions are best described by giving each coordinate as a function of a separate independent variable, known as a parameter. Changing the value of the parameter then changes the value of each coordinate variable in the equation. We choose a range of values for the parameter, and the values that our function takes on as the parameter varies trace out a curve, known as a parametrized curve. Parametrization is the process of finding a parametrized version of a function.

### Parametrized Circle

One curve that can be easily parametrized is a circle of radius one. We use the variable t as our parameter, and x and y as our normal Cartesian coordinates. We now let $x = \cos(t)$ and $y = \sin(t)$, and let t take on all values from $0$ to $2\pi$. When $t=0$, the coordinate $(1,0)$ is hit. As t increases, a circle is traced out as x initially decreases, since it is equal to the cosine of t, and y initially increases, since it is equal to the sine of t. The circle continues to be traced until t reaches $2\pi$, which gives the coordinate $(1,0)$ once again.

It is also useful to write parametrized curves in vector notation, using a coordinate vector:

$\begin{bmatrix} x \\ y \end{bmatrix}= \begin{bmatrix} \cos(t) \\ \sin(t) \end{bmatrix}$

The butterfly curve in this page's main image uses more complicated parametric equations as shown below.

# A More Mathematical Explanation

Note: understanding of this explanation requires: Linear Algebra

(Figure: parametric construction of the butterfly curve.)

Sometimes curves which would be very difficult or even impossible to graph in terms of elementary functions of x and y can be graphed using a parameter. One example is the butterfly curve, as shown in this page's main image. This curve uses the following parametrization:

$\begin{bmatrix} x \\ y \end{bmatrix}= \begin{bmatrix} \sin(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right) \\ \cos(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right) \end{bmatrix}$

### Parametrized Surfaces

(Figure: the surface of a sphere can be graphed using two parameters.)

In the above cases only one independent variable was used, creating a parametrized curve. We can use more than one independent variable to create other graphs, including graphs of surfaces.
For example, using parameters s and t, the surface of a sphere can be parametrized as follows:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix}= \begin{bmatrix} \sin(t)\cos(s) \\ \sin(t)\sin(s) \\ \cos(t) \end{bmatrix}$

### Parametrized Manifolds

While two parameters are sufficient to parametrize a surface, objects of more than two dimensions, such as a three-dimensional solid, will require more than two parameters. These objects, generally called manifolds, may live in higher than three dimensions and can have more than two parameters, so cannot always be visualized. Nevertheless they can be analyzed using the methods of vector calculus and differential geometry.
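As an illustration (added here, not part of the original page), the butterfly parametrization above can be traced directly from its equations:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 12 * np.pi, 4000)  # sin(t/12) needs a long t-range
r = np.exp(np.cos(t)) - 2 * np.cos(4 * t) - np.sin(t / 12) ** 5
plt.plot(r * np.sin(t), r * np.cos(t))  # x = sin(t)*r, y = cos(t)*r
plt.axis('equal')
plt.show()
```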
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8306665420532227, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/138341/how-to-avoid-arithmetic-mistakes/138421
# How to avoid arithmetic mistakes?

When dealing with several numbers and long equations, it's common to make careless arithmetic mistakes that give the wrong answer. I was wondering if anyone had tips to catch these mistakes, or even better avoid them more often. (aside from the obvious checking your work - that's a must)

- 4 More practice and calm state of mind. – Tomarinator Apr 29 '12 at 7:11 1 Start with positive thinking: it's not common to make careless arithmetic mistakes that give the wrong answer. It works. :) – sai Apr 29 '12 at 7:21 2 Also, apparently 'don't do arithmetic before you've had your morning coffee'... – Steven Stadnicki Apr 29 '12 at 17:09 4 Just another tiny tip: if the things you're performing arithmetic on have units, keep the units on, and make sure the units of the result you get do make sense. So, you can't add feet and meters directly, and if you're expecting a result in mass units and you get something in square meters per kilogram, something went quite wrong. – J. M. Apr 29 '12 at 17:37 4 Neatness helps a lot. Write clearly. Keep things in columns. Don't try to cram too much onto the page. Don't try to do too much in your head (i.e. write out each step). As you use terms to get to the next line, check them off or underline them. – Tpofofn Apr 29 '12 at 22:24

## 13 Answers

There are certain quick methods called sanity checks which will catch most (but not all) arithmetic errors. One common one is to replace each number with the sum of its digits, which is the "casting out nines" method mentioned in Robert Israel's answer. To check a computation, say $567\times 894=506,898$, we replace $567$ with $5+6+7=18$ and $894$ with $8+9+4=21$, and then replace each of these with the sum of their digits to get $9$ and $3$ (in general we keep doing this until we get down to $1$ digit), while on the other side we get $5+0+6+8+9+8=36$ and then $3+6=9$. We then check that $9\times 3=27$ casts out to $9$, matching the $9$ on the other side, and so our answer is probably right (though not necessarily). This method is called "casting out nines" because it ensures that whatever answer you are checking differs from the actual answer by a multiple of nine (hopefully $0\times 9$).

However, this method has a serious drawback: if the answer you are checking is correct except for having the digits switched around (a relatively common error) the method will not catch the error. A remedy for this is to use "casting out elevens" where you take the alternating sums of the digits instead of the sums, such that the last digit is always added rather than subtracted. In our previous example, this becomes $5-6+7=6$, $8-9+4=3$ and $-5+0-6+8-9+8=-4$. Here we have to be a little careful: we want to take the equation $6\times 3-(-4)=0$, cast out elevens (take the alternating sum) and verify that the resulting equation holds, which in this case it does. We move everything to one side so that we don't have to work with numbers of different signs ($6\times 3=-4$ is true $\bmod 11$, which is what matters, but it is not obvious how to cast out elevens to show this). This ensures that whatever answer you are checking differs from the actual answer by a multiple of eleven (hopefully $0\times 11$), hence the name.

Edit: These methods can both be made rigorous with modular arithmetic. The first simply checks that an equation holds $\bmod 9$, and adding the digits comes from the fact that $$\begin{eqnarray} d_n\cdots d_1d_0 &=& \sum\limits_{i=0}^n 10^id_i\\ &\equiv& \sum\limits_{i=0}^n 1^id_i (\bmod 9)\\ &=&\sum\limits_{i=0}^n d_i \end{eqnarray}$$ while the second checks that an equation holds $\bmod 11$, and the alternating sum of the digits comes from the fact that $$\begin{eqnarray} d_n\cdots d_1d_0 &=& \sum\limits_{i=0}^n 10^id_i\\ &\equiv& \sum\limits_{i=0}^n (-1)^id_i (\bmod 11) \end{eqnarray}$$

Credit where credit is due: I believe I read about this years ago in a question to Dr. Math from an elementary school teacher who had been teaching the method and wanted to know how it worked.

- 1 5 + 6 + 7 = 18, so I'm not really sure how that affects the rest of the first half – DHall Apr 29 '12 at 14:24 @DHall - It still works, because 5+6+7=18 and 18 ≡ 0 mod 9, and 0 x 3 = 0 and 9 ≡ 0 mod 9. You should recognize that casting out nines will silently work with an arithmetic mistake 1/9 times. – dr jimbob Apr 29 '12 at 16:01 @DHall It seems even with this method, I make arithmetic mistakes. – Alex Becker Apr 29 '12 at 17:31 There are several types of arithmetic mistake where casting out nines is particularly unhelpful - especially, a digit transposition (very common type of error), or when multiplying by 9 or 3. – Ronald Apr 29 '12 at 20:52 2 @qwertymk If you're familiar with modular arithmetic, it is much better explained as: check your calculation $\mod 11$, which is easy to do by alternating sums. It certainly seems artificial if you haven't seen modular arithmetic. – Alex Becker Apr 30 '12 at 4:36
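Both checks are easy to mechanize (an editorial sketch, not part of the answer):

```python
def cast_out_nines(n):
    # Repeatedly sum digits; the result is congruent to n mod 9.
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def cast_out_elevens(n):
    # Alternating digit sum with the last digit added; gives n mod 11.
    digits = [int(d) for d in str(n)][::-1]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits)) % 11

a, b, claimed = 567, 894, 506898
print((cast_out_nines(a) * cast_out_nines(b) - cast_out_nines(claimed)) % 9)        # 0
print((cast_out_elevens(a) * cast_out_elevens(b) - cast_out_elevens(claimed)) % 11)  # 0
```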
In my experience, the best way to avoid computational errors is to avoid computation. Develop general algorithms for whatever quantity you are looking for and then proceed to "plug and chug" as the last step. Mathematics requires precision, however, and you often cannot avoid having to comb over your work tediously. - Definitely the best trick... +1! Even great mathematicians make computational mistakes; the brain is just never going to be as efficient as a computer. – Patrick Da Silva Apr 29 '12 at 7:26 16 @Ronald: please don't do that. I see way too many students that are lost without a calculator, even for simple matters. – nico Apr 29 '12 at 13:36 5 There's something a bit sad about seeing a teener reaching for a calculator for $8\times 3$... – J. M. Apr 29 '12 at 17:32 1 In an exam/pressure situation, where you have a calculator available to you, it's really unquestionably the best strategy to avoid mental arithmetic. I make that clear to students, and I wouldn't be doing my job properly if I didn't. Sorry. They can all do $8\times3$, but they shouldn't be risking an arithmetic mistake during an examination on e.g. linear algebra. Mathematics is not arithmetic. @J.M. Actually $8\times3$ is not really a problem - I'm usually more concerned about $1\times1$ and $1\times0$. – Ronald Apr 29 '12 at 19:48 5 Relevant anecdote: an economics professor I knew used to keep track of the stupidest things students asked for a calculator for. His best was $120/1$. – Alex Becker Apr 29 '12 at 22:12

I have found the best way to avoid these types of computational mistakes is:

1. To have extremely neat and clear handwriting.
2. To effectively use the space on the page to organize the work in a logical manner.
3. Use a pencil. Never cross things out, but erase them instead.
4. Always take the time to think things through slowly and carefully.
5. Keep your desk very neat and well organized. I believe that if you write things down in a clear way, then you will think in a clear way; and if you write in a sloppy way, you will think in a sloppy way. - 2 Interesting: instead of 4, I recommend its direct opposite: use a pen, and if something is wrong, cross it out with a light line, in case it should turn out to have been right after all. I also follow the rule: never “cancel”, but just write a little more. – Lubin Apr 29 '12 at 17:29 1 The good thing about using a pen is that 1. you might have been right all along, and you wouldn't see that if you've erased the thing; and 2. you can look at it again when all is said and done, and note what not to do the next time. It keeps you honest, too, knowing that you did struggle a bit for solving it, and it wasn't as sleek as it otherwise would have looked. – J. M. Apr 29 '12 at 17:35 4 On an exam, one of my students wrote a full-page answer to a question, then (apparently) realized something was wrong, and very carefully erased the whole page. Unfortunately, by that time the exam was over... – Robert Israel Apr 29 '12 at 18:41 2 Agree with other commenters that erasing is not necessarily a good idea - a single, neat line is a better option. – Ronald Apr 29 '12 at 20:49 In my opinion, for research, use a computer. Your brain is just never going to be perfect. For exams, though, go through the common computations lots of times before an exam, for instance if you have a linear algebra exam, it is wise to compute matrix inverses a few times (say like 20-40 times depending on your brain's capacity to compute) to be at ease with the algorithmic details and be able to focus on the numbers more easily during computation. But then again, even if you've practiced for a week, a month later, I've already lost the habit of computing the thing in question and start abusing my brain like nuts to compute... Hope that helps! - One simple tool that catches many arithmetic mistakes is "casting out nines". See http://en.wikipedia.org/wiki/Casting_out_nines - 1 It seems like its more likely to make a mistake while casting out nines. ;) – Tomarinator Apr 29 '12 at 7:16 2 Personally I don't like to verify if my solution is okay by looking at a quotient of my solution... I might be off from the solution by a $9$-multiple! – Patrick Da Silva Apr 29 '12 at 7:25 One way to check arithmetic calculations in a ring is to map the computation homomorphically into rings where calculation is easier. For example, many rings have parity, i.e. have $\mathbb Z/2$ as an image, and mapping the arithmetic mod $2$ yields a simple parity check that often catches errors. Casting nines is another example of a modular arithmetic check (which also works for fractions whose denominator is coprime to $9$). More generally one can verify equalities using a sufficient number of modular checks, by employing CRT (Chinese Remainder). For polynomial rings one can similarly apply evaluation maps as checks. Again, with enough evaluations, one can verify equalities (which here is CRT = Lagrange interpolation). Like CRT, such factorizations or decompositions of an algebraic structure into simpler structures is a powerful problem-solving technique, applicable not only to checking arithmetic, but also quite generally. It is the algebraists way to divide-and-conquer. When combined with a little logic this yields even greater power. 
One nontrivial example is the model-theoretic proof of Jacobson's theorem that rings satisfying the identity $x^m = x$ are commutative. This proceeds by a certain type of ring factorization, which reduces the problem to the (subdirectly) irreducible rings satisfying the identity. These turn out to be certain finite fields, which are commutative, as desired. In a sense, the proof works by exploiting the fact that the statement need only be checked on a certain set of simpler cases (finite fields), where the verification is much easier. Thus this can be seen as a grand generalization of the ideas employed in the more elementary cases above. -

As mentioned in other answers, computers are very good at arithmetic. If such devices are not available, try to calculate the answer in several different ways, or at least do the steps in a different order. If you always get the same answer, it is likely to be correct. If the answers are different, investigate the source of the difference. Casting out nines will not catch transposition errors, but it will catch carrying errors. After writing the above I recalled this graphical method of multiplication: Multiplication by counting points. -

I often apply a simple intuitive check at the end of my calculations to make sure that, at the coarsest level, the answer makes sense. For example:

• Am I expecting the answer to be a larger or smaller number than the input values? By how many orders of magnitude?
• Should the answer be positive or negative?
• Does it make sense that the answer is between 0 and 1 (or greater than 100, etc.)?

While these checks are not as sensitive as some of the other examples, they can catch a lot of careless mistakes, and they have the advantage of forcing one to think about what the calculations are doing and what type of answer should make sense. -
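Such coarse checks are also easy to automate; here is a toy sketch (the function name and thresholds are my own invention) that tests sign and order-of-magnitude expectations before a result is trusted.

```python
import math

def sanity_check(value, expect_positive=None, magnitude_range=None):
    """Coarse plausibility test for a computed result.

    expect_positive: True/False if the sign is known in advance, else None.
    magnitude_range: (lo, hi) bounds on log10(|value|), else None.
    Returns True only if every supplied expectation holds.
    """
    if expect_positive is not None and (value > 0) != expect_positive:
        return False
    if magnitude_range is not None:
        if value == 0:
            return False
        lo, hi = magnitude_range
        if not (lo <= math.log10(abs(value)) <= hi):
            return False
    return True

# A probability should be positive with log10 between -6 and 0 (i.e. in (0, 1]):
p = 0.0378
print(sanity_check(p, expect_positive=True, magnitude_range=(-6, 0)))  # True
print(sanity_check(-p, expect_positive=True))  # False: wrong sign
```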
You should also check out Vedic maths. -

You can always use all kinds of computer applications, but those only help if you have them around. I suppose you're talking about making calculations on paper or in your head, and I have one suggestion here, which has proved more than excellent for me. It's quite simple: concentration, concentration, concentration. You have to think about what you're computing, and nothing else. It may sound stupid, but you will be surprised at your own mental power if you stay in absolute focus. Good luck! -

I use checksums for memory operations. A simple example: whenever my wife gives me a long verbal list of groceries to buy, I count the total number of items. If it is a big number, say more than 20 items, I split the list into 3 categories. Since a simple sum is easy to hold in short-term memory, I can count down as I fetch each item, and as long as I remember the total number of items, I rarely forget one. For fun I may also remember a word I can spell with the first letter of each item; this gives more redundancy and checksum information in a way that is easy to remember. The analogous challenge in a calculation is to count brackets, variables and transforms, and to visualize the process as your checksums. -

One technique that's under-hyped is to simply redo the calculation on a separate sheet of paper without looking at your prior calculations. You are still vulnerable to committing the same error, but it'll catch frivolous ones quite well. -

Absolutely! It's happened all too frequently that I just could not find my error, even though I knew there was one, and I turned it up by just this method. – Lubin May 10 '12 at 3:02

When doing algebraic computations, for example when finding the partial fraction decomposition of a certain expression, something I find helpful is to substitute a particular number and see if the two expressions match. For example, to "verify" $\frac{1}{(1+x)(5+x)}=\frac{1}{4(x+1)}-\frac{1}{4(x+5)}$, you may want to substitute some value of $x$, say $x=9$, and see that both sides become $1/140$. -
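Here is a sketch of how one might mechanize this spot check in Python (exact rational arithmetic via the standard fractions module, so there is no floating-point noise; the helper names are mine):

```python
from fractions import Fraction

def lhs(x):
    return Fraction(1, (1 + x) * (5 + x))

def rhs(x):
    return Fraction(1, 4 * (x + 1)) - Fraction(1, 4 * (x + 5))

# Spot-check the claimed decomposition at several points (avoiding the
# poles x = -1 and x = -5); exact rationals rule out rounding artifacts.
for x in (9, 2, 3, 10):
    assert lhs(x) == rhs(x), f"mismatch at x = {x}"
print(lhs(9))  # 1/140, as in the answer above
```

Agreement at a single point could be a coincidence, but after clearing denominators this becomes a polynomial identity of degree at most 2, so agreement at more points than that degree actually proves the identity.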
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948474109172821, "perplexity_flag": "middle"}