Dataset columns: url (string, lengths 17–172), text (string, lengths 44–1.14M), metadata (string, lengths 820–832)
http://mathoverflow.net/questions/74191/what-is-the-degree-of-a-symmetric-boolean-function
## What is the degree of a symmetric boolean function? (previous title: "Zero sum of binomial coefficients - a stronger version") This is a stronger version of another question. Is there an $N\in \mathbb N$ and a sequence of non-constant functions $\left\{ p_n:[n] \to \{ 1,-1 \} \right\}_{n=N} ^{\infty}$ such that for all $n>N$ we have: $$\sum _{i=0} ^{n} (-1)^{i} p_n(i) \cdot \binom {n} {i} = 0$$ For instance, for all odd values of $n$, we may choose $p_{n}(i)=\begin{cases} (-1)^{i} & i\leq\frac{n-1}{2}\\ (-1)^{i+1} & i\geq\frac{n+1}{2}\end{cases}$. This simply means we sum the first half of the binomial coefficients and subtract the second half. The fact that for odd values we can partition the set of binomial coefficients evenly allows us to do that, so I don't see how the same trick may be applied for even values. To my understanding, the methods that solved the previous question (for which I thank darij grinberg and Mikael de la Salle) are not applicable here. My guess, as before, is that there is no such sequence (in which all functions are non-constant); any ideas on how to prove it? (A counterexample would surprise me, but is of interest as well.) - What would you suggest for p_1 and p_2? Gerhard "They Look Constant To Me" Paseman, 2011.08.31 – Gerhard Paseman Aug 31 2011 at 19:33 I just noticed the big N. Never miNd. Gerhard "Time To Get Different Glasses" Paseman, 2011.08.31 – Gerhard Paseman Aug 31 2011 at 19:58 For odd n the relation is satisfied if $p_n(i)+p_n(n-i)=0$ for all $i$. In this case, there are at least $\frac{n+1}{2}$ choices for $p_n$. So your question concerns even $n$ only, right? – Pietro Majer Aug 31 2011 at 20:00 (with - instead of + I guess) – Pietro Majer Aug 31 2011 at 20:05 On the $14$-th line of the Pascal triangle, we have $1-14+91-364-1001+2002-3003-3432+3003+2002+1001-364+91-14+1=0$. This leads to a nonconstant $p_{14}$. I am not sure whether this is a sporadic or a recurring phenomenon. Anyway I propose tagging the question "additive-number-theory". – darij grinberg Sep 1 2011 at 8:00 ## 1 Answer It was already pointed out in the comments that determining for which $n$ one can find a non-constant $p_n$ is an open problem. I thought I'd give a bit of context and my understanding of what is known so far. The problem as stated has a negative answer because when $n+1$ is prime, $p_n$ must be constant. The sums $\sum_{i=0}^n \varepsilon_i \binom{n}{i}$ are the leading coefficients of the polynomials we get from Lagrange interpolation on points $(i,\alpha(i))$ where $0\le i\le n$ and $\alpha(i)\in \lbrace 0,1\rbrace$. So the question is equivalent to: Is there a polynomial that sends $\lbrace 0,1,\dots,n\rbrace$ to $\lbrace 0,1\rbrace$, that is not constant but has degree $\le n-1$? Let us denote the number of such polynomials by $\mathcal B(n)$. Some examples are given by $\varepsilon_i=(-1)^i$ when $n$ is even and $\varepsilon_i=-\varepsilon_{n-i}$ for odd $n$. This implies that $\mathcal B(n)\geq 2$ when $n$ is even and $\mathcal B(n)\geq 2^{\frac{n+1}{2}}$ when $n$ is odd. Finding non-trivial solutions to the problem implies improving on these lower bounds. Here is a simple argument that when $p$ is an odd prime, $\mathcal B(p-1)=2$, so there are no non-trivial solutions. This is because $\binom{p-1}{i}\equiv (-1)^i\pmod{p}$, so the only way for the sum to be divisible by $p$ is if the sequence $(-1)^i\varepsilon_i$ is constant. 
This includes the examples $n=16,18$ that you confirmed with a computer search. However, there are even values of $n$ for which $\mathcal B(n)\geq 3$. The first example is $$\binom{8}{0}-\binom{8}{1}-\binom{8}{2}+\binom{8}{3}+\binom{8}{4}-\binom{8}{5}-\binom{8}{6}-\binom{8}{7}+\binom{8}{8}=0$$ and the next one is the one given by Darij in the comments for $n=14$. The even values of $n$ for which $\mathcal B(n)\geq 3$ and $n\le 128$ were found in J. von zur Gathen and J. Roche, “Polynomials with two values”, Combinatorica 17, no. 3 (1997), 345–362. The sequence is $\lbrace 24,34,48,54\rbrace$ and numbers $2\pmod{6}$. Your question is really about proving that $\mathcal B(n)=2$ for infinitely many $n$, and it is an open problem to determine such $n$ besides the values found in the von zur Gathen-Roche paper. As I mentioned above, it is equivalent to proving for such $n$ that the minimum degree of a non-constant polynomial sending $\lbrace 0,1,\dots,n\rbrace\to \lbrace 0,1\rbrace$ is $n$. The best results known so far are that the degree is $n-o(n)$, where the $o(n)$ comes from the gaps between consecutive primes (so one can take $O(n^{0.525})$), but conjecturally this can be improved to $n-O(1)$. One thing that is surprising is the following threshold phenomenon. If we look at non-constant polynomials sending $\lbrace 0,1,\dots,n\rbrace\to \lbrace 0,1,\dots,n\rbrace$, the minimum degree is $1$ (for instance $f(x)=x$), but as soon as we look at $\lbrace 0,1,\dots,n\rbrace\to \lbrace 0,1,\dots,n-1\rbrace$ the degree is at least $n-o(n)$. The current methods don't seem to make use of the fact that in the boolean case the range is simply $\lbrace 0,1\rbrace$, as they give the same bound for larger ranges. A recent paper on the topic is "On the Degree of Univariate Polynomials Over the Integers" by G. Cohen, A. Shpilka and A. Tal. - Thanks. A useful reference is the survey of Buhrman and de Wolf on complexity measures of boolean functions, where current results and some proofs appear. – Shir Sep 1 2011 at 19:29 When looking at the Cohen-Shpilka-Tal paper I mentioned above, you will see that the relevant results in Buhrman and de Wolf hold for a slightly more general setting, and they emphasize the point that current lower bounds on the degree come from mod p considerations and no technique is known that distinguishes between a very small range and a comparably sized range. – Gjergji Zaimi Sep 1 2011 at 20:12 Your "simple argument" that there are no nontrivial ways to assign the signs for $n=p-1$, $p$ prime, solves the question as stated, since it shows that there is no $N$ so that there are nontrivial solutions for $n\gt N$. – Douglas Zare Sep 2 2011 at 0:11 I agree with Douglas, why isn't your "simple argument" in fact showing that, as you wrote, B(n)=2 for infinitely many n? – Shir Sep 2 2011 at 5:21 Great, I edited the answer to reflect that. – Gjergji Zaimi Sep 2 2011 at 7:09
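The computer search mentioned above (the one confirming that $n=16$ and $n=18$ admit only the trivial sign choices) is easy to reproduce. The following is a small brute-force sketch of my own, not code from the thread; the function name is mine. It counts sign vectors $(\varepsilon_0,\dots,\varepsilon_n)$ with $\sum_i \varepsilon_i\binom{n}{i}=0$ for small even $n$; anything beyond the trivial pair $\varepsilon_i=\pm(-1)^i$ corresponds to a non-constant $p_n$.

```python
from itertools import product
from math import comb

def zero_sum_sign_vectors(n):
    """Count sign vectors (e_0, ..., e_n), e_i in {+1, -1},
    such that sum_i e_i * C(n, i) == 0."""
    binoms = [comb(n, i) for i in range(n + 1)]
    return sum(
        1
        for eps in product((1, -1), repeat=n + 1)
        if sum(e * b for e, b in zip(eps, binoms)) == 0
    )

# For even n the alternating choice e_i = (-1)^i and its negation always work
# (they correspond to constant p_n), so "non-trivial" means a count above 2.
# The loop is exponential in n, so this is only a sanity check for small cases.
for n in range(2, 20, 2):
    c = zero_sum_sign_vectors(n)
    print(n, c, "non-trivial solutions exist" if c > 2 else "only the trivial pair")
```

Run over even $n\le 18$ this flags $n=8$ and $n=14$ (the two examples quoted above) and nothing else, consistent with the discussion in the thread.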
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372997283935547, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/78283/list
## Return to Answer 2 added 39 characters in body; [made Community Wiki] EDIT: According to François's comment below, this answer only works for finite fields instead of fields of positive characteristic, as I believe originally intended. I would like to leave it for a while in community wiki and see if someone else can use these ideas to give a further step. END EDIT Here is another proof that the ultrafilter lemma is also enough to deduce statement (D) for vector spaces over finite fields, which generalizes the answer given by François. The idea is to use partial functionals defined on finite-dimensional subspaces and with values in $\mathbb{F}_p$ (if the field has characteristic $p$, then the vector space can be considered as an $\mathbb{F}_p$-vector space by restriction) and use a consistency principle to deduce the existence of a functional defined in the whole space. This can be done by using the following theorem, which is equivalent to the prime ideal theorem (and hence to the ultrafilter lemma): THEOREM: Suppose for each finite $W \subset I$ there is a nonempty set $H_W$ of partial functions on I whose domains include $W$ and such that $W_1 \subseteq W_2$ implies $H_{W_2} \subseteq H_{W_1}$. Suppose also that, for each $v \in I$, {$h(v): h \in H_{\emptyset}$} is a finite set. Then there exists a function $g$, with domain $I$, such that for any finite $W$ there exists $h \in H_W$ with $g|_W \subseteq h$. This is theorem 1 in this paper by Cowen, where he proves the equivalence with the prime ideal theorem (a simple proof using compactness for propositional logic is given by the end of the paper, and is close to what François had in mind). This is essentially also the "Consistency principle" as appearing in Jech's "The axiom of choice", pp. 17, since although Jech's formulation uses only two-valued functions, the proof he gives there, through the ultrafilter lemma, actually works when the functions are $n$-valued. Now, for an infinite-dimensional vector space $V$ over a finite field of characteristic $p$, fix a nonzero $v_0 \in V$ and define the sets $H_W$ for $W \subset V$ as follows: if $W \subseteq U$ for a finite $U$, consider the set $S_{U}$ of all functionals defined on the (finite-dimensional) subspace generated by $U$, with values in $\mathbb{F}_p$, and such that $v_0 \in U \implies f(v_0)=1$. Then $H_W$ is the union of all $S_U$ for finite $U \supseteq W$. By the previous theorem, we have a function $f: V \to \mathbb{F}_p$, and the restriction property shows that $f$ is linear and $f(v_0)=1$. 1 I believe that the ultrafilter lemma is also enough to deduce statement (D) for vector spaces over fields of positive characteristic, which generalizes the answer given by François. The idea is to use partial functionals defined on finite-dimensional subspaces and with values in $\mathbb{F}_p$ (if the field has characteristic $p$, then the vector space can be considered as an $\mathbb{F}_p$-vector space by restriction) and use a consistency principle to deduce the existence of a functional defined in the whole space. This can be done by using the following theorem, which is equivalent to the prime ideal theorem (and hence to the ultrafilter lemma): THEOREM: Suppose for each finite $W \subset I$ there is a nonempty set $H_W$ of partial functions on I whose domains include $W$ and such that $W_1 \subseteq W_2$ implies $H_{W_2} \subseteq H_{W_1}$. Suppose also that, for each $v \in I$, {$h(v): h \in H_{\emptyset}$} is a finite set. 
Then there exists a function $g$, with domain $I$, such that for any finite $W$ there exists $h \in H_W$ with $g|_W \subseteq h$. This is theorem 1 in this paper by Cowen, where he proves the equivalence with the prime ideal theorem (a simple proof using compactness for propositional logic is given by the end of the paper, and is close to what François had in mind). This is essentially also the "Consistency principle" as appearing in Jech's "The axiom of choice", pp. 17, since although Jech's formulation uses only two-valued functions, the proof he gives there, through the ultrafilter lemma, actually works when the functions are $n$-valued. Now, for an infinite-dimensional vector space $V$ of characteristic $p$, fix a nonzero $v_0 \in V$ and define the sets $H_W$ for $W \subset V$ as follows: if $W \subseteq U$ for a finite $U$, consider the set $S_{U}$ of all functionals defined on the (finite-dimensional) subspace generated by $U$, with values in $\mathbb{F}_p$, and such that $v_0 \in U \implies f(v_0)=1$. Then $H_W$ is the union of all $S_U$ for finite $U \supseteq W$. By the previous theorem, we have a function $f: V \to \mathbb{F}_p$, and the restriction property shows that $f$ is linear and $f(v_0)=1$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374148845672607, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/vision?sort=active&pagesize=50
# Tagged Questions Physical processes involved when seeing, and comparisons with other light detection systems. Includes questions about the eye, optical nerve, brain, corrective lenses, etc. 2answers 109 views ### Can I see the light from 10 million km away? If light is switched ON, only for a second, and the distance between the observer and the light source is 10 million kilometers, can I still see the light spark? For example, let's assume that the ... 2answers 47 views ### Hyperopia, Far Sightedness With hyperopia, the focal point is behind the retina, shouldn't this mean that the image is flipped on the retina itself from what is usual? I must be drawing my ray diagrams wrong. A little ... 2answers 265 views ### Why does light of high frequency appear violet? When people are asked to match monochromatic violet light with an additive mix of basic colours, they (paradoxically) mix in red. In fact, the CIE 1931 color space chromaticity diagram shows this ... 0answers 9 views ### Why are trichromat cone cells unable to sense ultraviolet and infrared radiation? [migrated] I understand that cone cells vary in the color they sense, is this because of wavelength, frequency, something else, or a combination of the previous? I also understand that tetrachromats can see an ... 3answers 184 views ### Why we see upright images? Since it's convex lens there in our eyes so image formed on our retina is inverted, so how come that we see upright images? 0answers 74 views ### What gives observable light its colors? [migrated] I know that difference between different colors of light is difference between their wave length but I don't know what gives beautiful colors (like rainbow colors) to different wave length of ... 2answers 177 views ### How photons represent colors that you see? Right now, my understanding is that, a mixture of photons of many different frequencies is perceived as white by your eye. While no photons at all, is perceived as black. And photons with the blue ... 1answer 128 views ### Can an invisible light source cast shadows? Let's assume that we have a mechanism for producing EM radiation suspended in the air, and that that mechanism itself is invisible to the naked eye (e.g. a microscopic light bulb on a microscopic wire ... 1answer 2k views ### Why can you see virtual images? In optics it is widely mentioned real images are projectable onto screens whereas virtual ones can only be seen by a person. Isn't that contradictory? I mean in order to see the virtual image it has ... 4answers 2k views ### Can someone explain the color Pink to me? I just finished watching this interesting video: http://youtu.be/S9dqJRyk0YM It does a very quick explanation of how pink light doesn't exist, and that the concept of pink is our brain's attempt at ... 4answers 228 views ### Eye sensitivity & Danger signal Why are danger signal in red, when the eye is most sensitive to yellow-green? You can check luminosity function for more details... 1answer 72 views ### The color purple in a rainbow In a rainbow the colors order is red then orange (made from red and yellow, thus making sense that it appears in between them) the yellow followed by green after which comes blue (again green formed ... 1answer 1k views ### How much sky do we see at any one moment? When we look at any particular point the sky, what percentage of the celestial sphere do we see? This question arises from the notion that on average there passes one meteor per hour overhead. So ... 
3answers 13k views ### Why does the moon sometimes appear giant and a orange red color near the horizon? I've read various ideas about why the moon looks larger on the horizon. The most reasonable one in my opinion is that it is due to how our brain calculates (perceives) distance, with objects high ... 2answers 131 views ### What is the minimum optical power detectable by human eye? If one is in complete darkness, what is the minimum optical power that the eye can "see" (let's say in 500-600 nm range). I found that for 510 nm, 90 photons can be detected ... 10answers 4k views ### Is it possible that there is a color our human eye can't see? Is it possible that there's a color that our eye couldn't see? Like all of us are color blind to it. If there is, is it possible to detect/identify it? 1answer 30 views ### Refraction and scattered light for NLCs For helping with judging NLC candidates (are they NLC or not) I have a set of formulas to calculate the minimum altitude (in km) of the candidate given an observed altitude (in degrees) of the ... 1answer 155 views ### Why both yellow and purple light could be made by a mix of red, green and blue? We see the mix of red light and green light as yellow light (#FFFF00). The wavelength of yellow light lies between red and green. But the wavelength of purple light lies outside of red and blue. ... 0answers 61 views ### How to convert a hologram into an image? Suppose one knows in full detail the phase and intensity of monochromatic light in a plane. This is basically what a hologram records, at least for some section of a plane. By using this as the ... 1answer 178 views ### How can you test what color different people perceive? [closed] If I would show someone a yellow object and ask them, "is this object yellow?" That person would say "yes". But I could never know if my perception of the color yellow is the same as that other ... 2answers 140 views ### Is Invisibility possible according to physics? Is Invisibility possible according to physics? Is there any backing theory to prove it true or false? 2answers 71 views ### Is there a good explanation for the observation of Martian canals? Martian "canals" have been observed by independent observers after their first description. Now, they are attributed to "optical illusion", but I think that this is not a good choice of word, because ... 3answers 638 views ### Are dangerous rays emitted during Solar Eclipse? It is said one should avoid staring at Sun as it can damage the eyes, but it is also said that one should not come out in sun during eclipse as it emits dangerous rays. Is that true? If yes, why? 2answers 176 views ### Blue-shifting as opposed to violet-shifting A recent XKCD comic implies that the sky is blue as opposed to violet due to human physiology, and that animals more sensitive to shorter wavelengths will perceive the Earth's sky as the shortest ... 1answer 31 views ### Vision vs. limiting magnitude Does anyone know how the acuity of your vision translates to a difference in limiting magnitude? e.g., the kind of answer I'm looking for would be "For each factor of 2 improvement in your vision ... 0answers 475 views ### Why does the moon look bigger at the horizon? [duplicate] Possible Duplicate: Why does the moon sometimes appear giant and a orange red color near the horizon? Why does the moon look bigger at horizon or skyline than at other times e.g. at ... 1answer 72 views ### Human eyes vs aberrations There are no perfect lenses in nature. 
Aberrations of some kind will always be there. But why healthy human eyes circumvent this issue? Or, are there any aberrations we don't "see?" 1answer 126 views ### How do photo mosaic work from eye and image processing perspective? [closed] Hello fellow investigators I have two question about optical illusions 1) A photo mosaic is something like this http://i.stack.imgur.com/Pzplp.png What are the optical principles behind our eye ... 2answers 444 views ### Why do green lasers appear brighter and stronger than red and blue lasers? This is mostly for my own personal illumination, and isn't directly related to any school or work projects. I just picked up a trio of laser pointers (red, green, and blue), and I notice that when I ... 4answers 1k views ### Is it only red, green and blue that can make up any color through additive mixture? I'm reading about color vision and have some trouble understanding the motivation for why the trichromatic theory was suggested in the first place. The book I'm reading ("Psychgology: The science of ... 4answers 2k views ### Why is there a difference between additive and subtractive trichromatic color theories? Helmholtz distinguished between additive and subtractive trichromatic color theories. Additive theories concern optical combinations of colored light sources and are usually modelled on RGB while ... 5answers 772 views ### Why is Light invisible? Why can't we see light? The thing which makes everything visible is itself invisible. Why is it so? 5answers 2k views ### Eyes open under water Yesterday I looked underwater with my eyes open (and no goggles) and I realized I can't see anything clearly. Everything looks very, very blurry. My guess is that the eye needs direct contact with air ... 1answer 791 views ### Purple doesn't occur in rainbow - or does it? Usually, when asked whether the purple color exists rainbows, an answer similar to this is given: The purple color is perceived by human eyes via the activation of both red-sensitive and ... 3answers 376 views ### What colour is nothing? To me this is very confusing, but I hope we can discuss it and find a solid answer to the question. If you were somewhere where there was absolutely nothing, what colour would your eyes see? 4answers 546 views ### Explanation about black color, and hence color I'm bit confused about 'black' as a color. As per my knowledge, it is not given in visible color spectra like other colors for example red, violet etc. Also I'm confused with definition of color--does ... 1answer 69 views ### What is the new distance for resolution of the images? [closed] The taillights of an automobile are $1.25\:\rm{m}$ apart. Assume the pupil of a person's eye has a diameter of $5\:\rm{mm}$ and the light has an average wavelength of $604\:\rm{mm}$. At night, on a ... 1answer 567 views ### How do we see different colours? Why do different wave lengths cause electrons to behave(?) differently, causing us to see different colors? What is happening at the quantum level which causes the colour black to absorb all of the ... 2answers 923 views ### Are regular light bulbs better for the eyes than CFLs or “tube lights”? I've heard that regular light bulbs with a filament are better for the eyes. Is the spectrum of one worse than the other? If so, are there any regulations for their use in industrial settings for ... 1answer 68 views ### How to determine the angle a camera has to be at to image a desired scene, given the camera parameters? 
Reviewing old material I learned years ago, having a hard time rewrapping my head around the relationships in capturing images from a camera. Imagine a scenario with the camera placed on the ground ... 6answers 2k views ### Limit of human eye flicker perception? I am designing a LED dimmer using software-controlled Pulse Width Modulation, and want to know the minimum PWM frequency that I must reach to make that LED dimming method indistinguishable from ... 1answer 502 views ### Effects of high frequency lighting on human vision? I have a couple of different LED flashlights. One of them has three different "modes" of brightness, and the way it controls it is via pulse width modulation (PWM). Here is a picture that illustrates ... 1answer 216 views ### How to determine the required resolution to view an image at a certain size? If I have several camera specifics, can I determine the required resolution to view an image at the size I want? For example, let's say I want a picture of a square painting to be at least ... 1answer 115 views ### How does the focal length of the lens change the distance from the camera to an object? I am reviewing some old image processing notes and have the following problem: I have a camera with a projection plane of $2cm\ by\ 2cm$ and a $15mm$ lens. If you want an $1.5m$ object to take ... 1answer 544 views ### How to determine the field of view for a pinhole camera with given parameters? I am reviewing some old material I learned years ago and am having trouble figuring this one out. Can somebody confirm that I have done my math correctly, and tell me how I can recover the field of ... 2answers 545 views ### Infrared remote flashes blue light in camera I know that if you held an infrared remote in front of a digital camera, it'll flash a blue/purplish light when you press the buttons. Why? 4answers 464 views ### Why are color values stored as Red, Green, Blue? I learned in elementary school that you could get green by mixing blue with yellow. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367917776107788, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/211336/sets-forming-orthonormal-basis/211344
# Sets forming orthonormal basis So the question is: Which of the following sets of vectors form an orthonormal basis for $\mathbb{R}^2$ $(a) \{(1,0)^T, (0,1)^T\}$ $(b) \{(\frac{3}{4},\frac{4}{5})^T,(\frac{5}{13},\frac{12}{13})^T\}$ I know that (a) is an orthonormal basis and (b) is not. The thing is, I just don't know why. I know the definition of orthonormal is if two vectors are perpendicular and of unit length. But I don't understand how to prove, or even find for that matter, an orthonormal basis of a space. Any ideas? Thanks - ## 2 Answers Two vectors $x,y \in \mathbb{R}^n$ are orthogonal if their dot product equals zero; that is $$x \cdot y = x_1 y_1 + \ldots + x_n y_n = 0,$$ where $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n)$. A vector $x$ is of unit length if $x \cdot x = 1$. A set is called orthonormal if every vector has unit length and any two different vectors are orthogonal. We also know the following: Theorem. If a set is orthonormal, then it is linearly independent. So we can just compute the dot products between all the vectors in your examples, and check whether they equal 0 when dotting two different vectors, and equal 1 when dotting a vector with itself. Then, using the theorem, you'll automatically know whether it's an orthonormal basis or not. - Yah i just figured this out at the same time, thank you for the reply. – Charlie Yabben Oct 11 '12 at 23:08 but how can you ensure that it spans the vector space?? – TheJoker Oct 11 '12 at 23:18 @TheJoker, well if you know the set is orthonormal, then it's linearly independent, so you just count how many elements there are. – Christopher A. Wong Oct 11 '12 at 23:31 Proving something to be an orthonormal basis is simple. First prove the set of vectors to be a basis (the set is linearly independent and spans the vector space, here $\mathbb R^2$). Then check the orthonormality property, i.e. the dot product of two different vectors is $0$ and each vector is of unit length.
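As a quick numerical check of the answers above (a sketch of my own, not from the thread; the function name is mine and NumPy is assumed to be available), one can form the Gram matrix of all pairwise dot products and compare it with the identity matrix:

```python
import numpy as np

def is_orthonormal(vectors, tol=1e-12):
    """A set is orthonormal iff its Gram matrix (all pairwise dot
    products) is the identity: 1 on the diagonal, 0 elsewhere."""
    V = np.array(vectors, dtype=float)
    gram = V @ V.T
    return np.allclose(gram, np.eye(len(vectors)), atol=tol)

print(is_orthonormal([(1, 0), (0, 1)]))                # True  -> set (a)
print(is_orthonormal([(3/4, 4/5), (5/13, 12/13)]))     # False -> set (b)
```

Set (b) fails on both counts: $(3/4,4/5)$ has squared length $481/400\neq 1$, and the dot product of the two vectors is $267/260\neq 0$.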
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904942512512207, "perplexity_flag": "head"}
http://www.reference.com/browse/wiki/Antenna_gain
# Antenna gain Antenna gain is defined as the ratio of the radiation intensity of an antenna in a given direction, to the intensity of the same antenna as it radiates in all directions (isotropically). Since the radiation intensity of an isotropically radiated power is equal to the power into the antenna divided by $4\pi$ (the full solid angle, in steradians), we can express the following equation: $\text{Gain} = 4\pi\left(\frac{\text{Radiation Intensity}}{\text{Antenna Input Power}}\right)$ $\text{Gain} = 4\pi\left(\frac{U(\theta,\phi)}{P_{\text{in}}}\right) \qquad \text{(dimensionless units)}.$ Although the gain of an antenna is directly related to its directivity, it is important to note that the antenna gain is a measure that takes into account the efficiency of the antenna as well as its directional capabilities. In contrast, directivity is defined as a measure that takes into account only the directional properties of the antenna and therefore it is only influenced by the antenna pattern. However, if we assume an ideal antenna without losses, then the antenna gain will equal the directivity, as the antenna efficiency factor equals 1 (100% efficiency). In practice, the gain of an antenna is always less than its directivity. $D(\theta,\phi) = 4\pi\left(\frac{U(\theta,\phi)}{P_{\text{rad}}}\right)$ $G(\theta,\phi) = \epsilon_{cd}\left(4\pi\frac{U(\theta,\phi)}{P_{\text{rad}}}\right)$ $G(\theta,\phi) = \epsilon_{cd}\,D(\theta,\phi)$ The formulas above show the relationship between antenna gain and directivity, where $\epsilon_{cd}$ is the antenna efficiency factor, $D$ the directivity of the antenna and $G$ the antenna gain. In the antenna world, we usually deal with a “relative gain” which is defined as the power gain ratio in a specific direction of the antenna, to the power gain ratio of a reference antenna in the same direction. The input power must be the same for both antennas while performing this type of measurement. The reference antenna is usually a dipole, horn or any other type of antenna whose power gain is already calculated or known. $\text{Gain} = G_{\text{ref ant}}\left(\frac{P_{\max}(\text{AUT})}{P_{\max}(\text{ref ant})}\right)$ In the case that the direction of radiation is not stated, the power gain is always calculated in the direction of maximum radiation. The maximum directivity of an actual antenna can vary from 1.76 dB for a short dipole, to as much as 50 dB for a large dish antenna. The maximum gain of a real antenna has no lower bound, and is often -10 dB or less for electrically small antennas. Taking into consideration the radiation efficiency of an antenna, we can express a relationship between the antenna’s total radiated power and the total power input as: $\text{Power Radiated} = (\text{Antenna Radiation Efficiency})\,(\text{Power Input})$ It is important to note that in the above formula, antenna radiation efficiency only includes conduction efficiency and dielectric efficiency and does not include reflection efficiency as part of the total efficiency factor. Moreover, the IEEE standards state that “gain does not include losses arising from impedance mismatches and polarization mismatches”. Antenna absolute gain is another definition for antenna gain. However, absolute gain does include the reflection or mismatch losses. $G_{\text{abs}}(\theta,\phi) = \epsilon_{\text{refl}}\,G(\theta,\phi) = (1-|\Gamma|^2)\,G(\theta,\phi) = \epsilon_{\text{refl}}\,\epsilon_{cd}\,D(\theta,\phi) = \epsilon_{\text{eff}}\,D(\theta,\phi)$ In this equation, $\epsilon_{\text{refl}}$ is the reflection efficiency, and $\epsilon_{cd}$ includes the dielectric and conduction efficiency. The term $\epsilon_{\text{eff}}$ is the total antenna efficiency factor. Taking into account polarization effects in the antenna, we can also define the partial gain of an antenna for a given polarization as that part of the radiation intensity corresponding to a given polarization divided by the total radiation intensity of an isotropic antenna. As a result of this definition for the partial gain in a given direction, we can conclude that the total gain of an antenna is the sum of partial gains for any two orthogonal polarizations. $G_{\text{total}} = G_{\theta} + G_{\phi}$ $G_{\theta} = 4\pi\left(\frac{U_\theta}{P_{\text{in}}}\right)$ $G_{\phi} = 4\pi\left(\frac{U_\phi}{P_{\text{in}}}\right)$ The terms $U_{\theta}$ and $U_{\phi}$ represent the radiation intensity in a given direction contained in their respective E field component. Commonly, the gain of an antenna is expressed in terms of decibels instead of dimensionless quantities. The formula to convert dimensionless units to dB is given below: $G_{dB} = 10\log_{10}(\epsilon_{cd}\,D_{\text{dimensionless}})$ $G_{dB} = 10\log_{10}(G_{\text{dimensionless}})$ Example calculating antenna gain: A lossless resonant half-wavelength dipole antenna, with input impedance of 80 ohms, is connected to a transmission line whose characteristic impedance is 50 ohms. Assuming that the pattern of the antenna is given approximately by $U = B_0\sin^3(\theta)$, find the maximum absolute gain of this antenna. Solution: First compute the maximum directivity of the antenna: $B_0 = U_{\max}$ $P_{\text{rad}} = \int_0^{2\pi}\int_0^{\pi}U(\theta,\phi)\sin(\theta)\,d\theta\,d\phi = 2\pi B_0\int_0^{\pi}\sin^4(\theta)\,d\theta = B_0\left(\frac{3\pi^2}{4}\right)$ $D = 4\pi\left(\frac{U_{\max}}{P_{\text{rad}}}\right) = 4\pi\left(\frac{B_0}{B_0\left(\frac{3\pi^2}{4}\right)}\right) = \frac{16}{3\pi} = 1.698$ Since the antenna is lossless, the radiation efficiency is 1. Then the maximum gain is equal to: $G = \epsilon_{cd}D = (1)(1.698) = 1.698$ $G_{dB} = 10\log_{10}(1.698) = 2.299$ Taking into account the reflection efficiency due to mismatch loss: $\epsilon_r = 1-|\Gamma|^2 = 1-\left|\frac{80-50}{80+50}\right|^2 = 0.947$ $\epsilon_{r(dB)} = 10\log_{10}(0.947) = -0.237$ Then the overall efficiency becomes: $\epsilon_{\text{total}} = \epsilon_r\epsilon_{cd} = 0.947$ $\epsilon_{\text{total}(dB)} = -0.237$ The absolute gain is calculated as: $G_{\text{absolute}} = \epsilon_{\text{total}}D = (0.947)(1.698) = 1.608$ $G_{\text{absolute}(dB)} = 10\log_{10}(1.608) = 2.063$ Antenna efficiency: The total antenna efficiency takes into account all losses in the antenna, such as reflections due to mismatch between the transmission line and the antenna, and conduction and dielectric losses. $\epsilon_{\text{total}} = \epsilon_r\epsilon_c\epsilon_d$ Where $\epsilon_{\text{total}}$ is the total efficiency of the antenna, $\epsilon_{r}$ is the efficiency due to mismatch losses, $\epsilon_{c}$ is the efficiency due to conduction losses, and $\epsilon_{d}$ is the efficiency due to dielectric losses. Usually conduction and dielectric efficiency are determined experimentally since they are very difficult to calculate. In fact, they cannot be separated when measured and therefore it is more helpful to rewrite the equation as: $\epsilon_{\text{total}} = \epsilon_r\epsilon_{cd} = (1-|\Gamma|^2)\epsilon_{cd}$ Where $\Gamma$ is the voltage reflection coefficient and $\epsilon_{cd}$ (or $\epsilon_c\epsilon_d$) is the antenna radiation efficiency, which is commonly used to relate the gain and directivity of the antenna. Antenna directivity: Directivity is defined as the ratio of the radiation intensity of an antenna in a given direction to the radiation intensity averaged over all directions. $D = 4\pi\frac{U}{P_{\text{rad}}}$ A more general expression of directivity includes sources with radiation patterns given as functions of the spherical coordinate angles $\theta$ and $\phi$. $D = \frac{4\pi}{\int_0^{2\pi}\int_0^{\pi}F(\theta,\phi)\sin(\theta)\,d\theta\,d\phi \,/\, F(\theta,\phi)|_{\max}} = \frac{4\pi}{\Omega_A}$ Where $\Omega_A$ is the beam solid angle, defined as the solid angle through which all the power would flow if the antenna radiation intensity were constant at its maximum value. In the case of antennas with one narrow major lobe and very negligible minor lobes, the beam solid angle can be approximated as the product of the half-power beamwidths in two perpendicular planes. $\Omega_A = \Theta_{1r}\Theta_{2r}$ Where $\Theta_{1r}$ is the half-power beamwidth in one plane (radians) and $\Theta_{2r}$ is the half-power beamwidth in a plane at a right angle to the other (radians). The same approximation can be used for angles given in degrees as follows: $D \approx 4\pi\frac{\left(\frac{180}{\pi}\right)^2}{\Theta_{1d}\Theta_{2d}} = \frac{41253}{\Theta_{1d}\Theta_{2d}}$ Where $\Theta_{1d}$ is the half-power beamwidth in one plane (degrees) and $\Theta_{2d}$ is the half-power beamwidth in a plane at a right angle to the other (degrees). In planar arrays, a better approximation is: $D \approx \frac{32400}{\Theta_{1d}\Theta_{2d}}$ Most of the time, it is desirable to express directivity in decibels instead of dimensionless quantities. Therefore: $D_{dB} = 10\log_{10}(D_{\text{dimensionless}})$ ## References • Antenna Theory (3rd edition), by C. Balanis, Wiley, 2005, ISBN 0-471-66782-X • Antennas for All Applications (3rd edition), by John D. Kraus and Ronald J. Marhefka, 2002, ISBN 0-07-232103-2
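The worked half-wave dipole example above is easy to verify numerically. The sketch below is my own (it assumes SciPy is available for the quadrature; variable names are mine) and recomputes the directivity, the mismatch efficiency and the absolute gain.

```python
import numpy as np
from scipy.integrate import quad

# Normalized pattern U = sin(theta)^3; the constant B0 cancels in D.
U = lambda theta: np.sin(theta) ** 3

# P_rad = integral over phi (0..2*pi) and theta (0..pi) of U(theta)*sin(theta);
# the phi integral simply contributes a factor of 2*pi.
P_rad = 2 * np.pi * quad(lambda t: U(t) * np.sin(t), 0, np.pi)[0]
D = 4 * np.pi * U(np.pi / 2) / P_rad          # the pattern peaks at theta = pi/2

Gamma = (80 - 50) / (80 + 50)                 # voltage reflection coefficient
eps_refl = 1 - abs(Gamma) ** 2                # mismatch (reflection) efficiency
G_abs = eps_refl * D                          # lossless antenna: eps_cd = 1

print(f"D      = {D:.3f}  ({10 * np.log10(D):.3f} dB)")
print(f"e_refl = {eps_refl:.3f}")
print(f"G_abs  = {G_abs:.3f}  ({10 * np.log10(G_abs):.3f} dB)")
```

The printed values reproduce the figures in the example above (directivity about 1.698, i.e. 2.3 dB, reflection efficiency about 0.947, absolute gain about 1.61, i.e. 2.06 dB) up to rounding.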
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 54, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9296680688858032, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/117604/a-non-commutative-ring-from-su2
A non-commutative ring from SU(2) $SU(2)$, which will be regarded here as the group of unit quaternions under multiplication, has 3 conjugacy classes of finite subgroups which don't have cyclic subgroups of index 1 or 2. They are: of order $24$, the binary tetrahedral group $T^{*} \cong SL(2,3)$; of order $120$, the binary icosahedral group $I^{*} \cong SL(2,5)$; and of order $48$, the binary octahedral group $O^{*}$, which doubly covers $S_{4}$ but is not isomorphic to $SL(2, \mathbb{Z}/(4))$ or $GL(2,3)$. The ring of all sums (using quaternion addition) of elements of $T^{*}$ is the Hurwitz ring of integral quaternions. The ring of all sums of elements of $I^{*}$ is the icosian ring, which was studied by Hamilton and can be identified with the $E_{8}$ root lattice. My questions are about the ring of sums of elements of $O^{*}$. I did not see references to it in SPLAG (which even comes close when discussing lattices over the subring $\mathbb{Z}[e^{\frac{2\pi i}{8}}]$, and discusses both the $E_{8}$ and Leech lattices as icosian lattices), and it's hard to look up a structure whose name I don't know. Does anyone know of a reference that explores this ring in non-trivial depth? Does this ring have a standard name? - Typos corrected: It might help to note that the binary octahedral group is isoclinic to ${\rm GL}(2,3)$, which is clear since they are both double covers of $S_{4}$, and both genuine 2-dimensional representations of these groups yield the same projective (in Schur's sense) representation of $S_{4}$. To be specific, ${\rm SL}(2,3)$ embeds the same way in each of these groups, and if we take an involution $t \in {\rm GL}(2,3) \backslash {\rm SL}(2,3)$, we may replace $t$ by $it$, and we now generate the binary octahedral group. – Geoff Robinson Dec 30 at 21:30
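For concreteness, here is a small numerical sketch of my own (not from the thread; all names are mine). It lists the 48 unit quaternions of the binary octahedral group, namely the 24 Hurwitz units together with the 24 elements of the form $(\pm e_a \pm e_b)/\sqrt{2}$ with $a<b$, and checks that the set is closed under quaternion multiplication; sums of these elements generate the ring the question asks about.

```python
import itertools
from math import sqrt

def qmul(a, b):
    """Hamilton product of quaternions written as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# 24 Hurwitz units: +-1, +-i, +-j, +-k and (+-1 +-i +-j +-k)/2.
group = []
for a in range(4):
    for s in (1.0, -1.0):
        q = [0.0] * 4
        q[a] = s
        group.append(tuple(q))
group += [tuple(s / 2 for s in signs)
          for signs in itertools.product((1.0, -1.0), repeat=4)]

# The 24 remaining elements of the binary octahedral group: (+-e_a +- e_b)/sqrt(2).
for a, b in itertools.combinations(range(4), 2):
    for sa, sb in itertools.product((1.0, -1.0), repeat=2):
        q = [0.0] * 4
        q[a], q[b] = sa / sqrt(2), sb / sqrt(2)
        group.append(tuple(q))

print(len(group))  # 48

def in_group(q, tol=1e-9):
    return any(max(abs(qi - ei) for qi, ei in zip(q, e)) < tol for e in group)

# Closure under multiplication (checked numerically, up to rounding error).
assert all(in_group(qmul(p, q)) for p in group for q in group)
print("closed under quaternion multiplication")
```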
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382244944572449, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/288738/local-max-min-critical-points-of-integral
# Local Max/Min, Critical points of integral Given $$f(t) = \int_0^t \frac{x^2+14x+45}{1+\cos^2(x)}dx$$ I need to find the local max of $f(t)$. Well, here, using the fundamental theorem of calculus, I know I can just replace the $x$ with $t$. But I do not remember how to find the local max/min, and if I remember correctly critical points were in the same context, so some insight on critical points would be good too. Thank you. - Hint: What is $f'(t)$? When is it zero? – Alexander Thumm Jan 28 at 7:33 ## 2 Answers Just find $f''(t)$ and then see the sign of $f''(t)$ at the critical points. You should get $f''(-5)>0$, which tells you $x=-5$ is a minimum, and $f''(-9)<0$, which tells you $x=-9$ is a maximum. See the second derivative test. - I occasionally use this test in the class. +1 – Babak S. Jan 28 at 8:13 @BabakSorouh: Thank you. – Mhenni Benghorbal Jan 28 at 8:26 The minimum, maximum and inflection points will be at the points in which the derivative, in your case the integrand, is equal to zero. In your case, these are simply the solutions to: $$x^2+14x+45=0$$ Namely, $x=-9, -5$. To find which is a minimum / maximum, I would just evaluate the integrand at some sample points such as $x=0,-2\pi,-3\pi$. You get that for instance: $$f'(0) = \frac{45}{2} >0$$ And that: $$f'(-2\pi) = \frac{4\pi^2-28\pi+45}{2} <0$$ This means the point $x=-5$ is a minimum, since the derivative is increasing between $-2\pi$ and $0$. A similar calculation follows for the point $x=-9$. - @BabakSorouh - actually I'm having doubts. The integrand isn't defined at $x=\pi(n-1/2)$.. – nbubis Jan 28 at 7:43 So, those points would be regarded as critical points. – Babak S. Jan 28 at 7:44 @BabakSorouh - the OP changed the question, and everything should be defined properly. – nbubis Jan 28 at 7:49 Confused: I need to find where my solution = 0? Where did $x = -7 \pm 2\sqrt{6}$ come from? – Reza M. Jan 28 at 7:50 @RezaM. - you need to find where the derivative, in this case the integrand, is zero. The solution comes from equating the integrand, and therefore the parabola, to zero. Also fixed the solution according to 45 and not 25. – nbubis Jan 28 at 7:51
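Both answers reduce to checking the sign of $f'(t)=\frac{t^2+14t+45}{1+\cos^2 t}$ around its zeros. Here is a small sketch of my own (not from the thread; names are mine) applying the first-derivative test numerically:

```python
import math

def f_prime(t):
    # By the fundamental theorem of calculus, f'(t) is the integrand at x = t.
    return (t**2 + 14*t + 45) / (1 + math.cos(t)**2)

# The denominator is always >= 1, so the critical points are the roots of
# t^2 + 14 t + 45 = (t + 9)(t + 5), i.e. t = -9 and t = -5.
for c in (-9.0, -5.0):
    left, right = f_prime(c - 0.1), f_prime(c + 0.1)
    if left > 0 > right:
        kind = "local maximum"
    elif left < 0 < right:
        kind = "local minimum"
    else:
        kind = "not an extremum"
    print(c, kind)   # -9.0: local maximum, -5.0: local minimum
```

This agrees with the second-derivative test in the first answer: a local maximum at $t=-9$ and a local minimum at $t=-5$.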
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215037226676941, "perplexity_flag": "head"}
http://cms.math.ca/Events/winter10/abs/mnd.html
2010 CMS Winter Meeting Coast Plaza Hotel and Suites, Vancouver, December 4 - 6, 2010 www.cms.math.ca//Events/winter10 Methods in Nonlinear Dynamics Org: George Patrick (Saskatchewan) and Cristina Stoica (Wilfrid Laurier) [PDF] JUAN ARTES, Universitat Autònoma de Barcelona ON COPPEL’S DREAM [PDF] In 1966, Coppel published "A Survey on Quadratic Systems", JDE. He conjectured there that "Ideally one might hope to characterize the phase portraits of quadratic systems by means of algebraic inequalities on the coefficients." This was proved false some years later when several results by Dumortier and Roussarie showed that phenomena like connections of separatrices or the birth of semistable limit cycles could not be characterized by algebraic means. Anyway, there are still many things that can be characterized by means of algebraic tools, and during the last years great advances have been achieved. We are going to present here some of these tools and results. LENNARD BAKKER, Brigham Young University Periodic SBC Orbits in the Planar Pairwise Symmetric Problem [PDF] We prove the analytic existence of a symmetric periodic simultaneous binary collision orbit in a regularized planar pairwise symmetric equal mass four-body problem. We provide some analytic and numerical evidence for this periodic orbit to be linearly stable. We then use a continuation method to numerically find symmetric periodic simultaneous binary collision orbits in a regularized planar pairwise symmetric 1, m, 1, m four-body problem for $m$ between $0$ and $1$. We numerically investigate the linear stability of these periodic orbits through long-term integration of the regularized equations, showing that linear stability occurs when $0.538\leq m\leq 1$, and instability occurs when $0<m\leq0.537$ with spectral stability for $m\approx 0.537$. PIETRO LUCIANO BUONO, University of Ontario Institute of Technology Symmetry-breaking bifurcations of the Hip-Hop orbit [PDF] In this talk, I will present recent results on the classification of symmetry-breaking bifurcations of the reduced Hip-Hop orbit of the 4-body problem obtained by Chenciner and Venturelli (2000). These are obtained by using results of Lamb, Melbourne and Wulff on bifurcations from discrete rotating waves with time-reversing symmetries and by looking at Maslov-type indices of symplectic matrices in $\mbox{Sp}(4)$. Minimization properties of the bifurcating solutions will also be discussed. Numerical Poincaré maps are also computed and show the sequence of bifurcations as the energy is varied. This is joint work with Mitchell Kovacic (B.Sc., UOIT). FLORIN DIACU, University of Victoria Singularities of the curved n-body problem [PDF] For the n-body problem in spaces of non-zero constant curvature k, we analyze the singularities of the equations of motion and several types of singular solutions. We show that, for k larger than 0, the equations of motion encounter non-collision singularities, which occur when two or more bodies are antipodal. This conclusion leads, on one hand, to hybrid solution singularities for as few as 3 bodies, whose orbits end up in a collision-antipodal configuration in finite time; on the other hand, it produces non-singularity collisions, characterized by finite velocities and forces at the collision instant. RAZVAN FETECAU, Dept. of Mathematics, Simon Fraser University Nonlocal PDE models for self-organization of biological groups [PDF] We introduce and study two new PDE models for the formation and movement of animal aggregations. 
The models extend the one-dimensional hyperbolic model from Eftimie et al., Bull. Math. Biol. 69 (5) [2007]. Their main novel approach concerns the turning rates of individuals, which are assumed to depend in a nonlocal fashion on the population density. Our first model assumes in addition that the nonlocal interactions between individuals can also influence the speed of the group members. We investigate the local/global existence and uniqueness of solutions and we illustrate numerically the various patterns displayed by the model: dispersive aggregations, finite-size groups and blow-up patterns. The second model extends the approach from Eftimie et al. [2007] to two dimensions. We show that the resulting integro-differential kinetic equation with nonlocal terms has a unique classical solution, globally in time. We also present numerical results to illustrate various types of group formations that we obtained with the two-dimensional model, starting from random initial conditions: (i) swarms (aggregation into a group, with no preferred direction of motion), and (ii) parallel/translational motion in a certain preferred direction with (a) uniform spatial density and (b) aggregation into groups. JUAN LUIS GARCÍA GUIRAO, Universidad Politécnica de Cartagena, Spain $\mathcal{C}^{1}$ self-maps with all their periodic orbits hyperbolic [PDF] The aim of this talk is to study in its homological class the periodic structure of the $\mathcal{C}^{1}$ self-maps on the manifolds $\mathbb{S}^{n}$ (the $n$-dimensional sphere), $\mathbb{S}^{n}\times \mathbb{S}^{m}$ (the product space of the $n$-dimensional with the $m$-dimensional spheres), $\mathbb{C}P^{n}$ (the $n$-dimensional complex projective space) and $\mathbb{H}P^{n}$ (the $n$-dimensional quaternion projective space), having all their periodic orbits hyperbolic. This is a joint work with Professor Jaume Llibre from Universidad Autónoma de Barcelona in Spain. ANTONIO HERNANDEZ-GARDUNO, Universidad Autónoma Metropolitana (Iztapalapa) Bifurcation and stability of Lagrangian relative equilibria in a generalized three-body problem [PDF] Consider three bodies, two of them point masses and the third a spheroid symmetric with respect to its rotational axis, which is perpendicular to the plane of the centers of mass. In this talk we will describe the Lagrangian relative equilibria that arise as we let the characteristic flatness of the spheroid deviate from zero. We will discuss the induced pitchfork bifurcation and the linear stability analysis based on the reduced energy momentum method. (This work is a joint collaboration with C. Stoica, WLU.) EDUARDO GOES LEANDRO, Universidade Federal de Pernambuco Applications of Group Theory to the Linear Stability Analysis of Relative Equilibria [PDF] The theory of representations of finite groups has been successfully applied to solve problems in Chemistry and Physics where symmetries are present. A particularly interesting application involves symmetric equilibria, i.e., equilibria possessing a nontrivial group of symmetries, and equivariant linear mappings arising from a physical problem where such symmetric equilibria appear. We present an overview of the theory followed by applications to the stability problem of relative equilibria in Celestial Mechanics. 
JAUME LLIBRE, Universitat Autònoma de Barcelona On the integrability of surface vector fields and of polynomial vector fields in $\mathbb{R}^n$ or $\mathbb{C}^n$ [PDF] The talk will be a survey of some recent results about the integrability of differential equations or vector fields. We will put special emphasis, first on the integrability of vector fields on surfaces, and second on the Darboux theory of integrability for polynomial differential equations in $\mathbb{R}^n$ or $\mathbb{C}^n$. Finally we shall present some applications of the integrability. Automatically generated variational integrators [PDF] Many fundamental physical systems have variational formulations, such as mechanical systems in their Lagrangian formulation. Discretization of the variational principles leads to (implicit) symplectic and momentum-preserving one-step integration methods. However, such methods can be very complicated. I will describe some advances in the basic theory of variational integrators, and a software system called AUDELSI, which converts any ordinary one-step method into a variational integrator of the same order. MANUELE SANTOPRETE, Wilfrid Laurier University On the topology of the double spherical pendulum [PDF] In this talk we will describe the topology of the level sets of the energy of the double spherical pendulum, and we will discuss its dynamical consequences. This seems to be a first step toward describing the topology of the common level sets of the integrals of motion (i.e. the integral manifolds). The study of the integral manifolds is very important since a crude, but important, invariant of the orbits of a dynamical system is given by the topological type of the integral manifolds on which they lie. TANYA SCHMAH, University of Toronto Diffeomorphic Image Matching [PDF] We consider the problem of diffeomorphically deforming an image or shape to approximately match another given image or shape. This and related problems arise frequently in medical imaging. We give an overview of a family of approaches involving geodesic flow in diffeomorphism groups, momentum maps and stochastic processes. SEBASTIAN WALCHER, Lehrstuhl A Mathematik, RWTH Aachen Quasi-Steady State and Tikhonov's Theorem [PDF] The quasi-steady state assumption is frequently used in the analysis of differential equations for reacting systems in (bio-) chemistry, to reduce the dimension of the problem. As it turns out, the ad-hoc reduction method can be properly cast (and modified) in the framework of Tikhonov's and Fenichel's classical theorems in singular perturbation theory. Remarkably, this input of more theory yields reduced differential equations with a simpler appearance: in contrast to the ad-hoc method, the reduced differential equations will always have a rational right-hand side. Some relevant examples are discussed.
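To make the variational-integrator abstract above concrete, here is a toy sketch of my own (an illustration of the general idea, not the AUDELSI system mentioned in the talk): the Störmer–Verlet method arises as the variational integrator of the trapezoidal discrete Lagrangian $L_d(q_k,q_{k+1}) = h\left[\tfrac12\left(\tfrac{q_{k+1}-q_k}{h}\right)^2 - \tfrac12\bigl(V(q_k)+V(q_{k+1})\bigr)\right]$, and, being symplectic, it keeps the energy error bounded over long integrations.

```python
import math

def stormer_verlet(q0, p0, h, steps, dVdq):
    """One-step variational integrator obtained from the trapezoidal
    discrete Lagrangian via the discrete Euler-Lagrange equations (mass m = 1)."""
    q, p = q0, p0
    trajectory = [(q, p)]
    for _ in range(steps):
        p_half = p - 0.5 * h * dVdq(q)   # half kick
        q = q + h * p_half               # drift
        p = p_half - 0.5 * h * dVdq(q)   # half kick
        trajectory.append((q, p))
    return trajectory

# Planar pendulum, V(q) = -cos(q): the energy oscillates but shows no drift.
traj = stormer_verlet(q0=1.0, p0=0.0, h=0.1, steps=5000, dVdq=math.sin)
energy = [0.5 * p * p - math.cos(q) for q, p in traj]
print(max(energy) - min(energy))   # small and bounded over the whole run
```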
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8844590187072754, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/55204/what-are-the-cases-of-not-using-countable-induction
# What are the cases of not using (countable) induction? In "A countably infinite union of countably infinite sets is countable" the proof has been given, but when, as a student, I attempted the question, I tried using induction (later I found it to be the wrong way of going about that problem), and the reasoning was quite simple. I. The union of 1 or 2 countable sets is countable (obvious). II. Suppose I is true up to some n. III. For the case n+1 we take the union of all the sets up to n, which is countable by II, so we are back to the case of a union of two countable sets, which is true. I couldn't see any flaws with the above, but it was wrong; I think it was due to the fact that proving something holds for all finite n is not the same as proving it for the infinite case. My question is, why is proving something for all finite n (after all, any number that can be picked is finite) not the same as proving the infinite case? Is there any other example of induction failing for the infinite case? Another question is: is an uncountable (infinite, of course :) union of countable (finite or infinite) sets countable? (The wrong induction method I used can't even be used for this one.) Thank you. PS: Modified the title as it was suggestive of induction failing, whereas the case is that it is being misused. - 4 For every $n$, a set with exactly $n$ elements is finite. However you're generalizing to infinite cases, this is going to fail. – Chris Eagle Aug 2 '11 at 21:31 2 On your last question: every set is the union of its singleton subsets. So every uncountable set is an uncountable union of one-element sets. – Chris Eagle Aug 2 '11 at 21:36 ## 3 Answers Countable induction doesn't fail; it just has an extra hypothesis that you aren't checking. Here's what the principle of induction looks like in general. Suppose $P_{\alpha}$ is a collection of statements indexed by ordinals $\alpha$. Further suppose that the following condition holds: if $P_{\alpha}$ is true for all $\alpha < \beta$, then $P_{\beta}$ is true. Finally, suppose that $P_0$ is true. Then $P_{\alpha}$ is true for all $\alpha$. (Proof: suppose otherwise. Then there is a least ordinal $\beta$ such that $P_{\beta}$ isn't true. But $P_{\alpha}$ is true for all $\alpha < \beta$ by minimality, and this contradicts the condition.) If instead the collection is indexed by the set of all ordinals less than $\beta$, then we can conclude that $P_{\alpha}$ is true for all $\alpha < \beta$. But the details of the inductive step are important. For $\beta = \omega + 1$, the inductive step requires that $P_1, P_2, ...$ being true imply that $P_{\omega}$ is true, and this condition doesn't hold in general. (A basic counterexample, as Chris says, is the collection of statements $P_n : n \text{ is finite}$.) - Don't we need a well-ordering property to guarantee that the set of propositions {$P_{\beta}$} has a least element? – gary Aug 3 '11 at 0:42 In what sense do we not have such a property? – Qiaochu Yuan Aug 3 '11 at 0:43 Never mind; I realized that ordinals are the order types of well-ordered sets. – gary Aug 3 '11 at 0:53 First, it's simply not true that any number that can be picked is finite. For example, $\omega$, the smallest infinite cardinal (and ordinal) number, is clearly not finite, and I just picked it. There is no reason to think that something that is true of every non-negative integer $n$ is also true of $1/2$: after all, $1/2$ isn't a non-negative integer.
Similarly, there is no reason to think that something that is true of every non-negative integer $n$ is also true of $\omega$: $\omega$ isn’t a non-negative integer. In particular, the fact that the union of $n$ countable sets is countable provides no guarantee that the union of $\omega$ countable sets is countable. Your induction argument doesn’t provide that guarantee either, because it doesn’t contain any reasoning to bridge the gap between finite and infinite unions. It may be easier to see this with an example in which the gap cannot be bridged. Let $P(n)$ be the statement every set of cardinality n has a finite number of subsets. Clearly $P(0)$ is true, since the empty set has only one subset. It’s also easy to see that $P(n) \Rightarrow P(n+1)$: If $S$ is a set of $n+1$ objects, pick one, $s_0$, and let $S' = S \setminus \{s_0\}$. $S'$ has only $n$ elements, so it has a finite number of subsets, say $A_1,\dots,A_m$. But then the subsets of $S$ are the $2m$ sets $$A_1,A_1\cup\{s_0\},A_2,A_2\cup\{s_0\}\dots,A_m,A_m\cup\{s_0\},$$ so $S$ also has a finite number of subsets. By induction, therefore, every finite set has a finite number of subsets. But you wouldn’t be tempted to conclude from this that a countably infinite set -- $\mathbb{N}$, for instance -- has a finite number of subsets: nothing in the argument makes the jump from finite to infinite, and besides, it’s obvious that an infinite set must have an infinite number of subsets. To answer your last question, an uncountable union of pairwise disjoint non-empty sets is always uncountable, even if each of the sets has only one element. - Maybe this will help. For each positive integer $n$, let $P_n$ be a "mathematically meaningful" statement that can be true or false. Then mathematical induction provides a means to prove each of these statements is true, but mathematical induction does not say anything about "statement $P_{\infty}$". In fact, even if $P_{\infty}$ seems meaningful in some sense (and often it isn't), mathematical induction doesn't see statement $P_{\infty}$. For any experts reading, I'm of course talking about ordinary school mathematical induction, not transfinite induction or still more general notions (e.g. "Elementary Induction on Abstract Structures" by Moschovakis). Also, notice what happens with your I, II, and III when "countable" is changed to "finite". The union of two finite sets is finite, if the union of $n$ finite sets is finite then the union of $n+1$ finite sets is finite, so the union of countably many finite sets is finite ... oops! Finally, an uncountable union of countable sets can be countable or uncountable, depending on how much overlap there is among the sets. Each uncountable set is the uncountable union of its singleton (1-element) subsets, and the union of $A_x$ over all real numbers $x$, where $A_x$ consists of the rational numbers belonging to $(-\infty,x)$, is the set of rational numbers and hence countable. - As you said, induction only guarantees that P(n) is true _for all n in $\mathbb N$ , and $\infty$ is not in $\mathbb N$. Maybe the set of almost-disjoint sequences would be a good example of uncountable union of countables being countable. – gary Aug 3 '11 at 1:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 59, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547257423400879, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/216779/prove-that-the-cartesian-coordinates-of-a-rayleigh-distribution-are-independent
# Prove that the Cartesian coordinates of a Rayleigh distribution are independent I'm stuck on a question with a weird twist on the usual Rayleigh distribution. Instead of assuming that the components of the distribution are independent, we are given alternative conditions and asked to show independence. The setup is as follows: • Let $R \geq 0$ be a random variable with density $f_R(r) = re^{-\frac{r^2}{2}}$ • Let $\Theta$ be uniformly distributed on $[-\pi,\pi]$ so that $f_\Theta(\theta) = \frac{1}{2\pi}$ • Show that the variables $X = R\cos(\Theta)$ and $Y = R\sin(\Theta)$ are independent and have density $f_X(t) = f_Y(t) = \frac{e^{-\frac{t^2}{2}}}{\sqrt{2\pi}}$ My current approach is to show that $f_{XY}(x,y) = f_X(x)f_Y(y)$. I have used the multivariate change of density formula to obtain the correct distribution for $f_{XY}(x,y)$, which is $\frac{1}{2\pi}e^{-\frac{x^2+y^2}{2}}$. However I cannot figure out how to obtain expressions for $f_X(x)$ and $f_Y(y)$. Please note: the question is actually designed to show that $\int_{-\infty}^\infty e^{-\frac{t^2}{2}}\,dt = \sqrt{2\pi}$, so we cannot assume that this is true. This effectively means that we cannot integrate $f_{XY}$ over the range of $Y$ to get $f_X$. - ## 1 Answer All you need is that the joint density that you've calculated correctly (except for a typo in the exponent) factorizes. That's the definition of independence. In the present case you can argue from symmetry that the factor $\frac1{2\pi}$ has to be split between $x$ and $y$, so the marginal densities must be $\frac1{\sqrt{2\pi}}\mathrm e^{-x^2/2}$ and $\frac1{\sqrt{2\pi}}\mathrm e^{-y^2/2}$, but even if you couldn't, you could still argue that the joint density factorizes even without knowing how to split the normalization constant between the two factors. - Thanks for this! I am wondering: does this mean that a pair of random variables $X$ and $Y$ are independent if I can express their joint PDF as the product of two functions that only depend on $X$ and $Y$? – Elements Oct 19 '12 at 17:30 @Elements: Yes, that's what it means. If you have $p(x,y)=f(x)g(y)$, then integrating shows that the marginal distributions are $p(x)=f(x)G$ and $p(y)=g(y)F$, where $F$ and $G$ are the integrals of $f$ and $g$, respectively; so $p(x,y)=p(x)p(y)/(GF)$, but $GF=1$ by normalization, so $p(x,y)=p(x)p(y)$. – joriki Oct 19 '12 at 19:48
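As a quick numerical sanity check of the claim (not a proof, and not part of the original question; the sampling recipe and variable names below are my own), one can sample $R$ by inverting its CDF $F_R(r)=1-e^{-r^2/2}$ and verify that the resulting $X$ and $Y$ look like uncorrelated standard normals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Inverse-CDF sampling for R: F_R(r) = 1 - exp(-r^2/2), so R = sqrt(-2 ln(1 - U)).
u = rng.uniform(size=n)
r = np.sqrt(-2.0 * np.log(1.0 - u))          # using 1 - u avoids log(0)
theta = rng.uniform(-np.pi, np.pi, size=n)

x = r * np.cos(theta)
y = r * np.sin(theta)

# Both marginals should have mean ~0 and variance ~1, and X, Y should be uncorrelated.
print(x.mean(), x.var(), y.mean(), y.var())
print(np.corrcoef(x, y)[0, 1])
```

Of course, zero correlation is weaker than independence; the actual argument is the factorization of the joint density discussed in the answer.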
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524642825126648, "perplexity_flag": "head"}
http://mathoverflow.net/questions/27334?sort=newest
## Existence of weak limits Background: Three months ago, I asked this question, which is a bit related to the following (if the answer to it was Yes, then the answer to this one would be Yes too, but since that was a No, it still may be that this one has a positive answer). Question: does existence of the RHS imply existence of the LHS in this formula: $$w.\lim_{\lambda\to\lambda_{0}}f_\lambda(t)=w.\lim_{\Delta t\to 0}\frac{1}{2\Delta t}\left(w.\lim_{\lambda\to\lambda_{0}}\int_{t-\Delta t}^{t+\Delta t}f_\lambda(\tau)d\tau\right)$$ where $w.\lim$ is meant to denote the limit in the Schwartz distribution space (i.e. the weak star limit on the test function space), and $f_\lambda$ are generally distributions from the Schwartz space, parametrized with a real number $\lambda$ - a family of Schwartz distributions. In case the claim does not hold for all Schwartz distributions, it would be interesting to know whether it holds for locally integrable functions. Proof that the existence of the LHS implies existence of the RHS is simple, since the integral of a test function is again a test function. EDIT: As Pietro noted, the integral represents the convolution of f with the characteristic function of an interval which is shrinking - so I believe the question can be reformulated in the spirit of my previous question, which was It is provable that $f_\lambda\to f\Rightarrow f_\lambda*g\to f*g$ if $g$ has a compact support (shown in my textbook). In my particular case, $g=u(t+\Delta t)-u(t-\Delta t)$. Does, for that particular case, $f_\lambda*g\to f*g\Rightarrow f_\lambda\to f$? but now, we say that $\Delta t$ is not fixed (like in that previous question). Is that right? Dec. 2011. EDIT: After a while, I returned to this question and felt the need to explain it a bit more. Zen Harper's answer shows that the RHS is well-defined, but it doesn't seem to answer the question whether it's possible to have a family $f_\lambda$ which doesn't have a limit for $\lambda\to\lambda_0$ (i.e. the LHS doesn't exist) but for which the iterated limit on the right has a value, i.e. the RHS exists. It appears to me it would be enough to show that if the inner limit on the right exists for all $\Delta t$ in some neighbourhood of $0$, then the LHS limit exists. Would it be enough? - Where do those $f_\lambda$ live, and what is the dependence on $\lambda$? – Pietro Majer Jun 7 2010 at 10:49 $f_\lambda$ are generally distributions from the Schwartz space, parametrized with a real number $\lambda$ - a family of Schwartz distributions. In case the claim does not hold for all Schwartz distributions, it would be interesting to know whether it holds for locally integrable functions. – Harun Šiljak Jun 7 2010 at 11:01 1 The integral in the RHS doesn't make sense for general distributions so the assumption that the $f_\lambda$ are locally integrable should definitely be added. – Johannes Hahn Jun 7 2010 at 12:39 1 @Johannes Hahn: why do we need local integrability and cannot just define the integral in the standard way on test functions? $$\left\langle\phi,\int_{t-\Delta t}^{t+\Delta t}f(\tau)d\tau\right\rangle=\left\langle\int_{t-\Delta t}^{t+\Delta t}\phi(\tau)d\tau,f\right\rangle$$ – Andrey Rekalo Jun 7 2010 at 13:00 ok, it's the convolution of f with the characteristic function of the interval [-h,h]. (I'd write h or so in place of \Delta t) – Pietro Majer Jun 7 2010 at 15:33 ## 1 Answer For functions, the answer is NO, i.e.
the limit need not be a function. Let $I_h=(2h)^{-1} \chi_{[-h,h]}$, so that $I_h$ approximates the Dirac delta as $h \to 0^+$. You are considering $I_h * f_\lambda$. A digression: similar things are common; e.g. $P_h(y) = \frac{1}{\pi} \frac{h}{h^2 + y^2}$ is the Poisson kernel, related to harmonic functions; $T_t(x) = \frac{1}{\sqrt{4 \pi t}}\exp(-x^2/4t)$ is the heat kernel, related to the heat equation. So questions like this have many connections with PDEs (fundamental solutions, etc.) I would guess that the specific functions $I_h$ don't make much difference here. Anyway: choose $f_\lambda$ so that $\widehat{f_\lambda}(x) = \exp(- \lambda x^2)$ where $\widehat{f}$ denotes Fourier transform; so $f_\lambda$ is a Gaussian. Then, for each fixed test function $\varphi$, $$\langle I_h * f_\lambda, \varphi \rangle = \langle \widehat{I_h} \exp(- \lambda x^2), \widehat{\varphi} \rangle \to \langle \widehat{I_h}, \widehat{\varphi} \rangle = \langle I_h, \varphi \rangle$$ as $\lambda \to 0^+$, depending on the constants in your Fourier transform and scalar product. Thus it is clear that $\langle I_h * f_\lambda, \varphi \rangle \to \langle I_h, \varphi \rangle$. But now $\langle I_h, \varphi \rangle \to \varphi(0)$ as $h \to 0^+$, so $I_h \to \delta_0$ in the weak sense. So in this example, the RHS and LHS limits both exist; but the weak limit is $\delta_0$, the Dirac delta, not a function. Actually this is not surprising, since functions (or just test functions) are dense in the space of all distributions, so you wouldn't expect distributional limits of functions to remain as functions. For general distributions: assume that the inner limit of the RHS exists; so we can define a linear functional $U_h$ on test functions by $$\langle U_h, \varphi \rangle = \lim_{\lambda \to \lambda_0} \langle I_h * f_\lambda, \varphi \rangle.$$ $U_h$ is well-defined by assumption. Now we can define another linear functional $F$ by $$\langle F, \varphi \rangle = \lim_{h \to 0^+} \langle U_h, \varphi \rangle$$ Again this is well-defined by assumption. So $$F = w.\lim_h U_h = w.\lim_h \left( w.\lim_\lambda I_h * f_\lambda \right)$$ So, the answer to your question would be yes, if we knew that $U_h, F$ were distributions. So your question becomes: when can we show that a linear functional is a distribution? I might be wrong, but isn't it consistent with ZF to assume that every linear functional defined on all test functions is a distribution? (But please correct me if I'm wrong; can anyone provide a reference for this result?) So you'll never be able to write down a counterexample without the Axiom of Choice or something similar. So, for all practical purposes, the answer should be yes. (I myself do not believe full AC, so I would accept this as an answer, except that I haven't been able to find a reference for the ZF consistency result. But if you believe AC, then this probably won't satisfy you, sorry; we need some kind of horrible AC black magic.) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934813380241394, "perplexity_flag": "head"}
http://mathoverflow.net/questions/17385?sort=votes
## Is there a bipartite analog of graph theory? I would like to compile a list of questions about graphs that have non-trivial analogs for bipartite graphs. Let me give the following examples: 1. Cycle vs Even cycle. Most questions about cycles in graphs have analogs in even cycles for bipartite graphs. For instance, it is trivial to show that a bipartite graph on an odd number of vertices cannot have a Hamilton cycle. In such a case the bipartite analog of a Hamilton cycle is a cycle missing exactly one vertex. 2. Minimal Girth. For graphs, 3 is the minimal possible length of a cycle. For bipartite graphs, the analogous number is 4. 3. Triangular vs quadrangular embeddings. In topological graph theory, a triangular embedding of a simple graph determines its genus. For bipartite graphs, the analog is an embedding with quadrangles as faces. - I think there are various interesting examples. I cannot think of them off the top of my head but I plan to return to it. Good question! (I didn't see the point of item 2 though.) – Gil Kalai Mar 8 2010 at 22:23 ## 3 Answers There's a standard combinatorial equivalence between undirected bipartite graphs and general directed graphs: just use the biadjacency matrix of the bipartite graph as an adjacency matrix of the directed graph and vice versa. See: R. A. Brualdi, F. Harary, and Z. Miller (1980), "Bigraphs versus digraphs via matrices", Journal of Graph Theory 4 (1): 51–73, doi:10.1002/jgt.3190040107, MR558453. But then e.g. a cycle in the bipartite graph turns into a cycle with alternating edge orientations in the directed graph, not exactly what you probably want. - 1 This seems to be related to the Kronecker cover (canonical double cover) of undirected graphs. – Tomaž Pisanski Mar 8 2010 at 9:57 1 I guess the Kronecker cover is what you get if you make an undirected graph directed (by turning each undirected edge into two directed edges) and then looking at the equivalent bipartite graph. But of course the directed-bipartite equivalence works even when the directed graph has edges only in one direction between some pairs of vertices. – David Eppstein Mar 14 2010 at 5:26 Some classical theorems involving complete graphs have analogues involving complete bipartite graphs. For example, the complete graph $K_n$ has $n^{n-2}$ spanning trees, while the complete bipartite graph $K_{m,n}$ has $n^{m-1} m^{n-1}$ spanning trees. Finding the largest complete subgraph of a graph is a standard NP-hard problem, and finding the largest (in terms of number of edges) complete bipartite subgraph of a bipartite graph is also an NP-hard problem. But possibly the area of graph theory that has most benefited from the analogy between general graphs and bipartite graphs is matching theory. Matching theory tends to be easier in the bipartite case, but the bipartite case often gives us clues for the non-bipartite case. For example, finding a maximum matching in a bipartite graph is solvable in polynomial time, but nontrivially so, and this leads us to look for a polytime algorithm for maximum matching in a non-bipartite graph (which does exist, but is more complicated than in the bipartite case).
Lovász and Plummer's Matching Theory, now back in print thanks to the American Mathematical Society, gives an excellent account of the interplay between bipartite and non-bipartite matching. - The Degree-Diameter Problem vs. the Degree-Diameter Problem for Bipartite Graphs. Given natural numbers $d$ and $k$, find the largest possible number $N_b(d,k)$ of vertices in a bipartite graph of maximum degree $d$ and diameter $k$. -
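The spanning-tree count quoted above (the complete bipartite graph $K_{m,n}$ has $n^{m-1} m^{n-1}$ spanning trees) is easy to check for small cases with Kirchhoff's matrix-tree theorem. The short sketch below is my own illustration and not part of any answer in the thread:

```python
import numpy as np
from itertools import product

def spanning_trees(adj):
    # Kirchhoff's matrix-tree theorem: any cofactor of the graph Laplacian
    # equals the number of spanning trees.
    lap = np.diag(adj.sum(axis=1)) - adj
    return round(np.linalg.det(lap[1:, 1:]))

def complete_bipartite(m, n):
    adj = np.zeros((m + n, m + n))
    adj[:m, m:] = 1.0
    adj[m:, :m] = 1.0
    return adj

for m, n in product(range(1, 5), repeat=2):
    assert spanning_trees(complete_bipartite(m, n)) == n**(m - 1) * m**(n - 1)
print("n^(m-1) * m^(n-1) matches the matrix-tree count for all m, n up to 4")
```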
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9009833335876465, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/157751-sequence-limit.html
# Thread: 1. ## Sequence and limit Let $x_n$ be a sequence of real numbers satisfying $x_{n+1}\leq x_n$ for $n=1,2,\dots$ Assume there is a constant $c$ such that $x_n>c-1/n$. Show that $l=\lim(x_n)$ exists and $l>c$ 2. Originally Posted by ashamrock415 Let $x_n$ be a sequence of real numbers satisfying $x_{n+1}\leq x_n$ for $n=1,2,\dots$ Assume there is a constant $c$ such that $x_n>c-1/n$. Show that $l=\lim(x_n)$ exists and $l>c$ $\{x_n\}$ is a monotone decreasing sequence, and as $c-\frac{1}{n}\geq c-1$, we get that this sequence is bounded from below and thus $\lim\limits_{n\to\infty}x_n$ exists. All that is left is to show that $c$ is actually this sequence's infimum... Tonio
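For completeness, here is the remaining step as a short LaTeX sketch (my own addition, not part of the thread). Note that from the stated hypotheses one can only conclude $l\geq c$ in general: the decreasing sequence $x_n=c+1/n$ satisfies them and converges to exactly $c$, so the strict inequality in the problem statement should presumably be read as $\geq$.

```latex
% Passing to the limit in the inequality x_n > c - 1/n:
\[
  l \;=\; \lim_{n\to\infty} x_n \;\geq\; \lim_{n\to\infty}\left(c - \frac{1}{n}\right) \;=\; c .
\]
```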
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260182976722717, "perplexity_flag": "middle"}
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevB.78.035104
# Synopsis: Fractals and quantum criticality #### Fermionic quantum criticality and the fractal nodal surface Frank Krüger and Jan Zaanen Published July 3, 2008 Unlike classical transitions, which involve thermal fluctuations, quantum phase transitions are driven solely by quantum fluctuations and occur when a parameter, such as density or magnetic field, is varied at zero temperature. Signatures of this zero-temperature transition—quantum critical behavior—can be seen even at finite temperature. While it is possible to describe quantum critical states of bosons using Monte Carlo methods, a similar approach for fermions is a major challenge. The reason is the famous minus sign that appears in front of the wave function for fermions when the fermions are interchanged. As a result, the amplitudes of these wave functions cannot be interpreted as probabilities—a problem that worsens with decreasing temperature. Writing in Physical Review B, Frank Krüger and Jan Zaanen of the University of Leiden in the Netherlands address the physics of fermionic quantum criticality by applying a path-integral formalism that encodes fermionic statistics as a geometric constraint on a bosonic system. How does a bosonic system store information about the Fermi energy—a quantity that applies uniquely to fermions? The geometric constraint takes the form of a nodal hypersurface that confines particles to pockets whose size is related to the Fermi energy. At a zero-temperature critical point, the size of these pockets vanishes, which, for instance, makes the quasiparticles in a heavy-fermion system infinitely massive, and gives rise to scale invariance—a signature of fractal behavior—of the nodal surface. The authors show how this fractal behavior would influence the thermodynamic response functions of liquid $^{3}$He, a fermionic fluid. The results could potentially be applied to understanding quantum critical behavior in heavy-fermion metals and high-temperature superconductors. – Sarma Kancharla
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8618772029876709, "perplexity_flag": "middle"}
http://medlibrary.org/medwiki/Delaunay_triangulation
# Delaunay triangulation A Delaunay triangulation in the plane with circumcircles shown In mathematics and computational geometry, a Delaunay triangulation for a set P of points in a plane is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P). Delaunay triangulations maximize the minimum angle of all the angles of the triangles in the triangulation; they tend to avoid skinny triangles. The triangulation is named after Boris Delaunay for his work on this topic from 1934.[1] For a set of points on the same line there is no Delaunay triangulation (the notion of triangulation is degenerate for this case). For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split the quadrangle into two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors. By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible to metrics other than Euclidean. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique. ## Relationship with the Voronoi diagram The Delaunay triangulation of a discrete point set P in general position corresponds to the dual graph of the Voronoi tessellation for P. Special cases include the existence of three points on a line and four points on a circle. • The Delaunay triangulation with all the circumcircles and their centers (in red). • Connecting the centers of the circumcircles produces the Voronoi diagram (in red). ## d-dimensional Delaunay For a set P of points in the (d-dimensional) Euclidean space, a Delaunay triangulation is a triangulation DT(P) such that no point in P is inside the circum-hypersphere of any simplex in DT(P). It is known[2] that there exists a unique Delaunay triangulation for P, if P is a set of points in general position; that is, there exists no k-flat containing k + 2 points nor a k-sphere containing k + 3 points, for 1 ≤ k ≤ d − 1 (e.g., for a set of points in ℝ³: no three points are on a line, no four on a plane, no four are on a circle, and no five on a sphere). The problem of finding the Delaunay triangulation of a set of points in d-dimensional Euclidean space can be converted to the problem of finding the convex hull of a set of points in (d + 1)-dimensional space, by giving each point p an extra coordinate equal to |p|², taking the bottom side of the convex hull, and mapping back to d-dimensional space by deleting the last coordinate. As the convex hull is unique, so is the triangulation, assuming all facets of the convex hull are simplices. Nonsimplicial facets only occur when d + 2 of the original points lie on the same d-hypersphere, i.e., the points are not in general position. ## Properties Let n be the number of points and d the number of dimensions. • The union of all simplices in the triangulation is the convex hull of the points.
• The Delaunay triangulation contains $O(n^{\lceil d/2 \rceil})$ simplices.[3] • In the plane (d = 2), if there are b vertices on the convex hull, then any triangulation of the points has at most 2n − 2 − b triangles, plus one exterior face (see Euler characteristic). • In the plane, each vertex has on average six surrounding triangles. • In the plane, the Delaunay triangulation maximizes the minimum angle. Compared to any other triangulation of the points, the smallest angle in the Delaunay triangulation is at least as large as the smallest angle in any other. However, the Delaunay triangulation does not necessarily minimize the maximum angle. • A circle circumscribing any Delaunay triangle does not contain any other input points in its interior. • If a circle passing through two of the input points doesn't contain any other of them in its interior, then the segment connecting the two points is an edge of a Delaunay triangulation of the given points. • Each triangle of the Delaunay triangulation of a set of points in d-dimensional space corresponds to a facet of the convex hull of the projection of the points onto a (d + 1)-dimensional paraboloid, and vice versa. • The closest neighbor b to any point p is on an edge bp in the Delaunay triangulation since the nearest neighbor graph is a subgraph of the Delaunay triangulation. • The Delaunay triangulation is a geometric spanner: the shortest path between two vertices, along Delaunay edges, is known to be no longer than $\frac{4\pi}{3\sqrt{3}} \approx 2.418$ times the Euclidean distance between them. ## Visual Delaunay definition: Flipping From the above properties an important feature arises: Looking at two triangles ABD and BCD with the common edge BD (see figures), if the sum of the angles α and γ is less than or equal to 180°, the triangles meet the Delaunay condition. This is an important property because it allows the use of a flipping technique. If two triangles do not meet the Delaunay condition, switching the common edge BD for the common edge AC produces two triangles that do meet the Delaunay condition: • This triangulation does not meet the Delaunay condition (the sum of α and γ is bigger than 180°). • This triangulation does not meet the Delaunay condition (the circumcircles contain more than three points). • Flipping the common edge produces a Delaunay triangulation for the four points. ## Algorithms Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if point D lies in the circumcircle of A, B, C is to evaluate the determinant:[4] $\begin{vmatrix} A_x & A_y & A_x^2 + A_y^2 & 1\\[6pt] B_x & B_y & B_x^2 + B_y^2 & 1\\[6pt] C_x & C_y & C_x^2 + C_y^2 & 1\\[6pt] D_x & D_y & D_x^2 + D_y^2 & 1 \end{vmatrix} = \begin{vmatrix} A_x - D_x & A_y - D_y & (A_x^2 - D_x^2) + (A_y^2 - D_y^2) \\[6pt] B_x - D_x & B_y - D_y & (B_x^2 - D_x^2) + (B_y^2 - D_y^2) \\[6pt] C_x - D_x & C_y - D_y & (C_x^2 - D_x^2) + (C_y^2 - D_y^2) \end{vmatrix} > 0$ When A, B and C are sorted in a counterclockwise order, this determinant is positive if and only if D lies inside the circumcircle. ### Flip algorithms As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay.
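The flipping loop just described needs exactly this in-circle test. The following is a small Python sketch of the reduced 3×3 determinant predicate given above (the function name and demo points are my own, and the loop that actually performs the flips is omitted):

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of triangle a, b, c.

    The triangle a, b, c must be given in counterclockwise order; each point
    is an (x, y) pair. Evaluates the reduced 3x3 determinant from the text."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    dx, dy = d
    m = np.array([
        [ax - dx, ay - dy, (ax**2 - dx**2) + (ay**2 - dy**2)],
        [bx - dx, by - dy, (bx**2 - dx**2) + (by**2 - dy**2)],
        [cx - dx, cy - dy, (cx**2 - dx**2) + (cy**2 - dy**2)],
    ])
    return np.linalg.det(m) > 0

# The right triangle (0,0), (1,0), (0,1) has its circumcircle centred at (0.5, 0.5):
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))   # True  (circumcentre is inside)
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))   # False (far outside)
```

In practice one would usually reach for an existing implementation such as scipy.spatial.Delaunay (a Qhull wrapper that also handles the higher-dimensional case via the paraboloid lifting described earlier) rather than hand-rolling the flip or incremental algorithms discussed below.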
Unfortunately, this can take O(n²) edge flips, and does not extend to three dimensions or higher.[2] ### Incremental The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertex v is added, we split in three the triangle that contains v, then we apply the flip algorithm. Done naively, this will take O(n) time: we search through all the triangles to find the one that contains v, then we potentially flip away every triangle. Then the overall runtime is O(n²). If we insert vertices in random order, it turns out (by a somewhat intricate proof) that each insertion will flip, on average, only O(1) triangles – although sometimes it will flip many more.[5] This still leaves the point location time to improve. We can store the history of the splits and flips performed: each triangle stores a pointer to the two or three triangles that replaced it. To find the triangle that contains v, we start at a root triangle, and follow the pointer that points to a triangle that contains v, until we find a triangle that has not yet been replaced. On average, this will also take O(log n) time. Over all vertices, then, this takes O(n log n) time.[2] While the technique extends to higher dimensions (as proved by Edelsbrunner and Shah[6]), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small. The Bowyer–Watson algorithm provides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex. ### Divide and conquer A divide and conquer algorithm for triangulations in two dimensions is due to Lee and Schachter, which was improved by Guibas and Stolfi[7] and later by Dwyer. In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line. Using some clever tricks, the merge operation can be done in time O(n), so the total running time is O(n log n).[8] For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced to O(n log log n) while still maintaining worst-case performance. A divide and conquer paradigm for performing a triangulation in d dimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in $E^d$" by P. Cignoni, C. Montani, R. Scopigno.[9] Divide and conquer has been shown to be the fastest DT generation technique.[10][11] ### Sweepline Fortune's Algorithm uses a sweepline technique to achieve O(n log n) runtime in the planar case. ### Sweephull Sweephull[12] is a fast hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull (sequentially created from the radially sorted set of 2D points, giving a non-overlapping triangulation), paired with a final iterative triangle flipping step. An accurate integer arithmetic variant of the algorithm is also presented. Empirical results indicate the algorithm runs in approximately half the time of Qhull, and free implementations in C++ and C# are available.[13] ## Applications The Euclidean minimum spanning tree of a set of points is a subset of the Delaunay triangulation of the same points, and this can be exploited to compute it efficiently.
For modelling terrain or other objects given a set of sample points, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). See triangulated irregular network. Delaunay triangulations can be used to determine the density or intensity of point samplings by means of the DTFE. The Delaunay triangulation of a random set of 100 points in a plane. Delaunay triangulations are often used to build meshes for space-discretised solvers such as the finite element method and the finite volume method of physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarse simplicial complex; for the mesh to be numerically stable, it must be refined, for instance by using Ruppert's algorithm. The increasing popularity of finite element method and boundary element method techniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodal locations so as to minimize element distortion. The stretched grid method allows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution. ## References 1. B. Delaunay: Sur la sphère vide, Izvestia Akademii Nauk SSSR, Otdelenie Matematicheskikh i Estestvennykh Nauk, 7:793–800, 1934. 2. de Berg, Mark; Otfried Cheong, Marc van Kreveld, Mark Overmars (2008). Computational Geometry: Algorithms and Applications. Springer-Verlag. ISBN 978-3-540-77973-5. 3. Seidel, R. (1995). "The upper bound theorem for polytopes: an easy proof of its asymptotic version". Computational Geometry 5 (2): 115–116. doi:10.1016/0925-7721(95)00013-Y. 4. Guibas, Leonidas; Stolfi, Jorge (1985-04-01). "Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams". ACM. p. 107. Retrieved 2009-08-01. 5. Guibas, L.; D. Knuth; M. Sharir (1992). "Randomized incremental construction of Delaunay and Voronoi diagrams". Algorithmica 7: 381–413. doi:10.1007/BF01758770. 6. Edelsbrunner, Herbert; Nimish Shah (1996). "Incremental Topological Flipping Works for Regular Triangulations". Algorithmica 15 (3): 223–241. doi:10.1007/BF01975867. 7. Cignoni, P.; C. Montani; R. Scopigno (1998). "DeWall: A fast divide and conquer Delaunay triangulation algorithm in $E^d$". Computer-Aided Design 30 (5): 333–341. doi:10.1016/S0010-4485(97)00082-1. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Delaunay triangulation", available in its original form here: http://en.wikipedia.org/w/index.php?title=Delaunay_triangulation
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8587061762809753, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4040727
Physics Forums ## How exactly to obtain Frenet Frame via Gram-Schmidt process? I have a regular curve, $\underline{a}(s)$, in $\mathbb{R}^N$ (parameterised by its arc length, $s$). To a running point on the curve, I want to attach the (Frenet) frame of orthonormal vectors $\underline{u}_1(s),\underline{u}_2(s),\dots, \underline{u}_N(s)$. However, looking in different books, I find different claims as to how these should be obtained. Specifically, some books suggest that Gram-Schmidt should be applied to:$$\underline{a}^{\prime}(s), \underline{a}^{\prime \prime}(s), \dots , \underline{a}^{(N-1)}(s)$$while another book suggests that $\underline{u}_{k+1}(s)$ is obtained by applying Gram-Schmidt to $\underline{u}_k^{\prime}(s)$. Which should I use? To add a little more detail..... Since $s$ is an invariant parameter, I start with:$$\underline{u}_1(s) = \underline{a}^{\prime}(s)$$ Then, using $\underline{a}^{\prime \prime}(s) = \underline{u}_1^{\prime}(s)$ as the next linearly independent vector for Gram-Schmidt gives:$$\underline{u}_2 = \frac{ \underline{u}_1^{\prime} - (\underline{u}_1^T\underline{u}_1^{\prime}) \underline{u}_1}{\Vert numerator \Vert}$$ However, for $\underline{u}_3, \underline{u}_4, \dots$ the two approaches appear to become different. Recognitions: Science Advisor In three dimensions a curve parameterized by arc length has acceleration perpendicular to the tangent. The cross product of the unit tangent with the normalized acceleration is perpendicular to both and this gives you the third vector in the frame. I do not believe that the Frenet frame can include other vectors, so starting with an arbitrary basis and Gram-Schmidtifying will not work. In higher dimensions there are many frames that extend the unit tangent and normalized acceleration. Gram-Schmidt would work on a given basis but I am not sure what its geometric meaning would be. ## How exactly to obtain Frenet Frame via Gram-Schmidt process? Quote by lavinia In higher dimensions there are many frames that extend the unit tangent and normalized acceleration. Could you expand on what you mean? Is it possible/likely that both methods I mentioned are valid? Quote by lavinia Gram-Schmidt would work on a given basis but I am not sure what its geometric meaning would be. What I'd like to do is define the Cartan matrix (containing curvature, torsion, ..., general curvatures) and so develop expressions relating the moving frame, $\underline{u}_1(s),\underline{u}_2(s),\dots, \underline{u}_N(s)$, and its derivative, $\underline{u}_1^{ \prime}(s),\underline{u}_2^{ \prime}(s),\dots, \underline{u}_N^{ \prime}(s)$. I find that this is straightforward using the second method I mentioned in my original post ("while another book suggests that..."). However, before I had checked in books, I had thought the first method would be the way to go... Recognitions: Science Advisor Quote by weetabixharry Thanks for your response. Could you expand on what you mean? Is it possible/likely that both methods I mentioned are valid?
What I'd like to do is define the Cartan matrix (containing curvature, torsion, ..., general curvatures) and so develop expressions relating the moving frame, $\underline{u}_1(s),\underline{u}_2(s),\dots, \underline{u}_N(s)$, and its derivative, $\underline{u}_1^{ \prime}(s),\underline{u}_2^{ \prime}(s),\dots, \underline{u}_N^{ \prime}(s)$. I find that this is straightforward using the second method I mentioned in my original post ("while another book suggests that..."). However, before I had checked in books, I had thought the first method would be the way to go... The second method seems right because it is defined by the motion along the curve. Each successive vector in the frame points in the direction that the hyperplane spanned by the previously defined vectors is moving. It is possible though that there will be points on the curve where these derivatives are zero. Rather than defining a frame this way in terms of the motion along the curve, one could just pick some basis at every point and Gram-Schmidtify it to get an orthonormal frame. This would not be the Frenet frame in general. Quote by lavinia Each successive vector in the frame points in the direction that the hyperplane spanned by the previously defined vectors is moving. Ah, this is very insightful. In fact, I hadn't realised that this is probably the whole point of what I'm trying to do. So, going back a few steps, is the following roughly true? If I zoom in super close to the curve, it looks like a straight line pointing in the direction of $\underline{u}_1(s)$. I zoom out a bit, and actually the curve looks like a little circular arc lying in the plane of $\underline{u}_1(s),\underline{u}_2(s)$ and with radius equal to the inverse of the first curvature. Then I zoom out a bit more, and see that the curve actually lifts out from the plane of $\underline{u}_1(s),\underline{u}_2(s)$ in the direction of $\underline{u}_3(s)$... so (locally) the curve looks like a piece of a helix (?) (... and so on in N dimensions...) (I may have abused my notation there... I mean in the vicinity of some specific $s$, rather than the general definition of $s$ as arc length) Hmm, it seems like this thread has dried up. Perhaps I can rephrase the original question. Do the following two approaches yield the same result?$$\underline{u}_k(s)=\frac{\underline{a}^{(k)}(s) - \sum\limits_{m=1}^{k-1}\left(\underline{u}_m^T(s)\underline{a}^{(k)}(s)\right) \underline{u}_m(s)}{\Vert numerator \Vert}$$... suggested in, for example, [1, p. 13] (link) and [2] (link). $$\underline{u}_k(s)=\frac{\underline{u}_{k-1}^{\prime}(s) - \sum\limits_{m=1}^{k-1}\left(\underline{u}_m^T(s)\underline{u}_{k-1}^{\prime}(s) \right) \underline{u}_m(s)}{\Vert numerator \Vert}$$... suggested in, for example, [3, p. 159]. In other words, is the subspace spanned by $\left\{\underline{a}^{\prime}, \underline{a}^{ \prime \prime}, \dots, \underline{a}^{(k)}\right\}$ the same as the subspace spanned by $\left\{\underline{u}_1, \underline{u}_2, \dots, \underline{u}_{k-1}, \underline{u}_{k-1}^{\prime} \right\}$? References: [1] W. Kühnel, "Differential Geometry: Curves - Surfaces - Manifolds". [2] Wikipedia, "Frenet–Serret formulas". [3] H. W. Guggenheimer, "Differential Geometry", McGraw Hill (or Dover Edition), 1963 (1977).
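For what it's worth, the first recipe (Gram-Schmidt applied to the derivative vectors $\underline{a}^{\prime}(s), \underline{a}^{\prime\prime}(s), \dots$) is easy to experiment with numerically. The sketch below is my own illustration, not from the thread or the cited references; it applies classical Gram-Schmidt to a list of supplied derivative vectors and checks orthonormality on an arc-length parameterised helix, whose derivatives at $s=0$ are known in closed form:

```python
import numpy as np

def frenet_frame(derivs):
    """Gram-Schmidt on the derivative vectors a'(s), a''(s), ... (method 1).

    `derivs` is a list of vectors in R^N evaluated at one parameter value;
    returns the corresponding orthonormal frame vectors u_1, u_2, ..."""
    frame = []
    for v in derivs:
        w = np.array(v, dtype=float)
        for u in frame:
            w = w - np.dot(u, w) * u        # subtract projections onto earlier frame vectors
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("derivative vectors are linearly dependent at this point")
        frame.append(w / norm)
    return frame

# The helix a(s) = (cos(s/c), sin(s/c), b*s/c) with c = sqrt(1 + b^2) is parameterised
# by arc length; at s = 0 its first two derivatives are:
b = 1.0
c = np.sqrt(1.0 + b**2)
d1 = np.array([0.0, 1.0 / c, b / c])        # a'(0)
d2 = np.array([-1.0 / c**2, 0.0, 0.0])      # a''(0)

u1, u2 = frenet_frame([d1, d2])
print(np.dot(u1, u2), np.linalg.norm(u1), np.linalg.norm(u2))   # ~0, 1, 1
```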
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475305080413818, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/100651-relateing-surface-area-volume.html
# Thread: 1. ## relating surface area to volume? I have a problem that I can't quite figure out. If I have a rectangular prism with a length=L, width=W and height=H and L=W, and the volume of this prism is 10,000, then you could write the volume formula as so: L(W)H=10,000, X^2(H)=10,000 because L=W and X=L=W, X^2=L(W), and the surface area of the prism could be written as so: 2LW+2LH+2HW= surface area, 2X^2 +2XH+2XH= surface area, because X^2=L(W) and X=L=H, 2X^2+4X= surface area. Now I have to find out what the minimum surface area would be. Do I have to use some linear optimization process on a calculator? If so, can someone give me a quick pointer; I have a TI-83. And if not, can someone show me the error of my way? 2. Since L=W, we can write the volume as $L^{2}H=10,000$..[1] Assuming there is a top, the surface area is $S=2L^{2}+4LH$...[2] If we want to minimize the surface area, we can solve the volume formula for H and sub into S. From [1], $H=\frac{10,000}{L^{2}}$...[3] Sub into [2]: $S=2L^{2}+4L(\frac{10,000}{L^{2}})$ $S=2L^{2}+\frac{40,000}{L}$ Differentiate, set to 0 and solve for L. $\frac{dS}{dL}=4L-\frac{40,000}{L^{2}}$ $4L-\frac{40,000}{L^{2}}=0$ $L=10\cdot 10^{\frac{1}{3}}\approx 21.544$ The width is the same. Plug this into [3] to find the height. We get $H=21.544$. It's the same. The minimum surface area occurs when we have a cube. That means the surface area is $S=6L^{2}=2784.953$ TI-83s do not do derivatives. For that get a TI-89, Voyage 200, etc. 3. What if only the base were square and the prism could not be a cube? 4. The point is if the base is square the minimum surface area is achieved when we have a cube. That is, when the length, width, and height are the same. If the base were not square, then that's another matter. Another example is when we want to find the minimum surface area of a cylinder. That is achieved when the height and diameter are equal.
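A quick symbolic check of the calculus in reply 2 (a sketch of my own, assuming SymPy is available; the variable names are mine): it recovers the critical point $L = 10\cdot 10^{1/3} \approx 21.544$ and the minimum surface area $\approx 2784.95$.

```python
from sympy import symbols, diff, solve

L = symbols('L', positive=True)
S = 2*L**2 + 40000/L                    # surface area after substituting H = 10000 / L**2

critical = solve(diff(S, L), L)         # expect [10*10**(1/3)] ~ 21.544
print(critical)
print([S.subs(L, c).evalf() for c in critical])   # minimum surface area ~ 2784.95
```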
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269324541091919, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/14698/what-manifold-has-mathbbhpodd-as-a-boundary/15962
## What manifold has $\mathbb{H}P^{odd}$ as a boundary? This question is motivated by http://mathoverflow.net/questions/8829/what-manifolds-are-bounded-by-rpodd (as well as a question a fellow grad student asked me) but I can't seem to generalize any of the provided answers to this setting. Allow me to give some background. Take all (co)homology groups with $\mathbb{Z}_2$ coefficients. Given a smooth compact manifold $M^n$, let $w_i = w_i(M)\in H^i(M)$ denote the Stiefel-Whitney classes of (the tangent bundle of) M. Let $[M]\in H_n(M)$ denote the fundamental class (mod 2). Consider the Stiefel-Whitney numbers of $M$, defined as the set of all outputs of $\langle w_{i_1}...w_{i_k} , [M] \rangle$. Of course this is only interesting when $\sum i_{j} = n$. Pontrjagin proved that if $M$ is the boundary of some compact (n+1)-manifold, then all the Stiefel-Whitney numbers are 0. Thom proved the converse - that if all Stiefel-Whitney numbers are 0, then $M$ can be realized as a boundary of some compact (n+1)-manifold. As a quick aside, the Euler characteristic $\chi(M)$ mod 2 is equal to $w_n$. Hence, we see immediately that if $\chi(M)$ is odd, then $M$ is NOT the boundary of a compact manifold. As an immediate corollary to this, none of $\mathbb{R}P^{even}$, $\mathbb{C}P^{even}$, nor $\mathbb{H}P^{even}$ are boundaries of compact manifolds. Conversely, one can show that all Stiefel-Whitney numbers of $\mathbb{R}P^{odd}$, $\mathbb{C}P^{odd}$ and $\mathbb{H}P^{odd}$ are 0, so these manifolds can all be realized as boundaries. What is an example of a manifold $M$ with $\partial M = \mathbb{H}P^{2n+1}$ (and please assume $n>0$ as $\mathbb{H}P^1 = S^4$ is obviously a boundary)? The question for $\mathbb{R}P^{odd}$ is answered in the link at the top. The question for $\mathbb{C}P^{odd}$ is similar, but slightly harder: Consider the (standard) inclusions $Sp(n)\times S^1\rightarrow Sp(n)\times Sp(1)\rightarrow Sp(n+1)$. The associated homogeneous fibration is given as $$Sp(n)\times Sp(1)/ Sp(n)\times S^1\rightarrow Sp(n+1)/Sp(n)\times S^1\rightarrow Sp(n+1)/Sp(n)\times Sp(1),$$ which is probably better recognized as $$S^2\rightarrow \mathbb{C}P^{2n+1}\rightarrow \mathbb{H}P^{n}.$$ One can "fill in the fibers" - fill the $S^2$ to $D^3$ to get a compact manifold $M$ with boundary equal to $\mathbb{C}P^{2n+1}$. I'd love to see $\mathbb{H}P^{odd}$ described in a similar fashion, but I don't know if this is possible. Assuming it's impossible to describe $\mathbb{H}P^{odd}$ as above, I'd still love an answer along the lines of "if you just do this simple process to this often used class of spaces, you get the manifolds you're looking for". Thanks in advance! - ## 2 Answers Jason, this is not an answer, just an observation. Using your formula for $p_1$, $\langle p_1^{2n+1}, [\mathbb{H}P^{2n+1}]\rangle = (2n-2)^{2n+1} \langle u^{2n+1},[\mathbb{H}P^{2n+1}]\rangle \neq 0$ if $n>1$, so $\mathbb{H}P^{2n+1}$ cannot be the boundary of an oriented manifold, unlike the examples you give for $\mathbb{R}P^{2n+1}$ and $\mathbb{C}P^{2n+1}$. The point is that filling spherical fibres in oriented bundles will not work. By the way, this is my first post in Math Overflow. Yay!!! Note: this post has been edited because the original was very false. I claimed that $\sigma(\mathbb{H}P^{2n+1})=1$ which is silly because the middle cohomology is $H^{4n+2}(\mathbb{H}P^{2n+1}) = 0$.
Also the signature being odd would have contradicted the fact that $\chi (\mathbb{H}P^{2n+1})$ is even, which is stated in the question. -

Welcome to Mathoverflow, Torcuato. The fact about Pontryagin numbers follows immediately from the fact that $p_1(\mathbb{H}P^n)$ is nontrivial for $n\neq 1$, so I should have picked up on this myself following my comment to Ryan's answer - thanks for pointing it out! – Jason DeVito Feb 21 2010 at 18:35

A small note on extending the argument I gave in the previous (linked) thread. You get a free involution on $\mathbb CP^{2n+1}$ by using the fibrewise antipodal map for your bundle $$S^2 \to \mathbb CP^{2n+1} \to \mathbb HP^n$$ so this also gives you $\mathbb CP^{2n+1}$ as the boundary of a mapping cylinder. $\mathbb HP^{2n+1}$ I'm not sure how to deal with analogously. I suppose a place to start would be to try and find a somehow more natural free involution on $\mathbb CP^{2n+1}$. Googling around it's not clear to me whether or not it's known if $\mathbb HP^{2n+1}$ admits a free involution. -

$\mathbb{H}P^n$ has no free involutions unless n = 1. This is because the first Pontrjagin class is given as (2(n+1) - 4)u where u is a specific generator of $H^4(\mathbb{H}P^n)$ and any diffeomorphism f will fix this. Hence, $f$ will fix $u^k$ for all $k$, and then, by the Lefschetz fixed point theorem, $f$ will have a fixed point. – Jason DeVito Feb 9 2010 at 1:47

Ah, okay. So that tells us it's not the boundary of an $I$-bundle. – Ryan Budney Feb 9 2010 at 2:21

I've tried responding in the other thread, but the "add comment" button isn't working. Yes, I-bundle means bundle over a space with fiber an interval. – Ryan Budney Feb 9 2010 at 5:37
http://openglbook.com/the-book/chapter-4-entering-the-third-dimension/
# Chapter 4: Entering the Third Dimension

If you're learning OpenGL, it's very likely you're doing so to learn how to render three-dimensional data. In this chapter, we'll be taking our very first step into the world of three-dimensional computer graphics. We'll learn:

• The mathematics used to describe transformations in a three-dimensional world
• What coordinate systems are good for and how to use them
• What polygon culling is and why it's used
• How to render a rotating colored cube to the screen
• Some new OpenGL function calls

As mentioned in the preface, you'll need some mathematical knowledge in order to understand some of the concepts presented, preferably knowledge of linear algebra. The mathematics in this chapter is as lightweight as possible without sacrificing the integrity of the presented concepts. Make sure that you meet the requirements before continuing!

Attention: You don't have to copy and paste all of the changes into your source code; these steps are provided to illustrate the code changes. The source code for this chapter can be downloaded at the bottom of this page.

OpenGL 3.x / DirectX 10 Level Hardware: This chapter is 100% compatible with OpenGL 3.x level hardware by only changing a few lines of code. See the conclusion at the bottom of the page for the modified source files in the "OpenGL 3.3" sub-directory of the source code listing.

# Utilities

We use many of the functions from this chapter in the rest of the book, so let's create a few files that we can carry over from chapter to chapter. Please note that all of the functionality in this book is for demonstration purposes, and not optimized for performance. The functions provided here are verbose by design so that the flow of the code is easy to understand. If you are looking for a professional-grade 3D mathematics library, you can find several excellent open source C++ libraries on the Internet or roll your own using high performance code.
However, for this book, we’re going to create a file called `Utils.h`, and add the following lines: ``` #ifndef UTILS_H #define UTILS_H #include <stdlib.h> #include <stdio.h> #include <string.h> #include <math.h> #include <time.h> #include <GL/glew.h> #include <GL/freeglut.h> static const double PI = 3.14159265358979323846; typedef struct Vertex { float Position[4]; float Color[4]; } Vertex; typedef struct Matrix { float m[16]; } Matrix; static const Matrix IDENTITY_MATRIX = { { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 } }; float Cotangent(float angle); float DegreesToRadians(float degrees); float RadiansToDegrees(float radians); Matrix MultiplyMatrices(const Matrix* m1, const Matrix* m2); void RotateAboutX(Matrix* m, float angle); void RotateAboutY(Matrix* m, float angle); void RotateAboutZ(Matrix* m, float angle); void ScaleMatrix(Matrix* m, float x, float y, float z); void TranslateMatrix(Matrix* m, float x, float y, float z); Matrix CreateProjectionMatrix( float fovy, float aspect_ratio, float near_plane, float far_plane ); void ExitOnGLError(const char* error_message); GLuint LoadShader(const char* filename, GLenum shader_type); #endif ``` Now create a second file named `Utils.c` and insert the following lines: ``` #include "Utils.h" float Cotangent(float angle) { return (float)(1.0 / tan(angle)); } float DegreesToRadians(float degrees) { return degrees * (float)(PI / 180); } float RadiansToDegrees(float radians) { return radians * (float)(180 / PI); } Matrix MultiplyMatrices(const Matrix* m1, const Matrix* m2) { Matrix out = IDENTITY_MATRIX; unsigned int row, column, row_offset; for (row = 0, row_offset = row * 4; row < 4; ++row, row_offset = row * 4) for (column = 0; column < 4; ++column) out.m[row_offset + column] = (m1->m[row_offset + 0] * m2->m[column + 0]) + (m1->m[row_offset + 1] * m2->m[column + 4]) + (m1->m[row_offset + 2] * m2->m[column + 8]) + (m1->m[row_offset + 3] * m2->m[column + 12]); return out; } void ScaleMatrix(Matrix* m, float x, float y, float z) { Matrix scale = IDENTITY_MATRIX; scale.m[0] = x; scale.m[5] = y; scale.m[10] = z; memcpy(m->m, MultiplyMatrices(m, &scale).m, sizeof(m->m)); } void TranslateMatrix(Matrix* m, float x, float y, float z) { Matrix translation = IDENTITY_MATRIX; translation.m[12] = x; translation.m[13] = y; translation.m[14] = z; memcpy(m->m, MultiplyMatrices(m, &translation).m, sizeof(m->m)); } void RotateAboutX(Matrix* m, float angle) { Matrix rotation = IDENTITY_MATRIX; float sine = (float)sin(angle); float cosine = (float)cos(angle); rotation.m[5] = cosine; rotation.m[6] = -sine; rotation.m[9] = sine; rotation.m[10] = cosine; memcpy(m->m, MultiplyMatrices(m, &rotation).m, sizeof(m->m)); } void RotateAboutY(Matrix* m, float angle) { Matrix rotation = IDENTITY_MATRIX; float sine = (float)sin(angle); float cosine = (float)cos(angle); rotation.m[0] = cosine; rotation.m[8] = sine; rotation.m[2] = -sine; rotation.m[10] = cosine; memcpy(m->m, MultiplyMatrices(m, &rotation).m, sizeof(m->m)); } void RotateAboutZ(Matrix* m, float angle) { Matrix rotation = IDENTITY_MATRIX; float sine = (float)sin(angle); float cosine = (float)cos(angle); rotation.m[0] = cosine; rotation.m[1] = -sine; rotation.m[4] = sine; rotation.m[5] = cosine; memcpy(m->m, MultiplyMatrices(m, &rotation).m, sizeof(m->m)); } Matrix CreateProjectionMatrix( float fovy, float aspect_ratio, float near_plane, float far_plane ) { Matrix out = { { 0 } }; const float y_scale = Cotangent(DegreesToRadians(fovy / 2)), x_scale = y_scale / aspect_ratio, frustum_length = far_plane - 
near_plane; out.m[0] = x_scale; out.m[5] = y_scale; out.m[10] = -((far_plane + near_plane) / frustum_length); out.m[11] = -1; out.m[14] = -((2 * near_plane * far_plane) / frustum_length); return out; } void ExitOnGLError(const char* error_message) { const GLenum ErrorValue = glGetError(); if (ErrorValue != GL_NO_ERROR) { const char* APPEND_DETAIL_STRING = ": %s\n"; const size_t APPEND_LENGTH = strlen(APPEND_DETAIL_STRING) + 1; const size_t message_length = strlen(error_message); char* display_message = (char*)malloc(message_length + APPEND_LENGTH); memcpy(display_message, error_message, message_length); memcpy(&display_message[message_length], APPEND_DETAIL_STRING, APPEND_LENGTH); fprintf(stderr, display_message, gluErrorString(ErrorValue)); free(display_message); exit(EXIT_FAILURE); } } GLuint LoadShader(const char* filename, GLenum shader_type) { GLuint shader_id = 0; FILE* file; long file_size = -1; char* glsl_source; if (NULL != (file = fopen(filename, "rb")) && 0 == fseek(file, 0, SEEK_END) && -1 != (file_size = ftell(file))) { rewind(file); if (NULL != (glsl_source = (char*)malloc(file_size + 1))) { if (file_size == (long)fread(glsl_source, sizeof(char), file_size, file)) { glsl_source[file_size] = '\0'; if (0 != (shader_id = glCreateShader(shader_type))) { glShaderSource(shader_id, 1, &glsl_source, NULL); glCompileShader(shader_id); ExitOnGLError("Could not compile a shader"); } else fprintf(stderr, "ERROR: Could not create a shader.\n"); } else fprintf(stderr, "ERROR: Could not read file %s\n", filename); free(glsl_source); } else fprintf(stderr, "ERROR: Could not allocate %i bytes.\n", file_size); fclose(file); } else fprintf(stderr, "ERROR: Could not open file %s\n", filename); return shader_id; } ``` # Step-By-Step: Mathematics If you’re unfamiliar with the basics of linear algebra, much of the code in the listing from the previous section will seem like gibberish, and the rest of this chapter may be hard to follow. A few resources for learning linear algebra are listed in the chapter’s conclusion. In the next sections, we’ll explore what matrices are at a glance, how to use them in three-dimensional computer graphics, and how they are used in the code you’ve just copied. You may ask yourself “how important is it to know all these calculations by heart?” For most of computer graphics, it is okay just to know the applications of the calculations since you’d store them in reusable functions. However, for complex computation, you would have to come up with calculations of your own using matrices, transformations, and other linear algebra. ## The Matrix A matrix is a mathematical concept that describes a grid (or array) of numbers composed of m row vectors and n column vectors. We use matrices to describe transformations from one coordinate space to another, such as rotation, scaling, translation, etc. In our programs, we use one type of matrix, namely the 4×4 square matrix (a matrix is square when n = m). 
Let's look at our 4×4 matrix M: \( \mathbf{M} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13& 14 & 15 & 16 \end{bmatrix}\cdot \) The notation used to access the single value stored in row 2, column 4 is: $$M_{24} = 8$$

In a three-dimensional coordinate system such as ours, a 4×4 matrix contains the transformations for each axis in each column vector, the four vertical columns that make up the matrix: \( \mathbf{M} = \begin{bmatrix} Xx & Yx & Zx & Tx \\ Xy & Yy & Zy & Ty \\ Xz & Yz & Zz & Tz \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) The first three column vectors contain the x-, y-, and z-axes' transformations respectively, while the last column vector contains the translation, which we'll explore further below.

In our programs, we represent matrices with the structure `Matrix`, which contains an array of 16 floating-point elements, the total amount of elements in a 4×4 matrix. All of the matrix operations in `Utils.h` operate on this structure.

## Matrix Multiplication

Before we continue with the explanation of how these transformations work, make sure that you fully understand matrix multiplication, since we'll use it many times in this chapter through the `MultiplyMatrices` function from the files that we've just created. Matrix multiplication is very important since it allows us to transform points from one coordinate system to another.

To understand how matrix multiplication works, let's take the following 2×2 matrices for simplicity's sake, which we'll name A and B: \( \begin{aligned} \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\cdot \\ \mathbf{B} = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}\cdot \end{aligned} \) In order to get the product of A and B, which we'll name matrix C, we'll have to multiply each row vector in matrix A with each column vector in matrix B. This means that if we wish to find the value to go into $$C_{11}$$, we'll have to perform the following calculation: $$C_{11} = A_{11}B_{11} + A_{12}B_{21} = 1\cdot5 + 2\cdot7 = 5+14 = 19$$ This is the same as the dot product of the first row vector of matrix A and the first column vector of matrix B. We repeat this process for the entire matrix, resulting in the following calculation for the 2×2 matrices: \( \mathbf{C} = \begin{bmatrix} (A_{11}B_{11}+A_{12}B_{21}) & (A_{11}B_{12}+A_{12}B_{22}) \\ (A_{21}B_{11}+A_{22}B_{21}) & (A_{21}B_{12}+A_{22}B_{22}) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix} \cdot \)

If we had used a 4×4 matrix multiplication instead of the above 2×2, the calculation would be much more extensive and best handled by a computer unless you enjoy multiplying matrices by hand. Note that matrix multiplication is not commutative, meaning that AB is generally not equal to BA (one exception is when either A or B is an identity matrix, described below). Our `MultiplyMatrices` function multiplies matrices `m1`, the multiplier, and `m2`, the multiplicand, and returns the product as a brand new matrix.

## Identity Matrix

An important type of matrix is the identity matrix, which, when multiplied with another matrix, leaves that matrix unchanged. It is important because it serves as the basis for our transformations: \( \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \) Identity matrices can only be formed from square matrices (m = n), and are visually distinguishable by a line of number ones along the main diagonal (which runs from top left to bottom right), whereas the rest of the matrix contains zeroes.
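To make the row-times-column rule and the role of the identity concrete, here is a minimal test driver (a sketch only, not part of the chapter's project; the `main` function and the printing loop are illustrative additions) that uses the `Matrix` type, the `IDENTITY_MATRIX` constant, and the `MultiplyMatrices` and `TranslateMatrix` helpers from the `Utils.h`/`Utils.c` files created above. Compiling it assumes the GLEW and FreeGLUT headers are available, since `Utils.h` includes them:

```
#include <stdio.h>
#include "Utils.h"

/* Sketch: multiply a matrix by the identity and print the result.
   Because I is the identity, the product equals the original matrix. */
int main(void)
{
  Matrix m = IDENTITY_MATRIX;
  Matrix product;
  int i;

  TranslateMatrix(&m, 1, 2, 3);  /* make m something other than the identity */
  product = MultiplyMatrices(&m, &IDENTITY_MATRIX);

  /* print the sixteen elements, four per line */
  for (i = 0; i < 16; ++i)
    printf("%6.1f%s", product.m[i], (i % 4 == 3) ? "\n" : " ");

  return 0;
}
```

Since multiplying by the identity leaves the other matrix unchanged, the printed values are exactly those of `m` after the translation.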
Another way of looking at an identity matrix is by the fact that it's composed of unit vectors: the column vector for x points in the x-direction, the column vector for y in the y-direction, the column vector for z in the z-direction, and the column vector used for translation is set to zero for no translation. Our own identity matrix is stored in the `IDENTITY_MATRIX` constant, defined in the `Utils.h` file. If you browse through some of the functions in `Utils.c`, you'll notice that we use an identity matrix for almost every transformation.

## Transformations

This brings us to the topic of transformations, which are an important tool in three-dimensional computer graphics. In fact, three-dimensional computer graphics would not be possible were it not for transformations. Transformations allow us to transform a point in space from one location to another using matrix multiplication. There are two transformation types used: affine transformations and projective transformations. Affine transformations allow us to translate, rotate, scale, or shear our points in space relative to an origin using matrices. Most affine transformations do not alter the physical properties of an object, meaning that the distances from one point to another do not change. Projective transformations, on the other hand, transform points in order to "project" them onto a flat viewing plane, thus changing the properties of an object significantly. One example of such a projective transformation is the perspective projection matrix, which we will explore further on in this chapter.

### Transformation Pipeline

Before we continue describing the various transformations in `Utils.h`, there's the important subject of coordinate systems. As you know, a coordinate system is a method used to describe the position of a point within a space, which in our case is three-dimensional. In computer graphics, we use several coordinate systems to transform our point to a displayable entity in a process called the transformation pipeline; let's take a quick look at each of its stages.

#### Object Space

The transformation pipeline starts in object space, which hosts an object's local coordinates, called object coordinates. These are the raw vertices provided by the modeling software, or, to relate it to the previous chapter, the points stored in the vertex buffer object.

#### World Space

To get an object into a position relative to your world's origin, we transform its vertices using a modeling transform to bring them into world space. There could be several more steps for objects positioned relative to each other, also called modeling transforms.

#### Eye Space

Now that the object's position is relative to the world's origin, the next step is the view transform, which positions the world relative to the viewer's position, bringing the object into eye space. The viewer's position is the camera's position within the scene, but not the camera's function.

The Model-View Matrix Concept: Often in code samples, you'll notice something called a "model-view" transformation as a single step. This is because when you multiply matrices, transforms combine into a single matrix. While this practice is not wrong, and can actually save a tiny fraction of bandwidth, we don't use "model-view" matrices in this book to maintain proper coordinate system terminologies and avoid confusion.

#### Clip Space

The next step is to determine which vertices are actually viewable by the camera through a projection transformation, after which the points are in clip space.
We discuss the concept of viewing volumes and clipping at a later point in this chapter.

#### Normalized Device Space

The next step is to perform perspective division (or perspective projection), which brings our vertices into normalized device space. For OpenGL to be able to perform its rasterizing operations on the vertex data provided, the data needs to be in a two-dimensional format, along with a depth value for depth buffering.

#### Window Space

After the transformation to normalized device space, OpenGL takes over and feeds the vertex data into a process called rasterization, which generates fragments for the triangles, lines, and points that we described with our vertex information. At this point, OpenGL also applies the depth information to determine which fragments to discard due to overlap. After the composition of all the final fragments, the graphics hardware outputs the final image to the screen.

### Translation Matrix

To move (or translate) a point from its origin, we must use what's called a translation matrix. The translation matrix stores the magnitude of the translation for each of the three dimensions in the last column vector of the 4×4 matrix: \( \mathbf{T}=\begin{bmatrix} 1 & 0 & 0 & Tx \\ 0 & 1 & 0 & Ty \\ 0 & 0 & 1 & Tz \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) This matrix looks very similar to the identity matrix described earlier, with the exception of its last column vector. Multiplying by this matrix preserves rotation as well as scaling since the top left part of the matrix, where scaling and rotation values are stored, contains a 3×3 identity matrix.

Let's see how translation works by translating the following point, represented as a column vector, three units along the y-axis: \( \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \\ 3 \\ 1 \end{bmatrix} \cdot \) This operation is simple enough, and you can easily visualize how the y-component of the column vector increases its magnitude. This same principle applies to matrices when we wish to move an entire coordinate system instead of a single point: \( \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 & 4\\ 2 & 3 & 4 & 1\\ 3 & 4 & 1 & 2\\ 4 & 1 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 & 4\\ 14 & 6 & 10 & 10\\ 3 & 4 & 1 & 2\\ 4 & 1 & 2 & 3 \end{bmatrix} \cdot \) Notice how only the y-components of each column vector change, since it's the only direction translated. The function that we use to translate matrices is `TranslateMatrix`, which takes in a pointer to the matrix to translate, and x-, y-, and z-components that make up the translation stored in the translation column vector of the matrix.

### Scaling Matrix

The matrix used for scaling transformations should look very familiar by now: \( \mathbf{S}=\begin{bmatrix} Sx & 0 & 0 & 0 \\ 0 & Sy & 0 & 0 \\ 0 & 0 & Sz & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) This matrix looks remarkably similar to the identity matrix described earlier, only in this matrix the values on the main diagonal are scaling factors. This means that a scaling matrix with all of its scaling factors set to one is equal to an identity matrix, and will not change its multiplicand. Scaling factors greater than one scale outward (expansion), factors between zero and one scale inward (contraction), and negative factors mirror the geometry about the corresponding axis.
To scale in a single direction, simply set the directions you do not wish to scale to one (as with an identity matrix) and scale the remaining directions. To scale an entire matrix by a factor of two in all directions, we would apply the following transformation: \( \begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 & 4\\ 2 & 3 & 4 & 1\\ 3 & 4 & 1 & 2\\ 4 & 1 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 & 8\\ 4 & 6 & 8 & 2\\ 6 & 8 & 2 & 4\\ 4 & 1 & 2 & 3 \end{bmatrix}\cdot \) The function that we use to scale is `ScaleMatrix`, which takes in a pointer to the matrix to scale, and x-, y-, and z-components that define the scaling vector. ### Rotation To rotate, we use three separate matrices, each of which defines a rotation about the x-, y-, or z-axis, respectively. Let’s look at the matrix required to rotate a point about its x-axis: \( R_{x}(\theta)=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) If you look closely, you’ll notice that this transformation does not affect the x-components of the column vectors or the x-column vector itself. This is because we rotate about an axis, visualized by rolling the axis of rotation between your fingers, so only the y- and z-vectors change direction: The same principle applies to the rotation about the y-axis: \( R_{y}(\theta)=\begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) As well as the z-axis: \( R_{z}(\theta)=\begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) One thing to keep in mind when using rotation matrices is the order in which the rotations are applied to the point. A rotation about the x-axis followed by a rotation about the z-axis is not the same as a rotation about the z-axis followed by a rotation about the x-axis. Take for example the following matrix R: \( \mathbf{R} = \begin{bmatrix} 5 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 \\ 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) When we rotate R 45-degrees about the y-axis, followed by a 90-degree rotation about the x-axis, the resulting matrix is: \( \mathbf{R’} = \begin{bmatrix} 3.53 & -3.53 & 1.54 & 0 \\ 0 & 0 & -6 & 0 \\ 4.94 & 4.94 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) Whereas a 90-degree rotation about the x-axis followed by a 45-degree rotation about the y-axis would result in this matrix: \( \mathbf{R’} = \begin{bmatrix} 3.53 & 0 & -3.53 & 0 \\ -4.24 & 0 & -4.24 & 0 \\ 0 & 7 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\cdot \) We represent each of these rotational transformations with its own function in `Utils.h`: `RotateAboutX`, `RotateAboutY`, and `RotateAboutZ`, which all take in a pointer to the matrix to rotate as well as an angle of rotation in radians. ### Projection Matrices The last transformation we’ll discuss is very different from the previously mentioned ones. Its purpose is to project points onto a two-dimensional plane instead of transforming points in a three-dimensional world. Because of this behavior, we call this type of matrix a projection matrix. There are two major types of projections used in three-dimensional computer graphics, namely orthogonal projection and perspective projection. 
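Before looking at those two projection types in detail, here is a small sketch (again a stand-alone test driver, not part of the chapter's project) that demonstrates the order-dependence of rotations described above, using only the helpers from `Utils.c`: applying the same two rotations in opposite orders produces different matrices.

```
#include <stdio.h>
#include "Utils.h"

/* Sketch: rotate the identity about Y then X, and about X then Y,
   then compare the two results element by element. */
int main(void)
{
  Matrix a = IDENTITY_MATRIX;
  Matrix b = IDENTITY_MATRIX;
  int i, differ = 0;

  RotateAboutY(&a, DegreesToRadians(45));
  RotateAboutX(&a, DegreesToRadians(90));

  RotateAboutX(&b, DegreesToRadians(90));
  RotateAboutY(&b, DegreesToRadians(45));

  for (i = 0; i < 16; ++i)
    if (a.m[i] != b.m[i])
      differ = 1;

  printf("The two rotation orders %s.\n",
    differ ? "produce different matrices" : "happen to agree");

  return 0;
}
```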
Orthogonal projection (also called parallel projection) doesn't apply foreshortening to lines, meaning that lines don't converge towards a point as in real life. With orthogonal projection, parallel lines will remain parallel, and will never intersect. This type of transformation is very useful in three-dimensional modeling programs where many objects at various distances display in a single viewport but require the same vertex transformations executed simultaneously. Perspective projection, on the other hand, mimics the real-life visual effect of foreshortening where objects at a distance appear smaller than objects nearby. This means that parallel lines will eventually intersect at a vanishing point, much like train tracks running off towards the horizon.

### Perspective Projection

The function we use for perspective projection is `CreateProjectionMatrix`, which takes in the following parameters:

• `fovy`, which represents the vertical field-of-view angle in degrees (the implementation converts it to radians internally); this is the angle between the top plane of the view frustum (see below) and the bottom plane
• `aspect_ratio`, which is the ratio of the width to the height of the viewport
• `near_plane`, which is the distance from the eye to the near plane
• `far_plane`, which is the distance from the eye to the far plane

If you've used OpenGL's fixed functionality, you'll probably recognize that this function is very similar to `gluPerspective`, and in fact, `CreateProjectionMatrix` mimics the behavior of this function exactly. We use this single matrix to convert the vertices from eye space to clip space as well as from clip space to normalized device space.

The parameters above describe a so-called viewing frustum (also called a viewing volume), used to determine which points to project onto the viewing plane:

[Figure: the viewing frustum; its outlines are drawn in bold]

A frustum is a pyramid-like shape with its top cut off. The near and far planes have the same aspect ratio as your viewport. Because of this preservation of aspect ratio, your geometry will no longer look stretched when the window resizes as in the previous chapters. Projective transformation applies only to those points that fall within the viewing frustum, meaning that we clip the points that lie outside of the frustum. To obtain normalized device coordinates, we map the entire viewing frustum to an axis-aligned cube measuring 2x2x2 units, located at the world's origin. This cube exists in order to facilitate the projection of the vertices onto the projection plane by OpenGL through parallel projection. After the vertices are in normalized device space, OpenGL is ready to use these points for rasterization.

Geometrical Concepts: When we speak of the viewing frustum and the normalized device coordinates cube, it's important to note that we don't generate actual geometry for these shapes. The planes of the frustum and the sides of the cube simply represent minimum and maximum boundaries for the vertices.

In most cases, you don't have to know the specifics of the projection matrix since the implementation rarely ever changes: simply define the code once and copy it into all of your projects. Until the OpenGL 3 version branch, the generation of the perspective projection matrix was part of OpenGL through the `glFrustum` function or GLU's `gluPerspective`.
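Where fixed-function code would once have called `gluPerspective`, the equivalent with our helper is a single call. Below is a sketch; the function name `UpdateProjection` is made up for illustration, and the values match the ones the chapter's `ResizeFunction` uses further down, assuming a global `ProjectionMatrix` as declared later in this chapter:

```
/* Sketch: a core-profile replacement for gluPerspective(60.0, aspect, 1.0, 100.0),
   using the CreateProjectionMatrix helper from Utils.c. fovy is given in degrees,
   just as it was for gluPerspective. */
void UpdateProjection(int Width, int Height)
{
  ProjectionMatrix = CreateProjectionMatrix(
    60,                       /* vertical field of view, in degrees */
    (float)Width / Height,    /* viewport aspect ratio              */
    1.0f,                     /* near plane distance                */
    100.0f                    /* far plane distance                 */
  );
}
```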
To apply perspective projection transformations, we use the following matrix: \( \mathbf{P}=\begin{bmatrix} xScale & 0 & 0 & 0\\ 0 & yScale & 0 & 0\\ 0 & 0 & -\frac{zFar+zNear}{zFar - zNear} & -\frac{2 \cdot zNear \cdot zFar}{zFar - zNear} \\ 0 & 0 & -1 & 0 \end{bmatrix}\cdot \) Here $$yScale = \cot (\frac{fovy}{2})$$ and $$xScale = \frac{yScale}{aspect}$$. See the `CreateProjectionMatrix` function in the file `Utils.c` for implementation details.

# Drawing a Cube

Now that you have a basic knowledge of transformations, let's apply them and draw a rotating cube to the screen. Once again, the program that we created in chapter one serves as the basis for this exercise, so copy `Chapter1.c` (or `Chapter1.3.c` if you're getting it from the source code repository) to a new file called `Chapter4.1.c`.

First, remove all of the `#include` directives from the file, and replace them with a single `#include` to `Utils.h`:

`#include "Utils.h"`

As with each chapter, update `WINDOW_TITLE_PREFIX` to reflect the current chapter:

`#define WINDOW_TITLE_PREFIX "Chapter 4"`

After the `FrameCount` variable, declare the following block of variables:

```
GLuint
  ProjectionMatrixUniformLocation,
  ViewMatrixUniformLocation,
  ModelMatrixUniformLocation,
  BufferIds[3] = { 0 },
  ShaderIds[3] = { 0 };
```

Underneath that, add the following matrices:

```
Matrix
  ProjectionMatrix,
  ViewMatrix,
  ModelMatrix;
```

Right below that, add the following variable declarations:

```
float CubeRotation = 0;
clock_t LastTime = 0;
```

Underneath the declaration of the `IdleFunction` function, insert the following new function declarations:

```
void CreateCube(void);
void DestroyCube(void);
void DrawCube(void);
```

Inside of the `Initialize` function definition, make the following function call right above the function call to `glClearColor`:

```
glGetError();
```

Then, right underneath the call to `glClearColor`, insert the following lines:

```
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
ExitOnGLError("ERROR: Could not set OpenGL depth testing options");

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
ExitOnGLError("ERROR: Could not set OpenGL culling options");

ModelMatrix = IDENTITY_MATRIX;
ProjectionMatrix = IDENTITY_MATRIX;
ViewMatrix = IDENTITY_MATRIX;
TranslateMatrix(&ViewMatrix, 0, 0, -2);

CreateCube();
```

Inside of the `InitWindow` function definition, right after the function call to `glutCloseFunc`, place the following line:

```
glutCloseFunc(DestroyCube);
```

Next, inside of the `ResizeFunction` function definition, underneath the call to `glViewport`, insert the following lines:

```
ProjectionMatrix =
  CreateProjectionMatrix(
    60,
    (float)CurrentWidth / CurrentHeight,
    1.0f,
    100.0f
  );

glUseProgram(ShaderIds[0]);
glUniformMatrix4fv(ProjectionMatrixUniformLocation, 1, GL_FALSE, ProjectionMatrix.m);
glUseProgram(0);
```

In the `RenderFunction` function definition, right after the call to `glClear`, place the following function call:

```
DrawCube();
```

We'll be entering the following function definition piece by piece in logical steps.
First, create the following empty function definition:

```
void CreateCube(void)
{
}
```

At the first line of the function, insert the cube's vertex definitions:

```
const Vertex VERTICES[8] =
{
  { { -.5f, -.5f,  .5f, 1 }, { 0, 0, 1, 1 } },
  { { -.5f,  .5f,  .5f, 1 }, { 1, 0, 0, 1 } },
  { {  .5f,  .5f,  .5f, 1 }, { 0, 1, 0, 1 } },
  { {  .5f, -.5f,  .5f, 1 }, { 1, 1, 0, 1 } },
  { { -.5f, -.5f, -.5f, 1 }, { 1, 1, 1, 1 } },
  { { -.5f,  .5f, -.5f, 1 }, { 1, 0, 0, 1 } },
  { {  .5f,  .5f, -.5f, 1 }, { 1, 0, 1, 1 } },
  { {  .5f, -.5f, -.5f, 1 }, { 0, 0, 1, 1 } }
};
```

Right after that, insert the cube's index definitions:

```
const GLuint INDICES[36] =
{
  0,2,1, 0,3,2,
  4,3,0, 4,7,3,
  4,1,5, 4,0,1,
  3,6,2, 3,7,6,
  1,6,5, 1,2,6,
  7,5,6, 7,4,5
};
```

After that, place the following shader-program creation code:

```
ShaderIds[0] = glCreateProgram();
ExitOnGLError("ERROR: Could not create the shader program");
```

Immediately underneath that, place the following shader loading and attaching code:

```
ShaderIds[1] = LoadShader("SimpleShader.fragment.glsl", GL_FRAGMENT_SHADER);
ShaderIds[2] = LoadShader("SimpleShader.vertex.glsl", GL_VERTEX_SHADER);
glAttachShader(ShaderIds[0], ShaderIds[1]);
glAttachShader(ShaderIds[0], ShaderIds[2]);

glLinkProgram(ShaderIds[0]);
ExitOnGLError("ERROR: Could not link the shader program");
```

After that, insert the code to retrieve the shader uniforms:

```
ModelMatrixUniformLocation = glGetUniformLocation(ShaderIds[0], "ModelMatrix");
ViewMatrixUniformLocation = glGetUniformLocation(ShaderIds[0], "ViewMatrix");
ProjectionMatrixUniformLocation = glGetUniformLocation(ShaderIds[0], "ProjectionMatrix");
ExitOnGLError("ERROR: Could not get the shader uniform locations");
```

Insert the following VAO generation and binding code after that:

```
glGenVertexArrays(1, &BufferIds[0]);
ExitOnGLError("ERROR: Could not generate the VAO");
glBindVertexArray(BufferIds[0]);
ExitOnGLError("ERROR: Could not bind the VAO");
```

After that, enable the following vertex attribute locations:

```
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
ExitOnGLError("ERROR: Could not enable vertex attributes");
```

Insert the following buffer generation, VBO binding, data uploading, and vertex attribute descriptions after that (note that the two buffer objects must be generated with `glGenBuffers` before they can be bound):

```
glGenBuffers(2, &BufferIds[1]);
ExitOnGLError("ERROR: Could not generate the buffer objects");

glBindBuffer(GL_ARRAY_BUFFER, BufferIds[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(VERTICES), VERTICES, GL_STATIC_DRAW);
ExitOnGLError("ERROR: Could not bind the VBO to the VAO");

glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(VERTICES[0]), (GLvoid*)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(VERTICES[0]), (GLvoid*)sizeof(VERTICES[0].Position));
ExitOnGLError("ERROR: Could not set VAO attributes");
```

For the final lines of the function, insert the IBO binding and uploading code:

```
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, BufferIds[2]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(INDICES), INDICES, GL_STATIC_DRAW);
ExitOnGLError("ERROR: Could not bind the IBO to the VAO");

glBindVertexArray(0);
```

Immediately after the `CreateCube` function definition, insert the following new function definition:

```
void DestroyCube(void)
{
  glDetachShader(ShaderIds[0], ShaderIds[1]);
  glDetachShader(ShaderIds[0], ShaderIds[2]);
  glDeleteShader(ShaderIds[1]);
  glDeleteShader(ShaderIds[2]);
  glDeleteProgram(ShaderIds[0]);
  ExitOnGLError("ERROR: Could not destroy the shaders");

  glDeleteBuffers(2, &BufferIds[1]);
  glDeleteVertexArrays(1, &BufferIds[0]);
  ExitOnGLError("ERROR: Could not destroy the buffer objects");
}
```

The last function we'll define draws the cube to the screen.
Insert the following empty function definition immediately after the `DestroyCube` function definition:

```
void DrawCube(void)
{
}
```

At the first line of the function, insert the following lines used for time-based rotations:

```
float CubeAngle;
clock_t Now = clock();
if (LastTime == 0)
  LastTime = Now;

CubeRotation += 45.0f * ((float)(Now - LastTime) / CLOCKS_PER_SEC);
CubeAngle = DegreesToRadians(CubeRotation);
LastTime = Now;
```

After that, insert the following matrix transformations:

```
ModelMatrix = IDENTITY_MATRIX;
RotateAboutY(&ModelMatrix, CubeAngle);
RotateAboutX(&ModelMatrix, CubeAngle);
```

Immediately after that, insert the following shader-related function calls:

```
glUseProgram(ShaderIds[0]);
ExitOnGLError("ERROR: Could not use the shader program");

glUniformMatrix4fv(ModelMatrixUniformLocation, 1, GL_FALSE, ModelMatrix.m);
glUniformMatrix4fv(ViewMatrixUniformLocation, 1, GL_FALSE, ViewMatrix.m);
ExitOnGLError("ERROR: Could not set the shader uniforms");
```

Finally, insert the last lines of this function used for drawing purposes:

```
glBindVertexArray(BufferIds[0]);
ExitOnGLError("ERROR: Could not bind the VAO for drawing purposes");

glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, (GLvoid*)0);
ExitOnGLError("ERROR: Could not draw the cube");

glBindVertexArray(0);
glUseProgram(0);
```

Next, create a text file called `SimpleShader.vertex.glsl`, and insert the following lines:

```
#version 400

layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
out vec4 ex_Color;

uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;

void main(void)
{
  gl_Position = (ProjectionMatrix * ViewMatrix * ModelMatrix) * in_Position;
  ex_Color = in_Color;
}
```

Lastly, create another text file called `SimpleShader.fragment.glsl` containing the following lines:

```
#version 400

in vec4 ex_Color;
out vec4 out_Color;

void main(void)
{
  out_Color = ex_Color;
}
```

After you compile your program, make sure that these GLSL files are in the same directory as your executable before you run the executable. When you do, the output should show a spinning cube on your screen.

## Step-By-Step

There are many changes from the previous chapters in this chapter, so let's look at what just happened. The first thing we did is the same as we do for each chapter, which is to include files and change the window title's prefix. However, in this chapter, we replaced all of the `#include`s with a single include to `Utils.h`. Before we continue describing the code, let's introduce a few new concepts.

### GLSL Uniforms

In the previous chapters, we've learned that we can declare variables in our GLSL shaders, but haven't actually imported any data besides our usual vertex information. In this chapter, we need a way to transport our matrices to the vertex shader, and we do so by using so-called uniforms. Uniforms are global variables stored inside of the shader program, changeable at any time during the lifecycle of the shader program. Once a uniform is set, it remains set until another value is set or the program ends. Uniforms can be any basic GLSL data type; in our case they are of type `mat4`, which represents 4×4 matrices. In a future chapter on shaders, we'll discuss more GLSL data types.

#### Retrieving Uniform Locations

Each uniform has a specific location, and after a GLSL shader program links, these locations become available to OpenGL.
The locations are accessible through their names by calling the OpenGL function `glGetUniformLocation`:

`GLint glGetUniformLocation(GLuint program, const GLchar* name);`

• The `program` parameter takes in the shader program's identifier as generated by the call to `glCreateProgram`
• The `name` parameter takes in the name of the uniform variable as defined in the GLSL source code as a regular character string

The function returns the integer uniform location, which we store in our `ModelMatrixUniformLocation`, `ViewMatrixUniformLocation`, and `ProjectionMatrixUniformLocation` variables.

#### Setting Uniform Data

Once we have these locations, we can start uploading our matrices to the GPU. To do so, there are many `glUniform` functions, one for each basic GLSL data type. The one we use in our program is `glUniformMatrix4fv`, which allows us to upload a 4×4 floating-point matrix:

```
void glUniformMatrix4fv(
  GLint location,
  GLsizei count,
  GLboolean transpose,
  const GLfloat* value
);
```

• The `location` parameter takes in the location of the uniform as queried by the `glGetUniformLocation` function described above
• The `count` parameter takes in the number of matrices passed into this function: 1 for a single matrix, greater than 1 for an array of matrices
• The `transpose` parameter takes in `GL_FALSE` if the matrix is in column-major order (as in our case), and `GL_TRUE` if the matrix is in row-major order and requires transposing by OpenGL
• The `value` parameter takes in a pointer to the location in memory of the first element of the array to upload to the GPU

### Matrices in OpenGL

We represent a matrix as an array of sixteen floating-point numbers. The identity matrix in C looks like the following piece of code:

`float Identity[16] = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };`

If we lay it out in a bit more readable format (as in `Utils.h`), we can see the layout of the matrix:

```
float Identity[16] = {
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1
};
```

The matrix's column vectors occupy contiguous memory, meaning that the x-column vector occupies indices 0 through 3, the y-column vector occupies indices 4 through 7, and so on. In our programs, we'll be using a structure containing one of these arrays named `Matrix`, defined in the file `Utils.h`. All of our helper functions operate on this structure, not a raw array of floating point numbers.

In this program, we store our matrices in three global variables named `ModelMatrix` for the cube's local transformations, `ViewMatrix` for the camera/eye transformation, and `ProjectionMatrix` for our perspective projection transformation. These variables have the exact same names as the uniform variables in our vertex shader, described further on in this chapter.

## Step-By-Step Continued…

The next set of global variables that we added after our matrices and uniform locations consists of a floating-point number named `CubeRotation` and a `clock_t` variable named `LastTime`. We use both of these variables to rotate the cube in the `DrawCube` function, which we will describe a bit later on in the chapter. We added a few more function declarations, and after that edited the `Initialize` function.

### New Initialization Code (`Initialize`)

While much of the code in the `Initialize` function remained untouched, we added some crucial new function calls.

#### Depth Testing

Immediately after the call to `glClearColor`, we called the function `glEnable`, which allows us to enable certain OpenGL capabilities.
The capability we wish to enable is the only parameter we pass to this function. To disable the capability, simply call `glDisable` with the same flag, which we do not use in our program. The flag we pass into `glEnable` is `GL_DEPTH_TEST`, which allows OpenGL to compare the depth values of fragments after rasterization.

Besides having the `GL_DEPTH_TEST` capability enabled, OpenGL also needs to know exactly how to compare the fragments. We do this with a call to `glDepthFunc`, which defines when to store a fragment in the depth buffer, or when to discard it, by passing in one of the following enumerations:

• `GL_NEVER`: a fragment never passes the depth test.
• `GL_LESS`: a fragment passes when its depth value is less (closer to the camera) than the fragment currently stored in the depth buffer. We use this enumeration in our program.
• `GL_EQUAL`: a fragment passes the depth test when its depth value is equal to the one currently stored in the depth buffer.
• `GL_LEQUAL`: a fragment passes the depth test when its depth value is less or equal to the one currently stored in the depth buffer.
• `GL_GREATER`: the opposite of `GL_LESS`.
• `GL_NOTEQUAL`: the opposite of `GL_EQUAL`.
• `GL_GEQUAL`: a fragment passes the depth test when its depth value is greater or equal to the one currently stored in the depth buffer.
• `GL_ALWAYS`: the opposite of `GL_NEVER`.

#### OpenGL Error Checking

After the call to `glDepthFunc`, we call another new function, namely our custom `ExitOnGLError` function. This function, as its name suggests, checks for an OpenGL error, and if one is present, immediately exits the program. A single parameter, `error_message`, is required that contains the error message to display to the user if the program exits erroneously.

#### Polygon Culling

This next concept is very important from this chapter onwards. Whereas in previous chapters we didn't care which way we constructed our vertices, with polygon culling, a method that reduces the amount of polygons to render, it becomes critical. For instance, if we didn't specify polygon culling in this chapter, rendering would also occur on the insides of the cube, even though they are not visible.

The first function call is once again `glEnable`, this time passing in the `GL_CULL_FACE` flag to enable the polygon culling capability on the graphics hardware. The next function is `glCullFace`, which defines which face of the polygon to cull. We can cull either the front-face of the polygon by supplying the function with the `GL_FRONT` enumeration, the back-face of the polygon with `GL_BACK`, or both faces with `GL_FRONT_AND_BACK`.

Yet, how do you determine which face is the front-face and which face is the back-face of a polygon? The answer is by defining in which direction a polygon's vertices wind, either clockwise or counterclockwise. Vertex winding defines the path OpenGL takes to complete the polygon, starting at the first vertex, then the second vertex, and so on until the polygon is closed. We specify this direction with a call to `glFrontFace`, which defines the direction in which the vertices of a polygon wind. In our case, this is counterclockwise, specified by the `GL_CCW` enumeration; its opposite is `GL_CW` for clockwise winding vertices.

#### Matrix Initialization

After the OpenGL options are set, we initialize the matrices by setting them to the identity matrix. The view matrix, which describes the eye transformations, is translated two units into the negative z direction (backwards) so that the camera won't intersect the cube.
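To see how this initialization controls where the camera ends up, here is a small variation (a sketch only; the chapter's actual code uses a single translation of -2 along the z-axis). Moving the translation further along the negative z-axis backs the camera away from the cube, and adding a y-component shifts it vertically:

```
/* Sketch: an alternative view-matrix setup. Translating the view matrix by
   (0, -0.5, -4) is equivalent to placing the camera half a unit up and four
   units back from the world origin where the cube sits. */
ViewMatrix = IDENTITY_MATRIX;
TranslateMatrix(&ViewMatrix, 0, -0.5f, -4);
```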
### Creating the Cube (`CreateCube`)

We didn't encounter any new concepts in the `CreateCube` function; we still generate a bunch of buffers for vertex and index data and push them to the GPU. The only differences from the previous chapter are the retrieval of the shader uniforms described earlier and a new custom function named `LoadShader`.

#### Loading Shaders from Files

Previously, we hard-coded our GLSL shaders into our code as one giant constant string; from now on, we'll use the `LoadShader` function to load them from text files instead. Its prototype looks like this:

`GLuint LoadShader(const char* filename, GLenum shader_type);`

• The `filename` parameter takes in the filename of the GLSL shader to read
• The `shader_type` parameter takes in the `GLenum` that we used to pass into `glCreateShader`; in this program we use `GL_FRAGMENT_SHADER` and `GL_VERTEX_SHADER`

The function reads the contents from the file, generates a shader identifier, passes the contents of the file into `glShaderSource`, and compiles the shader. The function returns the identifier generated by the call to `glCreateShader`.

### Drawing the Cube (`DrawCube`)

The first section of the `DrawCube` function deals with generating a rotation angle based on the amount of time that has passed. The rotation we use in this sample is 45 degrees per second, which we achieve by checking the number of clock ticks that have passed since the last time the `DrawCube` function executed. The previous clock ticks are stored in the global variable `LastTime`, whereas the current ticks are stored in the local variable named `Now`. The total rotation in degrees is stored in the global variable named `CubeRotation`. However, in order to use this rotation in OpenGL, we need to convert the degrees to radians, done through a function call to our custom function `DegreesToRadians`. This value is then stored in the local `CubeAngle` variable and used to apply rotational transformations to the model matrix by calling `RotateAboutY`, followed by a call to `RotateAboutX`.

Unlike in the previous chapters, we enable and disable the shader program as well as the VAO each draw call instead of just once to demonstrate how a scene with multiple objects is usually drawn. We'll introduce multiple objects to our scene in a future chapter, although there is nothing special about it, and by now, you should be able to work out how to do this from the information presented throughout the chapters (a small sketch follows at the end of this section).

### Vertex Shader Changes

Besides changes to the C code, there were also some minor changes to the vertex shader from the previous chapter to the current one. The first major change is the addition of the uniform variables described earlier in the chapter. In the GLSL source code, they look like this:

```
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
```

As mentioned earlier, the matrices in the shader have the exact same names as the ones in the source code. This is not a requirement, but simply to show the relationship between the two. The final notable change to the vertex shader is the calculation of the final vertex position:

`gl_Position = (ProjectionMatrix * ViewMatrix * ModelMatrix) * in_Position;`

As you can see, the transformation for the vertex happens by multiplying the matrices together to form one transformation matrix (the multiplication between the parentheses), and multiplying `in_Position` with it to obtain the coordinates passed on to OpenGL.
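Returning to the note about multiple objects above, here is a sketch of how `DrawCube` could be extended to render the same geometry twice: while the shader program and VAO are still bound, upload a different `ModelMatrix` and issue another draw call. The particular offset and rotation chosen here are arbitrary illustration values:

```
/* Sketch: after the first glDrawElements call in DrawCube, draw a second,
   translated copy of the cube by uploading a different model matrix and
   drawing the same index buffer again. */
ModelMatrix = IDENTITY_MATRIX;
TranslateMatrix(&ModelMatrix, 1.5f, 0, 0);   /* move the copy to the right */
RotateAboutY(&ModelMatrix, -CubeAngle);      /* spin it the other way      */
glUniformMatrix4fv(ModelMatrixUniformLocation, 1, GL_FALSE, ModelMatrix.m);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, (GLvoid*)0);
ExitOnGLError("ERROR: Could not draw the second cube");
```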
# Conclusion

In this chapter, we rendered our first three-dimensional geometry onto the screen and learned about the basic transformations used in computer graphics. If you're fuzzy on matrix mathematics or linear algebra in general, here are a few resources to get you up to speed since the topic is too broad to handle on this site:

• The Khan Academy, tons of great video tutorials on mathematics.
• The matrix and quaternions FAQ.
• Any search result page on Google for "linear algebra," "matrix math," or "3D math" will do.

Until the next chapter is ready, try the following exercises:

• Create a keyboard-movable camera by transforming the `ViewMatrix` matrix through keyboard input (see chapter 3 for FreeGLUT keyboard input), copied to the GPU each time `DrawCube` executes; a sketch follows after this list.
• Work out how to draw multiple cubes onto the screen.
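For the first exercise, a minimal sketch of a keyboard handler follows. The function name `KeyboardFunction` and the key bindings are illustrative assumptions; the handler would be registered with `glutKeyboardFunc` inside `InitWindow`, as shown in chapter 3:

```
/* Sketch: move the camera along the z-axis with the W and S keys by
   translating ViewMatrix. DrawCube already uploads ViewMatrix to the GPU
   every frame, so no extra glUniformMatrix4fv call is needed here. */
void KeyboardFunction(unsigned char Key, int X, int Y)
{
  switch (Key)
  {
  case 'w':
    TranslateMatrix(&ViewMatrix, 0, 0, 0.1f);   /* world toward the camera: move forward */
    break;
  case 's':
    TranslateMatrix(&ViewMatrix, 0, 0, -0.1f);  /* world away from the camera: move back */
    break;
  default:
    break;
  }
}
```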
http://mathematica.stackexchange.com/questions/tagged/pattern-matching+syntax
# Tagged Questions 2answers 114 views ### Write a function that returns the logarithmic derivative How can we write a function that if we input an expression f, it returns the log derivative $\frac{1}{f} \frac{df}{dx}$. We have to use a conditional or pattern test so that the function accepts any ... 4answers 227 views ### Calling Table with custom iterator I often find myself in situations where I, for example, need to build a table for some expression, but want to set the number of points rather then the step size, so the code ends up looking like this ... 1answer 221 views ### What does the slash-colon symbol do? I came across a bit of code that uses the syntax /: and I don't know what it does. I can't find its documentation, or maybe I'm just not looking properly. The code ... 1answer 250 views ### Path queries for tree-structured data Can anyone suggest documentation or tutorials for developing path queries and indices for (XML-like) tree-structured data? Suppose data is organized hierarchically in key->value pairs, eg: ... 1answer 75 views ### How do you set an Optional parameter with a global variable on a Function defined in a Package In a Package I am writing, I'm trying to define a function with an Optional parameter in it that is set to a global variable. ... 1answer 253 views ### Why doesn't PatternTest work with Composition? While playing around with the solutions to this question, I've found some very strange behaviour: ...
http://mathhelpforum.com/discrete-math/132527-segments-posets.html
Thread:

1. Segments, posets

Prove that for all $\alpha$ in $\omega_1$, $\text{seg}_{\omega_1}(\alpha) +_o \omega_1 =_o \omega_1$ and $\text{seg}_{\omega_1}(\alpha) \cdot_o \omega_1 =_o \omega_1$.

Notation: $\omega_1$ denotes the first uncountable ordinal. $\text{seg}$ above denotes the initial segment, $\cdot_o$ denotes the product of posets, $+_o$ denotes the sum of posets. $U <_o V \Leftrightarrow (\exists x \in V ) [U =_o \text{seg}_V(x)]$. $=_o$ denotes order isomorphic.
http://physics.stackexchange.com/questions/30106/is-there-any-uncertainty-on-the-free-particle-with-a-definite-momentum-vec-p
is there any uncertainty on the free particle with a definite momentum $\vec p$? The probability amplitude for a free particle with momentum $\ p$ and energy $E$ is the complex wave function: $$\psi_{(\vec x , t)}=e^{i(\vec k\cdot \vec x -\omega t)}$$ is there any uncertainty on the free particle with a definite momentum $\vec p$!? - Uncertainty in what? – Mark Eichenlaub Jun 15 '12 at 0:00 – user8224 Jul 22 '12 at 18:43

2 Answers

The question is a bit more subtle than you (probably) think. The wavefunction you've given is for an infinite plane wave with a perfectly defined momentum. Because the wave is infinite, the probability of finding the particle is the same everywhere, i.e. the uncertainty in the particle position is infinite. This is the same result you get by setting $\sigma_p$ to zero in the uncertainty relation dmckee gave in his answer.

If you want a wavefunction for a particle with a position you can build one by adding up a number of plane waves with different values of momentum, $p$. The example usually given in QM courses is to make the particle wavefunction a gaussian curve. You may think it's hard to make a gaussian by adding up plane waves, but actually it's very easy. There's a mathematical procedure called a Fourier transform that you can use to disassemble a gaussian into its component plane waves. If you're interested, this calculation is given in detail here.

With a gaussian we normally describe its width by the width at 1/e of the peak height, and this width is known as $\sigma$. If you use a gaussian with a width $\sigma_x$ to describe your particle, then the Fourier transform gives a distribution of momenta that is a gaussian of width $\sigma_p$, and as the calculation I linked to shows, $\sigma_x$ and $\sigma_p$ are related by: $$\sigma_p \sigma_x = \frac{\hbar}{2}$$ exactly as the uncertainty relation requires. -

Yes there is. If you attempt to compute the expectation value of the particle's position at a particular time you are going to run into big trouble. You'll find it has an infinite range of possible positions. $$\sigma_p \sigma_x \ge \frac{\hbar}{2}$$ If you send $\sigma_p$ toward zero... -
http://math.stackexchange.com/questions/181694/surface-integral
# Surface integral

Without getting into the whole question, I was asked to evaluate a surface integral $\iint\limits_S f(x,y,z)\, da$ where S is the cylinder $x^2 + y^2 = x$ between $z=a$ and $z=b$.

Now normally I would parametrize this as a cylinder and it would be easy peasy, but I'm worried about the equation of the cylinder, as my normal equation would be more like $x^2 + y^2 = C$ with C being some constant. Any thoughts? Thanks a lot.

- $x^2+y^2=x$ is the equation of a circle, get center and radius. – enzotib Aug 12 '12 at 14:48

## 1 Answer

Your cylinder is offset from the axis. $x^2+y^2=x$ becomes $(x-\frac 12)^2+y^2=\frac 14$. You can substitute $u=x-\frac 12$ to get back on axis if you want.

- thanks boss .. concise and complete :) – Ronald Aug 12 '12 at 15:00
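For completeness, one explicit parametrization consistent with the answer (an added illustration, not part of the original thread): since the cylinder is $(x-\frac12)^2+y^2=\frac14$, a circle of radius $\frac12$ centered at $(\frac12,0)$, one can take
$$x=\tfrac12+\tfrac12\cos\theta,\qquad y=\tfrac12\sin\theta,\qquad z=z,\qquad \theta\in[0,2\pi),\ z\in[a,b],$$
with surface element $da=\tfrac12\,d\theta\,dz$ (the radius times $d\theta\,dz$).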
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944622814655304, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/50137?sort=newest
## Random walks on graphs: Cover time and blanket time

Winkler and Zuckerman conjectured that the blanket time is within a constant factor of the cover time. The conjecture was recently proved. The cover time $C$ is the expectation of the first time $t$ that the walk has seen every vertex. The blanket time $B_\delta$, where $0<\delta<1$ is some constant, is the expectation of the first time $t$ such that each vertex $v$ has been visited at least $\delta \pi_v t$ times. That is, it is the expected time for all the vertices to have been seen roughly as expected by the stationary distribution. So their now-proven conjecture was that $B_\delta \leq a C$ where $a$ is some constant.

One remark in their paper that I can't see the justification of is the claim that this implies that the expectation of the first time that each vertex $v$ has been visited $\pi_vC$ times is $O(C)$. I was wondering if anyone can offer some insight. The remark is near the bottom of page 3 in their paper http://www.cs.utexas.edu/~diz/pubs/blanket.ps

For what it's worth, this question is related to another question I asked here http://mathoverflow.net/questions/50110/a-type-of-stochastic-jump-process

- That's interesting. Is $a$ independent of $\delta$? – Tom LaGatta Dec 22 2010 at 9:43
- No, $a$ is dependent on $\delta$ – Probabilist Dec 22 2010 at 10:37

## 2 Answers

Suppose there exist $a$ and $\delta$ such that $B_\delta\le aC$ for all graphs. Fix a graph and start a random walk at an arbitrary vertex. Repeatedly wait for a $\delta$-blanketting and reset the visit counts until the total time taken exceeds $C/\delta$. The expected time for this is at most $(1/\delta+a)C$ because the initial blanketting phases take in total at most $C/\delta$ steps (deterministically) and the expected time of the final blanketting is (using the strong Markov property) at most $B_\delta$. In particular $T=O(C)$. By construction we have $T\ge C/\delta$. Also since $T$ is obtained by concatenating a sequence of $\delta$-blankettings, it is itself a $\delta$-blanketting. This means that each vertex $v$ is covered at least $\delta\pi_v T\ge \pi_v C$ times.

- The problem is that it doesn't seem to be true that the final round of blanketing actually has an expectation $O(C)$. What we are talking about is a series of jumps on the number line until we cross $K=C/\delta$, and we want to know what in the final jump, the value $t-K=O(C)$. However, as can be seen from the second counter example in the link to my previous question - this is not necessarily the case. – Probabilist Dec 22 2010 at 21:37

With probability at least $1/2$ the blanket time $B_{1/2}$ is at most $2aC$. It only takes a Geom$(1/2)$ such blocks to get $\pi_v C$ visits to each $v$.

- Omer, it takes $Geom(1/2)$ to blanket the graph, but that blanket could be less than $2aC$. – Probabilist Dec 22 2010 at 21:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9576865434646606, "perplexity_flag": "head"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_8&diff=31513&oldid=31512
# User:Michiexile/MATH198/Lecture 8

## Revision as of 02:00, 11 November 2009

IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.

### 1 Algebras over monads

We recall from the last lecture the definition of an Eilenberg-Moore algebra over a monad T = (T,η,μ):

Definition An algebra over a monad T in a category C (a T-algebra) is a morphism $\alpha\in C(TA, A)$, such that the diagrams below both commute:

While a monad corresponds to the imposition of some structure on the objects in a category, an algebra over that monad corresponds to some evaluation of that structure.

#### 1.1 Example: monoids

Let T be the Kleene star monad - the one we get from the adjunction of free and forgetful functors between Monoids and Sets. Then a T-algebra on a set A is equivalent to a monoid structure on A.
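Before spelling the correspondence out, here is the same statement in Haskell terms; this is a quick sketch that is not part of the original notes (the names `alpha` and `example` are ours):

```haskell
import Data.Monoid

-- For the list ("Kleene star") monad, a T-algebra structure on a monoid
-- is evaluation of formal words, i.e. mconcat.
alpha :: Monoid a => [a] -> a
alpha = mconcat

-- The two T-algebra laws specialise to familiar identities:
--   alpha [x]          == x                       (unit)
--   alpha (concat xss) == alpha (map alpha xss)   (multiplication)

example :: Sum Int
example = alpha [Sum 1, Sum 2, Sum 3]   -- Sum {getSum = 6}
```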
Indeed, if we have a monoid structure on A, given by $m:A^2\to A$ and $u:1\to A$, we can construct a T-algebra by

α([]) = u

$\alpha([a_1,a_2,\dots,a_n]) = m(a_1,\alpha([a_2,\dots,a_n]))$

This gives us, indeed, a T-algebra structure on A. Associativity and unity follow from the corresponding properties in the monoid.

On the other hand, if we have a T-algebra structure on A, we can construct a monoid structure by setting

u = α([])

m(a,b) = α([a,b])

It is clear that associativity of m follows from the associativity of α, and unitality of u follows from the unitality of α.

(((potential headache: shouldn't η show up more? Instead of the empty list, maybe?)))

#### 1.2 Example: Vector spaces

We have free and forgetful functors $Set \to^{free} k-Vect \to^{forgetful} Set$ forming an adjoint pair; where the free functor takes a set S and returns the vector space with basis S; while the forgetful functor takes a vector space and returns the set of all its elements.

The composition of these yields a monad T in Set taking a set S to the set of all formal linear combinations of elements in S. The monad multiplication takes formal linear combinations of formal linear combinations and multiplies them out:

3(2v + 5w) − 5(3v + 2w) = 6v + 15w − 15v − 10w = − 9v + 5w

A T-algebra is a map $\alpha: TA\to A$ that acts like a vector space in the sense that $\alpha(\sum\alpha_i(\sum\beta_jv_j)) = \alpha(\sum\alpha_i\beta_jv_j)$. We can define $\lambda\cdot v = \alpha(\lambda v)$ and v + w = α(v + w). The operations thus defined are associative, distributive, commutative, and everything else we could wish for in order to define a vector space - precisely because the operations inside TA are, and α is associative.

The moral behind these examples is that using monads and monad algebras, we have significant power in defining and studying algebraic structures with categorical and algebraic tools.

This paradigm ties in closely with the theory of operads - which has its origins in topology, but has come to good use within certain branches of universal algebra. A (non-symmetric) operad is a graded set $O = \bigoplus_i O_i$ equipped with composition operations $\circ_i: O_n\oplus O_m\to O_{n+m-1}$ that obey certain unity and associativity conditions. As it turns out, non-symmetric operads correspond to the summands in a monad with polynomial underlying functor, and from a non-symmetric operad we can construct a corresponding monad. The designator non-symmetric floats in this text to avoid dealing with the slightly more general theory of symmetric operads - which allow us to re-sort the input arguments, thus including the symmetrizer of a symmetric monoidal category in the entire definition.

To read more about these correspondences, I can recommend you start with the blog posts Monads in Mathematics here: [1]

### 2 Algebras over endofunctors

Suppose we started out with an endofunctor that is not the underlying functor of a monad - or an endofunctor for which we don't want to settle on a monadic structure. We can still do a lot of the Eilenberg-Moore machinery on this endofunctor - but we don't get quite the power of algebraic specification that monads offer us. At the core, here, lies the lack of associativity for a generic endofunctor - and algebras over endofunctors, once defined, will correspond to non-associative versions of their monadic counterparts.

Definition For an endofunctor $P:C\to C$, we define a P-algebra to be an arrow $\alpha\in C(PA,A)$.
A homomorphism of P-algebras $\alpha\to\beta$ is some arrow $f:A\to B$ such that the diagram below commutes:

This homomorphism definition does not need much work to apply to the monadic case as well.

#### 2.1 Example: Groups

A group is a set G with operations $u: 1\to G, i: G\to G, m: G\times G\to G$, such that u is a unit for m, m is associative, and i is an inverse. Ignoring for a moment the properties, the theory of groups is captured by these three maps, or by a diagram

We can summarize the diagram as $1+G+G\times G \mapsto^{[u,i,m]} G$ and thus recognize that groups are some equationally defined subcategory of the category of T-algebras for the polynomial functor $T(X) = 1 + X + X\times X$. The subcategory is full, since if we have two algebras $\gamma: T(G)\to G$ and $\eta: T(H)\to H$ that both lie within the subcategory that fulfills all the additional axioms, then certainly any morphism $\gamma\to\eta$ will be compatible with the structure maps, and thus will be a group homomorphism.

We shall denote the category of P-algebras in a category C by P − Alg(C), or just P − Alg if the category is implicitly understood.

This category is wider than the corresponding concept for a monad. We don't require the kind of associativity we would for a monad - we just lock down the underlying structure. This distinction is best understood with an example: The free monoid monad has monoids for its algebras. On the other hand, we can pick out the underlying functor of that monad, forgetting about the unit and multiplication. An algebra over this structure is a slightly more general object: we no longer require $(a\cdot b)\cdot c = a\cdot (b\cdot c)$, and thus, the theory we get is that of a magma. We have concatenation, but we can't drop the brackets, and so we get something more reminiscent of a binary tree.

### 3 Initial P-algebras and recursion

Consider the polynomial functor P(X) = 1 + X on the category of sets. Its algebras form a category, by the definitions above - and an algebra needs to pick out one special element 0, and one endomorphism T, for a given set.

What would an initial object in this category of P-algebras look like? It would be an object I equipped with maps $1 \to^o I \leftarrow^n I$. For any other pair of maps $a: 1\to X, s: X\to X$, we'd have a unique arrow $u: I\to X$ such that the diagram commutes, or in equations such that

u(o) = a

u(n(x)) = s(u(x))

Now, unwrapping the definitions in place, we notice that we will have elements $o, n(o), n(n(o)), \dots$ in I, and the initiality will force us to not have any other elements floating around. Also, initiality will prevent us from having any elements not in this minimally forced list.

We can rename the elements to form something more recognizable - by equating an element in I with the number of applications of n to o. This yields, for us, elements $0, 1, 2, \dots$ with one function that picks out the 0, and another that gives us the successor. This should be recognizable as exactly the natural numbers; with just enough structure on them to make the principle of mathematical induction work: suppose we can prove some statement P(0), and we can extend a proof of P(n) to P(n + 1). Then induction tells us that P(n) holds for all n. More importantly, recursive definitions of functions from natural numbers can be performed here by choosing an appropriate algebra to map to.
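In Haskell, this initial algebra and the recursion it supports can be sketched directly; the names `Nat`, `foldNat` and `toInteger'` below are ours, not from the notes:

```haskell
-- The initial algebra of P(X) = 1 + X as a data type.
data Nat = Zero | Succ Nat

-- Given any other P-algebra (an element z and an endomap s), the unique
-- algebra map out of Nat is exactly primitive recursion.
foldNat :: x -> (x -> x) -> Nat -> x
foldNat z _ Zero     = z
foldNat z s (Succ n) = s (foldNat z s n)

-- Example: mapping into the algebra (0, (+1)) on Integer counts the
-- number of applications of Succ.
toInteger' :: Nat -> Integer
toInteger' = foldNat 0 (+ 1)
```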
This correspondence between the initial object of P(X) = 1 + X and the natural numbers is the reason such an initial object in a category with coproducts and terminal objects is called a natural numbers object.

For another example, we consider the functor $P(X) = 1 + X\times X$.

Pop Quiz Can you think of a structure with this as its underlying defining functor?

An initial $1+X\times X$-algebra would be some diagram $1 \to^o I \leftarrow^m I\times I$ such that for any other such diagram $1 \to^a X \leftarrow^* X\times X$ we have a unique arrow $u:I\to X$ such that the diagram commutes.

Unwrapping the definition, working over Sets again, we find we are forced to have some element * , the image of o. Any two elements S,T in the set give rise to some (S,T), which we can view as being the binary tree

The same way that we could construct induction as an algebra map from a natural numbers object, we can use this object to construct a tree-shaped induction; and similarly, we can develop what amounts to the theory of structural induction using these more general approaches to induction.

#### 3.1 Example of structural induction

Using the structure of $1+X\times X$-algebras we shall prove the following statement:

Proposition The number of leaves in a binary tree is one more than the number of internal nodes.

Proof We write down the actual Haskell data type for the binary tree initial algebra.

```
data Tree = Leaf | Node Tree Tree

nLeaves Leaf = 1
nLeaves (Node s t) = nLeaves s + nLeaves t

nNodes Leaf = 0
nNodes (Node s t) = 1 + nNodes s + nNodes t
```

Now, it is clear, as a base case, that for the no-nodes tree Leaf : `nLeaves Leaf = 1 + nNodes Leaf`

For the structural induction, now, we consider some binary tree, where we assume the statement to be known for each of the two subtrees. Hence, we have

```
tree = Node s t

nLeaves s = 1 + nNodes s
nLeaves t = 1 + nNodes t
```

and we may compute

```
nLeaves tree = nLeaves s + nLeaves t
             = 1 + nNodes s + 1 + nNodes t
             = 2 + nNodes s + nNodes t

nNodes tree = 1 + nNodes s + nNodes t
```

Now, since the statement is proven for each of the cases in the structural description of the data, it follows from the principle of structural induction that the proof is finished.

In order to really nail down what we are doing here, we need to define what we mean by predicates in a strict manner. There is a way to do this using fibrations, but this reaches far outside the scope of this course. For the really interested reader, I'll refer to [2]. Another way to do this is to introduce a topos, and work it all out in terms of its internal logic, but again, this reaches outside the scope of this course.

#### 3.2 Lambek's lemma

What we do when we write a recursive data type definition in Haskell really to some extent is to define a data type as the initial algebra of the corresponding functor. This intuitive equivalence is vindicated by the following

Lemma (Lambek) If $P: C\to C$ has an initial algebra I, then P(I) = I.

Thus, by Lambek's lemma we know that if $P_A(X) = 1 + A\times X$ then for that $P_A$, the initial algebra - should it exist - will fulfill $I = 1 + A\times I$, which in turn is exactly what we write, defining this, in Haskell code:

`data List a = Nil | Cons a (List a)`

#### 3.3 Recursive definitions with the unique maps from the initial algebra

Consider the following $P_A(X)$-algebra structure $l: P_A(\mathbb N)\to\mathbb N$ on the natural numbers:

```
l(*) = 0
l(a,n) = 1 + n
```

We get a unique map f from the initial algebra for $P_A(X)$ (lists of elements of type A) to $\mathbb N$ from this definition.
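In Haskell terms, this unique map is just the familiar fold over the list type; a brief sketch (the names `cata` and `len` are ours), where `len` is exactly the map f described next:

```haskell
data List a = Nil | Cons a (List a)

-- The unique algebra map out of the initial P_A-algebra: the list fold.
cata :: b -> (a -> b -> b) -> List a -> b
cata nil _    Nil         = nil
cata nil cons (Cons x xs) = cons x (cata nil cons xs)

-- The instance picked out by the algebra l(*) = 0, l(a,n) = 1 + n.
len :: List a -> Integer
len = cata 0 (\_ n -> 1 + n)
```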
This map will fulfill:

```
f(Nil) = l(*) = 0
f(Cons a xs) = l(a,f(xs)) = 1 + f(xs)
```

which starts taking on the shape of the usual definition of the length of a list:

```
length(Nil) = 0
length(Cons a xs) = 1 + length(xs)
```

And thus, the machinery of endofunctor algebras gives us a strong method for doing recursive definitions in a theoretically sound manner.

### 4 Homework

1. Find a monad whose algebras are associative algebras: vector spaces with a binary, associative, unitary operation (multiplication) defined on them. Factorize the monad into a free/forgetful adjoint pair.
2. Find an endofunctor of Hask whose initial object describes trees that are either binary or ternary at each point, carrying values from some A in the leaves.
3. Write an implementation of the monad of vector spaces in Haskell. If this is tricky, restrict the domain of the monad to, say, a 3-element set, and implement the specific example of a 3-dimensional vector space as a monad. Hint: [3] has written about this approach. (A minimal sketch of one possible representation follows below.)
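As a possible starting point for exercise 3, here is a sketch only, under the simplifying assumptions that coefficients are Doubles and that we do not collect like terms (which a real implementation should):

```haskell
-- Formal linear combinations of elements of a, as in section 1.2.
newtype Lin a = Lin { runLin :: [(a, Double)] }

instance Functor Lin where
  fmap f (Lin xs) = Lin [ (f a, c) | (a, c) <- xs ]

instance Applicative Lin where
  pure a            = Lin [(a, 1)]
  Lin fs <*> Lin xs = Lin [ (f a, c * d) | (f, c) <- fs, (a, d) <- xs ]

instance Monad Lin where
  -- Bind multiplies out a linear combination of linear combinations,
  -- mirroring the monad multiplication described in section 1.2.
  Lin xs >>= f = Lin [ (b, c * d) | (a, c) <- xs, (b, d) <- runLin (f a) ]
```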
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 37, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8971272110939026, "perplexity_flag": "head"}
http://mathoverflow.net/questions/13251?sort=newest
## Endofunctors of CRing which give schemes when composed with schemes?

Background Yet another homework inspired question: A scheme is reduced if no section of the structure sheaf is nilpotent. To every scheme $X$ there is a scheme $X_{red}$ and a morphism $i: X_{red} \rightarrow X$ such that every morphism from a reduced scheme into $X$ factors through $X_{red}$. Hartshorne Ex. 2.2.6 guides you through the construction of this scheme. Basically you leave the topological space alone, but you mod out the nilpotents in the structure sheaf. This gives you a presheaf, which you then sheafify to get the structure sheaf of $X_{red}$.

I have been trying to run through all of the constructions in Hartshorne from a functor of points perspective in addition to the "standard" approach, so naturally I was interested in seeing this construction as well. From this perspective $X$ is a functor $CRing \rightarrow Sets$, namely $Hom(Spec(-),X)$ if you were using the standard definition of schemes. The functor $F: CRing \rightarrow CRing$ taking $A$ to $A/nil(A)$ seems relevant here, so it seems natural to ask if $X \circ F: CRing \rightarrow Sets$ is the reduced scheme associated to $X$. I won't spell out the details, but this actually turns out to be true (I think at least!). This brings me to my questions.

Questions Did I mess up, or does the construction above pan out? Is there a nice characterization of the functors $CRing \rightarrow CRing$ which will give a scheme when composed with any scheme $CRing \rightarrow Sets$? Even if there is no simple characterization of all such functors, is there a large class of such functors which is nice? Do you have any other examples of standard constructions in algebraic geometry which are of this form?

- I would suggest you read Demazure-Gabriel for a much better description of this approach, as it doesn't really make sense without a description of the global Zariski site. – Harry Gindi Feb 3 2010 at 12:08
- Yoneda's lemma doesn't make sense without a site? The fact that schemes are sheaves on the Zariski site is important, but it is not important for posing my question. – Steven Gubkin Feb 3 2010 at 14:15

## 3 Answers

Here is a nontrivial example I like. Let $W:\mathrm{Rings}\to\mathrm{Rings}$ denote the Witt vector functor of some fixed finite length. (You can consider the $p$-typical Witt vectors, for some prime $p$, but everything works with the other standard flavors.) Then the functor $W_*(-)=-\circ W$ is an endofunctor of the category of functors $\mathrm{Rings}\to\mathrm{Sets}$, and it takes schemes to schemes. The scheme `$W_*(X)=X\circ W$` is the so-called arithmetic jet space of $X$, extensively studied by Buium in the case of $p$-adic formal schemes. The fiber over $p$ is the Greenberg transform.

There is a standard method for proving `$W_*$` takes schemes to schemes (or rather for proving almost that), though it probably doesn't work for every functor $F:\mathrm{Rings}\to\mathrm{Rings}$ such that `$F_*$` takes schemes to schemes. First you show that the category of sheaves of sets on the category of affine schemes w.r.t. the etale topology is stable under `$W_*$`. This is true since $W$ takes etale covers of rings to the same and also takes cocartesian squares of etale ring maps to the same. (These properties of $W$ are not obvious.) Then you show `$W_*$` takes sheaf epimorphisms to the same, and etale maps of sheaves to the same.
(These properties are much easier.) Therefore any etale equivalence relation on an affine scheme is sent to an etale equivalence relation on an affine scheme, and the quotient of the first is sent to the quotient of the second. Therefore the category of quasi-compact quasi-separated algebraic spaces is stable under $W_*$. I have no doubt you could find reasonable abstract properties on the endofunctor $W$ of Rings that allow this argument to go through.

Showing `$W_*$` takes schemes to schemes is a bit subtler. You have to deal with quasi-compactness issues (as a right adjoint, `$W_*$` doesn't necessarily behave well w.r.t. disjoint unions) and also the fact that it's harder to tell whether a functor is represented by a scheme than by an algebraic space. But as I said, in the Witt vector example above, it is true.

Presumably the same argument works, and is much easier, for the functor $F$ defined by $F(R)=R[t]/(t^{n+1})$. Then $F_*(X)$ should be the usual jet space functor of length $n$. The case $n=1$ should give the total space of the tangent bundle, at least when $X$ is smooth.

Edit: I'm reminded below that this example is just a particular case of the representability of the Weil restriction of scalars for a finite flat map $A\to B$. There you consider the endofunctor of the category of $A$-algebras given by $F(R)=B\otimes_A R$. In particular, it's reasonable to view `$W_*$` as a generalized Weil restriction of scalars.

- How about a 'disconnected' version of your last paragraph: $F(R)=R\times R$? Works just as well, $F_*(X)=X^2$. It seems that if one works with schemes over a field and $A$ is any finite-dimensional algebra, $F(R)=R\otimes A$ works. Then $F_*(X)=Hom(Spec(A),X)$ (which needs to be defined as a scheme first, of course). – t3suji Feb 3 2010 at 14:03
- Great answer! This gives me a lot to think about. – Steven Gubkin Feb 3 2010 at 14:18
- @t3suji: Good point! I had forgotten about that. I'll add a few words about that. – James Borger Feb 3 2010 at 21:02

It seems to me that you want to get the functor $X \mapsto X_{red}$ (schemes) from a "functor of points"-approach out of the functor $A \mapsto A_{red}$ (rings). This is not possible in a simple formal way (see VBL's answer), but the concept may be generalized as follows:

Let $P$ be a full subcategory of $Ring$, which is local in the sense that for every partition of unity of a ring $A=(f_1,...,f_n)$, all the localizations $A_{f_i}$ belong to $P$ if and only if $A$ belongs to $P$. Now you can endow $P$ with the Zariski topology and define $P$-schemes as sheaves on $P$, which are covered by representable ones. Let $Sch_P$ denote the category of $P$-schemes. My point is: If $P$ is a reflective subcategory of $Ring$, then also $Sch_P$ is a reflective subcategory of $Sch$. If $F : Ring \to P$ is a retraction, then we may extend this retraction on representable functors and then, by gluing, to $Sch \to Sch_P$.

Of course, you could also formulate this construction in the language of affine schemes or locally ringed spaces. I don't think that this matters here. The functor of points approach does not help you to extend a retraction $Ring \to P$ to $Sch \to Sch_P$ in a formal way; gluing is necessary. In particular, the construction of the reduced scheme structure has nothing to do with precomposing endofunctors of rings.
- Actually $X$ is identified with the covariant functor $CRing \to Sets$, $A\mapsto Hom(Spec(A),X)$. The functor $X\circ F$ is usually called the de Rham space of $X$. This is not the reduced scheme associated to $X$ because you mod out by nilpotents on the source instead of $X$: for $X=Spec(B)$ you get $Hom(Spec(A);Spec(B)\circ F) = Hom(B,A/nil(A))$ instead of $Hom(Spec(A);Spec(B)_{red}) = Hom(B/nil(B),A)$.

- Thanks for the catch about covariance, I edited my question. I am still not convinced by the second part of your answer. How does it match up with the definition here: ncatlab.org/nlab/show/de+Rham+space ? I will have to think some more. – Steven Gubkin Jan 28 2010 at 13:23
- OK, I see how my functor should not be X_red. I still don't know about the connection to the de Rham space. Is the de Rham Space a scheme? – Steven Gubkin Jan 28 2010 at 14:04
- You should look at mathoverflow.net/questions/10556/…. – YBL Jan 28 2010 at 16:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 75, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395191073417664, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/9299/asymptotic-notations-with-two-exponents
# asymptotic notations with two exponents

I am familiar with asymptotic notations like Big-O, little-o. But while I am reading some papers people are using notations like $O(\epsilon^{1/2^d})$, $O(d)^d$ etc. I couldn't understand these notations properly. Is there any way (lecture notes or video lectures with examples) to understand these things clearly? Thank you.

- – Ran G. Jan 30 at 7:35
- – Rishi Prakash Jan 30 at 10:45
- $O(d)^d$ is a weird one... – Raphael♦ Jan 30 at 12:41
- @Raphael $O(\epsilon^{1/2^n})=O(1)$, so it is weird (but correct) as well. – Khaur Jan 30 at 15:14
- @Khaur In that case, it is (without context) not even clear whether $\varepsilon \to \infty$ or $d \to \infty$, or maybe even $\varepsilon \to 0$? – Raphael♦ Jan 30 at 17:16

## 1 Answer

That syntax is actually rather questionable formally, since it treats the big-O as a function, which it is not. But with a bit of slop the rule to interpret these is rather simple. The big-O stands for some function with the asymptotic behaviour given by the big-O.

$$f(n) = O(g(n))$$ means that $$\exists C~\exists n_0~\forall n > n_0: f(n) \leq Cg(n).$$

So when we write "complexity is $O(n)$", we are really saying complexity is $c(n)$ for which $c(n) = O(n)$. Alternatively we can say $O(g(n))$ is a set of functions defined as
$$O(g(n)) = \{ f(n) | \exists C~\exists n_0~\forall n>n_0: f(n) \leq Cg(n) \}.$$
And write $f(n) \in O(g(n))$ instead. While this definition is more logical, it seems to be less used in textbooks.

Now if we say that complexity is $2^{O(n)}$, we can't expand it simply as $c(n) = 2^{O(n)}$, because we didn't define that. So instead we replace the big-O with a function that conforms to the big-O. Like this:
$$c(n) = 2^{f(n)},\ f(n) = O(n)$$
And you can expand any other expression containing big-O. The approach applies to any expression having big-O as a subexpression, so $O(n)^n$ is just
$$c(n) = f(n)^n,\ f(n) = O(n)$$

In the set notation it makes even more sense, because
$$g(F) = \{ g \circ f | f \in F \}$$
where $g$ is a function and $F$ is a set of functions of one argument is the only logical definition of an expression involving a set of functions. So then:
$$2^{O(n)} = \{ 2^{f(n)} | f(n) \in O(n) \}$$
and
$$O(n)^n = \{ f(n)^n | f(n) \in O(n) \}.$$

As for $O(\epsilon^{1/2^n})$, that's just standard big-O notation:
$$f(n) = O(\epsilon^{1/2^n})$$
$$\exists C~\exists n_0~\forall n>n_0: f(n) < C\epsilon^{1/2^n}$$
(where $\epsilon^{1/2^n}$ tends to $1$ as $n$ tends to $\infty$). The $\epsilon$ would usually be some tunable parameter in the algorithm.

- If you want to be formal on notation, remember that $O(g(n))$ is a class of functions, therefore $f(n)\in O(g(n))$, not $=$. Also $\epsilon^{1/2^n}$ tends to $1$ as $n$ tends to $\infty$. – Khaur Jan 30 at 15:04
- @Khaur: Well, that's the thing; most people write $f(n) = O(g(n))$ though using $\in$ is indeed more appropriate. Fixing the value. – Jan Hudec Jan 30 at 19:40
- Most people do, but you are trying to explain how the formalities work here, so you should probably also use $\in$. – Raphael♦ Jan 30 at 20:41
- @Raphael: I'll add it; but usually the texts that use $=$, and it seems to be the more common form, define the expression as special notation and don't talk about class of functions at all. – Jan Hudec Jan 31 at 6:47
- @JanHudec Yes, but that routinely causes problems for beginners, because they have been conditioned to view $=$ as an equivalence relation (among other things). Just browse all the questions about asymptotics... – Raphael♦ Jan 31 at 9:39
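A concrete illustration of why the placement of the big-O matters (this example is ours, not from the thread): $4^n = 2^{2n}$ lies in $2^{O(n)}$, since $2n = O(n)$, but $4^n \notin O(2^n)$, because $4^n \leq C\cdot 2^n$ would force $2^n \leq C$ for all large $n$. So $2^{O(n)}$ and $O(2^n)$ are genuinely different classes.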
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378015398979187, "perplexity_flag": "middle"}
http://datacommunitydc.org/blog/2013/01/better-science-of-viral-marketing-part-2/
# Better Science of Viral Marketing: Part 2

Posted on January 24, 2013

In part 1, I discussed the faulty assumptions in the current models of viral marketing for the business community and how these models fail to reflect real world examples. So how can the business community build a more realistic model of viral marketing? How do you know which factors (e.g. viral coefficient, time scale, churn, market size) are most important?

Fortunately, there is a rich history of literature on mathematical models for viral growth (and decline), dating all the way back to 1927. These models rigorously treat viral spread, churn, market size, and even the change in the market size and the possibility that former customers return. Obviously, nobody was thinking of making a YouTube video or an iPhone app go viral back when phones didn’t even have rotary dials. These models are of the viral spread of … viruses!

The Model: The classic SIR model of the spread of disease is by Kermack and McKendrick. (Sorry I couldn’t link to the original paper. You can buy it for \$54.50 here — blame the academic publishing industry). I’ve applied this model to viral marketing by drawing analogies between a disease and a product. The desired outcomes are very different, but the math is the same.

Kermack and McKendrick divide the total population of the market, $$N$$, into three subpopulations.

• $$S$$ – The number of people susceptible to the disease (potential customers)
• $$I$$ – The number of people who are infected with the disease (current customers)
• $$R$$ – The number of people who have recovered from the disease (former customers).

These three subpopulations change in number over time. Potential customers become current customers as a result of successful invitations. Current customers become former customers if they decide to stop using the product. For simplicity, I’ll treat the total market size, $$N = S + I + R$$, as static and former customers as immune (for now). The parameters that govern the spread of disease are:

• $$β$$ – The infection rate (sharing rate)
• $$γ$$ – The recovery rate (churn rate)

Assume that current customers, $$I$$, and potential customers, $$S$$, communicate with each other at an average rate that is proportional to their numbers (as governed by the Law of Mass Action). This gives $$βSI$$ as the number of new customers, per unit time, due to word of mouth or online sharing. As the number of new customers grows by $$βSI$$, the number of potential customers shrinks by the same number. This plays the same role that the “viral coefficient” does in Skok’s model, but accounts for the fact that conversion rates on sharing slow down when the fraction of people who have already tried the product gets large. It also does away with the concept of “cycle time”. Instead, it accounts for the average time it takes to share something and the average frequency at which people share by putting a unit of time into the denominator of $$β$$. Thus, $$β$$ represents the number of successful invitations per current customer per potential customer per unit time (i.e. hour, day, week).

I propose that this is a more robust definition of viral coefficient than the one used by Ries and Skok because modeling viral sharing as an average rate accounts for the following realities:

• Customers do not share in synchronous batches.
• Each user has a different timeframe for trying a product, learning to love it, and sharing it with friends. Rather than assuming that they all have the same cycle time, $$β$$ represents an average rate of sharing.
• Users might invite others when first trying a product or after they’ve used it for quite a while.

In this model, current customers become former customers at a rate defined by the parameter $$γ$$. That is, $$γ$$ is the fraction of current customers who become former customers in a unit of time. It has the dimensions of inverse time $$(1/t)$$, and $$1/γ$$ represents the average time a user remains a user. So, if $$γ = 1\%$$ of users lost per day, then the average length of time a user remains active is 100 days.

The differential equations governing viral spread are:

• $$dS/dt = -βSI$$
• $$dI/dt = βSI – γI$$
• $$dR/dt = γI$$

Examining the Equations: These are non-linear differential equations that cannot be solved to produce convenient, insight-yielding formulas for $$S(t)$$, $$I(t)$$, and $$R(t)$$. What they lack in convenient formulas, they make up for with more interesting dynamics (especially when considering changing market sizes and returning customers). You can still learn a lot by examining them and integrating them numerically.

Let’s assume that $$t=0$$ represents the launch of a new product. Initially, at least the founding team uses the product and represents the initial customer base, $$I(0)$$. The initial number of former customers, $$R(0)$$, is zero and the rest of the people in the market are potential customers, $$S(0)$$.

The first thing to note is that there will be a growing customer base $$(dI/dt > 0)$$ as long as:

$$βS/γ > 1$$

That is, viral growth will occur as long as the addressable market size, $$S(0)$$, and sharing rate, $$β$$, are sufficiently large compared to the churn, $$γ$$. This model shows that with a big enough market, you can go viral even with a small $$β$$ as long as your churn is also small enough (consistent with the Pinterest example described in part 1). This model also shows that the effects of churn cannot be ignored, even in very early viral growth.

If at $$t=0$$, $$S$$ is very close to $$N$$, then $$βS/γ$$ is approximately $$βN/γ$$. Thus, if $$βN/γ > 1$$, initial growth will occur and if $$βN/γ < 1$$, the customer base will not grow. This is sometimes called the “basic reproductive number” in epidemiology literature. It is essentially what Eric Ries calls the “viral coefficient” although it depends on market size and churn as well as the viral sharing rate. It is approximately the average number of new customers each early customer will invite during the entire time that they remain a customer, which is $$1/γ$$. However, in the case that viral growth does occur, $$βN/γ$$ rapidly ceases to represent the number of customers that each customer invites.

Another thing you can see by examining the equations is that if you ignore the change in the market size (an approximation that makes sense for short lived virality, such as with a YouTube video), the customer base always goes to zero at long times unless you have zero churn. Once the number of current customers reaches a peak where $$dI/dt = 0$$ (this happens when $$S = γ/β$$, so the peak can be no larger than $$I = N – γ/β$$), the rate of change in the number of current customers becomes negative and the number of customers eventually reaches zero. This is consistent with the data provided in the Mashable post on the half-lives of Twitter vs. YouTube content. Again, note the key role that churn has in determining the peak number of customers.
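The time series discussed in the Examples section below can be reproduced with a very simple explicit Euler integration of these equations. Here is a minimal sketch in Haskell (the step size and the function names are our choices, not something specified in the post; $$β$$ is obtained from the quoted $$βN$$ by dividing by $$N$$):

```haskell
type State = (Double, Double, Double)   -- (S, I, R)

-- One Euler step of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I.
step :: Double -> Double -> Double -> State -> State
step dt beta gamma (s, i, r) =
  ( s - beta * s * i * dt
  , i + (beta * s * i - gamma * i) * dt
  , r + gamma * i * dt )

simulate :: Int -> Double -> Double -> Double -> State -> [State]
simulate steps dt beta gamma = take steps . iterate (step dt beta gamma)

-- First example from the post: N = 1e6, betaN = 10 per day, gamma = 0.5 per day,
-- I(0) = 10; 3000 steps of 0.01 day cover the first 30 days.
example :: [State]
example = simulate 3000 0.01 (10 / 1.0e6) 0.5 (1.0e6 - 10, 10, 0)
```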
Examples: We can gain more insight from these equations by numerically integrating them. For these examples, the unit of time used to define $$β$$ and $$γ$$ is one day, though the choice is arbitrary. I’ve given values of $$β$$ as $$βN$$ to create better correspondence with Ries’ concept of viral coefficient — If at $$t=0$$, $$S(0)$$ is approximately $$N$$, $$βN$$ is approximately the number of new customers each existing customer begets per day.

With the parameters:

• $$N = 1$$ million people in the market
• $$βN = 10$$ invites per current user per day
• $$γ = 50\%$$ of customers lost per day
• $$I(0) = 10$$ current customers

numerically integrating the equations given above yields the following for how the number of customers changes for the first 30 days:

This shows a traffic pattern similar to that of a popular Twitter link where traffic quickly spikes and then dies down as people tire of looking at it. (In the case of visiting a webpage, a “customer” can be defined as a visitor).

For a smaller churn rate, $$γ = 1\%$$ of customers lost per day, we see the following for the growth and decline in the number of customers over 300 days:

This shows how even for low values of churn, without new potential customers joining the market, or former customers returning, the customer base always diminishes after reaching its peak. Also note how a smaller churn rate allows us to reach a higher peak in traffic.

So how can viral growth be sustained? For that, you need to consider the change in the market size, which I’ll examine in Part 3.

TLDR: A better definition of “viral coefficient” is successful invitations per existing user per potential user per unit time. But market size and customer churn are just as, if not more, important than viral coefficient. Viral growth in a static market is unsustainable unless you have absolutely zero churn.

Valerie Coffman, Founder and CEO at Feastie. I'm a physicist turned data scientist and entrepreneur. Founder of Feastie -- search and analytics for the foodie blogsphere. Board member at Data Community DC.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 59, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417620897293091, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/82694/if-the-goldbach-conjecture-is-true-does-it-make-it-easier-to-find-large-primes
If the Goldbach Conjecture is True, does it make it easier to find large primes?

I was just reading Is every positive nonprime number at equal distance between two prime numbers? (current hot topic) and was reflecting on the fact that computing security (cryptography) is based around the use of large prime numbers (see: Why are very large prime numbers important in cryptography?).

The answer for the above-mentioned question suggests that the Goldbach Conjecture says that every non-prime positive number is positioned equidistant between two prime numbers. For the purposes of this question, I'll assume that statement is true (I have no prior knowledge of Goldbach or his/her conjecture).

If the Goldbach Conjecture is true, does it make it easier to find large prime numbers? For example, I could take any very large number at random and then look at every number below it, find a prime and then work out if the opposite number is also a prime (or something along those lines). In my mind, it's almost as though the assumption would give me a starting point to find an even larger prime...

I expect I'm not the first person to ask this (and if I am, I've probably missed something somewhere..), but I can't find a similar question here :)

Thanks in advance

- How is that going to be efficient? Given number $n$, you'd have to walk through the primes less than $n$, which, when $n$ is, say, $100$ digits, is going to be too many primes to check. And you are effectively skipping candidates, because Goldbach doesn't say that $n-k$ is prime if and only if $n+k$ is prime. – Thomas Andrews Nov 16 '11 at 14:54

3 Answers

Finding primes of the size wanted for cryptography isn't hard. The prime number theorem says that a "random" number $n$ has one chance in $\ln n$ of being prime. For a $1000$ bit number, this is about $1$ in $700$. If you only try numbers congruent to $1$ or $5 \pmod {6}$, you get another factor $3$, so you only have to try a few hundred before you find one. How to check is described here. The celebrated prime numbers that are found are much larger.

- So in short "No" :) Thanks @rossmillikan – LordScree Nov 17 '11 at 8:51

Technically it could, in some exceptionally unlikely scenario where you have a large even number $2n$, and you find that the $2n-p_i$ are all composite with small prime factors except for one small $i$. But seriously, the only information you gain is that at least one of $2n-p_i$ is prime, but heuristically this is to be expected 99.999% of the time anyway, so you gain really nothing except if the above magic conspiracy took place.

Yes. $2n-p=q$, where $p<q$, and $q$ is the larger "flank" of $n$ as $n$ is midway between $p$ and $q$. You'd probably only have to test the 8 primes less than (but closest to) $p$ to find $q$. Let's say those primes are called $a,b,c,d,e,f,g,h$. Then $q$ might be $2n-a$ or $2n-b$ or $2n-c$ or $2n-d$ or $2n-e \ldots$ you get the idea.

Try it with primes you already know. What if you thought 181 was the largest known prime? Look for 8 closest primes less than 181, and use $n=200$ (or any number $n$ that is at least 10% greater than your largest known prime). Find $2n-a$. Find $2n-b$. Find $2n-c$. And so on... Did you have to go through all 8 to find a prime larger than 181?

- I didn't downvote; I just want to explain some problems with your answer. This doesn't show that Goldbach's conjecture helps us find large primes; you're just testing a handful of odd numbers larger than 181. If you randomly select large odd numbers to test, eventually you'll find a prime.
There's no evidence choosing numbers of the form $2n-p$ speeds up this process. Also, the average number of random guesses required to find a prime increases as we search for larger primes (see the Prime Number Theorem). If you want 100- or 200-digit primes, you will usually need more than 8 guesses. – Jonas Kibelbek Apr 18 '12 at 15:00
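As a quick check on the "1 in 700" figure in the accepted answer (this arithmetic is added here, not part of the thread): a random $1000$-bit number $n$ has $\ln n \approx 1000 \ln 2 \approx 693$, and restricting to candidates congruent to $1$ or $5 \pmod 6$ (one third of all integers, containing every prime greater than $3$) cuts the expected number of trials to roughly $693/3 \approx 231$, i.e. "a few hundred".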
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9653904438018799, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2011/03/11/atlases-refining-covers-part-2/?like=1&_wpnonce=cb13afa090
# The Unapologetic Mathematician

## Atlases Refining Covers, part 2

Again with the late posts…

Now, armed with our two new technical assumptions, we can prove the existence of the refining covers we asserted yesterday.

Since $M$ is (now) known to be locally compact, Hausdorff, and second countable, there must exist a countable basis $\{Z_k\}$ for the topology of $M$ with each closure $\bar{Z}_k$ compact. Basically we can start with a neighborhood of each point that has compact closure and whittle it down to a countable basis, using the Hausdorff property to make sure we keep compact closure.

We will construct a sequence of compact sets inductively. Let $A_1=\bar{Z}_1$, which is compact by assumption. Given $A_i$ already defined, let $j$ be the first index for which $A_i\subseteq Z_1\cup\dots\cup Z_j$, and define $A_{i+1}=\bar{Z}_1\cup\dots\cup\bar{Z}_j\cup\bar{Z}_{j+1}$. Then $\{A_k\}$ is a sequence of compact sets with $A_k\subseteq\mathrm{int}(A_{k+1})$, and whose union is all of $M$. Define $A_0$ to be the empty set.

Now, we can write

$\displaystyle M=\bigcup\limits_{i\geq0}\left(A_{i+1}\setminus\mathrm{int}(A_i)\right)$

so for every point $p$ we can find a chart $(V_p,\phi_p)$ sending $p$ to $0\in\mathbb{R}^n$ and with $\phi_p(V_p)=B_{3n}(0)$, $V_p\subseteq U_\alpha$ for some $\alpha$, and $V_p\subseteq\mathrm{int}(A_{i+2})\setminus A_{i-1}=W_i$ for some $i$. Indeed, we can surely find some chart around $p$, and intersecting it with some open $U_\alpha$ — which should contain $p$ — and with the open $W_i$ — likewise — still gives us a chart. We can subtract off whatever offset we need to make sure that this chart sends $p$ to $0$. Then we can take a ball of some radius around $0$ and let $V_p$ be its preimage. Scaling up the coordinate map lets us expand this ball until its radius is ${3n}$. Messy, no?

So now the collection of all the preimages $\phi_p^{-1}(B_1(0))$ as $p$ runs over $A_{i+1}\setminus\mathrm{int}(A_i)$ is an open cover of this compact set, and thus it contains a finite subcover, which we write as $P_i$. Taking the union of all of the $P_i$ gives a countable cover $V_k$ of $M$ refining $U_\alpha$. Each $V_k$ is the domain of a chart with $\phi_k(V_k)=B_{3n}(0)$, and the collection of preimages $\phi_k^{-1}(B_1(0))$ covers $M$, as asserted.

The only thing we haven’t shown here is that $V_k$ is locally finite. But each point $p\in M$ must lie in one of the $A_{i+1}\setminus\mathrm{int}(A_i)$, so $W_i$ is an open neighborhood of $p$ that intersects at most finitely many $A_i$; each $A_i$ can intersect at most finitely many $V_k$, so $W_i$ touches at most finitely many of them itself.

Got all that? We’re not out of the woods yet…

Posted by John Armstrong | Differential Topology, Topology

## 5 Comments »

1. Not sure why, but for me the LaTeX V_k is rendering as “f(z) = z(z-1)(z-2) = z^3 -z” in both this post and the last. Comment by Robert | March 11, 2011 | Reply
2. clear cache? I’m not seeing it… Comment by | March 11, 2011 | Reply
3. Can you help me a bit? Why A_k subset int(A_k+1) holds? I can’t figure it out… Comment by NikosP | April 8, 2012 | Reply
4. Does it help if I remind you that the sets $Z_k$ in a basis are open? Comment by | April 8, 2012 | Reply
5. Hello John. I do not see how you obtain a countable basis for which each subset $Z_k$ has compact closure. Could you please explain the process that you use to “whittle it down”? Besides, don’t you simply need $\sigma$-compactness in order to build the sequence $A_n$?
This seems to be much simpler to obtain. Comment by | March 1, 2013 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 57, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415386915206909, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/6646/physics-of-simple-collisions?answertab=active
# Physics of simple collisions

I'm building a physics simulator for a graphics course, and so far I have it implementing gravitational and Coulomb forces. I want to add collisions next, but I'm not exactly sure how to go about doing it.

A quick summary of how this is working so far is: All objects are spheres of a set radius, mass and charge. The masses and charges of the spheres are treated as point charge/mass for the calculation. Every step in time (about 1/50th of a second) the forces acting on each object are calculated in a nice big nested for loop that figures out the Coulomb and gravitational force between 2 objects, for every set of 2 objects, and then they are summed together. I use this net force to determine the acceleration, and the rest is fairly obvious from there.

What I want to add in is collisions. I can determine pretty easily if a collision is happening (if the distance between them <= radius of one + radius of other), what I am not so certain of is how I should add in the collision force (and would need to do it component wise). I want elastic collisions, though for now I'd just be happy with getting conservation of momentum.

The information I have easily available is: the velocity of each object (and I mean velocity, speed and direction), the position of each object, the mass/charge (charge obviously not needed here) and the net force calculated so far for each object for the next step in time.

I don't need exact formulas (would prefer if they aren't exact), just need a nudge in the right direction.

- What about spin? Do your objects have spin; do you consider them point-like? – Eelvex Mar 10 '11 at 20:04
- Then, just apply the momentum conservation + kinetic energy conservation to the axis connecting your objects (i.e the axis connecting their centers). – Eelvex Mar 10 '11 at 20:13
- Elastic collision = kinetic energy conserved. Momentum is always conserved. – Eelvex Mar 10 '11 at 20:17
- – Eelvex Mar 10 '11 at 20:20
- @Eelvex @CJB Be careful with that animation. It shows the balls coming off at right angles, which holds only when the masses are equal. – Mark Eichenlaub Mar 10 '11 at 20:46

## 2 Answers

Mark Eichenlaub's answer is 100% correct, but you can also do it without changing reference frames, and I think it's probably easier that way. Here's how I would set it up.

Suppose that you've determined that two objects are going to collide within the next time step. Determine the positions of the two objects at the moment of collision. Draw a line connecting their two centers. Determine a unit vector $\hat n$ pointing from the center of object 1 to the center of object 2. During the collision, the force will be in this direction, so only this component of the momentum will change.

Let the unknown change in momentum be $\vec q$. This vector points in the $\hat n$ direction: $\vec q=q\hat n$. To be specific, if $\hat n$ points from object 1 to object 2, then $\vec q$ is the momentum gained by object 2, and $-\vec q$ is the momentum gained by object 1.

Now let $\vec p_1,\vec p_2$ be the initial momenta of the two objects, so that the final momenta are $\vec p_1-\vec q,\vec p_2+\vec q$. Energy is related to momentum via the rule $E=p^2/2m$, so conservation of energy says
$${p_1^2\over 2m_1}+{p_2^2\over 2m_2}={(\vec p_1-\vec q)^2\over 2m_1}+{(\vec p_2+\vec q)^2\over 2m_2}$$
In this expression, you know everything except the magnitude $q$ of $\vec q$. So you can solve this equation for the unknown $q$.
Then you can compute the new momenta, and you're done. The equation you solve ends up being a quadratic, but with no constant term -- that is, it's of the form $$aq^2+bq=0.$$ One solution is $q=0$, of course, but you don't want that one: it corresponds to the two objects passing right through each other. The nonzero one's the one you want. - In an elastic collision between spheres, we can use conservation of energy and momentum, but this alone is not enough. Start by looking in a frame where one ball is stationary (the target) and the other one is coming in from the side (the striker). The problem is to find the final momenta of the two balls. Assuming everything stays in a plane, there are 4 degrees of freedom - two for each momentum. Conservation of momentum has two constraints and conservation of energy has one, so applying these still leaves one degree of freedom; one extra parameter is needed to describe the collision. One choice is the precise point of the collision. Because the force is being applied at this point and is only a contact force normal to the surface, the target ball will shoot off in the direction of the normal to the surface at the collision point. One way to summarize the rules is: 1. Boost to a reference frame where the target is initially stationary. 2. Make the final momentum of the target point in the direction of the vector between the balls' centers at the moment of impact. 3. Apply conservation of kinetic energy. 4. Apply conservation of momentum in both dimensions. 5. Boost back to desired original frame. This should uniquely specify where the target and striking balls go after the collision, and the answer is independent of which ball you choose to be the target. If you're doing the calculation in 3D, you will have to choose the correct plane. It's the plane spanned by the displacement vector between the balls and their relative velocity. (If these vectors are dependent, the motion is only on a line.) - 1 As I wrote in my answer, I think it's easier to do it in the "lab" frame rather than boosting. But if you're going to boost, surely you're much better off boosting to the center-of-mass frame rather than the stationary-target frame. – Ted Bunn Mar 10 '11 at 20:58 though correct and nice, you make it sound much more complicated than it really is. Also, momentum changes only in one dimension: that of the joining axis. – Eelvex Mar 10 '11 at 21:05 @Ted I guess I agree with you. I chose that frame because you can find the direction of the target ball easily, but reading your answer it's clear it would be easier to do it the way you described. – Mark Eichenlaub Mar 10 '11 at 21:26
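To make the construction above concrete, here is a minimal Python/NumPy sketch of the impulse-along-the-normal method described in the first answer. The function name and test values are made up for illustration; it assumes perfectly elastic spheres and changes only the momentum component along the line of centres.

```python
import numpy as np

def resolve_elastic_collision(p1, p2, x1, x2, m1, m2):
    """Return the post-collision momenta of two spheres.

    p1, p2 : momentum vectors just before the collision
    x1, x2 : centre positions at the moment of contact
    Only the component of momentum along the line of centres changes.
    """
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit vector from sphere 1 to sphere 2
    # Kinetic-energy conservation gives a*q**2 + b*q = 0; discard the q = 0 root
    # (which corresponds to the spheres passing straight through each other).
    a = 0.5 / m1 + 0.5 / m2
    b = np.dot(p2, n) / m2 - np.dot(p1, n) / m1
    q = -b / a
    return p1 - q * n, p2 + q * n

# Head-on collision of equal masses: the momenta should simply be exchanged.
p1, p2 = np.array([1.0, 0.0, 0.0]), np.zeros(3)
x1, x2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
print(resolve_elastic_collision(p1, p2, x1, x2, 1.0, 1.0))
```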
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334290027618408, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/tagged/basic-concepts+mean
Tagged Questions 1answer 61 views Interpretation of mean in this example I recently presented a national test and the company in charge of preparing the test then does a standardization to provide the final scores for each person. These are the values they gave at the end ... 1answer 115 views An analytical framework for considering the geometric mean Is there an analytical method of looking at the geometric mean that will allow one to break it down to its various components? The focus of the question is more for financially related returns, but I ... 3answers 3k views Difference between standard error and standard deviation I'm a beginner in statistics. I'm struggling to understand the difference between the standard error and the standard deviation. How are they different and why do you need to measure standard error? ... 2answers 2k views How should one interpret the comparison of means from different sample sizes? Take the case of book ratings on a website. Book A is rated by 10,000 people with an average rating of 4.25 and the variance $\sigma = 0.5$. Similarly Book B is rated by 100 people and has a rating of ... 7answers 3k views If mean is so sensitive, why use it in the first place? It is a known fact that median is resistant to outliers. If that is the case, when and why would we use the mean in the first place? One thing I can think of perhaps is to understand the presence of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559583067893982, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/119317?sort=votes
## holomorphic covering between points in Teichmuller space I have the following question: let $X$ and $Y$ be two different points (represented by Riemann surfaces) in the Teichmuller space $T_g$ of genus $g \geq 2$ Riemann surfaces. Then of course $X$ and $Y$ are homeomorphic and not bi-holomorphically equivalent. My question is whether there exists a holomorphic covering from $X$ to $Y.$ Namely, is there a topological covering $p: X \to Y$ which is holomorphic with respect to the complex structures of $X$ and $Y$? Why or why not? Thanks in advance! - @silktomath: If you like one of the answers, you should click the "accept" button. If you like several of the answers, you should click the earliest one that you like. – Lee Mosher Jan 19 at 15:32 Lee, there is a very good chance I first heard about Teichmuller Space from you. – Adam Epstein Jan 19 at 16:52 Could be.. Could be. :-) – Lee Mosher Jan 20 at 0:52 ## 3 Answers As $g\geq 2$, it follows by Riemann-Hurwitz that any topological covering $X\rightarrow Y$ is a homeomorphism, and any holomorphic homeomorphism is biholomorphic. - Thanks a lot for this! – silktomath Jan 19 at 12:02 1 To express this a little differently, this holds for $X,Y$ if and only if they are in the same orbit of the mapping class group on $T_g$. – Lee Mosher Jan 19 at 13:30 As Adam said in his answer, by Riemann-Hurwitz, if the genus is greater than 1, every holomorphic covering must be biholomorphic. But biholomorphic maps certainly exist. (Contrary to what is stated in the question, different points of the Teichmuller space can be biholomorphically equivalent.) They form a group acting on the Teichmuller space called the Modular group, which acts on the Teichmuller space (and in general has fixed points). The factor over this group is called the moduli space, and the group itself has been very well studied. - That is just what I said in my comment to Adam Epstein's answer. – Lee Mosher Jan 19 at 15:27 Moreover, by the same reason there is no ramified covering $f\colon X\to Y$ either. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285560250282288, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/54907-direct-products-problem.html
# Thread: 1. ## direct products problem I'm stuck at this question for more than 24 hours. This question is from "THE THEORY OF GROUPS: An Introduction" by Joseph J. ROTMAN, Exercise 4.11. Here is the question: If $G$ is a group with normal subgroups $H_{1},H_{2}, ... ,H_{m}$, then $G=\prod_{i=1}^{m} H_{i}$(internal) if and only if $G=\langle \, \bigcup_{i=1}^{m} H_{i} \rangle$ and, for all $j$, $H_{j} \bigcap \, \langle \, \bigcup_{i \ne j} H_{i} \rangle = {\{1\}}$. 2. Originally Posted by deniselim17 I'm stuck at this question for more than 24 hours. This question is from "THE THEORY OF GROUPS: An Introduction" by Joseph J. ROTMAN, Exercise 4.11. Here is the question: If $G$ is a group with normal subgroups $H_{1},H_{2}, ... ,H_{m}$, then $G=\prod_{i=1}^{m} H_{i}$(internal) if and only if $G=\langle \, \bigcup_{i=1}^{m} H_{i} \rangle$ and, for all $j$, $H_{j} \bigcap \, \langle \, \bigcup_{i \ne j} H_{i} \rangle = {\{1\}}$. i don't have the book right now but i think Rotman's definition is this: we say $G$ is the internal direct product of some normal subgroups $H_i, \ i=1, \cdots , m,$ if $G=H_1 H_2 \cdots H_m,$ and for all $j: \ \ H_j \cap (H_1 \cdots H_{j-1}H_{j+1} \cdots H_m)=\{1\}.$ so the only thing you need to prove is that if we have a bunch of normal subgroups $H_i, \ 1 \leq i \leq m,$ of a group G, then we will have: $H_1H_2 \cdots H_m = \langle \, \bigcup_{i=1}^{m} H_{i} \rangle,$ which is very easy to prove: since $H_j$ are normal, $H_1H_2 \cdots H_m$ is a subgroup, which contains every $H_j.$ hence it contains the union of them. but by definition $\langle \, \bigcup_{i=1}^{m} H_{i} \rangle$ is the smallest subgroup which contains the union of $H_j.$ hence $\langle \, \bigcup_{i=1}^{m} H_{i} \rangle \subseteq H_1H_2 \cdots H_m.$ on the other hand, clearly every $H_j$ is contained in $\bigcup_{i=1}^m H_i \subseteq \langle \, \bigcup_{i=1}^{m} H_{i} \rangle.$ since $\langle \, \bigcup_{i=1}^{m} H_{i} \rangle$ is a subgroup, it must also contain the product of $H_j$, i.e. $H_1H_2 \cdots H_m \subseteq \langle \, \bigcup_{i=1}^{m} H_{i} \rangle.$ this completes the proof.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9353926181793213, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/44557/analysis-of-the-impulse-of-2-colliding-carts-under-the-effect-of-magnetic-repuls/44561
# Analysis of the impulse of 2 colliding carts under the effect of magnetic repulsion Hi there! I have a question about an experiment that was conducted. It is related to momentum. 2 carts were put on a track on opposite sides. They were then propelled towards one another at approximately the same speed. Please note that on each end of each cart a magnet was mounted with like poles facing, so as soon as the carts came close to each other, repulsion commenced. Two sensors were set on a frequency of 20 Hz to record their motion. After collecting the data, the data for the difference in momentum was graphed through the equation m2v2 - m1v1. The group did this to calculate the approximate average frictional force of the system by producing a linear fit. The slope of this line proved to be the average friction of the system. This was an extremely simple experiment and did not prove at all difficult. However, there is one question I have been asking myself that is driving me crazy. I can't seem to find the answer (please note that our instructor did not ask us for this). Why are there spikes in the graph of Δp (change in momentum) very near the moment of collision (actually there is no "collision" because of the magnetic repulsion)? Is it because the frequency was set only to 20 Hz and thus, since the instant of collision was so brief, the sensors couldn't pick up data correctly? Is it because, for the brief moment that the carts cease to move, friction with the rail is static instead of kinetic and thus requires more force? Some useful information: mass of cart 1: 255.9 g; mass of cart 2: 251.1 g. Please note the equations: Δp = Δt*Favg and p = mv. An answer would be greatly appreciated as this question has been driving me crazy. - Is the vertical scale of the graph position, or what? (There is no hope that you will get help without being clear about such things.) If it is, why do you think that it is a good idea to fit a single line to the data before and after the interaction? – dmckee♦ Nov 19 '12 at 0:16 Please read the question very carefully before accusing me of being unclear. I mentioned twice what the y-axis represents: "the data for the difference in momentum was graphed through the equation m2v2 - m1v1" and "Why are there spikes in the graph of Δp (change in momentum)", implying it's clearly a Δp-t graph. Also please look at the title which clearly states IMPULSE. You missed it three times. – Outlier Nov 19 '12 at 0:31 If you are serious about that then you seem to have an error in your analysis up to this point: that quantity should change signs during the interaction (which is why I was confused). Were you careful about the sign of the velocities? – dmckee♦ Nov 19 '12 at 0:42 Yes. Please note that this is the graph of change in momentum. This is not the graph of velocities, in which case, yes, the signs of the velocities would change. – Outlier Nov 19 '12 at 1:27 ## 1 Answer Let's consider this a bit. Label the carts with 1,2, and let the initial velocities be $v_{1,2}$ and the final velocities be $v'_{1,2}$. So, the impulse delivered to cart 1 is $$I_1 = \Delta p_1 = m_1 \Delta v_1 = m_1(v'_1 - v_1)$$ and likewise $$I_2 = \Delta p_2 = m_2 \Delta v_2 = m_2(v'_2 - v_2)$$ but by Newton's law of reaction $I_2 = -I_1$. Now consider the quantities $\Delta P = m_1 v_1 - m_2 v_2$ and $\Delta P' = m_1 v'_1 - m_2 v'_2$ (where I have chosen the capital P simply to distinguish the difference of momenta from the individual momenta of the carts).
We construct $$\Delta P' - \Delta P = m_1 (v'_1 - v_1) - m_2 (v'_2 - v_2) = I_1 - I_2 = 2I_1 .$$ You would expect the graph that you use to show a discontinuity at the time of the interaction equal to twice the impulse. - I understand now! Thanks so much! So are the spikes in the graphs supposed to represent the discontinuity of which you speak? – Outlier Nov 19 '12 at 1:52 Excellent answer. But can I ask why you have subtracted the momenta of the carts? – Outlier Nov 19 '12 at 6:58 Er...because you said you were plotting the difference in momenta and that got me started thinking along those lines. – dmckee♦ Nov 19 '12 at 15:03
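A tiny numeric check of the relation derived above, in Python, using the cart masses from the question and made-up velocities (the measured values are not given in the post); the outgoing velocities come from the standard 1-D elastic-collision formulas.

```python
m1, m2 = 0.2559, 0.2511        # cart masses in kg, from the question
v1, v2 = 0.40, -0.35           # velocities before the interaction (m/s), assumed values

# Standard 1-D elastic collision: momentum and kinetic energy both conserved.
v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

I1 = m1 * (v1p - v1)                     # impulse delivered to cart 1
dP_before = m1 * v1 - m2 * v2            # the plotted quantity before the interaction
dP_after = m1 * v1p - m2 * v2p           # ... and after it
print(dP_after - dP_before, 2 * I1)      # the jump in the plot equals 2*I1
```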
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9679380655288696, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/160442-logarithmic-equations.html
# Thread: 1. ## Logarithmic Equations I've been trying to solve this equation for more than an hour already and I still get the wrong answer: log5 (x) - log25 (x) + log125 (x) = 5 The answer is 5^6 or 15,625 but I keep getting 5^(15/2) for some reason. 2. There's some pretty nasty log bases there. Have you tried to change base? i.e. $\log_5x = \frac{\ln x}{\ln{5}}$ 3. I tried solving the equation with base Ln, but i get lost with the fractions and I get a very ugly number, so to speak 4. Maybe you can work backwards given $\log_5 5^6 - \log_{25} 5^6 + \log_{125}5^6$ $6\log_5 5 - \log_{25} (5^2)^3 + \log_{125}(5^3)^2$ $6\log_5 5 - 3\log_{25} 5^2 + 2\log_{125}5^3$ $6\log_5 5 - 3\log_{25} 25 + 2\log_{125}125$ $6-3+2=5$ 5. I did that just to make sure the answer was correct, but I still didn't get anywhere 6. $\log_5 x - \log_{25} x + \log_{125} x = 5$ $\frac{\ln x}{\ln 5} - \frac{\ln x}{\ln 25} + \frac{\ln x}{\ln 125} = 5$ $\ln x \cdot \bigg( \frac{1}{\ln 5} - \frac{1}{\ln 25} + \frac{1}{\ln 125} \bigg)= 5$ Can you carry on from there now? 7. Hello, >_<SHY_GUY>_<! $\log_5(x) - \log_{25}(x) + \log_{125}(x) \:=\: 5$ Using the Base-change Formula, we have: . . . $\displaystyle \frac{\ln x}{\ln 5} - \frac{\ln x}{\ln25} + \frac{\ln x}{\ln125} \;=\;5$ . . . . $\displaystyle \frac{\ln x}{\ln 5} - \frac{\ln x}{\ln 5^2} + \frac{\ln x}{\ln5^3} \;=\;5$ . . . $\displaystyle \frac{\ln x}{\ln 5} - \frac{\ln x}{2\ln 5} + \frac{\ln x}{3\ln 5} \;=\;5$ . . $\displaystyle \frac{6\ln x - 3\ln x + 2\ln x}{6\ln 5} \;=\;5$ . . . . . . . . . . . . . . $\displaystyle \frac{5\ln x}{6\ln 5} \;=\;5$ . . . . . . . . . . . . . . $5\ln x \;=\;30\ln 5$ . . . . . . . . . . . . . . . $\ln x \;=\;6\ln 5$ . . . . . . . . . . . . . . . $\ln x \;=\;\ln 5^6$ . . . . . . . . . . . . . . . . $x \;=\;5^6$
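For anyone who wants to confirm the answer numerically, a short Python check (using the change of base built into `math.log`):

```python
from math import log

x = 5 ** 6
print(log(x, 5) - log(x, 25) + log(x, 125))   # ~5.0, so x = 15625 satisfies the equation
```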
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8803102374076843, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/120931-need-help-matrix-operation.html
# Thread: 1. ## Need help for matrix operation I have the matrix equation A = B x C, where

B =
0 1 0
1 0 0
1 1 0
1 1 0

and C =
1 1 0 1 0 0
0 1 1 0 1 0
1 0 1 0 0 1

and by the equation A = B x C my answer is

A =
0 1 1 0 1 0
1 1 0 1 0 0
1 0 1 1 1 0
1 0 1 1 1 0

****************************
I want to ask: if A and B are known to me, how can I calculate C? I tried C = A x B(inverse) in MATLAB but it gave an error. Kindly help me to calculate C if A and B are known. 2. Originally Posted by moonnightingale I want to ask: if A and B are known to me, how can I calculate C? I tried C = A x B(inverse) in MATLAB but it gave an error. Kindly help me to calculate C if A and B are known. Hi there. You can multiply both sides of the equation by $B^{-1}$, but from the left side, i.e. $C=B^{-1}A$, not $C=AB^{-1}$.
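To illustrate the answer's point numerically, here is a small NumPy sketch (illustrative values; a square invertible stand-in for B is used because the 4×3 B in the post has no inverse at all, which is why MATLAB complained — for a non-square B one would use a least-squares solve such as `np.linalg.lstsq` instead).

```python
import numpy as np

B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])          # a square, invertible stand-in for B (det = 2)
C = np.array([[1., 1., 0., 1., 0., 0.],
              [0., 1., 1., 0., 1., 0.],
              [1., 0., 1., 0., 0., 1.]])
A = B @ C

C_left = np.linalg.solve(B, A)         # solves B X = A, i.e. X = B^{-1} A
print(np.allclose(C_left, C))          # True: left-multiplying by B^{-1} recovers C
# A @ np.linalg.inv(B) would compute A B^{-1}, which is not C (and requires A's column
# count to match B's row count, which is exactly the dimension error MATLAB reports).
```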
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8928503394126892, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/16296-conic-sections.html
# Thread: 1. ## conic sections 1. Write the equation of the parabola given the focus at (-3,-2) and the directrix is the line y=-6. It is helpful to draw a sketch of what is given to help in choosing the correct formula. Be sure you have the correct formula to fill in. Leave the equation in standard form. 2. Given the ellipse equation 9x^2 + 4y^2 - 18x + 16y = 0, find the coordinates of the center, the vertices, and the foci. Sketch the ellipse on the grid showing each of these points as dots. Show steps for converting to standard form. PLEASE INCLUDE WORK STEP BY STEP. THANK YOU 2. Hello, JROD23! The ellipse problem is not pretty . . . 2. Given the ellipse equation: . $9x^2 + 4y^2-18x + 16y\:=\:0$ find the coordinates of the center, the vertices, and the foci. Sketch the ellipse. To get Standard Form, we must complete the square. We are given: . $9x^2 - 18x + 4y^2 + 16y \;=\;0$ . . . . . $9(x^2 - 2x\qquad) + 4(y^2 + 4y\qquad) \;=\;0$ . . . . $9(x^2 - 2x + 1) + 4(y^2 + 4y + 4) \;=\;0 + 9 + 16$ . . . . . . . . . . . $9(x-1)^2 + 4(y + 2)^2 \;=\;25$ Divide by 25: . $\frac{9(x-1)^2}{25} + \frac{4(y+2)^2}{25} \;=\;1$ And we have: . . $\frac{(x-1)^2}{\frac{25}{9}} + \frac{(y+2)^2}{\frac{25}{4}} \;=\;1$ This is a "vertical" ellipse. .Its center is: $(1,\,\text{-}2)$ . . The major axis is vertical: . $a = \frac{5}{2}$ . . The minor axis is horizontal: . $b = \frac{5}{3}$ The vertices (ends of the major axis) are $\frac{5}{2}$ units above and below the center. . . Vertices: $\left(1,\,\frac{1}{2}\right),\;\left(1,\,\text{-}\frac{9}{2}\right)$ The co-vertices (ends of the minor axis) are $\frac{5}{3}$ units left and right of the center. . . Covertices: $\left(\text{-}\frac{2}{3},\,\text{-}2\right),\;\left(\frac{8}{3},\,\text{-}2\right)$ The foci are above and below the center. .We must find $c$. We have: . $c^2 \:=\:a^2-b^2\quad\Rightarrow\quad c^2 \:=\:\left(\frac{5}{2}\right)^2 - \left(\frac{5}{3}\right)^2 \:=\:\frac{125}{36} \quad\Rightarrow\quad c \,=\,\frac{5\sqrt{5}}{6}$ . . Foci: $\left(1,\,\text{-}2+\frac{5\sqrt{5}}{6}\right),\:\left(1,\,\text{-}2-\frac{5\sqrt{5}}{6}\right)$
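A quick sympy check of the completed square and of the focal distance computed above (purely a verification sketch; the variable names are arbitrary):

```python
import sympy as sp

x, y = sp.symbols('x y')
general = 9*x**2 + 4*y**2 - 18*x + 16*y
standard = 9*(x - 1)**2 + 4*(y + 2)**2 - 25       # the claimed completed-square form
print(sp.expand(standard - general))              # 0, so the two forms agree

a, b = sp.Rational(5, 2), sp.Rational(5, 3)       # semi-major and semi-minor axes
c = sp.sqrt(a**2 - b**2)
print(sp.simplify(c - 5*sp.sqrt(5)/6))            # 0, so the foci sit at (1, -2 +/- 5*sqrt(5)/6)
```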
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7853403687477112, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/169621/what-means-the-boundary-of-a-space
What is meant by the boundary of a space By definition a topological space is clopen (as a subset of itself) and so its boundary is empty, but, for example, it is said that the boundary of the closed unit interval is its two endpoints, and so on. What is the meaning of "boundary" in this context? - 1 Answer There are two notions of boundary in mathematics. One is the boundary of a subset of a topological space, and it is a relative notion: it takes as input two objects, a topological space and a subset of it. As a subset of $\mathbb{R}$, the boundary of $[0, 1]$ is its endpoints. (However, as a subset of $\mathbb{R}^2$, for example, the boundary of $[0, 1]$ is $[0, 1]$. Again, relative.) There is also the boundary of a manifold with boundary, which takes as input one object, a manifold with boundary. As a manifold with boundary, the boundary of $[0, 1]$ is also its endpoints. - Oh, that clarifies it, thanks – Eduardo Alan Dávalos Peña Jul 11 '12 at 20:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.972776472568512, "perplexity_flag": "head"}
http://mathoverflow.net/questions/tagged/determinacy
## Tagged Questions 3answers 498 views ### Counterintuitive consequences of the Axiom of Determinacy? I just read Dr Strangechoice's explanation that if all subsets of the real numbers are Lebesgue measurable, then you can partition $2^\omega$ into more than $2^\omega$ many pairwis … 1answer 182 views ### Which forcings preserve (some) determinacy? The question is exactly as in the title. I'm interested in general in all questions of the form "which forcings preserve property P?" for any P, but determinacy assumptions occupy … 2answers 265 views ### Weakly homogeneous trees under AD If AD$_\mathbb{R}$ holds and $\kappa < \Theta$ then every tree $T$ on $\kappa$ is weakly homogeneous (Martin–Woodin, "Weakly homogeneous trees.") I recall hearing that the hypo … 1answer 183 views ### sigma-algebra generated by OD sets Assume $V=L(\mathbb{R})$ and the Axiom of Determinacy. Is every set of reals generated by ordinal-definable sets of reals under the operations of countable union and intersection? … 2answers 622 views ### How additive is Lebesgue measure in ZF+AD ? What is known about the additivity of Lebesgue measure under the Axiom of Determinacy? That is, for what cardinals $\kappa$ do we have with $|I| = \kappa$, for all functions \$f : … 0answers 318 views ### How to prove projective determinacy (PD) from I0? Martin and Steel (in 1987?) showed that if there are infinite many Woodin cardinals then every projective set of reals is determined (PD). However, it is mentioned in many texts th … 1answer 300 views ### Consistency strengths related to the perfect set property I want a model of $\mathrm{MA}_{\sigma\mathrm{-centered}}+\neg\mathrm{CH}$ in which every set of reals in $L(\mathbb{R})$ has the perfect set property. In terms of consistency stre … 2answers 653 views ### Martin’s cone theorem and recursion theory Martin's remarkable cone theorem in the theory of determinacy says the following: Suppose $A\subseteq \omega^\omega$ is Turing invariant and determined. If \$\forall x\exists y … 1answer 253 views ### value of Theta in ZF+AD Since I found out about it, I've always been interested in the Axiom of Determinacy rather than the Axiom of Choice. Along these lines, I've kept flipping back to http://en.wikipe … 1answer 164 views ### The projection of a weakly homogeneous tree is determined Where can I read a proof of this? 1answer 205 views ### Related Open Game in Analytic Determinacy For this question, please refer to Chapter 33 page 638, Set Theory Millennium Edition, by Thomas Jech. The proof of analytic games $G_A$ is converted into an open game $G^\ast$ on …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202281832695007, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/63633/element-algebraically-distinguishable-from-its-inverse/63642
## element algebraically distinguishable from its inverse ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) (This question came up in a conversation with my professor last week.) Let $\langle G,\cdot \rangle$ be a group. Let $x$ be an element of $G$. Is there always an isomorphism $f : G \to G$ such that $f(x) = x^{-1}$ ? What if $G$ is finite? - I wonder if (in a non abelian maybe finite group) $x^{-1}$ is conjugate to $x$ ... – Olivier Bégassat May 1 2011 at 19:33 You might try thinking of G as a permutation group (via Cayley's Theorem). – Dan Ramras May 1 2011 at 19:45 1 @Olivier: No, in the alternating group $Alt(4)$, the 3-transposition $(1,2,3)$ is not conjugate to its inverse. (But it is in $Sym(4)$...) – Alain Valette May 1 2011 at 19:48 1 Olivier, see crazyproject.wordpress.com/2010/05/14/… – Andreas Thom May 1 2011 at 19:51 4 Incidentally, this is important and irritating if you were say, creating a library of character tables (ctbllib), since an element and its inverse are (a) basically indistinguishable to the average eye or from the ATLAS, but (b) are actually distinguishable by nit-picking details of things you are in fact storing about groups. This complicates the consistency checking of tables, especially modular tables where (indeed in one of the Mathieu groups) an element and its inverse may actually be distinguished by the modular table but not the ordinary, or maybe by the homology but not the character. – Jack Schmidt May 2 2011 at 0:51 ## 3 Answers The Mathieu group $M_{11}$ does not have this property. A quote from Example 2.16 in this paper: "Hence there is no automorphism of $M_{11}$ that maps $x$ to $x^{−1}$." Background how I found this quote as I am no group theorist: I used Google on "groups with no outer automorphism" which led me to this Wikipedia article, and from there I jumped to this other Wikipedia article. So I learned that $M_{11}$ has no outer automorphism. Then I used Google again on "elements conjugate to their inverse in the mathieu group" which led me to the above mentioned paper. EDIT: Following Geoff Robinson's comment let me show that any element $x\in M_{11}$ of order 11 has this property, using only basic group theory and the above Wikipedia article. The article tells us that $M_{11}$ has 7920 elements of which 1440 have order 11. So $M_{11}$ has 1440/10=144 Sylow 11-subgroups, each cyclic of order 11. These subgroups are conjugates to each other by one of the Sylow theorems, so each of them has a normalizer subgroup of order 7920/144=55. In particular, if $x$ and $x^{-1}$ were conjugate to each other, then they were so by an element of odd order. This, however, is impossible as any element of odd order acts trivially on a 2-element set. - 23 I like that this answer gives an explanation of how you can learn new math by clever searching. It's an important skill and one that people don't talk about so much. – Noah Snyder May 1 2011 at 22:58 4 It would have been nice to say in the answer itself which element of $M_{11}$ the element $x$ is. I think one choice is an element of $x$ order $11$, since a Sylow $11$-normalizer in $M_{11}$ has order $55$, so no element of order $11$ is conjugate to its inverse within $M_{11}$, which gives what you want since $M_{11}$ has no outer automorphisms. – Geoff Robinson May 2 2011 at 7:46 ### You can accept an answer to one of your own questions by clicking the check mark next to it. 
This awards 15 reputation points to the person who answered and 2 reputation points to you. No, such an isomorphism does not always exist, and the smallest counterexample is $G=C_5\rtimes C_4$ with $C_4$ acting faithfully. It is not hard to see that the only automorphisms of $G$ are inner, and that they cannot map an element of order 4 to its inverse. - Here's a comment which might as well be written down. If $f$ is required to be an inner automorphism, then for $G$ finite this question can be understood using the character table of $G$: $x$ is conjugate to its inverse if and only if $\chi(x)$ is real for all characters $\chi$. Since $\chi(x^{-1}) = \overline{ \chi(x) }$, one direction is clear. In the other direction, if $\chi(x)$ is real then $\chi(x) = \chi(x^{-1})$ for all characters $\chi$, hence $c(x) = c(x^{-1})$ for all class functions $c$. One also has the following cute result: the number of conjugacy classes which are closed under inversion is equal to the number of irreducible characters all of whose values are real (equivalently, the number of self-dual irreps). Since there exist plenty of groups (even simple groups) whose character tables have complex entries, there are plenty of groups with elements not conjugate to their inverses. This is one way to address the question for finite groups with no outer automorphisms. -
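For readers who want to experiment, here is a small Python/sympy sketch that checks, by brute force, the conjugacy part of the smallest counterexample mentioned above: in $C_5\rtimes C_4$ (realized as a permutation group on five points) an element of order 4 is not even conjugate to its inverse. The sketch only tests inner automorphisms, not the full automorphism claim, and the particular permutation generators are an assumption of the sketch.

```python
from sympy.combinatorics import Permutation, PermutationGroup

r = Permutation([1, 2, 3, 4, 0])      # 5-cycle generating C_5
s = Permutation([0, 2, 4, 1, 3])      # i -> 2i (mod 5), acts on C_5 with order 4
G = PermutationGroup([r, s])          # C_5 semidirect C_4
print(G.order())                      # 20

x = s                                 # an element of order 4
conjugates = {g * x * g**-1 for g in G.generate()}
print(x**-1 in conjugates)            # False: x is not conjugate to x^{-1} in G
```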
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9423766732215881, "perplexity_flag": "head"}
http://mathoverflow.net/questions/92645/admissible-group-operation-on-etale-separated-finite-type-scheme/93824
admissible group operation on etale, separated, finite type scheme Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Grothendieck in SGA 1 introduces a proposition in expose 5 (proposition 3.1) which states: Let `$X$` be etale, separated of finite type over `$Y$`, locally noetherian, and let `$G$` be a finite group which operates on `$X$` by `$Y$`-automorphisms. Then `$G$` operates admissibly and the quotient scheme `$X/G$` is etale over `$Y$`. The hint he gives is that we may show this for `$X$` quasi-projective, and to use proposition 1.8, which states that `$G$` operates admissibly on `$X$` iff `$X$` is the union of open affines that are invariant under the action of `$G$`. I am unsure how to show this. Help please? EDIT: so I understand how to make the reduction to the quasi-projective case (since every etale morphism is quasi-finite, and then just apply Zariski's main theorem, and we get thus that this is quasi-projective), but I am unsure how to show that if a finite group operates on a quasi-projective scheme then it operates admissibly. I have a rough idea for how it could be done for a projective scheme, but I do not see how I could alter it for quasi-projective :( I could post this proof if it is of any help to readers?? - A minor correction: etale does not imply radicial, and in fact is essentially in the other direction (etale plus radicial implies that a map is an open immersion). For instance, for fields, radicial means purely inseparable, while etale means (finite) separable. – Akhil Mathew Apr 12 2012 at 4:27 I agree finite etale does not imply radicial, but in the particular case that our morphism is not finite, wouldn't it then necessarily be radicial? Please correct me if this understanding is wrong >.< – Lucy Apr 12 2012 at 21:00 Not quite. Etale means basically that the fibers are all finite unions of finite separable field extensions (that, plus flatness), while radicial means that the fibers are all finite purely inseparable extensions. – Akhil Mathew Apr 13 2012 at 0:10 1 Answer We can reduce to the case where $Y$ is affine, and in this case (as you observe), $X$ is quasi-projective over $Y$ by Zariski's main theorem. Consequently, the key step is to show that a finite group operating on a quasi-projective scheme does so admissibly (meaning that we can form the quotient nicely). Admissibility is equivalent to the condition that every $G$-orbit is contained in an open affine, but any finite subset of projective space is contained in an open affine (e.g. find a hypersurface not containing any element of the subset and take the complement). A proof of étaleness in the case of a finite morphism is on p. 56 of Murre's "Lectures on an introduction to Grothendieck's theory of the fundamental group." It is based on a sequence of reductions, which I'll try to outline (without assuming finiteness of $X \to Y$, but I will assume that $Y$ has finite Krull dimension). First, étaleness descends under faithfully flat base change, and it can be checked on the stalks. Moreover, the process of taking the quotient by a finite group (which, on the level of rings, amounts to taking fixed points) is preserved under flat base change. (This is essentially the following observation: if $G$ acts on an $A$-algebra $B$, then if $A'$ is a flat extension of $A$, then $(A' \otimes_A B)^G \simeq B^G \otimes_A A'$; one proves this by fitting both into exact sequences.) 
What all this means is that if you want to check that a map $X/G \to Y$ is étale, then you may as well base-change the whole problem to $T \to Y$ where $T$ is a faithfully flat extension of the local scheme $\mathrm{Spec} \mathcal{O}_y$ (for any $y \in Y$). Anyway, a lot of things simplify when you're allowed to make these kinds of base changes. So let's assume for the sake of argument that $Y$ is the $\mathrm{Spec}$ of a complete local ring, noetherian, whose residue field is algebraically closed. Then $X$ splits into two pieces: the first is a piece finite étale over $Y$ (and thus a disjoint union of copies of $Y$, since etale covers of $T$ split) and a second piece whose image does not contain the closed point of $Y$. Let's consider what happens to the piece which is finite étale over $Y$, and so looks like $Y \sqcup Y \sqcup \dots \sqcup Y$. Here the group $G$ just acts by permuting the factors, and the quotient is what you get by identifying a bunch of pieces of $Y$, so in particular looks like $Y \sqcup Y \sqcup \dots \sqcup Y$ with a different number of pieces. Now what can we say about the piece $X'$ over $Y - { \ast}$ for $\ast \in Y$ the closed point? Not that much, but we do know that it is of smaller Krull dimension, and using induction on the dimension, we can assume that $X'/G$ is étale over $Y - { \ast}$. It follows that $X/G$ is etale over $Y$, completing the (sketched) proof. The key lemmas that make possible this replacement of $Y$ by such a nice ring are the following: Lemma: Let $(A, \mathfrak{m})$ be a local noetherian ring and $L/k$ an algebraic extension. Then there exists a flat extension $(\widetilde{A}, \widetilde{m})$ of $(A, \mathfrak{m})$ such that the $\widetilde{A}/\widetilde{m} \simeq L$ of the same Krull dimension. This is essentially the result of EGA III.10.3. Corollary: If $(A, \mathfrak{m})$ is any local noetherian ring of finite Krull dimension, then there exists a flat extension $(B, \mathfrak{n})$ of the same Krull dimension and such that $B$ is complete local and such that $B/\mathfrak{n}$ is algebraically closed. This follows from the lemma applied to the completion of $A$. The whole point is that the etale maps to $\mathrm{Spec} B$ are a lot simpler than the maps to $\mathrm{Spec} A$ (for instance, the etale covers of $\mathrm{Spec} B$ are all trivial). In the present case, the strict henselianization would also do. To recap, the point of the argument is to induct on the dimension of $Y$. Then, one reduces to checking the claim of etaleness by making a very strong base change, such that $X$ splits into an easily understood finite piece and a less easily understood but inductively controlled non-finite piece. These types of reductions are quite useful. For instance, they're integral to the proof of Zariski's main theorem given in EGA. Also, I'm pretty sure that the hypothesis of finite Krull dimension on $Y$ can be removed by a "noetherian descent" type argument (i.e., regarding $X, Y$ as a limit of schemes of finite type over $\mathbb{Z}$), but I'm not completely sure how noetherian descent is supposed to work for étaleness. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9296398758888245, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/211877-trig-tent-question.html
# Thread: 1. ## Trig tent question Okay, so this problem goes like this: A two-person tent is to be made so that the height at the center is a = 4 feet (see the figure below). If the sides of the tent are to meet the ground at an angle 60°, and the tent is to be b = 5 feet in length, how many square feet of material will be needed to make the tent? (Assume that the tent has a floor and is closed at both ends, and give your answer in exact form.) I am not sure how to set this one up. It looks like an equilateral triangle. 2. ## Re: Trig tent question Hello, goldbug78! A two-person tent is to be made so that the height at the center is a = 4 feet. If the sides of the tent are to meet the ground at an angle 60°, and the tent is to be b = 5 feet in length, how many square feet of material will be needed to make the tent? (Assume that the tent has a floor and is closed at both ends, and give your answer in exact form.) Code: ``` * * \ * \ * \ /|\ \ / | \ \ x / |4 \ x * / | \ * / | \ * 5 *-----*-----* : x/2 : x/2 :``` The front is an equilateral triangle with altitude 4 feet. Pythagorus: . $\left(\tfrac{x}{2}\right)^2 + 4^2 \:=\:x^2 \quad\Rightarrow\quad \tfrac{x^2}{4} + 16 \:=\:x^2$ . . . . . . . . . . $\tfrac{3}{4}x^2 \:=\:16 \quad\Rightarrow\quad x^2 \:=\:\tfrac{64}{3} \quad\Rightarrow\quad x \:=\:\tfrac{8\sqrt{3}}{3}$ The front triangle has area: . $\tfrac{1}{2}\left(\tfrac{8\sqrt{3}}{3}\right)(4) \:=\:\tfrac{16\sqrt{3}}{3}$ . . The two triangular panels have area: . $2 \times \tfrac{16\sqrt{3}}{3} \:=\:\tfrac{32\sqrt{3}}{3}$ A rectangular panel has area: . $\left(\tfrac{8\sqrt{3}}{3}\right)(5) \:=\:\tfrac{40\sqrt{3}}{3}$ . . The three rectangular panels have area: . $3 \times\tfrac{40\sqrt{3}}{3} \:=\: 40\sqrt{3}$ The total area is: . $\frac{32\sqrt{3}}{3} + 40\sqrt{3} \;=\;\frac{152\sqrt{3}}{3}\text{ ft}^2$
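A quick numeric check of the area computed above (plain Python; names are arbitrary):

```python
from math import sqrt

a, b = 4, 5                      # height at the centre and length of the tent, in feet
x = 8 * sqrt(3) / 3              # side of the equilateral cross-section, from (x/2)^2 + a^2 = x^2
triangle = 0.5 * x * a           # area of one triangular end panel
rectangle = x * b                # area of one rectangular panel (two sloping sides + floor)
total = 2 * triangle + 3 * rectangle
print(total, 152 * sqrt(3) / 3)  # both are about 87.76 square feet
```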
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502825140953064, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/10/21/the-codifferential/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
# The Unapologetic Mathematician ## The Codifferential From our calculation of the square of the Hodge star we can tell that the star operation is invertible. Indeed, since $*^2=(-1)^{k(n-k)}\lvert g_{ij}\rvert$ — applying the star twice to a $k$-form in an $n$-manifold with metric $g$ is the same as multiplying it by $(-1)^{k(n-k)}$ and the determinant of the matrix of $g$ — we conclude that $*^{-1}=(-1)^{k(n-k)}\lvert g^{ij}\rvert*$. With this inverse in hand, we will define the “codifferential” $\displaystyle\delta=(-1)^k*^{-1}d*$ The first star sends a $k$-form to an $n-k$-form; the exterior derivative sends it to an $n-k+1$-form; and the inverse star sends it to a $k-1$-form. Thus the codifferential goes in the opposite direction from the differential — the exterior derivative. Unfortunately, it’s not quite as algebraically nice. In particular, it’s not a derivation of the algebra. Indeed, we can consider $fdx$ and $gdy$ in $\mathbb{R}^3$ and calculate $\displaystyle\begin{aligned}\delta(fdx)&=-*d*(fdx)=-*d(fdy\wedge dz)=-*\frac{\partial f}{\partial x}dx\wedge dy\wedge dz=-\frac{\partial f}{\partial x}\\\delta(gdy)&=-*d*(gdy)=-*d(gdz\wedge dx)=-*\frac{\partial g}{\partial y}dy\wedge dz\wedge dx=-\frac{\partial g}{\partial y}\end{aligned}$ while $\displaystyle\begin{aligned}\delta(fgdx\wedge dy)&=-*d*(fgdx\wedge dy)\\&=-*d(fgdz)\\&=-*\left(\left(\frac{\partial f}{\partial x}g+f\frac{\partial g}{\partial x}\right)dx\wedge dz+\left(\frac{\partial f}{\partial y}g+f\frac{\partial g}{\partial y}\right)dy\wedge dz\right)\\&=\left(\frac{\partial f}{\partial x}g+f\frac{\partial g}{\partial x}\right)dy-\left(\frac{\partial f}{\partial y}g+f\frac{\partial g}{\partial y}\right)dx\end{aligned}$ but there is no version of the Leibniz rule that can account for the second and third terms in this latter expansion. Oh well. On the other hand, the codifferential $\delta$ is (sort of) the adjoint to the differential. Adjointness would mean that if $\eta$ is a $k$-form and $\zeta$ is a $k+1$-form, then $\displaystyle\langle d\eta,\zeta\rangle=\langle\eta,\delta\zeta\rangle$ where these inner products are those induced on differential forms from the metric. This doesn’t quite hold, but we can show that it does hold “up to homology”. We can calculate their difference times the canonical volume form $\displaystyle\begin{aligned}\left(\langle d\eta,\zeta\rangle-\langle\eta,\delta\zeta\rangle\right)\omega&=d\eta\wedge*\zeta-\eta\wedge*\delta\zeta\\&=d\eta\wedge*\zeta-(-1)^k\eta\wedge**^{-1}d*\zeta\\&=d\eta\wedge*\zeta-(-1)^k\eta\wedge d*\zeta\\&=d\left(\eta\wedge*\zeta\right)\end{aligned}$ which is an exact $n$-form. It’s not quite as nice as equality, but if we pass to De Rham cohomology it’s just as good. Posted by John Armstrong | Differential Geometry, Geometry
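As a concrete check of the $\mathbb{R}^3$ computation above, here is a small Python/sympy sketch with the Euclidean Hodge star hard-coded on basis forms; the tuple representation of forms is just a bookkeeping choice for this sketch.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Represent a 1-form a dx + b dy + c dz by (a, b, c) and a 2-form
# p dy^dz + q dz^dx + r dx^dy by (p, q, r); a 3-form is just its coefficient.
def star1(w):                 # *dx = dy^dz, *dy = dz^dx, *dz = dx^dy
    return w

def d2(w):                    # d(p dy^dz + q dz^dx + r dx^dy) = (p_x + q_y + r_z) dx^dy^dz
    p, q, r = w
    return sp.diff(p, x) + sp.diff(q, y) + sp.diff(r, z)

def delta1(w):                # codifferential on 1-forms here: delta = -(* d *)
    return -d2(star1(w))      # the final * just reads off the 3-form's coefficient

f = x**2 * y + sp.sin(z)      # an arbitrary test function
print(sp.simplify(delta1((f, 0, 0)) + sp.diff(f, x)))   # 0: delta(f dx) = -df/dx
print(sp.simplify(delta1((0, f, 0)) + sp.diff(f, y)))   # 0: delta(g dy) = -dg/dy
```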
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208163022994995, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/46353/is-observable-universe-an-explanation-against-olbers-paradox
# Is observable universe an explanation against Olbers' paradox? First of all, let me tell you that I'm not a physicist but rather a computer scientist with a mere interest in physics at nowhere near a professional level so feel free to close this question if it doesn't make any sense. I remember a physicist friend mentioning me about an argument about the finiteness of the universe. I have looked it up and it turned out to be Olbers' Paradox. We computer scientist like to use astronomical numbers to help us imagine the complexity of an algorithm. One of the most common one is the number of atoms in the Observable Universe (which we take as $10^{80}$) so I have a crude understanding about the observable universe concept. I had known these two for some time hence I woke up with the dilemma today. So my question is, how come it can be argued that universe is finite just because it is dark if we know that we can only observe a finite portion of it? Can't it be the case that the universe is infinite even if the sky is dark because not all the light from all the stars reach the earth? I have searched this a little bit but I think I need an explanation in simpler terms (like popular physics). A historical perspective would also be welcomed. - – Qmechanic♦ Dec 9 '12 at 12:40 People answering here, please remember, when stating the paradox, that light from distant sources goes down as the square of distance. – arivero Dec 9 '12 at 22:19 ## 3 Answers So my question is, how come it can be argued that universe is finite just because it is dark if we know that we can only observe a finite portion of it? You are mixing two theories here. Olbers paradox has as a basic theory a static infinite in space and time universe. The dark night sky means that either the universe is not static, or has a beginning, or has a finite extent in space. Or all three. Can't it be the case that the universe is infinite even if the sky is dark because not all the light from all the stars reach the earth? A different model than a static infinite in space and time universe is needed in this case, an infinite universe that appeared at a time t=0, for example, so that the light of distant stars would not have reached us by now. But there are more data than the dark night sky to be fitted by a cosmological model and the available data fit the Big Bang model quite well: the Big Bang occurred approximately 13.75 billion years ago, which is thus considered the age of the Universe. After its initial expansion from a singularity, the Universe cooled sufficiently to allow energy to be converted into various subatomic particles, including protons, neutrons, and electrons. - Here is an analogy for you: You find yourself in a forest. A forest ranger stand next to you. He tells you the forest goes on and on and has no boundary. You look around. There is tree trunks all around you, yet you do see gaps between the trunks. How do you react to the forest ranger's remark? You tell him he is wrong. If the forest was infinite in extent, wherever you look, your line of sight would ultimately hit a tree trunk. The forest ranger thinks for a while and responds: "the forest stretches without limits, but you only see part of the forest. This is because the ground is not flat, and distant trees disappear behind the horizon." As any analogy, this is not a full representation of 'the real thing'. It does not provide an analogy for all aspects of the commonly accepted solution to Olbers' paradox. 
Still it might help explaining how concepts like "observable universe" contribute to the resolution of the paradox. - +1 for the beautiful analogy – gokcehan Dec 11 '12 at 11:02 Olber's paradox just states that if the universe is infinite and static (and flat) then any line of sight to the sky will invariably hit a star/source of light, and therefore our sky should be bright all the time. The observable universe is a concept that arises from the expansion of the universe. Because the universe is expanding in such a way that points farther away from us are moving away faster than points closer to us (i.e, the further something is from us, the faster it's moving away) there is a definite point beyond which light will never reach us, since that point is moving away faster than the speed of light. Now to come to your question - You're absolutely right. We don't actually know whether then whole (observable + the rest) universe is finite or not. There are theories which hypothesize the lower limit on the size of the entire universe (not just the observable universe) but in literature people mostly restrict themselves to speaking about the observable universe since we cannot (by definition) observe anything outside the observable universe, so no hypothesis can be tested. This isn't necessarily a contradiction of Olber's paradox, since he was talking about an infinite and static universe. So his paradox can be seen as favouring an expanding universe model rather than a static one to explain what we see. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485681653022766, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3803014
Physics Forums ## Effective angular diameter of the sun for this experiment Hi, We are using a radio interferometer to measure the angular diameter of the Sun. An in depth description of how this is done is not relevant to solve the immediate problem I have. But briefly, we rotate the interferometer in azimuth so that the direction it's looking at traverses across the center of the Sun. We take readings of an interference pattern from two mirrors off to the sides during this traverse. Then we use this pattern to find out the angular diameter of the Sun that we saw. My problem is that the interferometer rotates in azimuth around an axis that is perpendicular to the surface of the Earth, but the Sun is up at some latitude. So the interferometer doesn't trace a straight line diameter across the Sun. It traces an arc. The length of this arc is going to be the effective angular diameter that we finally obtain. This is described in the attached diagram. The real angular diameter of the Sun is the length of the straight line. But the value for the angular diameter that we actually obtain is the length of the curved line. That curved line is the path the interferometer actually traverses. I want to find a relationship between the length of the curved line, the length of the straight line, and the latitude of the Sun, so I can find the real angular diameter of the Sun from the value that I obtained. Could someone please suggest a way I can do this? I think it has to do with spherical trigonometry, but I don't know where to start. I don't want to go through a whole textbook of Spherical Trigonometry just to solve this. What I need is a relevant equation. Thanks. Attached Thumbnails Sorry, that diagram is wrong. Attached is a better diagram. Attached Thumbnails OK, I found an expression for the effective angular diameter of the first diagram (but not for the second one yet) using the spherical law of cosines. If the latitude of the sun is $\theta$, and the actual angular diameter of the sun ( straight line ) is $d$, the effective angular diameter ( curved line ) as shown in the first diagram ( when the direction the interferometer is pointing towards doesn't go through the center of the sun ) is, $$d \phi = \cos (\theta) \arccos \left( \frac{\cos( d ) - \sin^2 ( \theta )}{\cos^2 ( \theta )} \right)$$
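As a quick sanity check of the expression above, here is a short Python sketch that evaluates it and inverts it numerically by bisection to recover the true diameter $d$ from a measured arc; all of the numbers are made-up test values.

```python
from math import acos, cos, sin, radians, degrees

def effective_diameter(d, theta):
    """Arc traced in azimuth (the curved line) for true angular diameter d
    at solar latitude theta, both in radians, using the formula above."""
    return cos(theta) * acos((cos(d) - sin(theta) ** 2) / cos(theta) ** 2)

def true_diameter(arc, theta, lo=1e-6, hi=0.1):
    """Invert effective_diameter by bisection to recover d from a measured arc."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if effective_diameter(mid, theta) < arc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d_true = radians(0.53)                      # roughly the Sun's angular diameter
theta = radians(40.0)                       # made-up solar latitude
arc = effective_diameter(d_true, theta)
print(degrees(arc), degrees(true_diameter(arc, theta)))   # recovers ~0.53 degrees
```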
http://crypto.stackexchange.com/questions/1789/why-rsa-encryption-key-is-based-on-modulophin-rather-than-modulo-n/1792
# Why the RSA encryption key is based on modulo phi(n) rather than modulo n

While calculating the RSA encryption key we reduce modulo $\phi(n)$ rather than modulo $n$. I couldn't understand why this is so.

## 3 Answers

Firstly, a number theory textbook might well help here. I own An Introduction to the theory of numbers, which will give you a grounding in this and many other number theory topics which crop up in crypto. It's not my favourite book notation-wise, but in terms of topics covered it is very thorough.

Now, on to what you need help with. There is a theorem called the division theorem which says that any number $n$ can be written as $n = pq + r$ with $0 \leq r < q$. In plain words, any number $n$ may be represented by a divisor $q$, a quotient $p$ and a remainder $r$. An example would be $13 = 1 \cdot 7 + 5$ - you recognise this in that $13/7$ could be written as "$1$ remainder $5$".

Modular arithmetic is related - let's start off with $13 \equiv 5 \pmod 7$. Intuitively, modular arithmetic is often explained as "you take the remainder and that is the result", which isn't far from the truth - what we actually say is that $5$ and $13$ are part of the same congruence class modulo $7$ - that is, $q$ and $r$ ($7$ and $5$) remain the same and the congruence class is made up of all the $n$ for all the $p$ possible. So for $p=2$, $n = 2 \cdot 7 + 5 = 19$, and indeed $19 \equiv 5 \pmod 7$ (feel free to check it!).

So, you're concerned with the use of $\bmod \phi(n)$ in RSA - well, the expression usually given is $de \equiv 1 \pmod{\phi(n)}$. We can re-write that in division-theorem form as $de = k\phi(n) + 1$, where $k$ plays the role of $p$ from the previous example, but is renamed so as not to be confused with the prime $p$ that forms $n$. Now observe what you may have learnt about exponents: $x^{a+b} = x^a x^b$ and $x^{ab} = (x^a)^b$. If you're not convinced, pick numeric examples.

Now, a message $m$ raised to the power $de$ is $m^{de}$ - but we have a different expression for $de$ already, so $m^{de} = m^{k\phi(n) + 1}$. This can then be written as $(m^{\phi(n)})^k \times m^1$. There is a very good reason for this form being desirable - namely the Euler-Fermat generalisation of Fermat's Little Theorem, which states that $$a^{\phi(n)} \equiv 1 \pmod n$$ for any $a$ coprime to $n$. Hopefully you can see that our manipulating the algebra to get $m^{\phi(n)}$ into it is not an accident at all - since under $\bmod n$, $(m^{\phi(n)})^k \times m^1$ becomes $1^k \times m^1 = m$ (for $m$ coprime to $n$; the remaining cases are handled via the Chinese remainder theorem). This is what makes RSA work the way it does.

I'd also like to address the point fgrieu has raised in his comment, since it is important. The requirement for the private key is that $de \equiv 1 \pmod{\phi(n)}$ - or $de = 1 + k\phi(n)$. Clearly, $k$ can be any number we like - any congruence class for which this relation holds will work. The lowest common multiple of two numbers is the smallest number that both of them divide - for example, the lowest common multiple of $4$ and $6$ is $12$. In this case we observe that $4$ divides $12$, as does $6$. Now, the lowest common multiple of $p-1$ and $q-1$ is a number such that both $p-1$ and $q-1$ divide it - well, $\phi(n)$ is one such common multiple, for starters! However (thanks to Poncho), we can say that it is not the smallest possible one. To explain this, consider that $p$ is an odd number, and odd numbers can all be written in the form $2k+1$ for some $k$. If we subtract one, we are left with an even number, which has at least one factor of $2$.
Therefore, since $p-1$ and $q-1$ are both even, their product ($\phi(pq)$) has at least two factors of $2$, which is at least one factor of $2$ more than necessary. More generally, the definition of the lowest common multiple uses the fundamental theorem of arithmetic: the lcm is equal to the product of each prime raised to the largest power in which it appears in either number's prime factorisation - a mouthful, yes? Well, let's look at that. I chose $4$ and $6$ before - their prime factorisations are $4 = 2^2$ and $6=2\times 3$. Taking the maximum exponent of each prime, $2^2 \times 3 = 12$. However, if we did $4\times 6 = 2^2 \times 2 \times 3 = 2^3 \times 3$ we see we have gained an extra, unnecessary factor of $2$.

It turns out the Euler-Fermat generalisation can be generalised again - to the Carmichael theorem, which states that $$a^{\lambda(n)} \equiv 1 \pmod n$$ where $\lambda(n)$ is the smallest positive integer for which this expression is true for all $a$ coprime to $n$. A slight detour - $\phi(pq)=\phi(p)\phi(q)$, and since $\phi(p) = p-1$ that is how we end up with our definition $\phi(pq)=(p-1)(q-1)$. Now, in the general case, it turns out that $\lambda(pq) = \operatorname{lcm}(\lambda(p), \lambda(q)) = \operatorname{lcm}(p-1, q-1)$. By the same reasoning as in the case of $\phi(pq)$, $de \equiv 1 \pmod{\lambda(pq)}$ allows us to invert $m^e$ back to $m$ under $\bmod n$.

fgrieu has used the term "Rings" - if you're interested in those, I highly recommend Rings, Fields and Groups, which is possibly the best Maths textbook I have ever read. It's about (abstract) algebra and also covers some of the number-theoretic points, but not as many as a pure number-theory textbook does. -

1 One note: $\phi(n)$ will never be the smallest possible number (assuming that $n$ has at least two distinct odd prime factors), and so $\operatorname{lcm}(p-1, q-1) < (p-1)(q-1)$ will always hold (because $(p-1)$ and $(q-1)$ both have $2$ as a factor). – poncho Feb 1 '12 at 17:43

@poncho ah of course. I'll edit that in. I was focusing too hard on my explanation and missed the obvious. Thanks :) – Antony Vennard Feb 1 '12 at 17:51

+1: it's Fermat's Little Theorem. – Jason S Feb 3 '12 at 14:14

In RSA, the public key is $e$ and the private key is $d$, where: $ed \equiv 1 \pmod{\phi (n)}$ To rearrange: $d \equiv e^{-1} \pmod{\phi (n)}$ In a public-key system, it should be the case that one cannot compute the private key from the public key. Therefore, at least one of the variables should be kept private. In the above equation, everyone knows $e$, everyone can compute inverses, and if you were to use $n$ as the modulus, that would be known as well, allowing anyone to compute $d$. The reason $\phi (n)$ is used as the modulus is that it is assumed to be hard to compute $\phi (n)$ given only $n$, but it is easy to compute if you know $p$ and $q$ such that $n=pq$: $\phi(n)=(p-1)(q-1)$. Essentially, the easiest way to compute $\phi (n)$ is to factor $n$ into $p$ and $q$, and then compute it. There are deeper reasons why it is specifically $\phi (n)$, but this should get you started. -

1 I took the liberty of reversing the implication in the first statement. In the other direction, it was incorrect; check with $n=55$, $e=3$, $d=7$, which forms a valid RSA key, yet with $e\cdot d\not \equiv 1 \pmod {\phi(n)}$. – fgrieu Feb 3 '12 at 5:57

Thanks. I see why it was incorrect before. – PulpSpy Feb 3 '12 at 13:17

Why is there no congruence? – user5507 Feb 4 '12 at 3:31

@user5507: $e\cdot d \equiv 1 \pmod {\phi(n)}$ is a sufficient, but not necessary, condition for $(n,e,d)$ being a working RSA key.
In my counterexample above, it happens not to hold. When $n$ is the product of distinct odd primes $(p,q)$, the necessary and sufficient condition for $(n,e,d)$ being a working RSA key is $e\cdot d\equiv 1 \pmod {LCM(p-1,q-1)}$, and in my counterexample this congruence $\pmod {20}$ holds. – fgrieu Feb 4 '12 at 11:01 One usually wants to get back the plain text by decrypting the cipher text obtained by encrypting the plain text. This works for RSA only by taking $d = e^{-1} \bmod \varphi(n)$ and NOT with $d = e^{-1} \bmod n$. Security issues don't matter if the method does not work. -
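To make the $\phi(n)$-versus-$\operatorname{lcm}$ discussion above concrete, here is a small sketch (Python 3.8+ for the modular inverse via `pow`; the toy primes are my own choice, not from the answers). It checks that both exponents decrypt every message, and reproduces fgrieu's $(n,e,d)=(55,3,7)$ example from the comments.

```python
from math import gcd

p, q = 5, 11                                   # toy primes, far too small for real use
n = p * q                                      # 55
phi = (p - 1) * (q - 1)                        # 40
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1) = 20

e = 3
d_phi = pow(e, -1, phi)   # 27: inverse of e modulo phi(n)
d_lam = pow(e, -1, lam)   #  7: inverse of e modulo lcm(p-1, q-1)

# Both private exponents undo encryption for every message m,
# even though e*d_lam is only 1 modulo lcm(p-1, q-1), not modulo phi(n).
assert all(pow(pow(m, e, n), d_phi, n) == m for m in range(n))
assert all(pow(pow(m, e, n), d_lam, n) == m for m in range(n))

print(e * d_lam % phi, e * d_lam % lam)   # 21 1  -> fgrieu's counterexample
```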
http://en.wikipedia.org/wiki/Geometric_algebra
# Geometric algebra

Not to be confused with Algebraic geometry. For other uses, see Geometric algebra (disambiguation).

A geometric algebra is the Clifford algebra of a vector space over the field of real numbers endowed with a quadratic form. The spacetime algebra and the conformal geometric algebra are specific examples of such geometric algebras. The term is also used as a collective term for the approach to classical, computational and relativistic geometry that applies these algebras.

A key feature of GA is its emphasis on geometric interpretations of certain elements of the algebra (the multivectors) as geometric entities. A multivector may be any of: a scalar (also called a 0-vector), a vector (or 1-vector), a bivector (2-vector), a trivector (3-vector), an n-vector of higher grade, or any combination thereof. For example, certain n-vectors can be interpreted as representing subspaces of the vector space. Via this interpretation of multivectors as geometric entities, geometric operations are realized as algebraic operations in the algebra. Moreover, since vectors and scalars exist side-by-side in the geometric algebra, it is possible for a vector to be added to a scalar – an operation that does not occur in conventional linear algebra. It turns out that, in addition to being possible, the sum of a scalar and a vector has meaningful interpretations and practical uses.

Proponents argue[1] that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity. Others claim that in some cases the geometric algebra approach is able to sidestep a "proliferation of manifolds"[2] that arises during the standard application of differential geometry. The associated geometric calculus is a generalization of vector calculus that is an alternative to the use of differential forms and Hodge duality.

In 1878, the year before his death, Clifford expanded upon Grassmann's Ausdehnungslehre to form what are now usually called Clifford algebras in his honor, although Clifford himself chose to call them "geometric algebras"; this term was repopularized by Hestenes[3] in the 1960s. Geometric algebra (GA) finds application in physics, in graphics and in robotics.

## Definition and notation

Given a finite-dimensional real quadratic space $V = \mathbf{R}^n$ with quadratic form $Q : V \to \mathbf{R}$, the geometric algebra for this quadratic space is the Clifford algebra Cℓ(V,Q). The algebra product is called the geometric product. It is standard to denote the geometric product by juxtaposition.

For quadratic forms of any signature, an orthogonal basis $\{e_1,\dots,e_n\}$ can be found for V such that each $e_i^2$ is either −1, 0 or +1. The number of $e_i$'s associated with each of these three values is expressed by the signature, which is an invariant of the quadratic form. When Q is nondegenerate there are no 0's in the signature, and so an orthogonal basis of V exists with p elements squaring to 1 and q elements squaring to −1, with p + q = n. We denote this algebra $\mathcal{G}(p,q)$. For example, $\mathcal G(3,0)$ models 3D Euclidean space, $\mathcal G(1,3)$ relativistic spacetime and $\mathcal G(4,1)$ a 3D conformal geometric algebra.

### Standard bases and the geometric product

Standard n-vector basis; unit scalar 1 (represented by a black number line), unit vectors, unit bivectors, and a unit trivector, all in 3d.

Viewing the geometric algebra as a quotient of the tensor algebra, the geometric algebra's product is inherited from the tensor algebra.
However, some authors instead introduce the geometric product by defining it on a standard basis, as follows. One can always find a basis $\{e_1,\cdots,e_n\}$ for V such that $e_i \cdot e_j = 0 \,$ for all i ≠ j (orthogonality) and $\,e_i \cdot e_i = Q(e_i) \in \{-1,0,1\}$. The set of all possible products of these n symbols with indices in increasing order, including 1 as the empty product, then forms a basis for the geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra $\mathcal{G}(3,0)$:

$\{1,e_1,e_2,e_3,e_1e_2,e_1e_3,e_2e_3,e_1e_2e_3\}\,$

A basis formed this way is called a standard basis for the geometric algebra, and any other orthogonal basis for V fitting the above description will produce another standard basis. Each standard basis consists of $2^n$ elements. Given a standard basis, the geometric product between elements of the algebra is completely described by the rules:

• $\,e_ie_i=Q(e_i)$
• $\,e_ie_j=-e_je_i$ for i ≠ j (orthogonal vectors anticommute)
• $\,e_i \alpha = \alpha e_i$ (scalars commute)
• $\,(AB)C = A(BC) = ABC$ (associativity of the geometric product)
• $\,(A+B)C=AC+BC$ and $\,C(A+B)=CA+CB$ (distributivity of the geometric product over addition).

The first four rules reduce the geometric product of any two standard basis elements to another standard basis element, up to a sign, or to zero. A few example computations follow:

$\,(e_1e_2)(e_3e_4)=e_1e_2e_3e_4$

$\,(e_2)(e_1e_2e_4)=(e_2e_1)(e_2e_4)=-(e_1e_2)(e_2e_4)=-e_1(e_2e_2)e_4=-Q(e_2)e_1e_4.$

The geometric product of any two elements of the algebra can be computed with these rules, including the last one. Specifically, if the standard basis elements are $\{b_i \mid i \in S\}$ with S an index set, then

$(\Sigma_i \alpha_i b_i)(\Sigma_j \beta_j b_j)=\Sigma_{i,j} \alpha_i\beta_j b_i b_j\,$.

### Grading

Geometric interpretation for the outer product of n vectors (u, v, w) to obtain an n-vector (parallelotope elements), where n = grade,[4] for n = 1, 2, 3. The "circulations" show orientation.[5]

Using a standard basis, a graded vector space structure can be established. An element of the algebra which is homogeneous of grade n is called an n-vector, or sometimes a grade-n vector. This grading does not make the algebra a graded algebra with the Clifford product,[6] but the grading does form a graded algebra using the exterior product, defined later. Ordinary vectors (those in V) are in the span of $\{e_1,\cdots,e_n\}$ and are called 1-vectors. Scalars are called 0-vectors. Elements in the span of $\{e_ie_j\mid 1\leq i<j\leq n\}$ are called 2-vectors, elements in the span of $\{e_ie_je_k\mid 1\leq i<j<k\leq n\}$ are called 3-vectors, and so on up to the last grade of n-vectors. Many elements of the algebra are not graded by this scheme, since they are sums of elements of differing grade. Such elements are said to be of mixed grade. Generic elements of the geometric algebra are usually called multivectors, with the term vector usually reserved for 1-vectors. The grading of multivectors is independent of the orthogonal basis chosen originally.

A multivector $A$ may be decomposed with the grade projection operator $\langle A \rangle _r$, which outputs the grade-r portion of A. As a result:

$A = \sum_{r=0}^{n} \langle A \rangle _r$

As an example, the geometric product of two vectors $a b = a \cdot b + a \wedge b = \langle a b \rangle_0 + \langle a b \rangle_2$ since $\langle a b \rangle_0=a\cdot b\,$ and $\langle a b \rangle_2 = a\wedge b\,$ and $\langle a b \rangle_i=0\,$ for i other than 0 and 2.
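The multiplication rules above are easy to implement directly. Here is a minimal sketch (Python; the representation of a blade as a tuple of increasing indices plus a coefficient is my own choice, not a standard API) that multiplies basis blades using exactly those rules: swapping distinct orthogonal vectors flips the sign, and a repeated index contracts to $Q(e_i)$.

```python
def blade_product(a_idx, b_idx, signature):
    """Geometric product of two basis blades.

    a_idx, b_idx : tuples of strictly increasing basis indices,
                   e.g. (1, 2) stands for e1 e2 and () for the scalar 1.
    signature    : dict mapping index i -> Q(e_i), e.g. {1: 1, 2: 1, ...}.
    Returns (coefficient, index_tuple) for the resulting blade.
    """
    coeff = 1
    idx = list(a_idx) + list(b_idx)

    # Sort the concatenated indices; each swap of two distinct
    # orthogonal vectors anticommutes, so it flips the sign.
    swapped = True
    while swapped:
        swapped = False
        for k in range(len(idx) - 1):
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                coeff = -coeff
                swapped = True

    # Contract equal neighbours: e_i e_i = Q(e_i).
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            coeff *= signature[idx[k]]
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return coeff, tuple(out)

sig = {i: 1 for i in range(1, 5)}            # a Euclidean signature, Q(e_i) = +1
print(blade_product((1, 2), (3, 4), sig))    # (1, (1, 2, 3, 4))  = e1 e2 e3 e4
print(blade_product((2,), (1, 2, 4), sig))   # (-1, (1, 4))       = -Q(e2) e1 e4
```

Both printed results match the example computations given above.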
### Inner and outer products

Given two vectors a and b: if the geometric product ab is[7] anticommutative, they are perpendicular (top), because a∧b = −b∧a and a · b = 0; if it is commutative, they are parallel (bottom), because a∧b = 0 and a · b = b · a.

There are two other important operations on V × V in the geometric algebra besides the geometric product. Let a and b be elements of V:

• The inner product on V is the symmetric bilinear form arising as the symmetric part of geometric multiplication (and equivalently the bilinear form arising from the quadratic form Q), and is denoted by $a \cdot b$. It is related to the geometric product and quadratic form by these equations:

$a \cdot b = \tfrac{1}{2}(ab+ba)=b \cdot a$

$aa =a \cdot a= Q(a)$

The inner product of two vectors is always a 0-vector of the algebra.

• The outer product on V, denoted with ∧, arises as the antisymmetric part of the geometric product:

$a \wedge b = \tfrac{1}{2}(ab-ba)=-b \wedge a$

$a\wedge a=0$

The outer product of two vectors is always a 2-vector of the algebra.

• The inner and outer products are united in the geometric product, since $ab = a \cdot b + a \wedge b$. Thus the geometric product of two vectors is in general of mixed grade.

If a⋅a ≠ 0 for some vector a, then $a^{-1}$ exists and is equal to $a/(a \cdot a)$. For a positive-definite or negative-definite quadratic form, all nonzero vectors have multiplicative inverses. Not all the elements of the algebra are invertible. For example, if u is a unit vector in V (i.e. a vector such that uu = 1), the elements 1 ± u have no inverse since they are zero divisors: (1 − u)(1 + u) = 1 − uu = 1 − 1 = 0.

Vectors will be represented by lower case letters (e.g. a), and multivectors by upper case letters (e.g. A). Scalars will be represented by Greek characters.

### Representation of subspaces

Geometric algebra represents subspaces of V as multivectors, and so they coexist in the same algebra with vectors from V. A k-dimensional subspace W of V is represented by taking an orthogonal basis $\{b_1,b_2,\cdots b_k\}$ and using the geometric product to form the blade $D = b_1 b_2 \cdots b_k$. There are multiple blades representing W; all those representing W are scalar multiples of D. These blades can be separated into two sets: positive multiples of D and negative multiples of D. The positive multiples of D are said to have the same orientation as D, and the negative multiples the opposite orientation. Blades are important since geometric operations such as projections, rotations and reflections are implemented by using the geometric product to multiply vectors and blades.

### Unit pseudoscalars

Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace W of V is a blade that is the product of the members of an orthonormal basis for W. It can be shown that if I and I′ are both unit pseudoscalars for W, then I = ±I′ and $I^2 = \pm 1$.

Suppose the geometric algebra $\mathcal{G}(n,0)$ with the familiar positive-definite inner product on $\mathbf{R}^n$ is formed. Given a plane (2-dimensional subspace) of $\mathbf{R}^n$, one can find an orthonormal basis $\{b_1,b_2\}$ spanning the plane, and thus find a unit pseudoscalar $I = b_1 b_2$ representing this plane. The geometric product of any two vectors in the span of $b_1$ and $b_2$ lies in $\{\alpha_0+\alpha_1 I\mid \alpha_i\in\mathbb{R} \}$, that is, it is the sum of a 0-vector and a 2-vector. By the properties of the geometric product, $I^2 = b_1b_2b_1b_2 = -b_1b_2b_2b_1 = -1$.
The resemblance to the imaginary unit is not accidental: the subspace $\{\alpha_0+\alpha_1 I\mid \alpha_i\in\mathbb{R} \}$ is R-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each 2-dimensional subspace of V on which the quadratic form is definite. It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to −1, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces. In $\mathcal{G}(3,0)$, an exceptional case occurs. Given a standard basis built from orthonormal ei's from V, the set of all 2-vectors is generated by $\{e_3e_2,e_1e_3,e_2e_1\}\,$. Labelling these i, j and k (momentarily deviating from our uppercase convention), the subspace generated by 0-vectors and 2-vectors is exactly $\{\alpha_0+i\alpha_1+j\alpha_2+k\alpha_3\mid \alpha_i\in\mathbb{R}\}$. This set is seen to be a subalgebra, and furthermore is R-algebra isomorphic to the quaternions, another important algebraic system. ### Extensions of the inner and outer products It is common practice to extend the outer product on vectors to the entire algebra. This may be done through the use of the grade projection operator: $C \wedge D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_{r+s} \,$ (the outer product) The inner product on vectors can also be generalised, but in more than one non-equivalent way. The paper (Dorst 2002) gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged. Among these several different generalizations of the inner product on vectors are: $\, C \;\big\lrcorner\; D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{s-r}$   (the left contraction) $\, C \;\big\llcorner\; D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{r-s}$   (the right contraction) $\, C * D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_0$   (the scalar product) $\, C \bullet D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{|s-r|}$   (the "(fat) dot" product) $\, C \bullet_H D := \sum_{r\ne0,s\ne0}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{|s-r|}$   (Hestenes's inner product)[8] (Dorst 2002) makes an argument for the use of contractions in preference to Hestenes's inner product; they are algebraically more regular and have cleaner geometric interpretations. A number of identities incorporating the contractions are valid without restriction of their inputs. Benefits of using the left contraction as an extension of the inner product on vectors include that the identity $ab = a \cdot b + a \wedge b$ is extended to $aB = a \;\big\lrcorner\; B + a \wedge B$ for any vector a and multivector B, and that the projection operation $\mathcal{P}_b (a) = (a \cdot b^{-1})b$ is extended to $\mathcal{P}_B (A) = (A \;\big\lrcorner\; B^{-1}) \;\big\lrcorner\; B$ for any blades A and B (with a minor modification to accommodate null B, given below). 
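As a quick sanity check of the vector-level identities $a \cdot b = \tfrac{1}{2}(ab+ba)$ and $a \wedge b = \tfrac{1}{2}(ab-ba)$, here is a small worked example (mine, not part of the article). Take $a = e_1$ and $b = e_1 + e_2$ in $\mathcal{G}(3,0)$. Then

$$ab = e_1(e_1+e_2) = 1 + e_1e_2, \qquad ba = (e_1+e_2)e_1 = 1 - e_1e_2,$$

so $a \cdot b = \tfrac{1}{2}(ab+ba) = 1$ and $a \wedge b = \tfrac{1}{2}(ab-ba) = e_1e_2$: a 0-vector and a 2-vector respectively, exactly as the decomposition $ab = a \cdot b + a \wedge b$ requires.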
### Terminology specific to geometric algebra Some terms are used in geometric algebra with a meaning that differs from the use of those terms in other fields of mathematics. Some of these are listed here: Outer product In GA this refers to what in other contexts is usually called the exterior product. It is not the outer product of linear algebra. Inner product In GA this generally refers to a scalar product on the vector subspace (which is not required to be positive definite) and any chosen generalization of this product to the entire algebra. It is not specifically the inner product on a normed vector space. Versor In GA this refers to an object that can be constructed as the geometric product of any number of non-null vectors. The term otherwise typically refers to a unit quaternion. ## Geometric interpretation ### Projection and rejection In 3d space, a bivector a∧b defines a 2d plane subspace (light blue, extends infinitely in indicated directions). Any vector c in the 3-space can be projected onto and rejected normal to the plane, shown respectively by c⊥ and c∥. For any vector a and any invertible vector m, $\, a = amm^{-1} = (a\cdot m + a \wedge m)m^{-1} = a_{\| m} + a_{\perp m}$ where the projection of a onto m (or the parallel part) is $\, a_{\| m} = (a\cdot m)m^{-1}$ and the rejection of a onto m (or the perpendicular part) is $\, a_{\perp m} = a - a_{\| m} = (a\wedge m)m^{-1} .$ Using the concept of a k-blade B as representing a subspace of V and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible k-blade B as[9] $\, \mathcal{P}_B (A) = (A \;\big\lrcorner\; B^{-1}) \;\big\lrcorner\; B$ with the rejection being defined as $\, \mathcal{P}_B^\perp (A) = A - \mathcal{P}_B (A) .$ The projection and rejection generalize to null blades B by replacing the inverse B−1 with the pseudoinverse B+ with respect to the contractive product.[10] The outcome of the projection coincides in both cases for non-null blades.[11].[12] For null blades B, the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used,[13] as only then is the result necessarily in the subspace represented by B.[11] The projection generalizes through linearity to general multivectors A.[14] The projection is not linear in B and does not generalize to objects B that are not blades. ### Reflections The definition of a reflection occurs in two forms in the literature. Several authors work with reflection along a vector (negating only the component parallel to the specifying vector, or reflection in the hypersurface orthogonal to that vector), while others work with reflection on a vector (negating all vector components except that parallel to the specifying vector). Either may be used to build general versor operations, but the latter has the advantage that it extends to the algebra in a simpler and algebraically more regular fashion. #### Reflection along a vector Reflection of vector c along a vector m. Only the component of c parallel to m is negated. The reflection of a vector a along a vector m, or equivalently in the hyperplane orthogonal to m, is the same as negating the component of a vector parallel to m. The result of the reflection will be $\! 
a' = {-a_{\| m} + a_{\perp m}} = {-(a \cdot m)m^{-1} + (a \wedge m)m^{-1}} = {(-m \cdot a - m \wedge a)m^{-1}} = -mam^{-1}$

This is not the most general operation that may be regarded as a reflection when the dimension n ≥ 4. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection of a vector may be written

$\! a \mapsto -MaM^{-1}$

where

$\! M = pq \ldots r$ and $\! M^{-1} = (pq \ldots r)^{-1} = r^{-1} \ldots q^{-1}p^{-1} .$

If we define the reflection along a non-null vector m of a product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example,

$(abc)' = a'b'c' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1}) = -ma(m^{-1}m)b(m^{-1}m)cm^{-1} = -mabcm^{-1} \,$

and for the product of an even number of vectors that

$(abcd)' = a'b'c'd' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1})(-mdm^{-1}) = mabcdm^{-1} .\,$

Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector A using any reflection versor M may be written

$\, A \mapsto M\alpha(A)M^{-1} ,$

where α is the automorphism of reflection through the origin of the vector space (v ↦ −v) extended through multilinearity to the whole algebra.

#### Reflection on a vector

Reflection of vector c on a vector n. The rejection of c on n is negated.

The result of reflecting a vector a on another vector n is to negate the rejection of a. It is akin to reflecting the vector a through the origin, except that the projection of a onto n is not reflected. Such an operation is described by

$\, a \mapsto nan^{-1} .$

Repeating this operation results in a general versor operation (including both rotations and reflections) of a general multivector A being expressed as

$\, A \mapsto NAN^{-1} .$

This allows a general definition of any versor N (including both reflections and rotors) as an object that can be expressed as the geometric product of any number of non-null 1-vectors. Such a versor can be applied in a uniform sandwich product as above irrespective of whether it is of even grade (a proper rotation) or odd grade (an improper rotation, i.e. a general reflection). The set of all versors with the geometric product as the group operation constitutes the Clifford group of the Clifford algebra $C\ell_{p,q}(\mathbf{R})$.[15]

### Hypervolume of an n-parallelotope spanned by n vectors

For vectors $a$ and $b$ spanning a parallelogram we have

$a \wedge b = ((a \wedge b) b^{-1}) b = a_{\perp b} b$

with the result that $a \wedge b$ is linear in the product of the "altitude" and the "base" of the parallelogram, that is, its area. Similar interpretations are true for any number of vectors spanning an n-dimensional parallelotope; the outer product of vectors $a_1, a_2, \ldots, a_n$, that is $\bigwedge_{i=1}^n a_i$, has a magnitude equal to the volume of the n-parallelotope. An n-vector does not necessarily have the shape of a parallelotope; this is merely a convenient visualization. It could be any shape, although the volume equals that of the parallelotope.

### Rotations

A rotor that rotates vectors in a plane rotates vectors through angle θ, that is $x \to R_\theta x R_\theta^\dagger$ is a rotation of x through angle θ. The angle between u and v is θ/2. Similar interpretations are valid for a general multivector X instead of the vector x.[16]

If we have a product of vectors $R = a_1a_2\cdots a_r$ then we denote the reverse as

$R^{\dagger}= (a_1a_2\cdots a_r)^{\dagger} = a_r\cdots a_2a_1$.
As an example, assume that $R = ab$; then we get

$RR^{\dagger} = abba = ab^2a =a^2b^2 = R^{\dagger}R$.

Scaling R so that $RR^{\dagger} = 1$, then

$(RvR^{\dagger})^2 = Rv^{2}R^{\dagger}= v^2RR^{\dagger} = v^2$

so $RvR^{\dagger}$ leaves the length of $v$ unchanged. We can also show that

$(Rv_1R^{\dagger}) \cdot (Rv_2R^{\dagger}) = v_1 \cdot v_2$

so the transformation $RvR^{\dagger}$ preserves both length and angle. It can therefore be identified as a rotation or rotoreflection; R is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a versor (presumably for historical reasons).

There is a general method for rotating a vector involving the formation of a multivector of the form $R = e^{-\frac{B \theta}{2}}$ that produces a rotation through angle $\theta$ in the plane with the orientation defined by the bivector $B$. Rotors are a generalization of quaternions to n-dimensional spaces.

For more about reflections, rotations and "sandwiching" products like $RvR^{\dagger}$ see Plane of rotation.

## Examples and applications

### Intersection of a line and a plane

A line L defined by points T and P (which we seek) and a plane defined by a bivector B containing points P and Q.

We may define the line parametrically by $p = t + \alpha \ v$, where $p$ and $t$ are position vectors for points P and T respectively and $v$ is the direction vector for the line.

Then $B \wedge (p-q) = 0$ and $B \wedge (t + \alpha v - q) = 0$, so

$\alpha = \frac{B \wedge(q-t)}{B \wedge v}$

and

$p = t + \left(\frac{B \wedge (q-t)}{B \wedge v}\right) v$.

### Rotating systems

The mathematical description of rotational forces such as torque and angular momentum makes use of the cross product.

The cross product in relation to the outer product. In red are the unit normal vector and the "parallel" unit bivector.

The cross product can be viewed in terms of the outer product, allowing a more natural geometric interpretation of the cross product as a bivector using the dual relationship

$a \times b = -I (a \wedge b) \,.$

For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle. Suppose a circular path in an arbitrary plane containing orthonormal vectors $\hat{u}$ and $\hat{v}$ is parameterized by angle:

$\mathbf{r} = r(\hat{u} \cos \theta + \hat{v} \sin \theta) = r \hat{u}(\cos \theta + \hat{u} \hat{v} \sin \theta)$

By designating the unit bivector of this plane as the imaginary number

${i} = \hat{u} \hat{v} = \hat{u} \wedge \hat{v}$

${i}^2 = -1$

this path vector can be conveniently written in complex exponential form

$\mathbf{r} = r \hat{u} e^{{i} \theta}$

and the derivative with respect to angle is

$\frac{d \mathbf{r}}{d\theta} = r \hat{u} {i} e^{{i} \theta} = \mathbf{r} {i}$

So the torque, the rate of change of work W due to a force F, is

$\tau = \frac{dW}{d\theta} = F \cdot \frac{d r}{d\theta} = F \cdot (\mathbf{r} {i})$

Unlike the cross product description of torque, $\tau = \mathbf{r} \times F$, the geometric-algebra description does not introduce a vector in the normal direction; a vector that does not exist in two dimensions and that is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors ${\hat{u}}$ and ${\hat{v}}$.
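Since the passage above identifies the unit bivector of the plane with an imaginary unit, the torque formula can be checked numerically with ordinary complex arithmetic standing in for the even subalgebra. A small sketch (Python; the numeric values and the dot-product helper are my own, not from the article):

```python
import cmath

def dot(a, b):
    """Euclidean dot product of two in-plane vectors stored as complex numbers."""
    return (a.conjugate() * b).real

# Path r = r u_hat e^{i theta}, with u_hat -> 1 and the unit bivector i -> 1j.
r_mag, theta = 2.0, 0.7
r = r_mag * cmath.exp(1j * theta)

dr_dtheta = r * 1j           # dr/dtheta = r i, as derived above
F = 3.0 + 1.0j               # an arbitrary in-plane force

tau_ga = dot(F, dr_dtheta)                       # tau = F . (r i)
tau_cross = r.real * F.imag - r.imag * F.real    # z-component of r x F
print(tau_ga, tau_cross)                         # the two values agree
```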
### Electrodynamics and special relativity

In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, $C\ell_{1,3}$, called spacetime algebra (STA),[3] or, less commonly, $C\ell_{3}$, called the algebra of physical space (APS), where $C\ell_{3}$ is isomorphic to the even subalgebra of the 3+1 Clifford algebra $C\ell^{0}_{3,1}$.

While in STA points of spacetime are represented simply by vectors, in APS points of (3+1)-dimensional spacetime are instead represented by paravectors: a 3-dimensional vector (space) plus a 1-dimensional scalar (time).

In spacetime algebra the electromagnetic field tensor has a bivector representation ${F} = ({E} + i c {B})e_0$.[17] Here, the imaginary unit is the (four-dimensional) volume element, and $e_0$ is the unit vector in the time direction. Using the four-current ${J}$, Maxwell's equations then simplify to

$\nabla {F} = \mu_0 c {J}$.

Boosts in this Lorentzian metric space have the same expression $e^{{\beta}}$ as rotations in Euclidean space, where ${\beta}$ is the bivector generated by the time and the space directions involved, whereas in the Euclidean case it is the bivector generated by the two space directions, strengthening the "analogy" almost to identity.

## Relationship with other formalisms

$\mathcal G(3,0)$ may be directly compared to vector algebra.

The even subalgebra of $\mathcal G(2,0)$ is isomorphic to the complex numbers, as may be seen by writing a vector P in terms of its components in an orthonormal basis and left multiplying by the basis vector $e_1$, yielding

$Z = {e_1} P = {e_1} ( x {e_1} + y {e_2}) = x (1) + y ( {e_1} {e_2})\,$

where we identify $i \mapsto e_1e_2$, since

$({e_1}{e_2})^2 = {e_1}{e_2}{e_1}{e_2} = -{e_1}{e_1}{e_2}{e_2} = -1 \,$

Similarly, the even subalgebra of $\mathcal G(3,0)$ with basis $\{1, e_2e_3, e_3e_1, e_1e_2\}$ is isomorphic to the quaternions, as may be seen by identifying $i \mapsto -e_2e_3$, $j \mapsto -e_3e_1$ and $k \mapsto -e_1e_2$.

Every finite-dimensional associative algebra has a matrix representation; the Pauli matrices are a representation of $\mathcal G(3,0)$ and the Dirac matrices are a representation of $\mathcal G(1,3)$, showing the equivalence with matrix representations used by physicists.

## Geometric calculus

Main article: Geometric calculus

Geometric calculus extends the formalism to include differentiation and integration, including differential geometry and differential forms.[18]

Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,

$\oint_{A} dA \nabla f = \oint_{dA} dx f$

and then one can write

$\nabla f = \nabla \cdot f + \nabla \wedge f$

as a geometric product, effectively generalizing Stokes' theorem (including the differential forms version of it).

In one dimension, when A is a curve with endpoints $a$ and $b$, then $\oint_{A} dA \nabla f = \oint_{dA} dx f$ reduces to

$\int_{a}^{b} dx \nabla f = \int_{a}^{b} dx \cdot \nabla f = \int_{a}^{b} df = f(b) -f(a)$

or the fundamental theorem of integral calculus.

Also developed are the concept of a vector manifold and geometric integration theory (which generalizes Cartan's differential forms).
## Conformal geometric algebra (CGA)

Main article: Conformal geometric algebra

A compact description of the current state of the art is provided by Bayro-Corrochano and Scheuermann (2010),[19] which also includes further references, in particular to Dorst et al. (2007).[20] Another useful reference is Li (2008).[21]

Working within GA, Euclidean space $\mathcal E^3$ is embedded projectively in the CGA $\mathcal G^{4,1}$ via the identification of Euclidean points with 1D subspaces in the 4D null cone of the 5D CGA vector subspace, and adding a point at infinity. This allows all conformal transformations to be done as rotations and reflections and is covariant, extending incidence relations of projective geometry to circles and spheres.

Specifically, we add orthogonal basis vectors $\, e_+$ and $\, e_-$ such that $\, {e_+}^2 = +1$ and $\, {e_-}^2 = -1$ to the basis of $\mathcal{G}(3,0)$ and identify the null vectors $n_{\infty} = e_- + e_+$ as an ideal point (point at infinity) (see Compactification) and $n_{o} = \tfrac{1}{2}(e_- - e_+)$ as the point at the origin, giving $n_{\infty} \cdot n_{o} = -1$. This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry, and in this case allows the modeling of Euclidean transformations as orthogonal transformations.

A fast-changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.

## History

Before the 20th century

Although the connection of geometry with algebra dates back at least as far as Euclid's Elements in the 3rd century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a systematic way to describe the geometrical properties and transformations of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space.[22] Grassmann's algebraic system could be applied to a number of different kinds of spaces, the chief among them being Euclidean space, affine space, and projective space.

Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view, the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra described certain properties (or Strecken, such as length, area, and volume). His contribution was to define a new product, the geometric product, on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, in 1886, Rudolf Lipschitz generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in n dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra.

Nevertheless, another revolutionary development of the 19th century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras.
Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly following the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, following lectures of Gibbs. In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of i, j, k to indicate the basis vectors of R3: it is being thought of as the purely imaginary quaternions. From the perspective of geometric algebra, quaternions can be identified as Cℓ03,0(R), the even part of the Clifford algebra on Euclidean 3-space, which unifies the three approaches. 20th century and present Progress on the study of Clifford algebras quietly advanced through the twentieth century, although largely due to the work of abstract algebraists such as Hermann Weyl and Claude Chevalley. The geometrical approach to geometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's Geometric Algebra[23] discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory.[24] David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra. In computer graphics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. ## Software GA is a very application oriented subject. There is a reasonably steep initial learning curve associated with it, but this can be eased somewhat by the use of applicable software. The following is a list of freely available software that does not require ownership of commercial software or purchase of any commercial products for this purpose: • GA Viewer Fontijne, Dorst, Bouma & Mann The link provides a manual, introduction to GA and sample material as well as the software. • CLUViz Perwass Software allowing script creation and including sample visualizations, manual and GA introduction. • Gaigen Fontijne For programmers,this is a code generator with support for C,C++,C# and Java. • Cinderella Visualizations Hitzer and Dorst. • Gaalop [3] Standalone GUI-Application that uses the Open-Source Computer Algebra Software Maxima to break down CLUViz code into C/C++ or Java code. • Gaalop Precompiler [4] Precompiler based on Gaalop integrated with CMake. • Gaalet, C++ Expression Template Library Seybold. • GALua, A Lua module adding GA data-types to the Lua programming language Parkin. ## References 1. McRobie, F. A.; Lasenby, J. (1999) Simo-Vu Quoc rods using Clifford algebra. Internat. J. Numer. Methods Engrg. Vol 45, #4, p. 377−398 2. ^ a b 3. R. Penrose (2007). . Vintage books. ISBN 0-679-77631-1. 4. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 83. ISBN 0-7167-0344-0. 5. For example, when vv=v⋅v≠0, the product is grade-0 and not grade-2, as a graded algebra would require. 6. 
Distinguishing notation here is from Dorst (2007) Geometric Algebra for computer Science §B.1 p.590.; the point is also made that scalars must be handled as a special case with this product. 7. This definition follows Dorst (2007) and Perwass (2009) – the left contraction used by Dorst replaces the ("fat dot") inner product that Perwass uses, consistent with Perwass's constraint that grade of A may not exceed that of B. 8. Dorst appears to merely assume B+ such that B ⨼ B+ = 1, whereas Perwass (2009) defines B+ = B†/(B ⨼ B†), where B† is the conjugate of B, equivalent to the reverse of B up to a sign. 9. ^ a b 10. Perwass (2009) §3.2.10.2 p83 11. That is to say, the projection must be defined as PB(A) = (A ⨼ B+) ⨼ B and not as (A ⨼ B) ⨼ B+, though the two are equivalent for non-null blades B 12. This generalization to all A is apparently not considered by Perwass or Dorst. 13. Perwass (2009) §3.3.1. Perwass also claims here that David Hestenes coined the term "versor", where he is presumably is referring to the GA context (the term versor appears to have been used by Hamilton to refer to an equivalent object of the quaternion algebra). 14. "Electromagnetism using Geometric Algebra versus Components". Retrieved 19 March 2013. 15. Clifford Algebra to Geometric Calculus, a Unified Language for mathematics and Physics (Dordrecht/Boston:G.Reidel Publ.Co.,1984 16. Dorst, Leo; Fontijne, Daniel; Mann, Stephen (2007). Geometric algebra for computer science: an object-oriented approach to geometry. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 978-0-12-369465-2. OCLC 132691969. 17. Hongbo Li (2008) Invariant Algebras and Geometric Reasoning, Singapore: World Scientific. Extract online at http://www.worldscibooks.com/etextbook/6514/6514_chap01.pdf 18. Artin, Emil (1988), Geometric algebra, Wiley Classics Library, New York: John Wiley & Sons Inc., pp. x+214, ISBN 0-471-60839-4, MR 1009557  (Reprint of the 1957 original; A Wiley-Interscience Publication) 19. Doran, Chris J. L. (February 1994). Geometric Algebra and its Application to Mathematical Physics (Ph.D. thesis). University of Cambridge. OCLC 53604228. • Baylis, W. E., ed. (1996), Clifford (Geometric) Algebra with Applications to Physics, Mathematics, and Engineering, Birkhäuser • Baylis, W. E. (2002), Electrodynamics: A Modern Geometric Approach (2 ed.), Birkhäuser, ISBN 978-0-8176-4025-5 • Bourbaki, Nicolas (1980), Eléments de Mathématique. Algèbre, Ch. 9 "Algèbres de Clifford": Hermann • Dorst, Leo (2002), The inner products of geometric algebra, Boston, MA: Birkhäuser Boston  More than one of `|location=` and `|place=` specified (help) • Hestenes, David (1999), New Foundations for Classical Mechanics (2 ed.), Springer Verlag, ISBN 978-0-7923-5302-7 • Hestenes, David (1966). Space-time Algebra. New York: Gordon and Breach. ISBN 978-0-677-01390-9. OCLC 996371. • Lasenby, J.; Lasenby, A. N.; Doran, C. J. L. (2000), "A Unified Mathematical Language for Physics and Engineering in the 21st Century", Philosophical Transactions of the Royal Society of London (pp. 1-18) (A 358) • Doran, Chris; Lasenby, Anthony (2003). Geometric algebra for physicists. Cambridge University Press. ISBN 978-0-521-71595-9. • Macdonald, Alan (2011). Linear and Geometric Algebra. Charleston: CreateSpace. ISBN 9781453854938. OCLC 704377582. • J Bain (2006). "Spacetime structuralism: §5 Manifolds vs. geometric algebra". In Dennis Dieks. The ontology of spacetime. Elsevier. p. 54 ff. ISBN 978-0-444-52768-4.
http://mathhelpforum.com/algebra/24235-axis-symmetry.html
# Thread:

1. ## Axis of symmetry

Question: If f(x) = 3x^2 - 15x + 3 and g(x) = x-2, the equation for the axis of symmetry of the graph of y = f(g(x)) is?

Not sure what to do or what this means. Please help.

2. Originally Posted by mathlg

Question: If f(x) = 3x^2 - 15x + 3 and g(x) = x-2, the equation for the axis of symmetry of the graph of y = f(g(x)) is?

Not sure what to do or what this means. Please help.

Plug & chug...

$f[g(x)] = 3(x-2)^2 - 15(x-2) + 3$

3. The axis of symmetry for a parabola is a straight line passing through the turning point and parallel to the y-axis. Replacing x with (x-2) is equivalent to a translation of 2 to the right. Hope this helps.
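For completeness (the thread stops just short of the final answer; this last step is mine, following the hint in post 2):

$$f(g(x)) = 3(x-2)^2 - 15(x-2) + 3 = 3x^2 - 27x + 45,$$

so the axis of symmetry is $x = \frac{27}{2\cdot 3} = \frac{9}{2}$, which is just the axis $x = \frac{5}{2}$ of $f$ translated 2 to the right, as post 3 describes.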
http://mathoverflow.net/questions/50661?sort=oldest
## unboundedness of number of integral points on elliptic curves?

If $E/\mathbf{Q}$ is an elliptic curve and we put it into minimal Weierstrass form, we can count how many integral points it has. A theorem of Siegel tells us that this number $n(E)$ is finite, and there are even effective versions of this result. If I'm not mistaken this number $n(E)$ is going to be a well-defined invariant of $E/\mathbf{Q}$ (because different minimal Weierstrass models will have the same number of integral points).

Is it known, or conjectured, that $n(E)$ is unbounded as $E$ ranges over all elliptic curves?

Note: the question is trivial if one does not put $E$ into some sort of minimal form first: e.g. take any elliptic curve of rank 1 and then keep rescaling $X$ and $Y$ to make more and more rational points integral. -

I vaguely remember a similar question may imply some conjecture. Try searching for "integral points on minimal". Currently the 2 URLs timeout for me. May be wrong... – jerr18 Dec 29 2010 at 15:39

web.archive.org/web/20080509110233/http://… Stein: Is it conjectured that the number of integral points on minimal curves is unbounded? If so, then abc implies that ranks are unbounded. – jerr18 Dec 29 2010 at 16:07

4 Not that it helps much, but this can be expressed more intrinsically in terms of the Néron model $\mathcal{E}$ of $E$ over $\mathbf{Z}$, avoiding the ick of minimal Weierstrass models (locally or globally). Namely, the equality $E(\mathbf{Q}) = \mathcal{E}(\mathbf{Z})$ carries the pts that are everywhere integral with respect to local (or global) minimal Weierstrass models over to exactly the pts in $\mathcal{E}(\mathbf{Z})$ disjoint from the identity section and supported in the open relative identity component $\mathcal{E}^0$. So that gives a clean defn of $n(E)$ over any number field. – BCnrd Dec 30 2010 at 5:39

1 By curiosity, is the result known for elliptic curves over function fields? In this case, unboundedness of the rank was proved by Shafarevich and Tate (there are also results by Ulmer). I don't know whether these constructions yield arbitrarily many integral points. – François Brunault Jan 4 2011 at 12:54

1 Over function fields you have to be careful because the number of integral points can be not just unbounded but infinite! For example, in characteristic 2 if $(x,y)$ is an integral point on the supersingular curve $y^2+y=x^3+a$ then so is $(x^2,y+x^3)$, so as long as $x \notin \overline{{\bf F}_2}$ we get an infinite sequence of integral points. For instance, let $a=t^3$ and start from $(t,0)$ to get an infinite sequence of integral points (i.e. solutions of $y^2+y=x^3+t^3$ in polynomials $x,y \in {\bf F}_2[t]$; yes, that's a Néron model). – Noam D. Elkies Jun 10 2011 at 2:52

## 2 Answers

It is expected that the number of integral points is bounded in terms of the rank (this is known for some curves not in minimal Weierstrass form, Silverman JLMS 28 (1983), 1–7). So, if you could prove unboundedness of $n(E)$, you'd have a shot at proving unboundedness of rank which, as you know, is a hard problem. On the other hand, if you believe Lang's (and Vojta's) conjectures on rational points on varieties of general type, then you would conclude that $n(E)$ is uniformly bounded (Abramovich, Inv. Math. 127 (1997), 307–317). BTW, Kevin, don't you have some catching up to do? -

1 Abramovich restricts to semi-stable curves, if I understand correctly.
He speculates that there is a bound on $n(E)$ depending on the number of additive places. Remark 2 after Cor 1 in springerlink.com/content/xgbvqxm4383nwhqu – Chris Wuthrich Dec 29 2010 at 18:04

1 @Felipe: yes, I do. But that "catching up" involves going to work and spending a day concentrating; I can ask idle questions whilst babysitting three children! I'm at work tomorrow so I'll get things done then: I have a "no MO at work" policy, for example! – Kevin Buzzard Dec 29 2010 at 18:46

Hi Kevin, I proved that if $E/\mathbf{Q}$ is given by a minimal Weierstrass equation, then $\#E(\mathbf{Z}) \le C^{\operatorname{rank} E(\mathbf{Q}) + n(j) + 1}$ where $n(j)$ is the number of distinct primes dividing the denominator of the $j$-invariant of $E$ and $C$ is an absolute constant. This is in J. Reine Angew. Math. 378 (1987), 60-100. Marc Hindry and I proved that if you assume the abc conjecture, then you can remove the $n(j)$ in the above estimate. This is in Invent. Math. 93 (1988), 419-450. It is a conjecture due to Lang. The papers contain more general results for (quasi)-S-integral points over number fields. -

Thanks Joe! I didn't know about these results, which are pretty neat. To answer my question though we need some sort of bound the other way, right? I guess we now know that ABC implies that if there does happen to exist a universal bound for the rank then there's also a universal bound for the integral points---this presumably being the source of William Stein's quoted comment above. Can one prove "rank unbounded implies integral points on minimal models unbounded" though? – Kevin Buzzard Dec 31 2010 at 20:36

2 Kevin asks: Can one prove "rank unbounded implies integral points on minimal models unbounded" though? No. In fact, it may well be that the number of integral points on minimal models is uniformly bounded (I don't have a strong feeling on that). One might guess that integral points satisfy h(P) << h(E) for an absolute constant, while Lang suggests that on "most" curves of positive rank, the smallest non-torsion point satisfies h(P) >> C^h(E). Conclusion would be that "most" curves have no integral points. So there should be lots of curves with big rank and no integral points. – Joe Silverman Jan 1 2011 at 13:50

Hi Professor Silverman! I believe the displayed formula is broken because it needs two backslashes before the pound sign. – Zev Chonoles Jan 12 2011 at 3:26
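As a concrete illustration of the quantity $n(E)$ under discussion (not part of the original thread): the integral points on a given Weierstrass model within a box can be found by a naive search, checking for each $x$ whether the resulting quadratic in $y$ has an integer root. This is only a bounded search, nothing like the effective versions of Siegel's theorem mentioned in the question; the example curve and the bound are my own choices.

```python
from math import isqrt

def integral_points(a1, a2, a3, a4, a6, xbound):
    """Integer solutions (x, y) of
       y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6
    with |x| <= xbound, by brute force."""
    points = []
    for x in range(-xbound, xbound + 1):
        # y^2 + (a1*x + a3) y - (x^3 + a2*x^2 + a4*x + a6) = 0 over the integers
        b = a1 * x + a3
        c = -(x ** 3 + a2 * x ** 2 + a4 * x + a6)
        disc = b * b - 4 * c
        if disc < 0:
            continue
        s = isqrt(disc)
        if s * s != disc:
            continue
        for num in {(-b + s), (-b - s)}:
            if num % 2 == 0:
                points.append((x, num // 2))
    return sorted(set(points))

# The Mordell curve y^2 = x^3 + 17, a classical example with many integral points:
print(integral_points(0, 0, 0, 0, 17, 6000))
```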
http://math.stackexchange.com/questions/215487/function-that-sends-1-2-3-4-to-0-1-1-0-respectively/217821
# Function that sends $1,2,3,4$ to $0,1,1,0$ respectively I already got tired trying to think of a function $f:\{1,2,3,4\}\rightarrow \{0,1,1,0\}$ in other words: $$f(1)=0\\f(2)=1\\f(3)=1\\f(4)=0$$ Don't suggest division in integers; it will not pass for me. Are there ways to implement it with modulo, absolute value, and so on, without conditions? - 1 Sorry, what is it all about? For instance, what is $a$ and $b$, isn't it a one variable function you would like to see? – Berci Oct 17 '12 at 7:41 3 – Brian M. Scott Oct 17 '12 at 7:55 12 `f == isPrime` ;) – wim Oct 17 '12 at 11:14 4 Have you considered the function $f : \{1,2,3,4\} \to \{0,1\}$ such that $f(1)=0$, $f(2)=1$, $f(3)=1$, and $f(4)=0$? :) – Snowball Oct 17 '12 at 21:51 4 @starovoitovs: You really should clearly state the intended purpose. Honestly, for most mathematical purposes, definition you've given in your question is probably what you really should be wanting to use. If your purpose is for fast implementation on a computer, then you really should say that, to let people who know about such things make good suggestions (and you should probably include how it's going to be used, as well). If your purpose is something else, then if you state it, people can give you answers to suit your needs rather than just having fun. – Hurkyl Oct 18 '12 at 0:03 show 7 more comments ## 10 Answers Another one (a bit more complicated than the parabola) is: $$f(x)=\frac{2}{\sqrt{3}}\sin\bigg(\frac{\pi}{3}(x-1)\bigg)$$ This one generates: $0,1,1,0,-1,-1,0,...$ And another simple one: $$f(x)=1.5-\left | 2.5-x\right|$$ - Seeing the zeroes of the function and its symmetry, one tries to fit a quadratic curve to get $f(x)=-0.5(x-1)(x-4)$. - 5 It has division by integers. – Asaf Karagila Oct 17 '12 at 15:13 4 Write $1/2$ as $0.5$ ... – lhf Oct 17 '12 at 23:56 I think the OP meant integer division, like `1/2 = 0`. – asmeurer Oct 21 '12 at 1:36 How does one arrive at this result? – Arjang Oct 21 '12 at 6:25 Very ingenious, and quick! – amWhy Nov 29 '12 at 0:14 If you have bit operations, just return the two's bit of the input. So $y=x \gt \gt 1 ;\ \ f(x)=y\%2$ - Or similarly, $(x>>1)\&1$. Is the $\&$ operation cheaper than $\%$? – Yong Hao Ng Oct 21 '12 at 5:46 @YongHaoNg It can be. It depends on the underlying hardware and, especially, how efficient the compiler is. – Rick Decker Dec 7 '12 at 1:16 I see, thanks for the info. – Yong Hao Ng Dec 8 '12 at 9:35 Here's a simple one using only mods and a square $f(x)=(x-1)^2 \mod 3$ - +1 My favourite... – copper.hat Oct 21 '12 at 4:42 To all: I had not seen Doug Stone's similar $f(x)=2x^2+3 \mod 5$ – coffeemath Oct 21 '12 at 6:20 1 @coffeemath I recently realized that all the modulo solutions can be obtained from the initial Lagrange Interpolation $-\frac{1}{2}(x-1)(x-4)$. Under modulo 5 you get $2x^3+3$ and under modulo 3 you get $x^2+x+1$, which is what you have here. – Yong Hao Ng Oct 31 '12 at 10:41 Interesting. So under mod 7 it would be $(-1/2)(x-1)(x-4)$ which since $-1/2=6/2=3$ would give $3(x^2-5x+4)$ or $3x^2-x-2$. And that works. I guess as long as 2 is invertible mod n we have the example. I wonder about when $n$ is even... – coffeemath Nov 28 '12 at 13:08 Heaviside functions (using the appropriate convention) should work too. Use $f(x) = U(x-1) - U(x-2)$ where $U$ is the Heaviside step function. - Since you tagged "binary" in your question, you might also want to recall that Karnaugh map is a standard way to map inputs to outputs with just complement, AND and OR gates. 
(Or "~", "$\&$" and "|" bit-wise operators in C) For example, you can define $a,b,c$ to be bits at position 2,1,0 here to use the map. If you draw out the map, this is what it looks like: $$\begin{array}{c|c|c|c|c|c|} & &bc &bc &bc &bc\\ \hline & & 00 & 01 & 11 & 10\\ \hline a& 0 & \text{X} & 0 & 1 & 1 \\ \hline a& 1 & 0 & \text{X} & \text{X} & \text{X} \\ \hline \end{array}$$ Explanation: X denotes values that cannot occur (normally called "Don't care" I think). We want to focus on representing the "1"s, which is represented by the entries $\bar abc$ and $\bar a b\bar c$. (Notice that you only get "1" for one of the variables, they cannot occur together.) They can be combined: $\bar a bc + \bar a b \bar c=\bar a b (c+\bar c)=\bar a b$. Getting rid of the $\bar a$ is possible when you noticed that its alternative row has no entries. (i.e. the 2 entries below are "X"s) Using this idea you can construct any function for any bigger variable. It is probably not going to be the most efficient implementation, but you can get the solution fast. From there, you may do some reduction using logical operations and the final result should be decent. - +1 for Karnaugh map, now an example of it to fit this instance. – Arjang Oct 21 '12 at 6:09 Okay let me try to figure out how to draw a table. :) – Yong Hao Ng Oct 21 '12 at 6:12 From a mathematical point of view, describing such a function is trivial: $$f(x)=\begin{cases} 0 & \text{if } x \in \{1,4\} \\ 1 & \text{if } x \in \{2,3\} \\ \end{cases}.$$ It's not a particularly slick formula for the function, but it's certainly straightforward. An alternative is to search for "magic numbers". For example: $$f(x)=2x^2+3 \mod 5.$$ To find this function, I just let my computer search until the numbers happened to match. If you're looking for an efficient implementation of this function, in C say, any one of these would compute the function: ````char f=(x&2)>>1; char f=(x>>1)%2; // this is Ross Millikan's suggestion char f=(x>>1)&1; ```` Here `&` is bitwise `and`, `>>` is right bit-shift by one, and `%` is the mod operation. If you only need an `if(f!=0) { ... }` statement (i.e., "if $f(x)\neq 0$"), then this would suffice: ````if(x&2) { ... } ```` An alternative to the above is simply storing the values in memory. E.g. via: ````char f[5]={0,0,1,1,0}; ```` whenceforth, if you want to compute $f(x)$, you can just recall `f[x]` from memory. - Something more complex maybe? $$f(x)=\max(0, \text{Im}\, i^{x-1}-\text{Re}\, i^{x-1})$$ Or something simpler: $$f(x) = 1 - \max(0, 2-x, x-3)$$ Similarly (in the vein of the $|x-2.5|$ answers): $$f(x) = 3-\max(2, |2x-5|)$$ - Or $f(x)=4-max(x,5-x)$ in the same vein as your "simpler". But I liked the powers of $i$ version, since they cycle around the circle, and since one needs two in a row, using the line $y=x$ etc... – coffeemath Oct 21 '12 at 4:08 Given any set of $n$ points and values, you can always construct a polynomial of degree less than or equal to $n-1$ that goes through all the points. See http://en.wikipedia.org/wiki/Polynomial_interpolation. 
Using the method from there, we get $$\left[\begin{smallmatrix}1 & 1 & 1 & 1\\8 & 4 & 2 & 1\\27 & 9 & 3 & 1\\64 & 16 & 4 & 1\end{smallmatrix}\right] \left[\begin{smallmatrix}a_{3}\\a_{2}\\a_{1}\\a_{0}\end{smallmatrix}\right] = \left[\begin{smallmatrix}0\\1\\1\\0\end{smallmatrix}\right]$$ Multiplying by the inverse matrix on the left on both sides gives $$\left[\begin{smallmatrix}a_{3}\\a_{2}\\a_{1}\\a_{0}\end{smallmatrix}\right] = \left[\begin{smallmatrix}0\\- \frac{1}{2}\\\frac{5}{2}\\-2\end{smallmatrix}\right]$$ meaning that the resulting polynomial is $- \frac{1}{2} x^{2} + \frac{5}{2} x -2$ (indeed, if you factor this, you get the same thing as Jasper Loy). You can easily check that this polynomial works. Note that in this case, the polynomial had degree one less than we were expecting. - If someone knows how to make the elements of those vectors line up, please edit and fix. I just used SymPy's $\LaTeX$ generation to get that. – asmeurer Oct 21 '12 at 1:49 Perhaps for equally spaced inputs and symmetric outputs one would expect degree 2. – coffeemath Oct 21 '12 at 3:34 Look at the example for Lagrange Interpolation; then it is easy to construct any function from any sequence to any sequence. In this case: $$L(x)=\frac{1}{2}(x-1)(x-3)(x-4) + \frac{-1}{2}(x-1)(x-2)(x-4)$$ which simplifies to: $$L(x)=\frac{-1}{2}(x-1)(x-4)$$ which could possibly explain Jasper's answer, but since the method of derivation was not mentioned I cannot say for sure. - 1 ...which is Jasper's answer. – lhf Oct 21 '12 at 1:53 1 @lhf: on the specified domain, all of these functions are the same. This is a nice derivation of the answer Jasper Loy gave. – robjohn♦ Oct 21 '12 at 8:02 @robjohn You mean that $L(x)=2x^2+3$ under modulo 5 and $L(x)=x^2+x+1=(x-1)^2$ under modulo 3, right? I found out about this while answering a similar question elsewhere. Are the polynomials unique in the chosen domain? – Yong Hao Ng Oct 31 '12 at 10:47 @YongHaoNg: I mean that on the given domain, $\{1,2,3,4\}$, all of the various representations give the same function: $$\begin{align}1\to0\\2\to1\\3\to1\\4\to0\end{align}$$ – robjohn♦ Oct 31 '12 at 13:28
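Since the answers above give several different closed forms for the same four values, a quick cross-check can tie them together. The sketch below (plain Python; the names `lagrange` and `candidates` are chosen here purely for illustration) rebuilds the interpolating polynomial with Lagrange's formula and verifies that the formulas quoted in the answers all send $1,2,3,4$ to $0,1,1,0$:

```python
from math import sin, pi, sqrt

# Target mapping from the question: 1 -> 0, 2 -> 1, 3 -> 1, 4 -> 0
points = {1: 0, 2: 1, 3: 1, 4: 0}

def lagrange(points):
    """Return a callable Lagrange interpolating polynomial through the given points."""
    xs = list(points)
    def L(x):
        total = 0.0
        for xi in xs:
            # Basis term that is points[xi] at xi and 0 at the other nodes
            term = points[xi]
            for xj in xs:
                if xj != xi:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return L

candidates = {
    "Lagrange interpolation":      lagrange(points),
    "-(x-1)(x-4)/2 (parabola)":    lambda x: -0.5 * (x - 1) * (x - 4),
    "(x-1)^2 mod 3":               lambda x: (x - 1) ** 2 % 3,
    "2x^2 + 3 mod 5":              lambda x: (2 * x * x + 3) % 5,
    "(x >> 1) & 1 (bit trick)":    lambda x: (x >> 1) & 1,
    "1.5 - |2.5 - x|":             lambda x: 1.5 - abs(2.5 - x),
    "2/sqrt(3) sin(pi/3 (x-1))":   lambda x: 2 / sqrt(3) * sin(pi / 3 * (x - 1)),
}

for name, f in candidates.items():
    values = [round(f(x), 6) for x in (1, 2, 3, 4)]
    # rounded floats compare equal to the target integers
    assert values == [0, 1, 1, 0], (name, values)
    print(f"{name:28s} -> {values}")
```

On the four-point domain all of these agree, which is robjohn's point above: as functions on $\{1,2,3,4\}$ they are the same object, even though the closed forms look quite different.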
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.927186131477356, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/measure-theory
# Tagged Questions Questions relating to measures, measure spaces, Lebesgue integration and the like. 2answers 23 views ### Baire's theorem from a point of view of measure theory According to Baire's theorem, for each countable collection of open dense subsets of $[0,1]$, their intersection $A$ is dense. Are we able to say something about the Lebegue's measure of $A$? Must it ... 0answers 25 views ### Given $e \in L(0,1)$, why is $\gamma(t)=\int_0^1 G(t,s) e(s)\, ds$ in $C^2[0,1]\cap C^3(0,1)$? If I have a function $e \in L(0,1)$, why is the function $\gamma$ defined by $$\gamma(t)=\int_0^1 G(t,s) e(s)\, ds$$ an element of $C^2[0,1]\cap C^3(0,1)$, where: $G$ is the Green's function of BVP ... 0answers 16 views ### Question on a third-order boundary value problems This is the corollary $2.1$, from the article "Positive solutions of third order semipositone boundary value problems" if u'''=\lambda \left(\sum_{i=1}^m c_i(t)u^{\mu_i}-d(t)\right)+e(t), t\in ... 0answers 49 views ### For what $p$ is $x^p$ Lebesgue Integrable? Revising for an exam on Monday any help with the following question would be greatly appreciated; If $f$ is a function on $(0, \infty)$ taking values in $\mathbb R$, defined $f(x)=x^p$ ($p$ is a real ... 1answer 18 views ### $\lim_{y \to \infty}\int_{R}f(x-t)\frac{t}{t^2 +y^2}dt=0?$ for $f\in L^{p}$, $p \in [1,\infty)$ For $f\in L^{p}$, $p \in [1,\infty)$ we want to prove: $$\lim_{y \to \infty}\int_{R}f(x-t)\frac{t}{t^2 +y^2}dt=0$$ I'm not sure whether we can exchange the limit and the integral, cuz I cannot find ... 0answers 24 views ### Measurability of multifunction Let $f:[a,b]\times \mathbb{R}^{n} \times \mathbb{R}^{m} \rightarrow \mathbb{R}^{n}$. Suppose $f (.,x, u)$ is Lebesgue measurable for each $(x,u)$. Suppose also that $f$ is continuous at $(x, u)$ ... 5answers 252 views ### Examples of properties that hold almost everywhere, but that explicit examples unknown. In measure theory one makes rigorous the concept of something holding "almost everywhere" or "almost surely", meaning the set on which the property fails has measure zero. I think it is very ... 2answers 23 views ### Two random variable with the same variance and mean Let $Y\in L^{2}(\Omega,\Sigma,P)$ and let $E[Y^2|X]=X^2$ and $E[Y|X]=X$. Could we prove that $Y=X$ almost surely. My partial answer: By the definition of conditional expectation we have ... 0answers 33 views ### Alternative rigorous definition of a surface integral Consider some open subset $U$ of $\mathbb{R}^n$ where $U$ has a (piecewise) $C^1$-boundary. Let $f$ be some smooth (enough) real function. Is there some way to give a measure-theoretic definition of ... 1answer 41 views ### Optimal probability measure Let $A$ be a finite set and let $\Bbb P$ be a probability measure on $A^{\Bbb N_0}$. Further, let $x_i:A^{\Bbb N_0}\to A$ be projection maps, so that $(x_i)_{i=0}^\infty$ can be treated as a ... 1answer 57 views ### Measures on all subsets of $\aleph_0$ A theorem of Ulam says: A finite measure $\mu$ defined on all subsets of a set of cardinality $\aleph_1$ must be $0$ for all subsets if it sends every $1$-element subset to $0$. Will this ... 0answers 20 views ### Lebesgue measure of set $M = \{ [x,y] \in \mathbb{R}^2; 2 < x + y < 3; x < y < 3x \}$? although we can do this by splitting the area four ways and computing four integrals, my book suggests that I try the substitution $u = x + y$ and $v = \frac{y}{x}$. I expressed $x$ and $y$ in ... 
0answers 33 views ### Proving that $\bigotimes_{i=1}^n \cal{B}_{X_i} = \cal{B}_{X}$ Theorem: Given separable metric spaces $X_1,\ldots,X_n$ and $X=\prod_{i=1}^n X_i$, where $X$ has the product metric $d(f,g)=\sqrt{d_1 (f(1),g(1))^2 +\cdots + d_n (f(n),g(n))^2}$. Then ... 1answer 25 views ### Inequality between 2p-norm and p-norm for random variables Recently I was studying something about random matrix theory, and class of sub-guassian / sub-exponential random variables is of interest. In the literature it gave an inequality as following: ... 1answer 44 views ### The semifinite portion of a measure $\mu$ Let $\mu$ be a measure and define $\mu_1$ such that $\mu(E)=\mu_1(E)$ for $\mu(E)$ finite. And for $\mu(E)$ infinite definite $\mu_1$ such that: (i) if $E$ contains finite subsets of arbitrarily ... 4answers 87 views ### Book Suggestions for an Introduction to Measure Theory [duplicate] Couldn't find this question asked anywhere on the site, so here it is! Do you guys have any recommendations for someone being introduced to measure theory and lebesgue integrals? A mentor has ... 1answer 28 views ### A Measure For The Space of Probability Density Functions Consider the space of all joint probability density functions of two variables. I want to know what the measure is of the portion of this space that is filled by uncorrelated joint pdfs relative to ... 1answer 45 views ### Measurability of an Indexed Product-Measure If for any fixed $\omega_1$, $P_{\omega_1}$ is a probability measure and $Q_{\omega_1}$ is a stochastic kernel and both are measurable in $\omega_1$, is the indexed product measure ... 3answers 152 views ### why measure theory I studied elementary probability theory. For that, density functions were enough. What is a practical necessity to develop measure theory? What is a problem that cannot be solved using elementary ... 1answer 23 views ### Does $u\in L^p(B)$ implies $u_{|\partial B_t}\in L^p(\partial B_t)$ for almost $t\in (0,1]$? Let $B$ be the unit ball in $\mathbb{R}^N$ with center in origin and consider the space $L^p(B)$ with Lebesgue measure ($1<p<\infty$). Let $B_t\subset B$ be a concentric ball of radius \$t\in ... 1answer 44 views ### Isomorphism Subalgebra Given, the unit interval $I$ endowed with the Lebesgue measure $\mu$, and let $A$ be the (Boolean) algebra of Jordan measurable subsets of $X$ with respect to $\mu$, (i.e. those sets that satisfying ... 1answer 48 views ### Simplification of an expression How do I simplify the following expression? $$\displaystyle \frac{\int_q^1 w(s) \int_0^s e(\xi) d\xi ds}{2\int_q^1 w(s) ds} p$$ where $w(t)$ is nondecreasing $w(t)>0$ on $(q,1]$ , \$e ... 1answer 28 views ### Are the continuous functions on $G$ dense in $L^{1}(G)$? If $G$ is a locally compact group, is the set $C_{c}(G)$ of all continuous functions on $G$ with compact support dense in $L^{1}(G)$? 1answer 32 views ### Basic question about the definition of an integral on a measure space Let $(X,\mathcal{B},\mu)$ be a measure space. $\bf{\text{Definition:}}$ For a non-negative measurable function $f$ on $X$, $E\in \mathcal{B}$, $$\int_{E}f d\mu := \text{inf}\int_{E}\varphi d\mu$$ ... 1answer 77 views ### If $f$ is a bounded measurable function $\Longrightarrow$ there is a sequence of step functions such that $s_n \longrightarrow f \; a.e$? If $f:[0,1]\longrightarrow\mathbb{R}$ is a bounded measurable function $\Longrightarrow$ there is a sequence of step functions $\displaystyle s_n=\sum_{j=1}^{p} c_j \cdot \chi _{I_j}$ such that \$s_n ... 
1answer 21 views ### Conditional expectation is square-integrable I am given the following definition: Let $(G_i:i\in I )$ be a countable family of disjoint events, whose union is the probability space $\Omega$. Let the $\sigma$-algebra generated by these events ... 1answer 32 views ### Show that E is measurable? Suppose $E_1= [1, 1 \frac12] , E_2 = (2, 2\frac14), E_3 = [3, 3\frac18], E_4 = (4 , 4 \frac{1}{16}) , \dots , E= \bigcup_{n=1}^{\infty}E_n$ i) Show $E$ is measurable ii) Compute $m(E)$ Here is ... 0answers 60 views ### Let $g$ be a bounded measurable function on $[0,1]$. Let $g$ be a bounded measurable function on $[0,1]$. For each $n$ Let $\displaystyle I_j=j\cdot \frac{1}{2^n}+[0,\frac{1}{2^n}]$ , $j=0,1\cdots ,2^n-1$ , a partition of $[0,1]$ by bisections ... 1answer 38 views ### A two-dimensional set of measure zero I have a 2D domain $[0,1]\times[0,1]$. This domain contains some set of measure zero $A$, the last understood as the Lebesgue measure in $\mathbb{R}^{2}$. Is the following true: for almost all ... 1answer 67 views ### How to understand C(X)'' = bounded Borel measurable functions? Let $X$ be a compact metric space and $C(X)=\{ f:X\rightarrow \mathbb{R} \ | \ \ f \ continuous\}$ with the uniform norm. It is a separable Banach space. 1) I'm aware of the fact that $C(X)^*$, the ... 1answer 48 views ### Riemann integral with intervals? Let $f(x) = \begin{cases} 3 && 0 \leq x \leq 1 \\ 0 && 1 \leq x \leq 2 \end{cases}$ Compute $\displaystyle \ \ \int_0^2 f(x)dx\,\,\,$. You can use the definition of Riemann integral ... 1answer 35 views ### Counterexample to upper continuity Let $M$ be a $\sigma$-algebra of subsets of a set $X$ and let $\mu:M\rightarrow[0,\infty)$ be a finitely additive set function. I'm trying to decide if it's automatically true that for all ascending ... 0answers 38 views ### Which definition is correct? I have encountered several different definitions of left Haar measure that don't all seem to agree. The setting I care about is Locally Compact Groups. The first seems to completely disagree with ... 1answer 56 views ### E measurable with m(E) < $\infty$? Suppose that $E$ is measurable with $m(E)$ $<$ $\infty$. ii) Show that $\displaystyle \ \ \int_E 2f\,\,\,$ $=$ $2$$\displaystyle \ \ \int_E f\,\,\,$ if $f$ is bounded and measurable. I told my ... 1answer 35 views ### Haar measure $\tau$-additive? I'm reading some results from Measure Theory Volume 4 by D.H. Fremlin, and I'm stuck on something. This is pulled out of one of his lemmas (stated more generally for topological groups): A Haar ... 1answer 74 views ### If $f :\mathbb{R}\to\mathbb{R}$ is measurable, then $E = \{x: f(x) \geq 3\}$ is measurable Prove: Suppose $f : \mathbb{R}\to\mathbb{R}$ where $f$ is measurable and $E = \{x: f(x) \geq 3\}$. Show $E$ is measurable. I saw this statement while reading in a paper and thought this might ... 0answers 24 views ### Lebesgue inner measure the definition of inner measure: m ∗ (A)=sup{m(S):S∈M,S⊆A} I need to prove: 1) If inner measure=outer measure then A is measurable set 2) m*(AUC)+m*(A n C)>m*(A)+ m* (C) 3)m*(UA)> sum (m*(A)) for ... 1answer 32 views ### Why are Haar measures finite on compact sets? I'm working through the answer by t.b. to another user's question here: A net version of dominated convergence? because I am trying to work through a related problem and I think it will be ... 0answers 20 views ### Are Haar measures complete? 
If $G$ is a locally compact group and $\mu$ is a left Haar measure for $G$, then is the measure space $(G,B(G),\mu)$ complete (where $B(G)$ is the set of Borel subsets of $G$)? Or do we have to take ... 0answers 44 views ### Proving that $\int \left| f-g \right|~d\mu = 2\int_{A_0} (f-g)~d\mu$ Given a (dominant) measure $\mu$, consider two probability measures $f~d\mu$ and $g~d\mu$ over $(\Omega, \mathcal F)$, I'd like to check the following reasoning for showing that \int \left| f-g ... 1answer 24 views ### Showing that $\mathbb{P}[X\geq a]\leq \exp[-ta]\mathbb{E}[\exp[tX]]$ The problem is to show that $\mathbb{P}[X\geq a]\leq \exp[-ta]\mathbb{E}(\exp[tX])$ given $\exp(tX)<\infty$ for $t\in \mathbb{R}$ where $X$ is a random variable. Then to show that ... 1answer 39 views ### Finitely additive measure on $\mathbb R$ Suppose $\mathcal B$ is the Borel $\sigma$-algebra on $\mathbb R$. Let $\mu : \mathcal B \rightarrow [0, \infty ]$ be a finitely additive(but not necessarily countably additive), ... 0answers 15 views ### Convergence of an Integral in a locally compact group I'm trying to finish an exercise which I asked about earlier here: Mapping $G$ into its group algebra as left multiplication. Continuous? $\bf{\text{The setting:}}$ Let $G$ be a locally compact ... 0answers 31 views ### question about essential supremum Consider $u : \Omega \rightarrow \mathbb{R}$ a measurable, nonnegative and bounded function. With $\Omega \subset \mathbb{R}^n$ bounded and open . Is true that \$\mathrm{ess } \inf \ u = \inf \ ... 1answer 29 views ### $\int_\Omega fd\mu_n\to\int_\Omega fd\mu,\ \forall\ f\in C_0(\Omega)$ implies $\mu_n(\Omega)\to \mu(\Omega)$? Let $\Omega\subset\mathbb{R}^N$ be a bounded open smooth domain and $C_0(\Omega)$ the set of bounded continuous functions with compact support. It is know that $C_0(\Omega)^\star =M(\Omega)$, where ... 0answers 25 views ### Lebesgue Integral Rudin Problem [duplicate] Suppose {$n_k$} is an increasing sequence of positive integers and E is the set of all x$\in$($-\pi, \pi$) at which {sin$n_k x$} converges. Prove that $m(E)=0$. Hint: For every A $\subset$ E, ... 1answer 36 views ### Calculate an integral in a measurable space Let $(X,\mathcal{M})$ a measurable set with measure $\mu$. Let $f$ be an integrable non negative function, such that $K:=\int_{E}f \mathrm d\mu<\infty$, where $E\in(X,\mathcal M)$. Let ... 1answer 58 views ### About measure theoretic interior and boundary Reference:- Evans-Gariepy, Federer, other books and notes of geometric measure thoery. I just want to clarify whether these definitions of measure theoretic interior and boundary are correct. Given ... 0answers 35 views ### A few questions about Measure Algebras I've written up some of my understanding as well as I can of the Measure Algebra, trying to see the details behind a very brief treatment. There a couple places where I cannot make see how to make ... 1answer 67 views ### Involution in $L^{1}(G)$ is isometric. (Sorry for asking so many questions of the same type. There is an underlying issue that I think once resolved will allow me to understand them all at once.) Let $G$ be a locally compact group, and ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 192, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284744262695312, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/109/rsa-with-small-exponents/1702
# RSA with small exponents? Just to establish notation with respect to the RSA protocol, let $n = pq$ be the product of two large primes and let $e$ and $d$ be the public and private exponents, respectively ($e$ is the inverse of $d \bmod \varphi(n)$). Given a plaintext message $m$, we obtain the ciphertext $c = m^e \bmod n$; we subsequently decrypt the ciphertext by calculating $c^d \bmod n$. Suppose I'm trying to implement RSA on a device with low computational power, and these exponentiations take too long. I decide to make my implementation run faster by choosing small values for $e$ and $d$ (e.g. in the tens or hundreds). Are there efficient attacks against such an implementation? - ## 8 Answers First I must state that a secure RSA encryption must use an appropriate padding, which includes some randomness. See PKCS#1 for details. That being said, $d$ is the "private exponent" and knowledge of $d$ and $n$ is sufficient to decrypt messages. $n$ is public (by construction) so $d$ must be kept private at all costs. If it is very small then an attacker can simply try values for $d$ exhaustively. On a more general basis, if the size of $d$ is lower than 0.29 times the size of $n$ (in bits) then there exists an efficient key recovery attack. The accepted wisdom is that trying to get a $d$ much smaller than $n$ is a bad idea for security. On the other hand, there is no problem in having a small $e$, down to $e = 3$. Actually, with RSA as you describe, there is a problem with a very small $e$: if you use $e = 3$ and encrypt the very same message $m$ with three distinct public keys, then an attacker can recover $m$. But that's not really due to using a small $e$; rather, it is due to not applying a proper padding. - Hi Thomas, I think it is by now more accepted to use the 4th number of Fermat (0x010001 or 65537) as public exponent because of attacks when the number 3 is used. I understood this is less succeptible to attacks, while the number of calculations is limited because only two bits are set. Would you agree? – owlstead Jan 20 '12 at 15:19 3 @owlstead: we use $65537$ mostly out of Tradition. The "attacks" with $e = 3$ are due to the lack of padding, and lack of padding is already a much bigger worry than that: to have an actual weakness due to $e = 3$ (compared to $e = 65537$), you have to thoroughly damage the algorithm (remove the padding step), which creates a bunch of other much bigger weaknesses. With proper padding, no problem with $e = 3$. However, I use $65537$ by default because it avoids questions, and it is not bad either. – Thomas Pornin Jan 20 '12 at 15:32 Are there efficient attacks against such an implementation? Yes. You need to keep d larger than the 4th root of n=pq. Otherwise Wiener's Attack can be used to compute d. - 3 – fgrieu Jan 20 '12 at 15:36 Yes, you can use small public exponents (e.g., 3 is fine), as long as you never encrypt the same plaintext under three or more RSA public keys with exponent 3. Otherwise, there is "Hastad's broadcast attack" that can extract the plaintext, without needing to factor the modulus. Also, ensure that the private exponent is large enough, as pointed out Jason S (which will usually be the case, if primes are chosen randomly). - hrishikeshp19 suggests repeated squaring, which is essential if you aren't doing it already. Also "Montgomory Multiplication" can also be used to speed up these computations. Beware though, as improper implementations give way to a timing attack. 
Actually, if you are implementing RSA yourself there are a number of intricacies that you need to pay attention to. Such implementations are best not left to an amateur. - You need to read some recent papers and their references to get up to speed with these attacks. Try "New Weak RSA Keys" by Nitaj and "Revisiting Wiener's Attack – New Weak Keys in RSA" by Maitra and Sarkar Note that if you're trying to speed things up then there are almost certainly better solutions than trying to keep the exponents small. - If you're really in a constrained environment, use an exponent of 5, and that will be okay. I raise an eyebrow about being so constrained that you can't use 64K+1 (65537), but I'm not going to debate it. The best answer to your dilemma (assuming you really have it) is to use 5. (adding in) An exponent of 17 is not bad, either and is also a common compromise made. Jon - 2 Do you have any reasons for specifically $5$ (instead of, for example, $3$)? Also, you should note that this only applies to the public exponent, not the private one. – Paŭlo Ebermann♦ May 16 '12 at 22:08 Even if your computing power is small, you can use larger exponents. There are some algorithms such as repeated squaring method which help you to compute larger exponents a lot faster than brute force. Repeated squaring method can also be applied in RSA by building up one bit at a time, so we can double the exponent of a number in one go. So the number of multiplication we have to make is log n (where n is an exponent) compared to n multiplications for normal computation of exponent of the number. - Why a downvote? – hrishikeshp19 Jan 26 '12 at 4:37 downvote without a comment is lame. – hrishikeshp19 Jul 20 '12 at 17:27 In addition to the special case analytical attacks for small public exponents, I wouldn't use a low value of e due to Partial Key Exposure. See "Exposing an RSA Private Key Given a Small Fraction of its Bits.": Our results show that RSA, and particularly low public exponent RSA, are vulnerable to partial key exposure. Edit: added quote - What? Partial key exposure is an extremely unlikely event, and a large value of $e$ doesn't even completely prevent this attack. This is definitely not a good reason to pick a low value for $e$. Picking a low $d$ is a bad idea for other, more important reasons. – Gilles Apr 25 at 7:43 1. I am arguing against low 'e', not for it. 2. Low 'e' values have lead to PKE successfully in the past, as you can see from the referenced paper. 3. What makes you think I'm arguing for picking a low decryption exponent? – staafl Apr 25 at 11:20 Sorry, I meant this is not a good reason not to pick a low value for $e$. – Gilles Apr 25 at 12:05 It is an additional reason. And PKE is not an 'extremely unlikely event' as you seem to believe. I saw an example of it a few days ago and I stand by my point unless you give me a counter-argument. – staafl Apr 25 at 15:23 My first understanding of the paper was that the value of $e$ didn't make a practically significant difference — but upon rereading I realize I may have missed something. In situations where a side channel leaks some bits of $d$, the reconstruction attack only works for $e$ up to about $\sqrt{N}$, right So does this mean we should always pick a random $e \gt \sqrt{N}$? (That is often difficult in practice as many implementations out there only support small values of $e$…) – Gilles Apr 26 at 17:29
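As a concrete illustration of the repeated-squaring idea mentioned above, here is a minimal square-and-multiply sketch in Python. Note that Python's built-in three-argument `pow` already performs modular exponentiation this way, so in practice you would just call it; the RSA below is textbook (unpadded) and uses deliberately tiny parameters, purely to check the arithmetic:

```python
def modexp(base, exponent, modulus):
    """Right-to-left square-and-multiply: process the exponent one bit at a time,
    squaring once per bit and multiplying the current square into the result
    whenever that bit is set, so the cost is O(log exponent) modular
    multiplications instead of O(exponent)."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                     # low bit set: fold this square in
            result = (result * base) % modulus
        base = (base * base) % modulus       # square for the next bit
        exponent >>= 1
    return result

# Textbook (unpadded) RSA check with tiny, insecure parameters:
p, q = 61, 53
n = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # a small public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

m = 65
c = modexp(m, e, n)                   # "encrypt"
assert modexp(c, d, n) == m           # "decrypt" recovers the message
assert c == pow(m, e, n)              # agrees with Python's built-in modular pow
```

The loop performs on the order of $\log_2$(exponent) modular multiplications, which is why even a full-size private exponent $d$ is affordable in practice, and why shrinking $d$ for speed is not worth the exposure to Wiener-style attacks discussed above.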
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454633593559265, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/606/time-capsule-cryptography/1033
# Time Capsule cryptography? Does there exist any cryptographic algorithm which encrypts data in such a way that it can only be decrypted after a certain period of time? The only idea that I can think of, is something like this: Seed a PRNG with a public value. Run the PRNG for a week and use the final value it produces to encrypt the message. Now anyone who runs that PRNG for a week starting from the seed value you made public can decrypt the message. Obviously this breaks down since they could use more computational power than you; if the time span was years then Moore's Law would apply, etc. Is there anything like this other than physically burying a private key on a USB stick in a literal time capsule? - 5 <tongue-in-cheek> Use the headline from the front page of the New York Times, dated Sept-1-2015 as your encryption key. Your data will remain safe until then.</> For the rest of us - Nick's answer is a good one. =) – Tails Jun 23 '11 at 5:47 Butterflies aside, that would be a good way to prove your time travel machine works. :P – Jake Jun 24 '11 at 16:58 If you had to enter the key into a black box and it was programmed to function as a time lock, then it could wait to decrypt until the time is reached, but no algorithm will do this on its own. – James Black Oct 22 '11 at 0:46 And any "black box" could be reverse engineered. – Chris Lively Oct 22 '11 at 0:51 ## 14 Answers Yes. There has been a lot of work on "proof of work" protocols or "time-lock puzzles." Typically in cryptography, functions are either easy to compute or intractable. These protocols look at functions that are moderately hard to compute. To do time-release encryption, you need a puzzle with the following properties: 1. Difficulty of the puzzle can be monotonically increased according to some difficulty parameter 2. Best algorithm to solve it is intrinsically sequential (parallel computing doesn't help) 3. Amortized cost of solving a group of puzzles is the same as a single puzzle 4. There is a trapdoor (shortcut) that allows efficient evaluation of the puzzle With time-release crypto, the idea is to generate a puzzle that will take a certain amount of time to solve based on an estimate of a person's computational power and how it will grow in the future (e.g., Moore's law). As you can tell, it only gives you a fuzzy indication of how long it will stay secret (see below for a real world example). Property 2 is very important because adding parallel computation is easy and it is hard to estimate how much parallelization is possible. Note that most proof of work protocols do not care about property 2 because they are used in different ways. These might be based on finding partial collisions/preimages in hash functions or exhaustive search of a small space (e.g., bitcoin), however speedups with parallel computing is trivial in these examples. There are also a number of memory-based puzzles that require lots (more than can be cached) of memory accesses, which is a more predictable measure of time on computers. Back to time-release crypto. The idea is to instantiate a puzzle $p$ with difficulty $d$: $p=\mathsf{Puzzle}(d)$. A trapdoor $t$ for efficiently solving it is known to the person who generates the puzzle. This person then encrypts her message under key $k$ with a normal encryption function: $c_1=\mathsf{Enc}_k(m)$. She then release an "encryption" of the key she used by combining the key with the solution to the puzzle. Note that she can efficiently solve the puzzle with trapdoor $t$ but the recipient can't. 
She computes: $c_2=k \oplus \mathsf{Solve}_t(p)$. The ciphertext is $c_1,c_2$ plus a description of the puzzle. To decrypt, the recipient computes $s=\mathsf{Solve}(p)$, which should take a moderate amount of time. He then recovers the key by $k=c_2\oplus s$, and can then decrypt $c_1$. The disadvantage to time-release cryptography is that the recipient must devote an entire processor to solving the problem for the period of time that it should remain secret. The best proposal for a Puzzle with the right properties is due to Rivest, Shamir and Wagner in this paper. It is based on repeated squaring in RSA groups. A recent result precludes any intrinsically sequential time-lock puzzles in the random oracle model (e.g., based on hashing). In 1999, Rivest created a time capsule message to commemorate the 35th anniversary of MIT's Laboratory for Computer Science. He talks about how he designed it to require 35 years to decrypt. It is an interesting read. - 1 Interesting. The problem is, of course, that it doesn't guarantee that anyone will decrypt whatever the secret is, only that they could given a certain amount of time. If what you want is for your secret to become public, you might not want to risk nobody trying to break it. Doing it yourself is a non-answer for a couple of reasons. – Steve Dispensa Sep 3 '11 at 17:08 Requiring a certain amount of computation isn't the same as requiring a certain amount of time to pass, though. – Nick Johnson Sep 4 '11 at 13:04 – Ian Boyd May 12 '12 at 19:17 @IanBoyd The puzzle can be efficiently created. It's solving it that's difficult. – Thomas Dec 27 '12 at 13:37 No. See the following. http://www.gwern.net/Self-decrypting%20files It has a decent list of ways you might try to do it. However, there is no guaranty that the desired amount of time would actually elapse until the "lock" was broken. In other words, significantly more or significantly less time might be required. Now, some MIT guys did something in '99 that they hope will be opened in 2034. however, again, they do leave open the possibility that it could be broken earlier.. or not in anywhere near the amount of time they estimate. http://people.csail.mit.edu/rivest/lcs35-puzzle-description.txt Most of the ways available depend on needing to run an operation on a computer that will take a certain amount of time before the final decryption key can be obtained. However, you have no control over hardware. Ergo, you have no real control over the amount of time that would elapse before the final decryption key is acquired. The MIT guys did mention that they took "Moore's Law" into account. However, most people would argue that Moore's Law has very little to do with overall processing performance. The law was about the doubling of transistor counts every 2 years; however processor performance has very much outstripped that. The info on this site says pretty much the same thing: http://dl.acm.org/citation.cfm?id=888615 The secondary way is to give the key to a trusted third party who will only reveal it at a certain point in time. However, this is just as tricky because they may be forced using any number of mechanisms to release it earlier or they may even lose it somewhere along the way. So, the answer is that there is no implementation accepted by anyone as "secure". There can't be. - I suspect most things encrypted with a scheme such as this will be opened late or never, because nobody is prepared to put that much effort into opening it (eg, nobody cares). 
– Nick Johnson Oct 24 '11 at 4:52 With neither a trusted third party nor trusted hardware, we know no system with an even mildly accurate delay of release. If we accept a trusted third party, there are options. For example: The trusted third party generates a public/private key pair per hour (for an asymmetric cipher such as RSA-OAEP), publish the public $Pub_t$ keys in advance (signed with the long-term public key of the third party), and publish a regularly updated list of all the past private keys $Priv_t$. To time-lock some information $P$ until $t$: fetch $Pub_t$, and its signature, from the trusted third party; check the signature; draw a random key $K$ for a symmetric cipher such as AES-CTR; encipher $K$ using key $Pub_t$ giving $KT$; encipher $P$ using key $K$ giving $C$; forget $K$ and $P$; publish the time-locked information $KT||C$. When time $t$ has come, anyone can fetch $Priv_t$ from the trusted third party; decipher $KT$ using key $Priv_t$ giving $K$; decipher $C$ using key $K$ giving $P$. In a variant, the trusted third party generates deterministically the $Pub_t$/$Priv_t$ pairs from a master key and $t$; this allows arbitrary precision for $t$ with constant storage. Trusted hardware with a trusted real-time clock (e.g. some HSM) can be used to implement the time lock (or to implement the above trusted third party). With trusted hardware lacking a trusted real-time clock (e.g. a Smart Card), the clock can be delegated to a trusted third party. I believe (never done it) that buying a certificate from a certification authority also buys a free service, which answers unauthenticated queries "is this certificate still valid?" with a signed answer "this certificate was still valid at time $t$", which the trusted hardware can check (against the trusted third party's long term public key) to determine that the current time is at least $t$ (on the trusted third party's clock), regardless of how this signed answer has reached the trusted hardware. - • Ronald Rivest, Adi Shamir, and David Wagner. Time-lock puzzles and timed-release Crypto. March 1996. That paper describes how to encrypt a message $M$, so that decrypting $M$ requires a controlled amount of computation (say, $T$ CPU cycles). Here is the gist of the main scheme. To encrypt, Alice chooses large primes $p,q$ and computes a RSA modulus $n=pq$. Alice also picks an integer $t$ large enough that performing $t$ modular squarings (modulo $n$) will take the decryptor about $T$ CPU cycles. Alice then picks a random value $a$ and computes $b = a^{2^t} \bmod n$; she then uses $b$ as a secret key to encrypt her message (e.g., with AES). Alice gives the RSA modulus $n$ to the decryptor, and the number $t$. The decryptor can recompute $b$ by squaring $a$, repeatedly squaring $t$ times (modulo $n$ each time). Thus, it will take the decryptor about $T$ CPU cycles to decrypt. In contrast, Alice can encrypt using much less than $T$ CPU cycles, using the following trick. Alice computes $e = 2^t \bmod \varphi(n)$. Here $\varphi(n) = (p-1)(q-1)$, so Alice can compute $\varphi(n)$, but no one else can. Moreover, Alice can compute $e$ using $2 \log t$ squarings and multiplications modulo $\varphi(n)$, which takes vastly less than $T$ CPU cycles. Alice then encrypts using $b = a^e \bmod n$. Thus, Alice can encrypt much more efficiently than the recipient can decrypt. To put it another way, Alice can create a puzzle whose difficulty she can control very precisely. The only way to decrypt is to solve the puzzle. 
For instance, she can arrange that it will probably take about 20 years to decrypt. Moreover, creating the puzzle is much quicker than solving it (she can create the puzzle in seconds, even though it will take decades to solve it). This provides a way to send a message that can only be decrypted after performing a certain amount of computation, i.e., after a certain amount of time. The paper also describes another way to solve the problem, if one has a trusted agent (perhaps implemented using TCG, or perhaps considered trustworthy for other reasons). - There is such a kind of primitive in the article about timed commitments from Boneh and Naor. This is a kind of encryption scheme where decryption can be forced, but at a heavy non parallelizable cost, and such that it can be proven in advance, at low cost, that forced decryption will work. It relies on repeated squarings modulo a RSA modulus. - The problem with all of these schemes is that someone has to do it. If you tune the work to match your guess at your primary adversary's compute power, and you guess that your adversary has many times your compute power, then you guarantee that nobody but your adversary or someone with more resources than he will be able to decrypt your secret. If you want two things -- 1) it becomes public after time T 2) it becomes public not long after T -- then this system doesn't get you there. You can't do it yourself, and your adversary might decide not to do it. – Steve Dispensa Sep 3 '11 at 20:25 Because there's no inherent sense of time for a computer, there's not really any way to accomplish this. The best you could do would be to require a lot of computation and try to make it have to be as serial as possible. But even if there's a good way to do that, faster computers could do it more quickly. If you want something to stay encrypted for a fixed amount of time, your best bet would be to encrypt it using any standard scheme and lock the key or keys in a time-lock safe. - Set up a relay on alpha centauri and transmit the key to it. Absolutely guaranteed 8 year delay until the key is available. - 2 But couldn't there side-channel attacks (like someone intercepting the data on the way)? – Paŭlo Ebermann♦ Oct 22 '11 at 22:08 use public key encryption to communicate with your relay site :) – ddyer Dec 23 '11 at 21:29 InfoBiology by printed arrays of microorganism colonies for timed and on-demand release of messages is a recently development method that leverages biological systems to create messages that impose a limit on the amount of time it takes to learn a message. The idea is simply to encode a message in a pattern of e. coli colonies. The message is released after the e. coli colonies have been growing in a particular growth medium for a certain period of time (the phenotypic expression of the e. coli is designed to have a particular lag time). - – Ilmari Karonen Sep 28 '11 at 16:31 – Ethan Heilman Sep 28 '11 at 16:57 Going with the theme of out-of-the-box solutions started by e501, I do recall at least a half-serious suggestion that, with a retransmitter (or just a big mirror) placed in outer space at a suitable distance from Earth, one could use light-speed lag as an effective time capsule mechanism — just encode the message as a laser pulse and send it out for a round trip of as many light years as you want to delay it by in years. In principle, you could also do this just by bouncing the light off naturally occurring objects in space, like planets or stars or nebulae. 
However, the transmission losses involved in this would probably make it infeasible without extremely powerful transmitters and sensitive receivers, to the point where it's not at all clear whether it would actually be any more practical than just launching a huge retroreflector out into space. Also, the brighter the outgoing pulse is, the bigger the risk that scattering from gas and dust along the way might provide a side-channel attack. - 1 Finally a solution involving uncheatable physical time – Tobias Kienzler Aug 13 '12 at 13:09 I'm thinking some sort of Ironkey like device, where the crypto and clock are internal to the device and encapsulated in such a way that if they are physically tampered with, the chip (and the key) self-destructs. - Since there's no way to control your execution environment, there's no way to ensure your algorithm is executed as described, with the actual current time. Thus, the only way to ensure something like this is to make it computationally difficult to decrypt, which is a limit on the amount of computational effort spent, not on the amount of real time spent. The easiest way to do this would be to encrypt the data with a short private key, and require people to factor the public key in order to unlock it. - 1 Maybe clinging to an ongoing factoring effort could be useful - e.g. if you know that some project works on factoring some large composite number, use this number as part of your RSA public key to encrypt your message. (Of course, this might not work at all if the number in question has more prime factors than RSA allows - and if it is a number created by someone, this someone might already know the factors.) – Paŭlo Ebermann♦ Sep 3 '11 at 12:43 Requiring people to factor the public key is not a good basis for a solution, because factoring a public key is a task that can be parallelized, and thus the computation needed to factor cannot be easily related to passage of time. You want an inherently sequential problem. See my answer for one solution. – D.W. Sep 4 '11 at 4:46 I would try an aproach with a web service. Once you own the server you can rely on the date it has. You crypto information should contain a date that the server validates an once this is reach you can provide real decryt key for user. - Check out "Time-Lapse Cryptography". Seems like a pretty good system. I doubt the infrastructure is currently in place to support it, but you never know what the authors have out there. - Two big problem with the "it will take a computer 20 years to crack it" idea 1. It will only take 2 years for 10 computers to crack it 2. brute force cracking is a random process, so it could be cracked almost instantly, at very low admittedly, but not quite zero , probability. Oh I suppose you could encode using a really really large key .The brute force crack techqnique might then have the expected crack time of 200 years. But the key is calculable, using a process that must be linearly done, so that no parallism is allowed, and it will take 20 years at 20 Gigaflops to process. Actually the speed limit is around 4 Ghz so recently cpu pwoer is coming from parallelism.. which cannot help a linear algorithm... But there is a risk that someone would divise a way to find the resulting key without doing all the intermediate calculations, and thus turn 20 years into a short time .. - 2 Point 2 is not a very convincing argument, to be honest. Just because the probability is not zero doesn't mean it's going to happen. 
Also, speed limit is more like 9GHz, at least to public knowledge. – Thomas Dec 31 '12 at 5:08
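For readers who want to see the repeated-squaring construction described in the accepted answer and in D.W.'s answer in runnable form, here is a minimal sketch with toy parameters. The helper names are illustrative only; a real deployment would use a large RSA modulus, a proper cipher such as AES instead of the XOR blinding below, and a value of $t$ calibrated to the intended delay on the solver's hardware:

```python
import secrets
from math import gcd

# Toy parameters: a real deployment would use a ~2048-bit modulus and a much larger t.
p, q = 1000003, 1000033          # secret primes, known only to the puzzle creator
n = p * q
phi = (p - 1) * (q - 1)
t = 100_000                      # number of sequential squarings demanded of the solver

def create_puzzle(key):
    """Creator's side: the trapdoor phi(n) lets her compute a^(2^t) mod n cheaply."""
    a = secrets.randbelow(n - 2) + 2
    while gcd(a, n) != 1:        # keep a invertible so Euler's theorem applies
        a = secrets.randbelow(n - 2) + 2
    e = pow(2, t, phi)           # reduce the huge exponent 2^t modulo phi(n)
    b = pow(a, e, n)             # only a couple of modular exponentiations in total
    c2 = key ^ b                 # blind the key with the puzzle solution
    return (n, a, t, c2)

def solve_puzzle(puzzle):
    """Recipient's side: no trapdoor, so do the t squarings one after another.
    Each squaring needs the previous result, so the loop cannot be parallelised."""
    n, a, t, c2 = puzzle
    b = a
    for _ in range(t):
        b = (b * b) % n
    return c2 ^ b                # unblind the key

key = secrets.randbelow(n)       # stands in for the symmetric key that encrypts the message
assert solve_puzzle(create_puzzle(key)) == key
```

The creator's cost is a few modular exponentiations thanks to the trapdoor $\varphi(n)$, while the solver has no choice but to perform the $t$ squarings one after the other; that asymmetry, and the sequential nature of the squaring chain, is exactly what the scheme relies on.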
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466741681098938, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=327693
Physics Forums

Physics final exam review - please help!

bsbtstrrbt

What have you done so far? If you don't show us any work you've been able to do, we can't tell you what you might be misunderstanding.

erererger

Quote by SusanC1105: Well I know the answers to just about all of them, because we have discussed them in class, but I'm just unclear about some of the detailed physics explanations behind why the answers are correct. Like #1 - I know that the speed on the way up is the same as the way down, I just don't know how to explain why...

That is incorrect. The speed of the rock on the way up at height $$x$$ is the same as the speed of the rock on the way down at height $$x$$. You can prove this using kinematics ($$v_f^2=v_0^2+2ad$$) or using conservation of energy. But that is irrelevant for this problem, as it deals with just the upwards motion. The first segment you're asked about is from 0 meters above the ground to 5 meters above the ground. What is the rock's initial velocity (you can calculate this using the two approaches cited above) and what is its acceleration? You can easily construct an expression describing the time it takes to traverse the 5 meters. Now consider the second segment, which is from 15 meters above the ground to 20 meters above the ground. What is the rock's velocity at the start of this motion and what is its acceleration? As with the previous segment, you can construct an expression for how long it takes to traverse the 5 meters, or any distance, really. I suggest you solve this parametrically, as exam questions like this are very likely to have follow-up questions for which a parametric solution will be very useful.

EDIT: The answer in my spoiler is irrelevant. I misread the question. Your reasoning is still incorrect, though.

Spoiler: The velocity of the stone is not the same on the way up as it is on the way down. The acceleration is constant, that much is true, but the last 5 seconds of its flight are the 5 seconds before it hits the floor (20 meters below the throwing point) and the first 5 seconds of its flight are when it is first cast upwards. They are VERY different. The formula that has all the answers in this case is $$v_f^2=v_0^2+2ad$$ Since we know the velocity is symmetrical with respect to the maximum height (the velocity of the stone 5 seconds before reaching the maximum height is the same as its velocity 5 seconds after reaching the maximum height, and the same can be said about its height), we'll make the argument as though our stone started from the maximum height, some $$x$$ meters above the throwing point, which is 20 meters above the ground. Going by the above formula, when is the stone's velocity greater: in the last 5 seconds of its flight (when it clears the ADDITIONAL 20 meters of the height of the throwing point), or in the first 5 seconds of its flight?

yjttytytjty
I think its better to explain it in terms of "the direction of acceleration" Quote by SusanC1105 Well see, this is why I'm here then! Here are my answers: 2. The apparent weight is lighter than the actual weight because of newtons second law. The true weight and the normal force are acting on the person. So the apparent weight= mg +ma Your final answer is correct, but your reasoning is murky. Make a free body diagram and apply Newton's second law of motion. Your apparent weight is defined by the normal force you exert on the scale (And the scale exerts on you). $$mg-N=ma$$ where $$a$$ is the acceleration of the elevator. From here you can isolate N and compare it with its value when the elevator isn't accelerating. 3.Pulling your arm back will reduce the force because you extend the impulse over a greater period of time, which reduces the impact of the force. This is correct. On a test, I would phrase my answer like this, though: $$J\equiv \frac{dP}{dt} \Delta t$$ In order to catch the ball, we must reduce its momentum to 0. Pulling our arm back would mean we increase the time it takes for the ball to undergo the change in momentum. In doing so, we are reducing the average force it exerts on us. Mathematically speaking, $$J$$ is a known quantity, so if we increase $$\Delta t$$, then the average force must decrease accordingly. 4. I'm not too sure on this one, but I think the ball will come out at a 90 degree angle because it is still influenced by the rotational momentum, but doesn't have the string tension there to keep it on course anymore, so it goes off in a new direction? This is incorrect. Remember Newton's first law of motion, and remember that a change in velocity requires that a force acts on the mass in question. And since velocity is a vector quantity (Well, momentum to be more precise), a change in its direction is as much a change as is a change to its magnitude. Try and relate what the tension does in this motion. What it changes. 5. A. potential energy is added to the pendulum prior to release, the pendulum wants to get back to equilibrium and it will take energy to do so. B. kinetic energy is the greatest just as it swings through the equilibrium point on its way down, because it is assisted by gravity? C. The potential energy is highest when the pendulum is at the top of each swing, I don't know how to explain why though. This is correct, but again your reasoning is murky at best. There are only conservative forces acting on the system. Therefore, $$E_k+U_g=constant$$ At what point is the kinetic energy at its maximum (Hint: The potential energy would have to be at its minimum), and at what point is the kinetic energy at its minimum? (Remember that it depends on the velocity of the mass) 6. The frictional force is necessary to negotiate a curve, and gravitational forces are not acting in the right direction to be able to help the car through the curve. This is incorrect. In order to negotiate a curve, there has to be such a force that the car doesn't slip out of the curve, or into it, right? You can take a curve to the limit where it's a circular arc. This is a very big hint. Look at the forces acting on it, and demand that it be in equilibrium so that it doesn't slide up or down the incline. That means that it stays in motion throughout the curve. 7. The sled can not climb a hill higher than its starting point because it only started with a certain amount of potential energy, and since some is lost to frictional forces, there will not be enough to climb a taller hill. 8. 
There is an advantage with longer contact time, this increased the impulse and puts a greater force on the bat. The bat speed is what creates the force, which is the other part of the impulse equation. J = FdeltaT. Increasing the bat speed will also increase the impulse, making the ball go further. 9.The airbag does the same thing as moving your hand back when catching the baseball, extends the impulse over a greater period of time, reducing the impact you feel. 10. I had to guess at this one, do you swing the bag of oranges back and forth, constantly changing your center of gravity, causing you to move in the direction the oranges are going? 11. this is a transverse wave because the force you are exerting to create these waves is perpendicular to the direction of the waves. 12. It is possible for 2 waves traveling in the same direction to create a smaller wave, as long as the 2 waves are out of phase with eachother, creating interference and thus a smaller wave is produced. 13. Increased tension on a guitar string will increase the frequency and decrease the wavelength of the standing waves. I'm sure there is an equation that proves this, but I'm not sure what it is. 14. I'm pretty clueless on this one. 15. Object A will be easier to set into rotational motion, but I don't understand the physics behind this. 16. Gravity is acting on the person but what is the other force? I don't think its friction because the person is at rest. Could it be the floor acting on the person or a normal force or something? 17. The velocity the ball is rolling at is not an important factor when calculating the time it takes to hit the floor, because this is only dependent on gravity, and the height that the ball falls from. 18. You will see the lighting before you hear the thunder because light waves travel faster than sound waves. I don't really know what physics explanation he is looking for other than that? 19. Sound waves cannot travel through a vacuum because the waves need air particles to move in, and without them, there would be no sound waves. 20. The bird increases the potential energy of the clam when it takes it from the ground, when the bird drops the clam, gravity is acting to bring the clam back to earth. Then there is a collision between the clam and the rock where they both exert a force on eachother, but energy is always conserved. I feel like I have an idea what the answers are, I just can't recall the equations or laws that explain them. A bit busy now, will reply to the rest later. Replies in bold. dtyjtjdtj Did you ever fugure out number 4,7,10, 12, 15 or 16, i got the rest if you need any? thrhrhrth Is there any that you need srthsrthh for 5 i used the PE=mgh and explained that height is the only thing changing. and KE= .5mv^2, the other ones im kinda iffy about. 4,10,15 are the worst. Quote by SusanC1105 Well see, this is why I'm here then! Here are my answers: 2. The apparent weight is lighter than the actual weight because of newtons second law. The true weight and the normal force are acting on the person. So the apparent weight= mg +ma 3.Pulling your arm back will reduce the force because you extend the impulse over a greater period of time, which reduces the impact of the force. 4. I'm not too sure on this one, but I think the ball will come out at a 90 degree angle because it is still influenced by the rotational momentum, but doesn't have the string tension there to keep it on course anymore, so it goes off in a new direction? 5. A. 
potential energy is added to the pendulum prior to release, the pendulum wants to get back to equilibrium and it will take energy to do so. B. kinetic energy is the greatest just as it swings through the equilibrium point on its way down, because it is assisted by gravity? C. The potential energy is highest when the pendulum is at the top of each swing, I don't know how to explain why though. 6. The frictional force is necessary to negotiate a curve, and gravitational forces are not acting in the right direction to be able to help the car through the curve. 7. The sled can not climb a hill higher than its starting point because it only started with a certain amount of potential energy, and since some is lost to frictional forces, there will not be enough to climb a taller hill. You're forgetting the push mentioned in the question, which will give it some extra energy, so it might be enough to climb a taller hill, depending on how tall it is 8. There is an advantage with longer contact time, this increased the impulse and puts a greater force on the bat. The bat speed is what creates the force, which is the other part of the impulse equation. J = FdeltaT. Increasing the bat speed will also increase the impulse, making the ball go further. Careful with terminology. "bat speed is what creates the force" makes little sense. Think more along the lines of force being the rate of change of momentum. Now, what's momentum? 9.The airbag does the same thing as moving your hand back when catching the baseball, extends the impulse over a greater period of time, reducing the impact you feel. Terminology! You're reducing the force experienced by your body. "Impact" isn't a physical quantity 10. I had to guess at this one, do you swing the bag of oranges back and forth, constantly changing your center of gravity, causing you to move in the direction the oranges are going? If you're swinging it back and forth, then you'll just be oscillating about where you were standing without going anywhere. And besides, even if you do, it doesn't have anything to do with your center of gravity. No, they want you to throw the oranges somewhere. I'll leave you to try to reason it out. 11. this is a transverse wave because the force you are exerting to create these waves is perpendicular to the direction of the waves. Correct, but I'm not sure I like the phrasing.... 12. It is possible for 2 waves traveling in the same direction to create a smaller wave, as long as the 2 waves are out of phase with eachother, creating interference and thus a smaller wave is produced. You might want to specify that it is destructive interference 13. Increased tension on a guitar string will increase the frequency and decrease the wavelength of the standing waves. I'm sure there is an equation that proves this, but I'm not sure what it is. I'm pretty sure the implicit assumption is that the length is unchanged. If the length is unchanged, the wavelength is unchanged. But of course, the frequency increases. The equation you want is v = f*lamba. (In your case, v increases) 14. I'm pretty clueless on this one. Google/wiki Doppler effect 15. Object A will be easier to set into rotational motion, but I don't understand the physics behind this. Have you come across moments of inertia? If not, consider the forces required to set a mass into circular motion at different radii. 16. Gravity is acting on the person but what is the other force? I don't think its friction because the person is at rest. 
Could it be the floor acting on the person or a normal force or something? Yes, it is the normal force, exerted by the floor on the person. 17. The velocity the ball is rolling at is not an important factor when calculating the time it takes to hit the floor, because this is only dependent on gravity, and the height that the ball falls from. Correct 18. You will see the lightning before you hear the thunder because light waves travel faster than sound waves. I don't really know what physics explanation he is looking for other than that? I think that's all there is to it, really. 19. Sound waves cannot travel through a vacuum because the waves need air particles to move in, and without them, there would be no sound waves. Yup. 20. The bird increases the potential energy of the clam when it takes it from the ground; when the bird drops the clam, gravity is acting to bring the clam back to earth. Then there is a collision between the clam and the rock where they both exert a force on each other, but energy is always conserved. Describe the energy conversions more fully. PE->KE when dropped. I feel like I have an idea what the answers are, I just can't recall the equations or laws that explain them. Attempting to pick up where RoyalCat left off. Answers in bold. Seems like you're still confused over 4. Once the string breaks, what are the forces acting on it? Apply Newton's First Law. That's all there is to it. Well, what are your thoughts on the responses I've provided?
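As a quick numerical sanity check of answers 2 and 3 above (an editorial sketch, not part of the original thread; the mass, acceleration, ball speed, and stopping times are made-up illustrative values), the relations $mg - N = ma$ and "same impulse, longer time, smaller average force" can be evaluated directly:

```python
# Hypothetical numbers, only to illustrate the two relations discussed above.
g = 9.8            # m/s^2
m = 70.0           # kg, assumed rider mass

# 2. Elevator accelerating downward at a = 2 m/s^2: from mg - N = ma,
#    the scale reads N = m(g - a), which is less than the true weight mg.
a = 2.0
print(m * (g - a), m * g)          # ~546 N apparent weight vs ~686 N true weight

# 3. Catching a 0.15 kg ball moving at 20 m/s: the change in momentum is fixed,
#    so a longer stopping time means a smaller average force.
dp = 0.15 * 20.0                   # kg*m/s
for dt in (0.01, 0.10):            # stiff arms vs. pulling the arm back
    print(dt, dp / dt)             # average force: 300 N vs 30 N
```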
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9557275772094727, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81745/lovasz-theta-function-of-circulant-graphs
## Lovasz theta function of circulant graphs Let $G$ be a circulant graph with no loops at vertices and vertex degree $d$. Is the Lovasz theta function of this graph given by: $\vartheta(G) = \max_{i}\frac{-N\epsilon_{i}}{-\epsilon_{i}+d-1}$? where $\epsilon_{i}$ are the eigenvalues of the adjacency matrix $A$ of the graph and are given by $\epsilon_{i} = 2\sum_{j=2}^{\frac{N-1}{2}}a_{j}\cos(\frac{2\pi(j-1)i}{N})$ where $a_{j} \in \{0,1\}$ form the first row of $A$. - What evidence do you have that the LTF for that graph has the form you describe? Numerical? Educated guesswork? Confirmation in special cases? – Yemon Choi Nov 24 2011 at 2:22 1 confirmation in special cases and educated guess work! It is of this form for cycle graphs!! – unknown (google) Nov 24 2011 at 4:12 I think the denominator in your formula should be $-\epsilon_i +d$. Your bound is then equal to the Delsarte-Hoffman bound for the size of a coclique in a regular graph. – Chris Godsil May 2 2012 at 2:23 ## 2 Answers Computing Lovasz $\theta$ for circulant graphs can be reduced to linear programming; this is well known, I think (already mentioned in A. Schrijver's 1979 paper "A comparison of the Delsarte and Lovasz bounds"). Indeed, $A$ is an element of the Bose-Mesner algebra of the commutative association scheme (obtained from the natural action of the dihedral group on $N$ points), and Schrijver shows that in this case $\theta$ can be found by simultaneous diagonalisation of all the basis elements of the algebra (in this case, it is the same as diagonalizing the (symmetric) circulant matrices) and solving the resulting linear program. - true but does that reduce to this is the question! – unknown (google) Nov 24 2011 at 19:16 No. In the case of the circulant graph on 2n vertices C(1,n), i.e., the Möbius ladder, we obtain a division by zero in the calculation. -
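As an editorial illustration (not part of the thread): the Lovász theta of a small circulant graph can be computed numerically from the standard semidefinite programming formulation and compared with the eigenvalue ratio above, using the denominator $-\epsilon_i + d$ suggested in Godsil's comment. The sketch below assumes `numpy` and `cvxpy` are available; for the 5-cycle both numbers come out near $\sqrt{5}\approx 2.236$.

```python
import numpy as np
import cvxpy as cp

def circulant_adjacency(n, connections):
    """Adjacency matrix of the circulant graph C_n(connections)."""
    A = np.zeros((n, n))
    for i in range(n):
        for s in connections:
            A[i, (i + s) % n] = 1
            A[i, (i - s) % n] = 1
    return A

def lovasz_theta(A):
    """theta(G) = max sum(B) over PSD B with trace 1 and B_ij = 0 on edges."""
    n = A.shape[0]
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[i, j] == 0 for i in range(n) for j in range(n) if A[i, j] == 1]
    prob = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
    prob.solve()
    return prob.value

A = circulant_adjacency(5, [1])                 # the 5-cycle C_5
d = int(A[0].sum())
eps_min = np.linalg.eigvalsh(A).min()
ratio_bound = 5 * (-eps_min) / (d - eps_min)    # denominator d, not d - 1
print(lovasz_theta(A), ratio_bound)             # both ~2.236
```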
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8937503099441528, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/165837-log-system-equations.html
Thread: 1. Log System of equations I'm needing a little assistance with this problem, haven't any trouble until this. A village of 1000 inhabitants increases at a rate of 10% per year. A neighbouring village of 2000 inhabitants decreases at a rate of 5% per year. After how many years will these two villages have the same population? So I wrote them as two exponential functions and made them equal to each other. Not sure this is how I should do it but it feels right. $1000(1.10)^x=2000(0.95)^x$ Then by definition $C^x=y <==> log(y)=x$ Turned them into this $log(1.1)1000=log(0.95)2000$ then I know it can be turned into this. $log1000/log1.1=log2000/log0.95$ And now I'm stuck, help! 2. Originally Posted by Gerard So I wrote them as two exponential functions and made them equal to each other. Not sure this is how I should do it but it feels right. $1000(1.10)^x=2000(0.95)^x$ To make things easier, divide both sides by 1000. What next? 3. Originally Posted by pickslides To make things easier, divide both sides by 1000. What next? $(1.1)^x=2(0.95)^x$ Then using the definition... $log1/log1.1 = log2/log0.95$ ? 4. Using what definition? Taking the logarithm of both sides (any base) of $(1.1)^x= 2(0.95)^x$ you get x log(1.1)= x log(.95)+ log 2
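For anyone who wants to check that last step numerically, here is a short sketch (an editorial addition, not from the thread itself); it just evaluates the closed form that follows from $x\log(1.1) = x\log(0.95) + \log 2$:

```python
from math import log

# x*log(1.1) = x*log(0.95) + log(2)  =>  x = log(2) / (log(1.1) - log(0.95))
x = log(2) / (log(1.1) - log(0.95))
print(x)                                   # ~4.73 years
print(1000 * 1.10**x, 2000 * 0.95**x)      # both ~1569 inhabitants, so the populations match
```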
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9645500183105469, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/02/17/from-cartan-matrix-to-root-system/?like=1&source=post_flair&_wpnonce=b0679ce6ea
# The Unapologetic Mathematician ## From Cartan Matrix to Root System Yesterday, we showed that a Cartan matrix determines its root system up to isomorphism. That is, in principle if we have a collection of simple roots and the data about how each projects onto each other, that is enough to determine the root system itself. Today, we will show how to carry this procedure out. But first, we should point out what we don’t know: which Cartan matrices actually come from root systems! We know some information, though. First off, the diagonal entries must all be ${2}$. Why? Well, it’s a simple calculation to see that for any vector $\alpha$ $\displaystyle\alpha\rtimes\alpha=2\frac{\langle\alpha,\alpha\rangle}{\langle\alpha,\alpha\rangle}=2$ The off-diagonal entries, on the other hand, must all be negative or zero. Indeed, our simple roots must be part of a base $\Delta$, and any two vectors $\alpha,\beta\in\Delta$ must satisfy $\langle\alpha,\beta\rangle\leq0$. Even better, we have a lot of information about pairs of roots. If one off-diagonal $\alpha_i\rtimes\alpha_j$ element is zero, so must the corresponding one $\alpha_j\rtimes\alpha_i$ on the other side of the diagonal be zero. And if they’re nonzero, we have a very tightly controlled number of options. One must be $-1$, and the other must be $-1$, $-2$, or $-3$. But beyond that, we don’t know which Cartan matrices actually arise, and that’s the whole point of our project. For now, though, we will assume that our matrix does in fact arise from a real root system, and see how to use it to construct a root system whose Cartan matrix is the given one. And our method will hinge on considering root strings. What we really need is to build up all the positive roots $\Phi^+$, and then the negative roots $\Phi^-$ will just be a reflected copy of $\Phi^+$. We also know that since there are only finitely many roots, there can be only finitely many heights, and so there is some largest height. And we know that we can get to any positive root of any height by adding more and more simple roots. So we will proceed by building up all the roots of height ${1}$, then height ${2}$, and so on until we cannot find any higher roots, at which point we will be done. So let’s start with roots of height ${1}$. These are exactly the simple roots, and we are just given those to begin with. We know all of them, and we know that there is nothing at all below them (among positive roots, at least). Next we come to the roots of height ${2}$. Every one of these will be a root of height ${1}$ plus another simple root. The problem is that we can’t add just any simple root to a root of height ${1}$ to get another root of height ${2}$. If we step in the wrong direction we’ll fall right off the root system! We need to know which directions are safe, and that’s where root strings come to the rescue. We start with a root $\beta$ with $\mathrm{ht}(\beta)=1$, and a simple root $\alpha$. We know that the length of the $\alpha$ string through $\beta$ must be $\beta\rtimes\alpha$. But we also know that we can’t step backwards, because $\beta-\alpha$ would be (in this case) a linear combination of simple roots with both positive and negative coefficients! If $\beta\rtimes\alpha=0$ then we can’t step forwards either, because we’ve already got the whole root string. But if $\beta\rtimes\alpha>0$ then we have room to take a step in the $\alpha$ direction from $\beta$, giving a root $\beta+\alpha$ with height $\mathrm{ht}(\beta+\alpha)=2$. 
As we repeat this over all roots $\beta$ of height ${1}$ and all simple roots $\alpha$, we must eventually cover all of the roots of height ${2}$. Next are the roots of height ${3}$. Every one of these will be a root of height ${2}$ plus another simple root. The problem is that we can’t add just any simple root to a root of height ${2}$ to get another root of height ${3}$. If we step in the wrong direction we’ll fall right off the root system! We need to know which directions are safe, and that’s where root strings come to the rescue… again. We start with a root $\beta$ with $\mathrm{ht}(\beta)=2$, and a simple root $\alpha$. We know that the length of the $\alpha$ string through $\beta$ must again be $\beta\rtimes\alpha$. But now we may be able to take a step backwards! That is, it may turn out that $\beta-\alpha$ is a root, and that complicates matters. But this is okay, because if $\beta-\alpha$ is a root, then it must be of height ${1}$, and we know that we already know all of these! So, look up $\beta-\alpha$ in our list of roots of height ${1}$ and see if it shows up. If it doesn’t, then the $\alpha$ string through $\beta$ starts at $\beta$, just like before. If it does show up, then the root string must start at $\beta-\alpha$. Indeed, if we took another step backwards, we’d have a root of height ${0}$, which doesn’t exist. Thus we know where the root string starts. We can also tell how long it is, because we can calculate $\beta\rtimes\alpha$ by adding up the Cartan integers $\gamma\rtimes\alpha$ for each of the simple roots $\gamma$ we’ve used to build $\beta$. And so we can tell whether or not it’s safe to take another step in the direction of $\alpha$ from $\beta$, and in this way we can build up each and every root of height ${3}$. And so on: at each level we start with the roots of height $n$ and look from each one $\beta$ in the direction of each simple root $\alpha$. In each case, we can carefully step backwards to determine where the $\alpha$ string through $\beta$ begins, and we can calculate the length $\beta\rtimes\alpha$ of the string, and so we can tell whether or not it’s safe to take another step in the direction of $\alpha$ from $\beta$, and we can build up each and every root of height $n+1$. Of course, it may just happen (and eventually it must happen) that we find no roots of height $n+1$. At this point, there can be no roots of any larger height either, and we’re done. We’ve built up all the positive roots, and the negative roots are just their negatives. Posted by John Armstrong | Geometry, Root Systems ## 6 Comments » 1. [...] root system we can construct a connected Dynkin diagram, which determines a Cartan matrix, which determines the root system itself, up to isomorphism. So what we have to find now is a list of Dynkin diagrams which can possibly [...] Pingback by | February 19, 2010 | Reply 2. [...] classify these, we defined Cartan matrices and verified that we can use it to reconstruct a root system. Then we turned Cartan matrices into Dynkin [...] Pingback by | March 12, 2010 | Reply 3. Thanks for spelling this algorithm out; I found Humphreys’ description (p.56) a bit condensed. One thing I found a bit confusing though, is that you say that the [latex]\alpha[/latex] string [latex]\beta[/latex] has ‘length’ [latex]\beta \rtimes\alpha[/latex]. I think you mean that r-q (in your and Humphreys’ notation) equals this number; the length, i.e. the number of roots in the string, would be r+q+1. Comment by Landau | May 2, 2011 | Reply 4.
Damn you, Latex! I should have used dollar signs. (Here I read that both dollar signs and [ latex ] [/ latex ] would work: http://wordpress.org/extend/plugins/wp-latex/faq/ ) Thanks for spelling this algorithm out; I found Humphreys’ description (p.56) a bit condensed. One thing I found a bit confusing though, is that you say that the $\alpha$ string through $\beta$ has ‘length’ $\beta \rtimes\alpha$. I think you mean that r-q (in your and Humphreys’ notation) equals this number; the length, i.e. the number of roots in the string, would be r+q+1. Comment by Landau | May 2, 2011 | Reply 5. But how do I find the scalar products $\langle\alpha_i,\alpha_j\rangle$, where the $\alpha$ are the simple roots, from the Cartan matrix? I know $2\langle\alpha_i,\alpha_j\rangle/\langle\alpha_j,\alpha_j\rangle$; from this how do I find out $\langle\alpha_i,\alpha_j\rangle$? I basically want to construct the root diagram for the algebra. Comment by Ramanujan | March 2, 2012 | Reply 6. Who said anything about an algebra? I’m just talking about a collection of vectors and associated transformations satisfying certain symmetry properties on the one hand, and an integer-entry matrix on the other hand. No algebras here! Comment by | March 2, 2012 | Reply
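A minimal code sketch of the height-by-height construction described in the post (an editorial addition, not from the original page). It assumes the convention that the Cartan matrix entry `A[i][j]` is $\alpha_i\rtimes\alpha_j$, and it represents each positive root by its vector of coefficients in the simple-root basis; the function name is made up for illustration.

```python
def positive_roots(A):
    """Build the positive roots, height by height, from a Cartan matrix A.
    Roots are coefficient tuples over the simple roots."""
    n = len(A)
    simple = [tuple(int(j == i) for j in range(n)) for i in range(n)]
    roots = set(simple)
    layer = list(simple)                     # roots of the current height
    while layer:
        next_layer = []
        for beta in layer:
            for j in range(n):
                # beta paired with alpha_j: sum_i c_i * A[i][j]
                pairing = sum(beta[i] * A[i][j] for i in range(n))
                # step backwards to find r, where the alpha_j string through beta starts
                r, back = 0, beta
                while True:
                    back = tuple(back[k] - int(k == j) for k in range(n))
                    if back in roots:
                        r += 1
                    else:
                        break
                # string: beta - r*alpha_j, ..., beta + q*alpha_j with r - q = pairing,
                # so another forward step is allowed exactly when q = r - pairing > 0
                if r - pairing > 0:
                    new = tuple(beta[k] + int(k == j) for k in range(n))
                    if new not in roots:
                        roots.add(new)
                        next_layer.append(new)
        layer = next_layer
    return roots

# Type A_2: expect the three positive roots (1,0), (0,1), (1,1).
print(positive_roots([[2, -1], [-1, 2]]))
```

Run on the $B_2$ matrix [[2, -1], [-2, 2]] the same sketch produces four positive roots, matching the known count for that system.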
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 79, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9145869016647339, "perplexity_flag": "head"}
http://mathoverflow.net/questions/66252/is-the-leopoldt-conjecture-almost-always-true/68781
## Is the Leopoldt conjecture almost always true? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The famous Leopoldt conjecture asserts that for any number field $F$ and any prime $p$, the $p$-adic regulator of $F$ is nonzero. This is known to be equivalent to the vanishing of $H^2(G_{F'/F},\mathbf{Q}_p/\mathbf{Z}_p)$, where $F'$ is the maximal pro-$p$ extension of $F$ unramified outside $p$. When $F/\mathbf{Q}$ is abelian, the conjecture was proven by Brumer. My question: is there any reasonable sense in which the Leopoldt conjecture is "usually" true - e.g., is it known for any fixed $F$ at almost all primes $p$, or (say) for almost all quartic extensions of $\mathbf{Q}$ with $p$ fixed? A glance through the mathscinet reviews of all the papers with "Leopoldt conjecture" in their title didn't reveal anything, but perhaps this is well-known to experts. - One result is, Buchmann and Sands (1988) fixed any prime (not 5) and found infinitely many $S_5$ extensions with LC true. ams.org/journals/proc/1988-104-01/… – Junkie May 28 2011 at 5:52 ## 3 Answers Olivier and all, If you trust your own minds, you should better try directly and read version 2 of the proof for only CM fields, which I posted in June this years. The rest is hot wind - people may bet, in front of the list of names who failed at Leopoldt you may put the odds for my breakthrough around 1% - but it is only reading which can provide your own judgment of whether this 1 has to be more likely than the complementary 99. I teach the proof in class since 3 weeks and it works quite fluidly and the students can grab the construction very well - useless to say, it is enriched by many details, since it is a 3-d year course (guess something like first graduate year). I gave up the construction of techniques for non CM fields, the Iwasawa skew symmetric pairing, and reduced to the skeletton of the principal ideas, exactly in order to respond to the loud whispers about my expressivity. As for the Cambridge seminar mentioned, it was a great experience - but it happened during a week loaded with important other seminars, and in spite of the particular attention offered, we did not have more than 3 or 4 meetings of two hours, this was certainly not enough for completing a proof with all the details, just the time for gathering some important questions and find out on what particular issue people would like to know more. This is taken into account in the present version. It is also true that Minhyong Kim, this friendly and enthusiastic fellow, asked for my allowance to put the draft on the blog, exactly in the expectation that more students and young researchers would just try and bite at it, and raise questions, which were very welcome. The expected impact did not happen. Therefore I friendly invite you to simply read. Anyone having concrete questions is gladly invited to write me a mail, if I understand the question his chances are one in a thousand that I will not respond. Sorry if I intruded your discussion Preda Mihailescu - Dear Preda, thanks for this post. I am going to read your version 2. – Joël Jul 4 2011 at 11:27 1 Dear Joel, You are welcome and have fun. I will post within the next week also a shorter paper on the Gross conjecture, this might be interesting too - I teach it in the same course, and the result is somehow faster ... 
Let me know if you get stuck anywhere, I will try to help Best Preda – Preda Jul 4 2011 at 12:13 Dear Preda, Six months have passed and unfortunately I have not yet found the time to read seriously your paper, which I still very much want to do. I wanted to ask you a question: you have a much shorter 2008 paper called "An application of Baker's theory to Leopoldt's conjecture" which proves Leopoldt's conjecture in the case of a totally real extension where $p$ splits completely (which is already a real breakthrough, and contains the cases I personally need of the conjecture). Is this paper published? Do you think I should read it as an introduction to your more recent paper? – Joël Feb 4 2012 at 16:44 I think that not much is known. For example I don't think that we are any closer to proving Leopoldt's conjecture for a given $F$ for infinitely many $p$ than to proving it for all $p$. Here is a result though: for $K_n=$ cyclotomic fields generated over $\bf Q$ (variant: over a fixed quadratic imaginary field; over a fixed totally real field) by the roots of unity of order $p^n$, the "defect" of Leopoldt's conjecture (the dimension of $H^2(K_n'/K_n,{\bf Q}_p)$) stays bounded as $n$ goes to infinity. This is a consequence of the main conjecture known in this case and in the variants. This is already a very useful result (used for example by Minhyong Kim in his beautiful new proofs of old Diophantine results, such as Siegel's theorem for a CM elliptic curve). Not really in the same spirit, but somehow similar to the Brumer proof of the abelian case; it is important to mention Waldschmidt's beautiful result, that the defect in Leopoldt's conjecture is at most half of the degree of $F$. Since both Brumer's result and Waldschmidt's have proofs fundamentally using the theory of transcendence, and for other reasons as well, many people (including myself) think that proving Leopoldt's conjecture will require some transcendence methods (as opposed to methods of algebraic number theory, automorphic forms, etc.). But "generic results" as asked by the question might be more accessible, if by no means simple. -
– Olivier May 28 2011 at 7:15 1 I agree with your statement, Olivier (maybe derivative of the same credible persons), but I wasn't going to voice rumors without a specific flaw being noted. One problem is that Mihailescu has his own expositional style, to say the least, and it is often hard to figure out what he has really done or shown. As Minhyong Kim said when PM gave talks, he was very open to questions, and there was a lot going on, but the jury seems to say: this as-is is not a proof: but there are various new ideas which might lead to a proof. Now he claims the CM case separately, maybe this will clear matters. – Junkie May 28 2011 at 8:23 3 @Dr Shello: It's my understanding that the experts have not come out and said that Mihăilescu's proof is incorrect, but that they have not said it is correct either. – Rob Harron May 28 2011 at 17:29 show 4 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9712595343589783, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/12410/terminology-is-there-a-name-for-a-category-with-biproducts/12419
## Terminology: Is there a name for a category with biproducts? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Many people are familiar with the notion of an additive category. This is a category with the following properties: (1) It contains a zero object (an object which is both initial and terminal). This implies that the category is enriched in pointed sets. Thus if a product $X \times Y$ and a coproduct $X \sqcup Y$ exist, then we have a canonical map from the coproduct to the product (given by "the identity matrix"). (2) Finite products and coproducts exist. (3) The canonical map from the coproduct to the product is an equivalence. A standard exercise shows this gives us a multiplication on each hom space making the category enriched in commutative monoids (with unit). (4) An additive category further requires that these commutative monoids are abelian groups. I want to know what standard terminology is for a category which satisfies the first three axioms but not necessarily the last. I can't seem to find it using Google or Wikipedia. An obvious guess, "Pre-additive", seems to be standard terminology for a category enriched in abelian groups, which might not have products/coproducts. - 6 I'd just say category with biproducts, thereby avoiding adding yet another name! – Mariano Suárez-Alvarez Jan 20 2010 at 13:51 ## 2 Answers One name that I have seen used is semiadditive category. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. "Category with biproducts" is probably the only standard name, but I'm not really fond of it because (at least in my experience) a more natural characterization of these categories satisfying (1)-(3) is as categories enriched in commutative monoids with finite coproducts. I would prefer to use "additive" for (1)-(3) (after all, "additive" doesn't say anything about being able to subtract!) and may have used that terminology in conversations with you, but I am unlikely to garner much support for this. One sometimes encounters the term "R-additive category" for an additive category enriched in R-Mod. Given that, maybe "$\mathbb{N}$-additive category" is an alternative, pretending that the usual usage of "additive" is short for "$\mathbb{Z}$-additive"? - But the enrichment in monoids is a consequence of the existence of biproducts---one does not say "finite group with Sylow subgroups for all primes". (It'd be so nice to be able to edit comments!) – Mariano Suárez-Alvarez Jan 20 2010 at 15:04 I said "categories enriched in commutative monoids with finite *coproducts*", not "categories enriched in commutative monoids with finite *biproducts*". Of course the latter would be silly. (Or have I misunderstood you?) – Reid Barton Jan 20 2010 at 15:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338201880455017, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/4954/is-it-possible-to-have-mathematica-move-all-terms-to-one-side-of-an-equation/4960
# Is it possible to have Mathematica move all terms to one side of an equation? I have an inequality expression that I would like to express in terms of the relation of the parameters to zero. More simply, I want to have mathematica move all the terms to one side of the inequality so that it is expressed as $x - y \geq 0$. Currently it is expressed as $x \geq y$. The expressions inside $x$ and $y$ are much more complex, of course. Is this possible? It seems I need to provide some assumptions about the relations between variables, but it isn't clear to me how to do this. - 2 have you tried a rule like : `x > y /. (Greater[x_, y_] -> Greater[x - y, 0])` – b.gatessucks May 1 '12 at 15:19 ## 4 Answers Are you looking for Subtract? ````eq=x>=y Subtract@@eq>=0 ```` gives: x-y>=0 ## Edit If one wants a function, which keeps the order sign and adds the 0, one may use: ````oneSide=(Head[#][Subtract@@#,0]&) ```` and call e.g. `eq//oneSide` - 1 `Subtract @@` is a clever answer, +1. – rcollyer May 1 '12 at 15:35 2 This solution requires you to put the inequality back by hand after the `Subtract`. It shouldn't be necessary to do that. – Jens May 1 '12 at 15:54 This approach works by using the fact that an inequality or equality can be traversed by `Map` in the same way that a regular list can. It can take an arbitrary inequality or equation `eqn`, and you don't have to know in advance whether it's `>`, `<` or anything else. First I define the equation `eqn`, and then I use the fact that the second part of `eqn` is the right-hand side (`eqn[[2]]`) which I want to subtract from both sides: ````eqn = (x > y + z) Map[(# - eqn[[2]]) &, eqn] ```` `-y - z > 0` You could adapt it to bring only part of the right-hand side to the left, e.g. the `y` but not `z`: ````Map[(# - eqn[[2, 1]]) &, eqn] ```` `x - y > z` Edit By the way, using `Map` on an equation is a "natural" choice in this situation, because manipulating equations always involves doing the same thing on both sides. That "thing" can be formulated as applying a function `f[...]` to both sides. In this example it's a subtraction operation, but it could also be the operation of squaring both sides, multiplying them by a factor, expanding in a power series, and whatever else you might think of. In Mathematica, `Map` is the operation that corresponds to this type of manipulation. With inequalities, you just have to be more careful than with equalities because doing the same thing on both sides doesn't always leave the relation unchanged (think of multiplying by a negative number). In this question, that problem didn't arise because addition and subtraction don't cause inequalities to break. Edit 2 One should also realize that inequalities and equations (i.e., `Equal` and `Greater` etc.) can have two or more arguments, as in ````eqn = (x > y > z) ```` The answers by Peter and Pillsy do not take this into account, whereas the `Map` approach works naturally in these cases, too: ````Map[(# - Last[eqn]) &, eqn] ```` `x - z > y - z > 0` There has in fact been some discussion long ago if one should make the `Equal` function in Mathematica "listable" so that it applies this `Map` process automatically when you type `eqn - Last[eqn]`, but there are cases when that's not desirable (e.g., when taking the square root on both sides due to multi-valuedness), so we have to do it ourselves when needed. 
In the spirit of hiding the `Map` code from the user, one can of course define a function like Peter does, i.e., in my case ````Clear[oneSide]; oneSide[e_] := Map[(# - Last[e]) &, e] ```` - I think that this problem (like most algebraic manipulations) is best approached with pattern matching and rule replacement than by directly exploiting the index/list structure that inequalities share with practically every Mathematica expression. Both approaches will work, but rule replacement is, in my opinion, clearer, more precise, more flexible, and more extensible. For example: ````moveTermRule = (ineq : Less | Greater | LessEqual | GreaterEqual)[lhs_, rhs_] :> ineq[lhs - rhs, 0]; In[137]:= x <= y /. moveTermRule Out[137]= x - y <= 0 ```` This won't do anything to expressions that aren't inequalities, or that have more than two arguments, and everything ends up with a name by the nature of `RuleDelayed`, which makes it more obvious what's happening. - An alternative to Jens's solution makes use of `Thread[]` to subtract the rightmost entity of a relation from both sides: ````oneSide[ineq : (_Equal | _Unequal | _Less | _Greater | _LessEqual | _GreaterEqual)] := Thread[ineq - Last[ineq], Head[ineq]] ```` None of the solutions given thus far will work on more general `Inequality[]` objects like `x < y <= z`, though Jens's approach can be modified somewhat: ````Map[If[FreeQ[#, Less | Greater | GreaterEqual | LessEqual], # - z, #] &, x < y <= z] x - z < y - z <= 0 ```` Thus, ````oneSide[ineq_Inequality] := Map[If[FreeQ[#, Less | Greater | GreaterEqual | LessEqual], # - Last[ineq], #] &, ineq] ```` - lang-mma
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202358722686768, "perplexity_flag": "middle"}
http://cms.math.ca/10.4153/CJM-2008-029-2
Canadian Mathematical Society, www.cms.math.ca. Canad. J. Math.
Inverse Pressure Estimates and the Independence of Stable Dimension for Non-Invertible Maps
http://dx.doi.org/10.4153/CJM-2008-029-2
Canad. J. Math. 60 (2008), 658-684. Published: 2008-06-01. Printed: Jun 2008.
• Eugen Mihailescu • Mariusz Urbański
Abstract: We study the case of an Axiom A holomorphic non-degenerate (hence non-invertible) map $f\colon\mathbb P^2 \mathbb C \to \mathbb P^2 \mathbb C$, where $\mathbb P^2 \mathbb C$ stands for the complex projective space of dimension 2. Let $\Lambda$ denote a basic set for $f$ of unstable index 1, and $x$ an arbitrary point of $\Lambda$; we denote by $\delta^s(x)$ the Hausdorff dimension of $W^s_r(x) \cap \Lambda$, where $r$ is some fixed positive number and $W^s_r(x)$ is the local stable manifold at $x$ of size $r$; $\delta^s(x)$ is called \emph{the stable dimension at} $x$. Mihailescu and Urbański introduced a notion of inverse topological pressure, denoted by $P^-$, which takes into consideration preimages of points. Manning and McCluskey study the case of hyperbolic diffeomorphisms on real surfaces and give formulas for Hausdorff dimension. Our non-invertible situation is different here since the local unstable manifolds are not uniquely determined by their base point; instead they depend in general on whole prehistories of the base points. Hence our methods are different and are based on using a sequence of inverse pressures for the iterates of $f$, in order to give upper and lower estimates of the stable dimension. We obtain an estimate of the oscillation of the stable dimension on $\Lambda$. When each point $x$ from $\Lambda$ has the same number $d'$ of preimages in $\Lambda$, then we show that $\delta^s(x)$ is independent of $x$; in fact $\delta^s(x)$ is shown to be equal in this case to the unique zero of the map $t \to P(t\phi^s - \log d')$. We also prove the Lipschitz continuity of the stable vector spaces over $\Lambda$; this proof is again different from the one for diffeomorphisms (however, the unstable distribution is not always Lipschitz for conformal non-invertible maps). In the end we include the corresponding results for a real conformal setting.
Keywords: Hausdorff dimension, stable manifolds, basic sets, inverse topological pressure
MSC Classifications: 37D20 - Uniformly hyperbolic systems (expanding, Anosov, Axiom A, etc.); 37A35 - Entropy and other invariants, isomorphism, classification; 37F35 - Conformal densities and Hausdorff dimension
© Canadian Mathematical Society, 2013. http://www.cms.math.ca/
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8123689889907837, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=250175
Physics Forums Thread Closed Page 1 of 2 1 2 > ## two easy questions (1) From the length contraction equation, would $$(L\sqrt{1-(v/c)^2} )^3$$ give the coordinate volume of an object? Here's the mass equation: $$M=\frac{m_0}{\sqrt{1-(v/c)^2}}$$ My second question is ultimately about the 'speed limit' of the universe. As seen by an observer, an object's mass will approach infinity as its velocity approaches c. From this we can say that, as seen by an observer, it will take a force equal to infinity to accelerate the object to c, and because it's impossible for anything to apply that force, we say that the speed limit of the universe is c. For the most part this makes perfect sense. As seen by an observer, an object approaching a velocity of c would experience the fallowing: length approaching zero, time approaching infinity, mass approaching infinity etc. This can be easily visualized, as something approaches c, it escapes more and more light, and in theory, if it reached c, the object would vanish, in turn revealing a zero length, infinite time, and a questionably visualized infinite mass. But what about the proper variables? They don't change with an increase in velocity. Mass, time, and length, by definition, don't change in the object's inertial frame. The object keeps its own rest mass. This is my main point as the 'speed limit' was initially set because the object's coordinate mass approaches infinity, but now I'm saying that an object's mass doesn't change in its inertial frame. That being said, I think it's obvious that the energy required for an object to accelerate itself to c, is in fact finite. Keep in mind, it's very important that the force be applied from within the object's inertial frame. If the force was applied from outside of the frame, it would take an infinite force for the object to reach c. The speed limit of c holds very true in special relativity. In theory, an observer will NEVER see an object traveling at or faster than c, for more than one reason. To be honest, I see no proof that the speed limit of c holds true in all cases. Obviously there is no proof that faster than light travel is possible, but even if it was very possible, we still wouldn't be able to observe that. To conclude, in theory, an object can reach the speed of light or greater with a finite force applied to itself from within its inertial frame. (2) Why wouldn't this statement be true? (whether or not we know how to initiate the above bolded phrase, doesn't change the theoretical case) PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Mentor 1. No, you don't take 1/gamma to the third power, since the length contraction is just in one direction. 2. I can't make sense of the bolded phrase. What would it mean to apply a force within the inertial frame? It sounds like you're going to pull your own hair or something. If you mean that you're going to do something that gives you a constant proper acceleration, that's not enough to get you to reach the speed of light. Quote by Fredrik 1. No, you don't take 1/gamma to the third power, since the length contraction is just in one direction. 2. I can't make sense of the bolded phrase. What would it mean to apply a force within the inertial frame? It sounds like you're going to pull your own hair or something. 
If you mean that you're going to do something that gives you a constant proper acceleration, that's not enough to get you to reach the speed of light. I'm not sure what exactly what the bolded phrase would consist of, but that's not my point. I'll try to make a clear example here. We say you can't accelerate an object to c because the relativistic mass demands a force of infinity to do so. But what if the mass I wanted to accelerate wasn't relativistic at all, if I could "somehow" accelerate myself, it would only take a finite amount of energy to reach c. In this special case, I am my own observer so my rest mass is constant to me, and thus requiring finite energy. I'm just trying to show that in theory, nothing (yet), stops an object in its inertial frame from obtaining a velocity of c in this way. ## two easy questions Quote by epkid08 That being said, I think it's obvious that the energy required for an object to accelerate itself to c, is in fact finite. This statement does not make any sense whatsoever. Quote by epkid08 To conclude, in theory, an object can reach the speed of light or greater with a finite force applied to itself from within its inertial frame. Sorry, but more nonsense. A rocket with a constant thrust will never reach the speed of light with respect to another object. Not even in the limit because the hyperbolic velocity addition function does in fact have no limit. Mentor It's not just that I don't know how to accomplish what you're suggesting. It's that I don't know what you're suggesting. I don't think there's any way to make sense of it. What if I e.g. run behind you and give you a push once per second according to your clock. Even if we ignore the practical problems and how out of shape I am, you would still at best approximate the world line of an object with constant proper acceleration, and constant proper acceleration is definitely not enough to reach c. Quote by Fredrik It's not just that I don't know how to accomplish what you're suggesting. It's that I don't know what you're suggesting. I don't think there's any way to make sense of it. What if I e.g. run behind you and give you a push once per second according to your clock. Even if we ignore the practical problems and how out of shape I am, you would still at best approximate the world line of an object with constant proper acceleration, and constant proper acceleration is definitely not enough to reach c. That's not even close to a proper example of the special case I'm suggesting, and neither is the rocket example. There is no clear example that we could grasp, I already admitted that. Trying to find an example may or may not be hopeless. If applied a constant force per time on a rock, its acceleration would drop as its velocity approached c. That's an example of a force from outside of the inertial frame of an object, and becuase it's from outside of the inertial frame, we must use relativistic mass transformation to calculate the mass. If we are bounded to using the relativistic mass tansformation, then we no object with rest mass greater than 0 will ever travel at c or faster. So ask yourself, when are we not bounded to using the relativistic mass transformation? The answer is, when you are the object, when you are in the inertial frame, when you are observing yourself. Being the object in the inertial frame, we don't use the relativistic mass transformation, we use a simple m_0 = M. This is where you are getting confused. 
Because you are the object in the inertial frame, and you are supposed to be applying a force, you need to "somehow" apply a force to yourself. (what ever that means, I could be very possibly using the wrong words to describe; there is no example of this thus far) So, as you "apply a force to yourself" per time, you gain acceleration, and because your mass is not relative to your velocity, the amount of energy needed to accelerate you to c or more is finite. $$E=M_c*c^2$$ M_c signifies that the mass is constant for all v. Mentor Please note: while the acceleration can be viewed from your frame only, velocity only exists when viewed between two frames. You can't escape the fact that you'll always measure your velocity to be below C. Mentor Quote by epkid08 you need to "somehow" apply a force to yourself. The big problem here is much more fundamental than the speed limit of c. If I understand you correctly your idea expressed here violates the conservation of momentum, one of the most fundamental laws of the universe. If you want to invent a magical universe where momentum is not conserved then you are certainly free to decide that in your fantasy-land c won't be a limiting speed either. Mentor Quote by epkid08 That's not even close to a proper example of the special case I'm suggesting, and neither is the rocket example. There is no clear example that we could grasp, I already admitted that. Trying to find an example may or may not be hopeless. As I said, the problem isn't to find an example, it's to properly define what you would like to find an example of, and your attempts to do that don't make sense. Quote by epkid08 you need to "somehow" apply a force to yourself. We seem to be back to pulling our own hair. (But internal forces cancel according to Newton's 3rd, as I'm sure you know). It sounds like the OP is thinking about "proper velocity", not relative velocity. Proper velocity is momentum/mass, so it it not limited to c. There's plenty of info on the net about proper velocity, but for most purposes, it's not very useful. Al Mentor I have never heard of the term "proper velocity". Wouldn't that always be 0? Quote by DaleSpam I have never heard of the term "proper velocity". Wouldn't that always be 0? i think it would be, Dale. Quote by DaleSpam I have never heard of the term "proper velocity". Wouldn't that always be 0? No. It's momentum/mass, or proper acceleration times proper time. Or, If I travel to a star 10 ly away at 0.8c rel. velocity, I stop at star and divide rest distance by elapsed time on my clock to get (average) proper velocity. In this case, 1.33c is my (average) proper velocity. It's only useful in some cases if it isn't misused. Another way to look at it is, I left earth 7.5 yrs ago, now I'm 10 ly from earth, at rest with earth, 10 ly/7.5 yr is 1.33c. There is plenty of info about it on the net, but it's normally not very relevant. Al Quote by russ_watters You can't escape the fact that you'll always measure your velocity to be below C. True, but you're forgetting that the only time an object is observable is when its traveling at less than c. So the time that I was traveling at c, I wouldn't have been able to be measured, and so yes, at every point at which I could have been observed, it would read less than c. Quote by Fredrik We seem to be back to pulling our own hair. (But internal forces cancel according to Newton's 3rd, as I'm sure you know). 
Very true, but as I said before, it's not an example of what I'm talking about, as I am not focused on making examples at this point. A few ideas could possibly involve light, but let's not get into personal theories. Quote by DaleSpam The big problem here is much more fundamental than the speed limit of c. If I understand you correctly your idea expressed here violates the conservation of momentum, one of the most fundamental laws of the universe. If you want to invent a magical universe where momentum is not conserved then you are certainly free to decide that in your fantasy-land c won't be a limiting speed either. At this point it is just a theoretical case that's completely true. Given that we can "apply a constant force to our self", it would only take a finite amount of energy to accelerate oneself to c. As far as violating the conservation of momentum, I haven't even suggested an example yet, so you can't really assume so. One possible idea to get around this, off the top of my head, is some collision with light in a way where energy is transferred to you, but because the light's rest mass is zero, the negative 'action' that is demanded by the conservation of momentum would cancel out, therefore holding the conservation of momentum. (please do not quote this example) Bolded phrase is my main point of this topic. Quote by epkid08 Given that we can "apply a constant force to our self", it would only take a finite amount of energy to accelerate oneself to c. There's no theoretical limit to how much or how long you can accelerate. You could accelerate at 500 G for 10,000,000 yrs. No problem. Your velocity relative to any other mass in the universe will still be measured to be < c. Al Mentor Quote by epkid08 At this point it is just a theoretical case that's completely true. Given that we can "apply a constant force to our self", it would only take a finite amount of energy to accelerate oneself to c. Sure. As I said above, if you are going to throw one law out the window you may as well throw the rest out too. Just don't kid yourself that you are discussing anything other than fiction. Quote by epkid08 As far as violating the conservation of momentum, I haven't even suggested an example yet, so you can't really assume so. It is not an assumption, if you apply a force to yourself which causes you to accelerate then momentum is not conserved. That is the nice thing about conservation laws: the specific example doesn't matter. Quote by Al68 There is plenty of info about it on the net, but it's normally not very relevant. Interesting. You are right, I looked up the Wikipedia Proper Velocity page. That is a really unfortunate name for this concept, it seems to have nothing to do with the usual things associated with the term "proper". Specifically, it is not invariant, and it is not a property measurable in an object's rest-frame. Thanks Al, it is nice to know that I can learn something even in the silliest threads!
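To put numbers on the points made above (an editorial sketch, not from the thread; units with $c=1$ and an arbitrary proper acceleration are assumed): under constant proper acceleration the velocity measured by an inertial observer saturates below $c$, while the "proper velocity" $\gamma v$ that Al68 describes grows without bound.

```python
from math import sinh, tanh

c = 1.0        # work in units where c = 1
a = 1.0        # constant proper acceleration (arbitrary illustrative value)

for tau in (1.0, 5.0, 10.0, 50.0):     # proper time on the traveller's clock
    v = c * tanh(a * tau / c)          # coordinate velocity: always < c
    w = c * sinh(a * tau / c)          # proper velocity gamma*v: unbounded
    print(tau, v, w)

# Al68's example: 0.8c relative velocity, gamma = 1/0.6, proper velocity ~1.33c
gamma = 1.0 / (1.0 - 0.8**2) ** 0.5
print(gamma * 0.8)
```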
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9608958959579468, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/tensors+terminology
# Tagged Questions

### What should I call an n>4 dimensional Minkowski metric?

I am manipulating an $n\times n$ metric where $n$ is often $> 4$, depending on the model. The $00$ component is always $\tau$ times a constant, as in the Minkowski metric, but the signs on all components might be ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8836790323257446, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4203216
Physics Forums

Question about expanding a square root in powers of gradient

Hi, I have a quick question about making quantum mechanics relativistic by simply replacing the Hamiltonian by a relativistic Hamiltonian. If we write the Hamiltonian operator as $H = \sqrt{P^2c^2 + m^2c^4}$, Schrödinger's equation in the position basis becomes $i\hbar\dot{\psi} = \sqrt{-\hbar^2c^2\nabla^2 + m^2c^4}\,\psi$. If you expand the square root in powers of nabla, you get an infinite number of gradients. I remember reading that an infinite number of spatial gradients acting on psi implies that the theory is non-local (I don't recall where I read this, but it might be in Mark Srednicki's QFT textbook). I don't get the jump of logic in saying that an infinite number of gradient operators implies a non-local theory. I think I've come across similar arguments in other contexts in QFT (I'm sorry, I don't recall specifically which ones). Could someone please explain to me what I am missing here? Thanks!

Well, that just goes back to the definition of a derivative. For a first derivative, you take two points, calculate the difference of the value of the function at those points and divide by the separation of the points. So for the first derivative at a point, you need to consider two points: the point at which you want the derivative and another point in its vicinity. For a second derivative you need one more point, and so on; the number of points increases and the points get farther from the point you want the derivative at. So the higher the order of the derivatives, the more the behaviour at a point depends on farther and farther points, which makes such theories non-local.

Here's a suggestive argument: By Taylor expansion, we can write f(x + a) = f(x) + a f'(x) + (1/2)a^2 f''(x) + (1/6)a^3 f'''(x) + ... where to get equality we need an infinite number of derivatives. The right hand side looks local (it looks like it only refers to the properties of f at the point x) but is actually nonlocal (it actually depends on the properties of f at some distance from x).

Maybe I misunderstood what you meant in your last post, The_Duck, but what about the kinetic term in the Lagrangian, ##\partial_{\mu}\phi \partial^{\mu} \phi##? By the same logic, wouldn't this also be non-local? I am curious about the original question as I have thought about the same thing. I can see how a derivative could be viewed as non-local from its definition (we compare infinitesimally close points etc.), so what is the difference when it is inside a square root -- and why isn't the kinetic term non-local?

Quote by kloptok: Maybe I misunderstood what you meant in your last post The_Duck but what about the kinetic term in the Lagrangian - ##\partial_{\mu}\phi \partial^{\mu} \phi##? By the same logic, wouldn't this also be non-local?
I was using the Taylor expansion as something that has an *infinite* number of derivatives and is therefore nonlocal, as we can see from the fact that this infinite number of derivatives combines to give us the value of the function at a finite distance from the original point x. A standard kinetic term with only two derivatives does not give rise to anything like this.
Quote by kloptok: I can see how a derivative could be viewed as non-local from its definition (we compare infinitesimally close points etc.)
No, comparing points at infinitesimal separations is "local" in the sense discussed here. Comparing points at *finite* separations isn't.

Alright, that convinces me! I suspected that infinitesimal separations would still be considered "local". Thanks!

Thanks Shyan and The_Duck, I understand that being an infinitesimal distance away from a given point isn't considered non-local, which is why the kinetic term isn't non-local. But if we expand the square root of the gradient, why can't we do the Taylor expansion assuming that the displacement from a given point is infinitesimal? So, I guess my point of confusion is: why is it that when there are an *infinite* number of derivatives, the theory is considered non-local, while for a finite number of derivatives, it isn't?

I think even finite order derivatives with order greater than 2 can cause non-locality. But that's just what I think. Is there a proof of it?
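A quick numerical illustration of The_Duck's Taylor-expansion point, as a minimal sketch (the choice of f = sin, the point x and the displacement a are arbitrary): a low-order truncation only uses information about f near x and misses the value of f(x + a) for a finite displacement a, while keeping more and more derivatives at x reconstructs the value a finite distance away, which is the sense in which the full square-root operator is non-local.

```python
import math

def taylor_shift(f_derivs, x, a, order):
    """Approximate f(x + a) using only derivatives of f evaluated at x."""
    return sum(f_derivs[n](x) * a**n / math.factorial(n) for n in range(order + 1))

# f = sin; its n-th derivative cycles through sin, cos, -sin, -cos.
derivs = [math.sin, math.cos,
          lambda t: -math.sin(t), lambda t: -math.cos(t)] * 10

x, a = 0.3, 1.5                      # a is a *finite* displacement from x
exact = math.sin(x + a)
for order in (2, 4, 8, 16):
    approx = taylor_shift(derivs, x, a, order)
    print(f"order {order:2d}: {approx:.10f}   (exact: {exact:.10f})")
```

The order-2 value is far off, while by order 16 the series has converged to sin(x + a) to ten decimal places.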
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317395687103271, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/54316/root-mean-square-speed-of-gas
# Root Mean Square Speed of Gas

The RMS speed of particles in a gas is $v_{rms} = \sqrt{\frac{3RT}{M}}$ where $M$ = molar mass; according to this Wiki entry: http://en.wikipedia.org/wiki/Root-mean-square_speed

1. The gas laws state that $pV = nRT$ where $n$ = the number of moles of gas.
2. Furthermore, the kinetic theory of gases gives the following equation $pV = \frac{1}{3}Nm(v_{rms})^2$ where $N$ = number of particles and $m$ = mass of gas sample.

Combining 1 and 2 gives: $nRT = \frac{1}{3}Nm(v_{rms})^2$ which simplifies to: $v_{rms} = \sqrt{\frac{3nRT}{Nm}}$

As $n = \frac{N}{N_{A}}$: $v_{rms} = \sqrt{\frac{3RT}{N_{A}m}}$

Also $m = Mn = \frac{MN}{N_{A}}$. Therefore, $N_{A}m = MN$. Substituting this in: $v_{rms} = \sqrt{\frac{3RT}{MN}}$

However the RMS equation from Wikipedia contains no $N$ or reference to the number of particles. Why does this happen? -

## 1 Answer

I think you have an error in assumption 2. If $N$ is the number of molecules, then the mass of the sample would be $N$ multiplied by the mass per molecule, not $N$ multiplied by the total mass of the sample. You are kind of "overcounting" mass. If you take $m$ to be the mass per molecule (molecular mass), then I believe it works out. -
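As a quick numerical sanity check of the formula with $m$ taken as the mass per molecule (so that $N_A m = M$), here are illustrative numbers; the choice of nitrogen at room temperature is arbitrary:

```python
import math

R = 8.314      # J / (mol K), gas constant
T = 300.0      # K
M = 0.028      # kg/mol, approximate molar mass of N2

v_rms = math.sqrt(3 * R * T / M)
print(f"v_rms of N2 at {T} K ≈ {v_rms:.0f} m/s")   # ≈ 517 m/s
```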
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8824940919876099, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/309468/inner-product-between-certain-vectors-on-a-simplex
# Inner product between certain vectors on a simplex. For $n\geq 2$, let $\Delta^n$ be a regular $n$-dimensional simplex in $\mathbb{R}^n$ centered at the origin ${\bf 0}$ and inscribed in the unit sphere $S^{n-1}$. Let ${\bf v}_0,{\bf v}_1,\ldots,{\bf v}_n\in S^{n-1}$ be the vertices of $\Delta^n$. It is well known that the angle $\alpha_n$ subtended by any two vertices of $\Delta^n$ through its center (i.e. ${\bf 0}$) is given by $$\alpha_n=\textrm{arc}\cos\Big(-\frac{1}{n}\Big).$$ For $j=0,\ldots,n$, let $\Delta_j^n$ denote the convex hull in $\mathbb{R}^{n}$ of the $n+1$ points ${\bf 0},{\bf v}_0,{\bf v}_1,\dots,{\bf v}_{j-1},\widehat{{\bf v}_j},{\bf v}_{j+1},\ldots,{\bf v}_n$ (where the hat means omission). Thus $\bigcup_{j=0}^n\Delta_j^n=\Delta^n$. Take two nonzero vectors ${\bf x},{\bf y}\in\mathbb{R}^n$ such that ${\bf x},{\bf y}\in\Delta_j^n$ for some $j=0,\ldots,n$. Question 1: Is it true that $$\frac{{\bf x}}{|{\bf x}|}\cdot \frac{{\bf y}}{|{\bf y}|}\geq-\frac{1}{n}?$$ Question 2: If so, does equality occur in the last inequality only if the vectors ${\bf x},{\bf y}$ are multiples of some ${\bf v}_i,{\bf v}_k$ (necessarily for $i\neq j\neq k\neq i$)? Thank you. - ## 1 Answer Yes to both questions. Let's fix $j$ and let $S=\{x/|x|:x\in \Delta_j^n\}$. The minimum in question 1 is equal to $\min\{x\cdot y:x,y\in S\}$. For a fixed $y$ the map $x\mapsto x\cdot y$ is the orthogonal projection onto the line through $y$. Every hyperplane of the form $\{x:x\cdot y=a\}$ meets either both $S$ and $\Delta_j^n$, or neither. (This can be justified by decomposing $x=ay+n$ with $y\cdot n=0$ and then scaling $n$ appropriately.) Therefore, we can consider $\min\{x\cdot y:x,y\in \Delta_j^n\}$ instead. But this is the minimum of a linear function on a polyhedron, so it must be attained at some vertex $v_k$. Since for every $y$ the minimum is realized by $x$ at a vertex, it follows that the minimum over both $x$ and $y$ is attained by a pair of vertices. This answers Q1. Regarding Q2, suppose that the minimum is also attained by some $x',y'$ where $y'$ is not a vertex. We can still assume that $x$ is a vertex $v_k$, by the above. Since $y\mapsto x\cdot y$ also attains the minimum at some vertex $v_\ell$, it follows that $\Delta_j^n$ contains a line segment that is orthogonal to $v_k$ and contains $v_\ell$. This is ruled out by inspection of $\Delta_j^n$. -
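A quick numerical check of the two answers for small $n$, as a sketch (the construction of the simplex vertices and the sampling scheme are one arbitrary choice): random pairs of points of $\Delta_j^n$ should never give a normalized inner product below $-1/n$, and since equality requires a pair of vertices, random interior samples stay strictly above the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                   # dimension of the simplex

# Vertices of a regular n-simplex inscribed in the unit sphere:
# centre the n+1 standard basis vectors of R^{n+1} and normalise.
E = np.eye(n + 1)
V = E - E.mean(axis=0)
V /= np.linalg.norm(V, axis=1, keepdims=True)   # rows are v_0, ..., v_n

j = 0                                   # omit vertex v_j
W = np.delete(V, j, axis=0)             # the n kept vertices (plus the origin 0)

def random_point():
    """Random point of Delta_j^n = conv{0, v_i : i != j}."""
    w = rng.dirichlet(np.ones(n + 1))   # weights for the origin and the n vertices
    return w[1:] @ W                    # weight w[0] goes to the origin

worst = 1.0
for _ in range(20000):
    x, y = random_point(), random_point()
    c = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    worst = min(worst, c)

print(f"smallest cosine found: {worst:.4f}   (bound -1/n = {-1/n:.4f})")
```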
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9520033001899719, "perplexity_flag": "head"}
http://nrich.maths.org/6610/note
### Be Reasonable Prove that sqrt2, sqrt3 and sqrt5 cannot be terms of ANY arithmetic progression. ### Janusz Asked In y = ax +b when are a, -b/a, b in arithmetic progression. The polynomial y = ax^2 + bx + c has roots r1 and r2. Can a, r1, b, r2 and c be in arithmetic progression? ### Summats Clear Find the sum, f(n), of the first n terms of the sequence: 0, 1, 1, 2, 2, 3, 3........p, p, p +1, p + 1,..... Prove that f(a + b) - f(a - b) = ab. # Speedy Summations ### Why do this problem? This problem provides an introduction to summing arithmetic series, and allows students to discover for themselves the formulae used to calculate such sums. By seeing a particular case, students can perceive the structure and see where the general method for summing such series comes from. The problem could be used to introduce $\sum$ notation. ### Possible approach You may wish to show the video, in which Alison works out $\sum_{i=1}^{10} i$ in silence, or you may wish to recreate the video for yourself on the board. Then write up $\sum_{i=1}^{100} i$ and ask students to adapt Alison's method to work it out. Share answers and explanations of how they worked it out. Next, give students the following questions: $2+4+6+\dots+96+98+100$ $\sum_{k=1}^{20} (4k+12)$ $37+42+47+52+\dots+102+107+112$ "Can you adapt the method to work out these three sums? In a while I'm going to give you another question like these and you'll need to be able to work it out efficiently" While students are working, listen out for useful comments that they make about how to work out such sums generally. Then bring the class together to share answers and methods for the questions they have worked on. Make up a few questions like those above, and invite students out to the board to work them out 'on the spot', explaining what they do as they go along. Next, invite students to create a formula from their general thinking: "Imagine a sequence that starts at $a$ and goes up in equal steps to the $n^{th}$ term which is $l$. Can you use what you did with the numerical examples to create a formula for the sum of the series?" Give students time to think and discuss in pairs and then share their suggestions. "What if you were asked to find the sum of the first $n$ terms of the sequence $a, (a+d), (a + 2d), (a + 3d)$ and so on - can you adapt your formula?" Again, allow some time for discussion before bringing the class together to share what they did. Finally, "Can you use your formula to work out after how many terms would $17+21+25+\dots$ be greater than $1000$?" ### Key questions What can you say about the sum of the first and last, and the second and penultimate terms of an arithmetic sequence? How do you know these sums of pairs will always be the same? ### Possible extension Challenge students to find the sum of all the integers less than $1000$ which are not divisible by $2$ or $3$. Summats Clear would make a nice extension challenge for students who have found this problem straightforward. ### Possible support Slick Summing explores the same content as this problem but introduces new ideas more slowly and does not use $\sum$ notation. The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
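Returning to the closing question of the notes ("after how many terms would $17+21+25+\dots$ be greater than $1000$?"), it can be checked directly from the sum formula $S_n=\frac n2\bigl(2a+(n-1)d\bigr)$; a minimal sketch:

```python
def arithmetic_sum(a, d, n):
    """Sum of the first n terms of the sequence a, a+d, a+2d, ..."""
    return n * (2 * a + (n - 1) * d) / 2

a, d = 17, 4
n = 1
while arithmetic_sum(a, d, n) <= 1000:
    n += 1
print(n, arithmetic_sum(a, d, n))   # 19 terms give 1007, the first sum above 1000
```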
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361464381217957, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=946306
Physics Forums

## Eigenblades and the Geometric Algebra of Spinors

I've been looking into Geometric Algebra approaches to linear transformations and have found it to be MUCH nicer than the conventional matrix approaches for certain kinds of transformations. Moreover, I find it much more intuitive, particularly in its way of dealing with complex numbers.

For instance, consider some linear operator $$M$$ from $$R^n$$ to $$R^n$$. If all its eigenvalues are real, it is easy enough to see how it acts on linear subspaces. But how are we to geometrically interpret complex eigenvalues and their corresponding eigenvectors? If $$n=2$$, this is relatively simple. We can treat complex eigenvalues as scalings and rotations on the plane. In fact, we can use the following isomorphism between $$C$$ and the $2 \times 2$ real matrices of the form below (scaled rotation matrices): $$a + i b \longleftrightarrow \left(\begin{array}{cc}a&-b\\b&a\end{array}\right)$$

But the use of eigenvectors with complex-valued coordinates can get quite ugly - especially when dealing with spaces of greater dimension than 2, and especially if we're only interested in how the operator acts on real vectors. However, while we cannot generally identify rotations with real eigenvectors, we can identify rotations with real eigenbivectors, where the eigenbivectors represent plane elements rather than line elements. The corresponding eigenvalues then express a scaling of areas rather than lengths. This then extends naturally to higher dimensions.

Moreover, GA provides a way to express any linear map as a geometric product without the use of any matrices. Certain kinds of maps, such as rotations and reflections, afford extremely simple representations in this way that not only more clearly illustrate the essence of the map but also are much less tedious to work with than matrices.

These ideas are so extremely powerful I'm surprised they are seldom mentioned in the literature. I was wondering if anyone here is familiar with these methods, and even if not, if anyone would be interested in looking further into these methods with me. Thanks!
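A small numerical illustration of the "eigenbivector" idea using plain matrix algebra rather than any geometric-algebra library (the values of a and b are arbitrary): the block above has no real eigenvectors when $b \neq 0$, but the plane element $e_1 \wedge e_2$ is preserved, and the real scale factor attached to it is the area scaling $\det M = a^2 + b^2 = |a+ib|^2$.

```python
import numpy as np

a, b = 1.5, 2.0                          # the complex eigenvalue a + ib
M = np.array([[a, -b],
              [b,  a]])                  # the matrix representing a + ib

# Complex eigenvalues a ± ib: no real eigenvectors when b != 0.
print(np.linalg.eigvals(M))              # [1.5+2.j  1.5-2.j]

# Action on the plane element e1 ^ e2: areas are scaled by det(M) = a^2 + b^2.
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
area_before = np.linalg.det(np.column_stack([u, v]))
area_after = np.linalg.det(np.column_stack([M @ u, M @ v]))
print(area_after / area_before, a**2 + b**2)     # both 6.25 = |a + ib|^2
```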
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9011543989181519, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/31354/theorems-in-euclidean-geometry-with-attractive-proofs-using-more-advanced-methods/31404
## Theorems in Euclidean geometry with attractive proofs using more advanced methods

The butterfly theorem is notoriously tricky to prove using only "high-school geometry" but it can be proved elegantly once you think in terms of projective geometry, as explained in Ruelle's book The Mathematician's Brain or Shifman's book You Failed Your Math Test, Comrade Einstein. Are there other good examples of simply stated theorems in Euclidean geometry that have surprising, elegant proofs using more advanced concepts? Such examples are valuable pedagogically since they illustrate the power of the advanced methods. -

Every theorem from classical geometry has a short and elegant proof for "high-school students". But one can find very difficult proofs which use algebraic topology or category theory. – akopyan Jul 11 2010 at 6:03

## 11 Answers

Somebody has this as a hobby. -

Another post where he does the same thing: sbseminar.wordpress.com/2007/07/28/… – Qiaochu Yuan Jul 11 2010 at 5:11

A nice example is Pascal's theorem for the circle: If a hexagon is inscribed in a circle then the intersections of opposite sides are collinear. Plücker gave an elegant proof of Pascal's theorem as a consequence of Bézout's theorem. View the unions of alternate sides of the hexagon as cubic curves $l_{135}=0$ and $l_{246}=0$. They meet in 9 points, 6 of which are the vertices on the circle $c=0$. But we can choose constants $a,b$ so that the cubic $al_{135}+bl_{246}=0$ passes through any point. Taking this point on the circle, the circle and the cubic have at least 7 points in common. By Bézout's theorem, the curves have a common component, necessarily the circle $c=0$, since $c$ is irreducible. Hence $al_{135}+bl_{246}=cp$, for some polynomial $p$, which must be linear. Since $al_{135}+bl_{246}=0$ contains all 9 points common to $l_{135}=0$ and $l_{246}=0$, while $c=0$ contains only 6, the remaining 3 (intersections of opposite sides of the hexagon) must lie on the line $p=0$. -

Yes. Actually, Pascal's theorem (and Pappus' too) is a special case of the associativity law for a (degenerate) elliptic curve. – Fedor Petrov Jul 11 2010 at 7:35

This is one of my favorites! Cf mathoverflow.net/questions/24913/… – Victor Protsak Jul 11 2010 at 7:44

What about the proof of the isoperimetric inequality using Fourier analysis? -

1 The isoperimetric inequality hardly is more "Euclidean geometry" than Fourier analysis... – darij grinberg Feb 1 2012 at 18:31

@darij grinberg: This answer may not satisfy the letter of the question, but I think it very much satisfies the spirit. The isoperimetric inequality is a classic result dating back to the fourth century, if not earlier (mathdl.maa.org/mathDL/46/…), while Fourier analysis is a 19th-century method based on very modern sensibilities (function spaces, new notions of convergence and integration, etc.). – Vectornaut Feb 1 2012 at 21:01

It does not satisfy the spirit either, even though the isoperimetric inequality appeared in the fourth century. The problem is that every formalization of the isoperimetric inequality already requires most of the advanced tools needed for its proof. Otherwise, the Jordan curve theorem would be an equally good example.
– darij grinberg Feb 2 2012 at 11:55

I like the following result which plays a role in Marden's Theorem: Given any triangle, there is an inscribed ellipse which meets the midpoints of all three edges. Proving this directly is rather difficult (in fact, I'm not sure how to), but it is very easy to do if you know anything about linear transformations. -

Linear transformations also give a nice proof of en.wikipedia.org/wiki/Steiner_chain – Gjergji Zaimi Jul 11 2010 at 5:12

Gjergji, the Steiner chain deserves its own entry, but the key issue is that the transformation be $\textit{conformal},$ so that it preserves circles, and not merely linear. – Victor Protsak Jul 11 2010 at 7:46

Yes, sorry, I meant fractional linear :) – Gjergji Zaimi Jul 12 2010 at 20:27

Generalization: Let $ABC$ be a triangle. Let $X$ and $X'$ be points on the line $BC$. Let $Y$ and $Y'$ be points on the line $CA$. Let $Z$ and $Z'$ be points on the line $AB$. Then, the points $X$, $X'$, $Y$, $Y'$, $Z$, $Z'$ lie on one conic (possibly degenerate) if and only if $\frac{BX}{XC}\cdot\frac{BX'}{X'C}\cdot\frac{CY}{YA}\cdot\frac{CY'}{Y'A}\cdot\frac{AZ}{ZB}\cdot\frac{AZ'}{Z'B}=1$, where the segments are directed. This is easy to prove using Pascal and Menelaos. When two points like $X$ and $X'$ coincide, a conic passing through $X$ and $X'$ has to be understood as a conic ... – darij grinberg Feb 1 2012 at 18:36

... touching the line $BC$ at $X$, and so on, and thus you obtain as a particular case the following fact: Let $X$, $Y$, $Z$ be points on the sidelines $BC$, $CA$, $AB$ of a triangle $ABC$. Then, there exists a conic touching $BC$, $CA$, $AB$ at $X$, $Y$, $Z$ if and only if either the lines $AX$, $BY$, $CZ$ concur or the points $X$, $Y$, $Z$ are collinear (in which case it is a degenerate conic). This uses Ceva and Menelaos. Not to say this is simpler than the affine-transformation proof, but in my eyes it shows that the problem has nothing to do with midpoints of edges. – darij grinberg Feb 1 2012 at 18:38

A very nice example is given by the Villarceau circles: a revolution torus is cut by a bitangent hyperplane along the union of two circles. You can of course make the computation, but when you know some projective algebraic geometry you can prove it in a few words. Roughly: 1. The revolution torus has an algebraic equation of degree four, so that it intersects any plane along a degree four curve. 2. If the plane is bitangent, then this curve has two double points so that it must be the union of two ellipses. 3. It is easily checked that in the complex world, the torus as well as the plane contain the circular points at infinity, so that in fact the ellipses are circles. -

Take a triangle with a circle $\Gamma_0$ tangent to two of three sides (you may also think that the sides of the triangle are made out of circle arcs). Construct a chain of circles $\Gamma_1,\Gamma_2,\dots$ in such a way that $\Gamma_{n+1}\not=\Gamma_{n-1}$ is tangent to $\Gamma_n$ and to two of the sides of the triangle. Prove that $$\Gamma_6=\Gamma_0.$$ I do not know the proof, but I was told that it is hard to do without knowing elliptic functions. P.S. I do not know the references --- please feel free to add them :) -

1 This problem is from the IMO Shortlist. You can find a solution for high-school students in Prasolov's book. But this problem is not so easy in hyperbolic geometry.
– akopyan Jul 11 2010 at 17:19

1 Just a small note: your description is missing the constraint that the two sides of the triangle have to change - otherwise you could just take for the $\Gamma_n$ a Pappus chain converging towards one of the triangle's vertices and tangent to the two sides incident on that vertex. A fuller description of the six-circles theorem (with a couple of references to elementary-looking proofs) is at mathworld.wolfram.com/SixCirclesTheorem.html . – Steven Stadnicki Jul 12 2010 at 20:11

The elementary proof of Morley's theorem is rather incomprehensible compared to the non-elementary ones. -

There is a recent, simple and elementary proof in a recent volume of Elemente der Mathematik but I do not remember the name of its author. You start with an equilateral triangle and show in a few lines that you can realise it as the Morley triangle of a triangle with arbitrary angles. – Benoît Kloeckner Jul 11 2010 at 12:56

That sounds like a proof I read 25 years ago in a book from the 1960s. Maybe it got shortened? – Victor Protsak Jul 11 2010 at 13:05

5 Oh thanks for reminding me how Conway managed to essentially stop all graduate research for a few days by telling us about the theorem and that he has a simple proof which he won't tell us. – Willie Wong Jul 11 2010 at 13:32

LOL. Please don't cite Zeilberger's troll as an example of an elementary proof. – darij grinberg Feb 1 2012 at 18:43

The "most elementary theorem of Euclidean geometry" as proved in http://www.math.psu.edu/tabachni/prints/grid.pdf (p.9). The Poncelet's Porism in Gjergji Zaimi's answer is also proved there. -

(Edited after Willie's comment) Many theorems about a triangle can be proved easily using the fact that all triangles are linearly equivalent. One example is that the segments that join each vertex to the midpoint of the opposite side intersect in a single point. (This actually appeared on a written qualifying exam I took as a graduate student, and I did in fact use the linear algebra approach, since there was no chance I could recall the Euclidean geometric proof.) -

Deane, I am a bit confused: I don't think angle bisection is preserved under linear transformations. Did I misunderstand what you mean by "triangles being linearly equivalent"? – Willie Wong Jul 11 2010 at 14:13

Ouch. Willie, you appear to be right. It's only the theorem about segments joining the vertices to the opposite midpoint that works using this approach. – Deane Yang Jul 11 2010 at 14:41

Let's see. Most of the results of projective geometry are much harder to show by more elementary methods. I think the reason for this is that they rest more on the incidence axioms while the elementary methods play more with the metric properties. An interesting question that arises for any of these examples is to detect why it is the case that the elementary proofs are harder. A classical example is the constructibility of regular polygons with ruler and compass. -

The plural of the noun "proof" is "proofs". – Victor Protsak Jul 12 2010 at 18:41

Fixed, thank you. – Franklin Jul 12 2010 at 18:58

The classification of conics would be an example, but I don't know if you count matrix reduction as a "more advanced concept". -
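Pascal's theorem from Plücker's answer above is also easy to check numerically, as a sketch (the six angles on the circle are an arbitrary choice): intersect the three pairs of opposite sides in homogeneous coordinates and verify that the three intersection points are collinear.

```python
import numpy as np

# Six points on the unit circle (arbitrary angles), in homogeneous coordinates.
angles = [0.3, 1.1, 1.9, 2.8, 4.0, 5.3]
P = [np.array([np.cos(t), np.sin(t), 1.0]) for t in angles]

join = np.cross          # line through two points = cross product of their coordinates
meet = np.cross          # intersection of two lines = cross product of the lines

# Intersections of the three pairs of opposite sides of the hexagon P0...P5.
q = [meet(join(P[0], P[1]), join(P[3], P[4])),
     meet(join(P[1], P[2]), join(P[4], P[5])),
     meet(join(P[2], P[3]), join(P[5], P[0]))]
q = [v / np.linalg.norm(v) for v in q]   # normalise to tame the determinant

# Collinear iff the determinant of the homogeneous coordinates vanishes.
print(np.linalg.det(np.vstack(q)))       # ~1e-16, i.e. zero up to rounding
```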
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293245673179626, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/116425/the-class-of-all-classes-not-containing-themselves
# The class of all classes not containing themselves

In ZF, classes are used informally to resolve Russell's Paradox; that is, the collection of all sets that do not contain themselves does not form a set but a proper class. But doesn't the same paradox manifest itself when discussing the class of all classes that do not contain themselves? -

## 1 Answer

Classes in ZF are merely collections defined by a formula, that is $A=\{x\mid \varphi(x)\}$ for some formula $\varphi$. It is obvious from this that every set is a class. However proper classes are not sets (as that would induce paradoxes). This means, in turn, that classes are not elements of other classes. Thus discussion of "the class of all classes that do not contain themselves" is essentially talking about sets again, which we already resolved. Of course if you allow classes, and allow classes of classes (also known as hyper-classes or 2-classes), then the same logic applies: you have another level of a collection which you can define but which is not an object of your universe. -

Once you allow the notion of 2-classes, then I assume you can define n-classes for any n a natural number=finite ordinal. Does this mean this construction can be carried through at limit ordinals? – Mozibur Ullah Mar 4 '12 at 20:05

@Mozibur: I don't really know. I suppose you can, simply by saying that $\omega$-classes are classes whose elements are $n$-classes for unbounded $n$. Then you'll have the problem in $\omega+1$-classes. The problem is that even if you allow classes for every $\alpha$ then you still get stuck with objects which are definable by recursion for every ordinal. Be forewarned that what is said in this comment might just as well be a load of manure. I'll see my advisor tomorrow and ask him, then I'll have a better answer to give you here about this question. – Asaf Karagila Mar 4 '12 at 20:10

@Ilmari: I'd think there are a lot of fine points to the New Foundations theory which $\alpha$-classes do not necessarily agree upon. In fact, it would seem to me that NF is "the other way around", in that it resolves the paradoxes by allowing only "uncomplicated formulas" to define classes (and thus sets). However, I don't know a lot about NF so I cannot really answer that. – Asaf Karagila Mar 4 '12 at 20:33

1 @Mozibur: Indeed, which is why in set theory you cannot really have the two. In the NBG set theory you can have classes, but not 2-classes, and you cannot have collections of classes (unless those were sets to begin with). – Asaf Karagila Mar 4 '12 at 21:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9642749428749084, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Network_controllability
# Network controllability

Controlling a simple network.

Network controllability is concerned with the structural controllability of a network. Controllability describes our ability to guide a dynamical system from any initial state to any desired final state in finite time, with a suitable choice of inputs. This definition agrees well with our intuitive notion of control. The controllability of general directed and weighted complex networks has recently been the subject of intense study by a number of groups, worldwide.

## Background

Consider the canonical linear time-invariant dynamics on a complex network $\dot{\mathbf{X}}(t) = \mathbf{A} \cdot \mathbf{X}(t) + \mathbf{B}\cdot \mathbf{u}(t)$ where the vector $\mathbf{X}(t)=(x_1(t),\cdots,x_N(t))^\mathrm{T}$ captures the state of a system of $N$ nodes at time $t$. The $N \times N$ matrix $\mathbf{A}$ describes the system's wiring diagram and the interaction strength between the components. The $N \times M$ matrix $\mathbf{B}$ identifies the nodes controlled by an outside controller. The system is controlled through the time-dependent input vector $\mathbf{u}(t) = (u_1(t),\cdots,u_M(t))^\mathrm{T}$ that the controller imposes on the system. To identify the minimum number of driver nodes, denoted by $N_\mathrm{D}$, whose control is sufficient to fully control the system's dynamics, Liu et al.[1] attempted to combine the tools from structural control theory, graph theory and statistical physics. They showed[1] that the minimum number of inputs or driver nodes needed to maintain full control of the network is determined by the "maximum matching" in the network, that is, the maximum set of links that do not share start or end nodes. They tried[1] to develop an analytical framework, based on the in-out degree distribution, which predicts $n_\mathrm{D} =N_\mathrm{D}/N$ for scale-free and Erdős–Rényi graphs. It is, however, notable that their formulation[1] would predict the same value of ${n_\mathrm{D}}$ for a chain graph and for a weak densely connected graph. Obviously, both these graphs have very different in and out degree distributions. A recent unpublished work[2] questions whether degree, which is a purely local measure in networks, would completely describe controllability and whether even slightly distant nodes would have no role in deciding network controllability. Indeed, for many real-world networks, namely food webs, neuronal and metabolic networks, the mismatch in values of ${n_\mathrm{D}}^{real}$ and ${n_\mathrm{D}}^\mathrm{rand\_degree}$ calculated by Liu et al.[1] is notable. If controllability is decided mainly by degree, why are ${n_\mathrm{D}}^{real}$ and ${n_\mathrm{D}}^\mathrm{rand\_degree}$ so different for many real-world networks? They argued[1] (arXiv:1203.5161v1) that this might be due to the effect of degree correlations. However, it has been shown[2] that network controllability can be altered only by using betweenness centrality and closeness centrality, without using degree (graph theory) or degree correlations at all.

A schematic diagram shows the control of a directed network. For a given directed network (Fig. a), one calculates its maximum matching: a largest set of edges without common heads or tails. The maximum matching will consist of a set of vertex-disjoint directed paths and directed cycles (see red edges in Fig. b). If a node is a head of a matching edge, then this node is matched (green nodes in Fig. b). Otherwise, it is unmatched (white nodes in Fig. b).
Those unmatched nodes are the nodes one needs to control, i.e. the driver nodes. By injecting signals into those driver nodes, one gets a set of directed paths with starting points being the inputs (see Fig. c). Those paths are called "stems". The resulting digraph is called a U-rooted factorial connection. By "grafting" the directed cycles onto those "stems", one gets "buds". The resulting digraph is called the cacti (see Fig. d). According to the structural controllability theorem,[3] since there is a cacti structure spanning the controlled network (see Fig. e), the system is controllable. The cacti structure (Fig. d) underlying the controlled network (Fig. e) is the "skeleton" for maintaining controllability.

### Structural Controllability

The concept of the structural properties was first introduced by Lin (1974)[3] and then extended by Shields and Pearson (1976)[4] and alternatively derived by Glover and Silverman (1976).[5] The main question is whether the lack of controllability or observability is generic with respect to the variable system parameters. In the framework of structural control the system parameters are either independent free variables or fixed zeros. This is consistent for models of physical systems since parameter values are never known exactly, with the exception of zero values which express the absence of interactions or connections.

### Maximum Matching

In graph theory, a matching is a set of edges without common vertices. Liu et al.[1] extended this definition to directed graphs, where a matching is a set of directed edges that do not share start or end vertices. It is easy to check that a matching of a directed graph consists of a set of vertex-disjoint simple paths and cycles. The maximum matching of a directed network can be efficiently calculated by working in the bipartite representation using the classical Hopcroft–Karp algorithm, which runs in O(E√N) time in the worst case; a sketch of this recipe is given below. For undirected graphs, analytical solutions for the size and number of maximum matchings have been studied using the cavity method developed in statistical physics.[6] Liu et al.[1] extended the calculations to directed graphs. By calculating the maximum matchings of a wide range of real networks, Liu et al.[1] asserted that the number of driver nodes is determined mainly by the network's degree distribution $P(k_\mathrm{in}, k_\mathrm{out})$. They also calculated the average number of driver nodes for a network ensemble with arbitrary degree distribution using the cavity method. It is interesting that for a chain graph and a weak densely connected graph, both of which have very different in and out degree distributions, the formulation of Liu et al.[1] would predict the same value of ${n_\mathrm{D}}$. Also, for many real-world networks, namely food webs, neuronal and metabolic networks, the mismatch in values of ${n_\mathrm{D}}^{real}$ and ${n_\mathrm{D}}^\mathrm{rand\_degree}$ calculated by Liu et al.[1] is notable. If controllability is decided purely by degree, why are ${n_\mathrm{D}}^{real}$ and ${n_\mathrm{D}}^\mathrm{rand\_degree}$ so different for many real-world networks? It remains open to scrutiny whether "control robustness" in networks is influenced more by betweenness centrality and closeness centrality[2] than by degree (graph theory) based metrics.
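A minimal sketch of that recipe, assuming networkx and its bipartite Hopcroft–Karp implementation (the example digraph is arbitrary): split every node x into an out-copy and an in-copy, turn each directed edge x→y into an undirected edge between x's out-copy and y's in-copy, and read the driver nodes off as the nodes whose in-copy is left unmatched by a maximum matching.

```python
import networkx as nx
from networkx.algorithms import bipartite

edges = [(1, 2), (2, 3), (3, 4), (2, 5)]        # toy directed network
nodes = {u for e in edges for u in e}

# Bipartite representation: out-copies ('+', x) on one side, in-copies ('-', x) on the other.
B = nx.Graph()
B.add_nodes_from((('+', x) for x in nodes), bipartite=0)
B.add_nodes_from((('-', x) for x in nodes), bipartite=1)
B.add_edges_from((('+', u), ('-', v)) for u, v in edges)

top = [n for n, d in B.nodes(data=True) if d['bipartite'] == 0]
matching = bipartite.hopcroft_karp_matching(B, top_nodes=top)

# A node is 'matched' if its in-copy is an endpoint of a matching edge.
matched = {x for (side, x) in matching if side == '-'}
drivers = nodes - matched
print("driver nodes:", drivers, " N_D =", len(drivers))   # N_D = 2 for this toy graph
```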
While sparser graphs are more difficult to control,[1][2] it would obviously be interesting to find whether betweenness centrality and closeness centrality[2] or degree heterogeneity[1] plays a more important role in deciding controllability of sparse graphs with almost similar degree distributions.

## Future Directions

This recent immense activity inspires hope of new breakthroughs in structural controllability of complex networks.

## References

1. Y.-Y. Liu, J.-J. Slotine, A.-L. Barabási, Nature 473 (2011).
2. S. J. Banerjee and S. Roy, arXiv:1209.3737.
3. C.-T. Lin, IEEE Trans. Auto. Contr. 19 (1974).
4. R. W. Shields and J. B. Pearson, IEEE Trans. Auto. Contr. 21 (1976).
5. K. Glover and L. M. Silverman, IEEE Trans. Auto. Contr. 21 (1976).
6. L. Zdeborova and M. Mezard, J. Stat. Mech. 05 (2006).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9310041666030884, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/53345?sort=oldest
## Cohomological dimension of a group, fibration and local coefficients

Hello, I want to show that the cohomological dimension (say over Z or R) of some group $K$ is 1. $K$ occurs in an exact sequence $1 \to K \to \pi_1(X) \to \pi_1(C) \to 1$, where $\pi_1(X)$ has cohomological dimension 3 (in the same coefficients) and $C$ is a curve of genus greater than 2. So I want a kind of additivity, but this is not true in general. If I look at the associated fibration $BK \to B\pi_1(X) \to C$ and use the Leray-Serre spectral sequence, I have some information on the cohomology of $BK$ and in fact can solve the problem if I assume that the action of the fundamental group of $B$ on the cohomology of the fiber is trivial. But I'm not familiar with cohomology with local coefficients and don't manage to show the general case. Can someone help me? (Or solve this problem more directly? Or is this false in general?)

mister_jones -

An obvious restatement of your question would be: does every epimorphism $G\to S$ from a group of cohomological dimension 3 to a (non-abelian) surface group have free kernel? – Mark Grant Jan 26 2011 at 16:21

Yes, and in fact this is my original problem, where G is a Kähler group and the epimorphism is induced by the Albanese map (G has 1-dimensional Albanese image). – mister_jones Jan 26 2011 at 16:35

Is arxiv.org/abs/0709.4350 relevant? – Mark Grant Jan 26 2011 at 16:56

I don't think so, because in a way I try to prove something stronger. In fact we can adapt the proof in this article to show that if the cohomology of G satisfies 3-dimensional Poincaré duality, then we have a contradiction. What I want to prove is that there is no Kähler group of cohomological dimension one, without assumptions of Poincaré duality. – mister_jones Jan 26 2011 at 18:02

I won't claim it's false, but it is not obvious that it should be true, Mr J. If the (outer) action of $\pi_1(C)$ on $K$ is sufficiently complicated, then it's conceivable that $H^j(K,M)\not= 0$ for $j>1$ but that $H^i(\pi_1(C), H^j(K,M))=0$ (so that it dies in Hochschild-Serre). – Donu Arapura Jan 26 2011 at 19:05

## 1 Answer

It is false. The spectral sequence shows that the cohomological dimension of group extensions is subadditive. It is not additive in general, as every group is resolved by free groups, e.g., $F_\infty\to F_3\to \mathbb Z^3$. For your hypotheses, let $G=A*B$ be the free product of a three-dimensional group $A$, say $\mathbb Z^3$, and a surface group $B$. The dimension of the free product is the maximum of the dimensions of the factors, so 3. There is a natural map $G\to B$ that is the identity on $B$ and trivial on $A$. The kernel is 3-dimensional because it contains $A$, which is 3-dimensional. -

In the OP's case of interest, the group has one end. Are there counterexamples there? – Richard Kent Jan 26 2011 at 20:18

1 A one-ended example: $\mathbb Z\times(B*B)\to B$. – Ben Wieland Jan 26 2011 at 20:56

Oh, right. Thanks. – Richard Kent Jan 26 2011 at 20:58
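For completeness, the subadditivity invoked at the start of the answer is the standard consequence of the Lyndon–Hochschild–Serre spectral sequence: for an extension $1\to K\to G\to Q\to 1$ and any $G$-module $M$,
$$E_2^{p,q}=H^p\bigl(Q,\,H^q(K,M)\bigr)\;\Longrightarrow\;H^{p+q}(G,M),$$
and $E_2^{p,q}=0$ whenever $p>\mathrm{cd}\,Q$ or $q>\mathrm{cd}\,K$, so $H^n(G,M)=0$ for all $n>\mathrm{cd}\,K+\mathrm{cd}\,Q$, i.e. $\mathrm{cd}\,G\le \mathrm{cd}\,K+\mathrm{cd}\,Q$. The examples in the answer show that the inequality can be strict.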
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9101333618164062, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/112124-circle.html
# Thread:

1. ## Circle

You have: $OA=\sqrt{50}$, $AB=6$, $BC=2$ (the angle at $B$ is a right angle, as in the attached figure). Calculate: $OB$.

2. Step 1. By Pythagoras' theorem, $AC = \sqrt{40}$. Let M be the midpoint of AC. Then $AM = \sqrt{10}$.

Step 2. OM is perpendicular to AC. By Pythagoras' theorem in the triangle OAM, $OM=\sqrt{40}$.

Step 3. Write $\alpha$ for the angle OAB, and $\beta$ for the angle BAC. From triangle OAM, $\cos(\alpha+\beta) = 1/\sqrt5$ and $\sin(\alpha+\beta) = 2/\sqrt5$. From triangle ABC, $\cos\beta = 3/\sqrt{10}$ and $\sin\beta = 1/\sqrt{10}$.

Step 4. Therefore $\cos\alpha = \cos\bigl((\alpha+\beta) - \beta\bigr) = 1/\sqrt2$ (using the trig formula for the cosine of the difference of two angles).

Step 5. Now apply the cosine rule in triangle OAB to find $OB = \sqrt{26}$.
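A coordinate sanity check of the value $OB=\sqrt{26}$, as a sketch: the placement of the points below is an arbitrary convenient choice, and the right angle at $B$ is an assumption inferred from the solution's use of Pythagoras in triangle ABC.

```python
import numpy as np

# Place the right angle at B: A above B, C to the right of B.
B = np.array([0.0, 0.0])
A = np.array([0.0, 6.0])      # AB = 6
C = np.array([2.0, 0.0])      # BC = 2

M = (A + C) / 2                                   # midpoint of AC
d = (C - A) / np.linalg.norm(C - A)
perp = np.array([-d[1], d[0]])                    # unit normal to AC

# The centre O lies on the perpendicular bisector of AC with OA^2 = AM^2 + OM^2 = 50.
OM = np.sqrt(50 - np.dot(A - M, A - M))
for O in (M + OM * perp, M - OM * perp):
    print("OA =", np.linalg.norm(O - A), " OB =", np.linalg.norm(O - B))
# The centre consistent with the figure gives OB = sqrt(26) ≈ 5.099.
```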
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7813112735748291, "perplexity_flag": "middle"}
http://openwetware.org/index.php?title=User:Pranav_Rathi/Notebook/OT/2010/08/18/CrystaLaser_specifications&diff=654965&oldid=654962
# User:Pranav Rathi/Notebook/OT/2010/08/18/CrystaLaser specifications

### From OpenWetWare
## Specifications

We are expecting our laser any time. To get to know the laser better, we will investigate a number of things. These specifications are already given by the maker, but we will verify them.

### Polarization

The laser is TM (transverse magnetic), i.e. P or horizontally linearly polarized (in the specimen plane the laser is still TM polarized, when looking into the sample plane from the front of the microscope). We investigated this in two ways: 1) by putting a glass interface at Brewster's angle and measuring the reflected and transmitted power; at this angle all the light is transmitted because the laser is P-polarized; 2) by putting in a polarizing beam splitter which uses birefringence to separate the two polarizations; P is reflected and S is transmitted, and by measuring and comparing the powers, the desired polarizability is determined. We performed the experiment at 1.8 W, where P is 1.77 W and S is less than .03 W*

### Beam waist at the output window

We used the knife edge method (this method is used to determine the beam waist (not the beam diameter) directly); we measured the input power of 1.86 W at the 86.5 and 13.5 % points at the laser head (15 mm). It gave us a beam waist (Wo) of .82 mm (beam diameter = 1.64 mm).

### Possible power fluctuations if any

The power supply temperature is really critical. The laser starts at roughly 1.8 W, but if the temperature of the power supply is controlled very well it reaches 2 W in a few minutes and stays there. It's really stupid of the manufacturer that they do not have any fans inside, so we put two chopper fans on top of it to cool it and keep it cool. If no fans are used then within an hour the power supply reaches above 50 degrees Celsius, and then not only does the laser output fall but the power supply also turns itself off every few minutes.

### Mode Profile

Higher order modes had been a serious problem in our old laser, which compelled us to buy this one. The success of our experiments depends on the requirement of a TEM00 profile; the efficiency of the trap and its stiffness are functions of the profile. So mode profiling is critical; we want our laser to be in TEM00. I am not going to discuss the technique of mode profiling; it can be learned from this link: [1] [2]. As a result it's confirmed that this laser is TEM00 mode. Check out the pics: A LabVIEW program is written to show a 3D Gaussian profile; it also contains a MATLAB code[3].

## Specs by the Manufacturer

All the laser specs and the manual are in the document: [Specs[4]]

## Beam Profile

The original beam waist of the laser is .2 mm, but since we requested the 4x beam expansion option, the resultant beam waist is .84 mm at the output aperture of the laser.
As is the nature of a Gaussian beam, it still converges in the far field, but we do not know where; so there is a beam waist somewhere in the far field. There are two ways to solve the problem. One is to use the Gaussian formula, but for that we need the beam parameters before the expansion optics and information about the expansion optics, which we do not have. So the only way left is to experimentally measure the beam size along the z-axis at many points and find where it is a minimum. Once this is found we put the AOM there. So the experimental data gives us the beam waist and its distance from the laser in the z-direction. We use the scanning knife-edge method to measure the beam waist.

### Method

• In this method we used a knife blade on a translation stage with 10 micron accuracy. The blade is moved transverse to the beam and the power of the uneclipsed portion is recorded with a power meter. The cross section of a Gaussian beam is given by:

$I(r)=I_0 \exp\left(\frac {-2r^2}{w_L^2}\right)$

where I(r) is the intensity as a function of radius (distance in the transverse direction), I0 is the intensity at r = 0, and wL is the beam radius. Here the beam radius is defined as the radius where the intensity is reduced to 1/e2 of the value at r = 0. This can be seen by letting r = wL.

The experimental data is obtained by gradually moving the blade across from point A to B and recording the power. Without going into the math, the transmitted power at the two points can be written down. For starting point A

$\mathbf{P_A}=P_0\,(1-e^{-2})=P_0 \cdot .865$

For stopping point B

$\mathbf{P_B}=P_0\, e^{-2}=P_0 \cdot (1-.865)$

By measuring the distance between these two knife positions the beam waist is obtained, and the beam diameter is just twice it:

$\mathbf{\omega_0}=r_{.135}-r_{.865}$

This is the method we used below.

• The beam waist can also be measured the same way in terms of the power. The power transmitted past a partially occluding knife edge is:

$\mathbf {p(r)}=\frac{P_0}{\omega_0} \sqrt{\frac{2}{\pi}} \int\limits_r^\infty \exp\left(-\frac{2r'^2}{\omega_0^2}\right) dr'$

After integrating, the transmitted power is:

$\mathbf {p(r)}=\frac{P_0}{2}\,\mathrm{erfc}\left(2^{1/2}\,\frac{r}{\omega_0}\right)$

Now the positions where the transmitted power is 10% and 90% are measured and substituted here:

$\mathbf{\omega_0}=.783(r_{.1} - r_{.9})$

The difference between the methods is that the first method gives a value a little higher than the second (power) method, but the difference is still under 13%. So either method is GOOD, but the second is more accurate. Here is a link to a LabView code to calculate the beam waist with the knife-edge method[5].

#### Data

We measured the beam waist every 12.5, 15 and 25 mm, over a range of 2000 mm from the output aperture of the laser head. The measurement is minimum at 612.5 mm from the laser; thus the beam waist is at 612.5±12.5 mm from the laser, and it is found to be 1.26±.1 mm.

#### Analysis

Here the plot of beam diameter vs. Z is presented. Experimental data is presented as blue and the model is red. As can be seen, the model does not fit the data. The experimental beam expands much faster than the model; this proves that the beam waist before the expansion optics must be relatively smaller. We are also missing an important characterization parameter. Real-world lasers work differently, in that their beams do not follow the regular Gaussian formula for large propagation lengths (more than the Rayleigh range). So that's why we will have to introduce a beam propagation factor called M2.

[[Image:Beamwaistexp.png|700x600px|Diameter Vs Z]]
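As a supplement to the Method section above, here is a minimal sketch (assuming Python with NumPy/SciPy; the positions and powers are made-up illustration data, not our measurements) of turning knife-edge readings into a beam-radius estimate, both by the 10%/90% clip rule and by fitting the erfc profile:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

# Transmitted power past a knife edge at position r for a Gaussian beam:
# p(r) = (P0 / 2) * erfc(sqrt(2) * (r - r0) / w), with w the 1/e^2 radius.
def knife_edge_power(r, P0, r0, w):
    return 0.5 * P0 * erfc(np.sqrt(2.0) * (r - r0) / w)

# Illustrative "measurements": knife position (mm) vs. transmitted power (W).
positions = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])   # mm
powers = knife_edge_power(positions, 1.86, 0.0, 0.82)           # fake data

# Quick estimate from the 10% / 90% clip positions: w = 0.783 * (r_10 - r_90).
r_fine = np.linspace(positions.min(), positions.max(), 10001)
p_fine = np.interp(r_fine, positions, powers)
P0 = powers.max()
r90 = r_fine[np.argmin(np.abs(p_fine - 0.9 * P0))]
r10 = r_fine[np.argmin(np.abs(p_fine - 0.1 * P0))]
w_clip = 0.783 * (r10 - r90)

# More accurate estimate: least-squares fit of the erfc profile to all points.
(P0_fit, r0_fit, w_fit), _ = curve_fit(knife_edge_power, positions, powers,
                                       p0=[powers.max(), 0.0, 1.0])

print(f"10/90 clip estimate: w = {w_clip:.2f} mm")   # reads slightly high
print(f"erfc-fit estimate:   w = {w_fit:.2f} mm")    # recovers 0.82 mm
```

The clip estimate coming out a little above the fitted value is consistent with the "under 13%" difference between the two methods noted above.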
##### M2

The beam propagation factor M2 was specifically introduced to enable accurate calculation of the properties of laser beams which depart from the theoretically perfect TEM00 beam. This is important because it is quite literally impossible to construct a real-world laser that achieves this theoretically ideal performance level. M2 is defined as the ratio of a beam's actual divergence to the divergence of an ideal, diffraction-limited, Gaussian, TEM00 beam having the same waist size and location. Specifically, the beam divergence for an ideal, diffraction-limited beam is given by:

$\theta_{th}=\frac{\lambda}{\pi w_0}$ (the theoretical half-divergence angle in radians)

$\theta_{ac}=M^2\frac{\lambda}{\pi w_0}$ (the actual half-divergence angle in radians)

where λ is the laser wavelength, w0 is the beam waist radius (at the 1/e2 point), and M2 is the beam propagation factor. This definition of M2 allows us to make a simple change to the optical formulas, multiplying by the M2 factor to account for the actual beam divergence. This is the reason why M2 is also sometimes referred to as the "times diffraction limit number". More information about M2 is available in these links: [6][7]
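A small worked example (a Python sketch; λ = 1064 nm is assumed for this laser, and the measured divergence value is a hypothetical placeholder) of how M2 follows from the two divergence formulas above:

```python
import math

wavelength = 1.064e-6   # m; 1064 nm assumed for this laser
w0 = 1.26e-3            # m; illustrative waist radius (the value quoted in the Data section)

# Ideal (diffraction-limited) half divergence of a TEM00 Gaussian beam.
theta_th = wavelength / (math.pi * w0)

# Hypothetical measured far-field half divergence from the diameter-vs-z slope.
theta_ac = 3.5e-4       # rad; placeholder number, not a measured value

M2 = theta_ac / theta_th
print(f"theoretical divergence = {theta_th * 1e3:.3f} mrad")
print(f"M^2 = {M2:.2f}")
```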
http://mathhelpforum.com/calculus/186098-find-equation-graph.html
# Thread: 1. ## Find an equation from this graph. Hello, Thanks, Attached Files • Poly graph.doc (79.5 KB, 17 views) 2. ## Re: Find an equation from this graph. Since there are 40 points, it would fit a polynomial of order 39 exactly. Let $\displaystyle y = c_0 + c_1x + c_2x^2 + ... + c_{39}x^{39}$. When you substitute each point, you'll get 40 equations in 40 unknowns which you can solve using technology or MUCH difficulty by hand. 3. ## Re: Find an equation from this graph. Originally Posted by Prove It Since there are 40 points, it would fit a polynomial of order 39 exactly. Let $\displaystyle y = c_0 + c_1x + c_2x^2 + ... + c_{39}x^{39}$. When you substitute each point, you'll get 40 equations in 40 unknowns which you can solve using technology or MUCH difficulty by hand. Thanks for that. I should tell you that I am not a maths student nor any other student but I am using this equation for my own purposes. I'll give it a go though. Could you show me the start of it and then I'll do the rest of the points? Thanks much appreciated. 4. ## Re: Find an equation from this graph. Originally Posted by pieman91 Hello, Thanks, That last point looks like an extreme outlier, do you need that one included? Ignoring the last point a quadratic looks a reasonable fit: $y=16.467+0.004939x+0.00002781x^2$ CB 5. ## Re: Find an equation from this graph. Originally Posted by CaptainBlack That last point looks like an extreme outlier, do you need that one included? Ignoring the last point a quadratic looks a reasonable fit: $y=16.467+0.004939x+0.00002781x^2$ CB Thanks for that. I don't want to seem nit-picky or ungrateful but y=16.467+0.004939x+0.00002781x^2 is just a bit too high for it to match the trendline and therefore I can't use it. I have attached a new graph that shows y=16.467+0.004939x+0.00002781x^2, the trendline and the actual tax percentage. And if that last data becomes an extreme outlier just get rid of it. I really appreciate everyone's help with this and Captain Black, if you don't want to do it (fair enough), you can show me how to do it and then I can do it. Attached Files • New Polynomial.doc (118.5 KB, 11 views) 6. ## Re: Find an equation from this graph. Originally Posted by pieman91 Thanks for that. I don't want to seem nit-picky or ungrateful but y=16.467+0.004939x+0.00002781x^2 is just a bit too high for it to match the trendline and therefore I can't use it. I have attached a new graph that shows y=16.467+0.004939x+0.00002781x^2, the trendline and the actual tax percentage. And if that last data becomes an extreme outlier just get rid of it. I really appreciate everyone's help with this and Captain Black, if you don't want to do it (fair enough), you can show me how to do it and then I can do it. Well if it does not match there is probably a typo in my typing since it passed straight through the data when I was doing the fitting, unfortunately I am not on the machine with the file at present so I cannot check. However the red curve in your document is not a plot of the quadratic (which is smooth) Checking my notes, the typo is a wrong sign, it should be: $y=16.467-0.004939x+0.00002781x^2$ Also since you already have the equation of the trend line why are you continuing with this thread? CB 7. ## Re: Find an equation from this graph. You could use interpolation polynomial in the Lagrange form: Lagrange polynomial - Wikipedia, the free encyclopedia. 
For example, for k = 2, with points $(x_0,y_0),\ (x_1,y_1),\ (x_2,y_2)$:

$p(x)=y_0\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}+y_1\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}+y_2\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}$

$p(x_0)=y_0\;\cdot 1+y_1\;\cdot 0+y_2\;\cdot 0=y_0$

$p(x_1)=y_0\;\cdot 0+y_1\;\cdot 1+y_2\;\cdot 0=y_1$

$p(x_2)=y_0\;\cdot 0+y_1\;\cdot 0+y_2\;\cdot 1=y_2$.

For p(x) you don't need to solve for any coefficients; you just have k+1 terms, each of degree k. You could write a little program for p(x).

8. ## Re: Find an equation from this graph.

Originally Posted by CaptainBlack
Well if it does not match there is probably a typo in my typing since it passed straight through the data when I was doing the fitting, unfortunately I am not on the machine with the file at present so I cannot check. However the red curve in your document is not a plot of the quadratic (which is smooth) Checking my notes, the typo is a wrong sign, it should be: $y=16.467-0.004939x+0.00002781x^2$ Also since you already have the equation of the trend line why are you continuing with this thread? CB

The trendline is from Excel, but the polynomial of the trendline that Excel gives me does not fit the data. The trendline fits but it's the wrong polynomial. The polynomial is the same shape but it does not fit the data. I would use it if it was the actual equation, but it's not.

9. ## Re: Find an equation from this graph.

Originally Posted by CaptainBlack
Well if it does not match there is probably a typo in my typing since it passed straight through the data when I was doing the fitting, unfortunately I am not on the machine with the file at present so I cannot check. However the red curve in your document is not a plot of the quadratic (which is smooth) Checking my notes, the typo is a wrong sign, it should be: $y=16.467-0.004939x+0.00002781x^2$ Also since you already have the equation of the trend line why are you continuing with this thread? CB

I just checked your equation and it fits just fine. Thanks again for doing that. It is muchly appreciated.
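For completeness, a short sketch (Python with NumPy/SciPy assumed; the data points are invented stand-ins for the values read off the attached graph) of the two approaches discussed in this thread, a least-squares quadratic trendline and Lagrange interpolation:

```python
import numpy as np
from scipy.interpolate import lagrange

# Made-up (x, y) data standing in for the points read off the graph.
x = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
y = np.array([16.5, 16.3, 16.6, 17.5, 19.0])

# Least-squares quadratic fit (what Excel's polynomial trendline does).
c2, c1, c0 = np.polyfit(x, y, deg=2)
print(f"quadratic fit: y = {c0:.4f} + {c1:.6f} x + {c2:.8f} x^2")

# Lagrange interpolation: passes exactly through every point, but uses a
# polynomial of degree len(x) - 1 and can oscillate wildly between points.
p = lagrange(x, y)
print("interpolant at x = 150:", p(150.0))
```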
http://math.stackexchange.com/questions/174547/finding-nash-equilibria-with-calculus?answertab=active
# Finding Nash Equilibria with Calculus The problem is summarized as: There are two players. Player 1's strategy is `h`. Player 2's strategy is `w`. Both of their strategy sets are within the range [0,500]. Player 1's payoff function is: $P_h(h, w) = 50h + 2hw-\frac{1}{2}(h)^2$ Player 2's payoff function is: $P_w(h, w) = 50w + 2hw - \frac{1}{2}(w)^2$ Find a Nash Equilibrium. I was taught to solve these problems in the following way. Find the first derivative of Player 1's payoff function with respect to `h`, equate it to 0, then solve for h, and then repeat for Player 2 but with respect to `w` and solving for `w` instead. However, I found the first derivatives to be: $P_h(h, w)^\prime = 50 + 2w - h$ $P_w(h, w)^\prime = 50 + 2h - w$ Now after equating these first derivatives to 0 and solving for `h` and `w`, we get that `h = -50` and `w = -50`. The issue now is that these strategies aren't within the strategy set [0,500] as mentioned in the problem question. Where am I going wrong? - 1 As with any optimization problem, you should check the boundary. The optimal strategy is (500,500), but the players would choose higher numbers if they were allowed to do so. (That is, as Paxinum pointed out, the derivatives are not 0 at (500,500)). – Théophile Jul 24 '12 at 20:00 @Théophile For this particular problem, what do you mean by checking the boundary? Do you mean checking at 0 and at 500 for both players, or one for each, or am I completely missing the point here? (I believe it's the last one.) – Kevin Jul 25 '12 at 7:30 1 Almost, Kevin: don't just check the four corners, though, but all four edges. One of the edges, for example, is $h = 0$. The derivative for Player 1 is then $P_h(h,w)^\prime = 50 + 2w$, which is positive regardless of the value of $w$. In other words, Player 1 has no incentive to stay at $0$. As for Player 2, $P_w(h,w)^\prime = 50 - w$, which is $0$ when $w=50$. However, this latter calculation wasn't really necessary because Player 1's dissatisfaction here means that there are no equilibria along $h=0$. Try similar calculations along the other edges. Does that make sense? – Théophile Jul 25 '12 at 16:18 ## 1 Answer (I have not studied Nash equilibria before, but I'll take a stab at this anyway). Ok, so say h is on the x axis, and w is on the y axis. Both players are trying to maximize the profit, i.e. they change their strategies according to the derivatives of the payout functions. Plotting this stream plot, we get the graph below: We see that in the entire region, at least one of the derivatives is always positive. Thus, at least one player gains on increasing w or h. (We see this because each arrow points either right or up or both). From this, we can see that the point (h,w)=(500,500) is an equilibrium, and you can verify this by seeing that both the derivatives in this point are positive, and even more, $P_h(h,500)>0$ for all $h\in[0,500]$, and similarly $P_w(500,w)>0$ for all $w \in [0,500].$ Thus, no player would gain on changing the strategy if they are in (500,500). - Thanks for your help! This answer seems correct, but, if you don't mind, I'd like to wait to see if anyone else has a solution that doesn't require graphing (because I won't have the time to graph the functions during an exam). – Kevin Jul 24 '12 at 16:00 Ah, well, I think the general way to solve it then, should be to look for points you mentioned. 
If there aren't any in the allowed region, you need to examine the edges of the region, and then the corners, just like in a 2-variable optimization problem on a compact set. – Paxinum Jul 24 '12 at 20:20
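A quick numerical cross-check of the accepted reasoning (a Python/NumPy sketch; the grid resolution is arbitrary) that (500, 500) is a mutual best response on the strategy sets [0, 500]:

```python
import numpy as np

# Payoff functions from the problem statement.
def payoff1(h, w):
    return 50*h + 2*h*w - 0.5*h**2

def payoff2(h, w):
    return 50*w + 2*h*w - 0.5*w**2

grid = np.linspace(0.0, 500.0, 5001)   # discretized strategy set [0, 500]

def best_response_1(w):
    return grid[np.argmax(payoff1(grid, w))]

def best_response_2(h):
    return grid[np.argmax(payoff2(h, grid))]

# (500, 500) is a Nash equilibrium iff each strategy is a best response
# to the other on the constrained set.
h_star, w_star = 500.0, 500.0
print(best_response_1(w_star), best_response_2(h_star))   # both 500.0
```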
http://math.stackexchange.com/questions/30423/law-of-excluded-middle-in-logic-proof
# Law of Excluded Middle in Logic Proof

I'm having some difficulty doing a proof for the following: $$\neg A \vee \neg(\neg B \wedge (\neg A \vee B))$$ It is said that you could use the law of excluded middle. Any help or guidance would be appreciated. Thanks in advance! -

1 Are you just trying to simplify the expression given? Otherwise, I don't see what to prove. – yunone Apr 2 '11 at 0:05

@yunone: I think you have to prove that the expression is a tautology. – alejopelaez Apr 2 '11 at 0:09

I think you can just apply De Morgan's laws and the distributive laws, and arrive at a tautology of the form $A \vee \sim A$ – alejopelaez Apr 2 '11 at 0:10

@Pel, ah ok, thanks for the explanation. – yunone Apr 2 '11 at 0:14

2 @Kerx, you can right click on the characters and click show source. This will show you the typesetting code. Enclose these in \$ to get them to format. For example, `$\neg A \vee \neg(\neg B \wedge (\neg A \vee B))$` is what is written above. – yunone Apr 2 '11 at 0:19

## 2 Answers

Consider this: $$\begin{align*} \neg A\lor\neg(\neg B\land(\neg A\lor B)) &\equiv \neg A\lor(B\lor\neg(\neg A\lor B))\\ &\equiv \neg A\lor(B\lor (A\land\neg B)) \\ &\equiv \neg A\lor((B\lor A)\land(B\lor\neg B)) \\ &\equiv \neg A\lor((B\lor A)\land \top) \\ &\equiv \neg A\lor(B\lor A) \end{align*}$$ where $B\lor\neg B\equiv\top$ by the law of excluded middle. Applying it again should show the original expression is a tautology, which I believe is what you want to prove. -

yunone, can you please teach me how you used those characters to post this answer? Thanks! – KerxPhilo Apr 2 '11 at 0:17

thanks again! – KerxPhilo Apr 2 '11 at 0:24

I understand your proof. I guess the weird part is that in my logic course, we are using special ways to do our proofs. We use tools such as Conjunction Introduction, Elimination; Disjunction Introduction, Elimination; Contradiction Introduction, etc. And we have to cite the lines associated with them. – KerxPhilo Apr 2 '11 at 0:26

1 @kerx, I hope you can fill it in, as I've never really explicitly listed the tools I'm using, and I'm not too familiar with their names. I guess I first use De Morgan's laws, and double negative elimination in the first and second lines, distributivity of disjunction in the third, and eventually the domination laws. – yunone Apr 2 '11 at 0:32

Using distributivity, $\neg A \bigvee \neg((\neg B \bigwedge \neg A) \bigvee (\neg B \bigwedge B))$ $\equiv \neg A \bigvee \neg (\neg B \bigwedge \neg A)$ $\equiv \neg A \bigvee (B \bigvee A)$ $\equiv \neg A \bigvee A$ as required. -
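As a sanity check outside the natural-deduction system used in the course, a brute-force truth table (a small Python sketch) confirms the formula is a tautology:

```python
from itertools import product

def formula(A, B):
    # not A  or  not( not B and (not A or B) )
    return (not A) or not ((not B) and ((not A) or B))

# True for every assignment of A and B, so the formula is a tautology.
print(all(formula(A, B) for A, B in product([False, True], repeat=2)))
```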
http://math.stackexchange.com/questions/47035/limsup-and-liminf?answertab=votes
# limsup and liminf I came across the following problems on limit supremums and infimums: Let $(A_n)$ be a sequence of subsets of $X$. Define $$\text{lim sup} \ A_n = \{x \in X: x \in A_n \ \text{frequently} \}$$ and $$\text{lim inf} \ A_n = \{x \in X: x \in A_n \ \text{ultimately} \}$$ Show that $$\text{lim inf} \ A_n \subset \text{lim sup} \ A_n$$ $$\text{lim inf} \ A_n = \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} A_k$$ $$(\text{lim sup} \ A_n)' = \text{lim inf} \ A_{n}'$$ $$\text{lim sup} \ A_n = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k$$ Note that $x_n \in A_n$ frequently means that $(\forall N) \ \exists n \geq N \ni x_n \in A_n$. Also $x_n \in A_n$ ultimately means that $\exists N \ni n \geq N \Rightarrow x_n \in A$. The first follows since for $x \in \text{lim inf} \ A_n$ then $x \in A_n$ ultimately which means that it is contained in $\text{lim sup} \ A_n$. For the last three, would I just use DeMorgan's laws and the definition of unions and intersections to deduce that they are the same as the definitions given above? - 2 Yes that's all correct. Note that your union-intersection equality of $\liminf A_n = \bigcup \bigcap A_k$ is missing the $A_k$s – t.b. Jun 23 '11 at 0:41 ## 1 Answer (1) Your proof that $\liminf A_n\subseteq \limsup A_n$ is correct. Exercise 1: Under what conditions does equality hold in the above inclusion, i.e., under what conditions is it true that $\liminf A_n=\limsup A_n$? (2) Note that $x\in \liminf A_n$ if and only if there exists a positive integer $N$ such that $x\in A_n$ for all $n\geq N$ if and only if there exists a positive integer $N$ such that $x\in \bigcap_{k=N}^{\infty} A_k$ if and only if $x\in \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} A_k$. I will leave the remaining questions as easy exercises (with similar solutions). Exercise 2: Let $\{A_n\}$ be a sequence of measurable subsets of a measurable space $(X,\mu)$. Assume that $\Sigma_{n=1}^{\infty} \mu(A_n)<\infty$. Let $A=\limsup A_n$. Prove that $\mu(A)=0$. (Hint: Use the fourth assertion in your question, namely, use the characterization of $\limsup A_n$ in your question.) Exercise 3: Give an example (in the context of Exercise 2) where $\lim_{n\to\infty} \mu(A_n)=0$ but that $\mu(A)>0$. Do not assume that $\Sigma_{n=1}^{\infty} \mu(A_n)<\infty$. (Hint: let $\{A_n\}$ be an appropriate sequence of intervals in $[0,1]$, for example.) -
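A concrete toy case can make the definitions tangible. The following Python sketch (the alternating sets and the finite horizon are purely illustrative) evaluates the union-intersection characterizations for $A_n=\{0\}$ when $n$ is even and $A_n=\{1\}$ when $n$ is odd:

```python
def A(n):
    # A_n = {0} for even n, {1} for odd n
    return {0} if n % 2 == 0 else {1}

N = 100   # how far out each tail is taken
M = 50    # how many starting indices n we combine over

def tail_intersection(n):
    out = A(n)
    for k in range(n + 1, N + 1):
        out = out & A(k)
    return out

def tail_union(n):
    out = set()
    for k in range(n, N + 1):
        out = out | A(k)
    return out

liminf = set().union(*(tail_intersection(n) for n in range(1, M + 1)))
limsup = set.intersection(*(tail_union(n) for n in range(1, M + 1)))

print(liminf)   # set()  : no point is in A_n "ultimately"
print(limsup)   # {0, 1} : both points are in A_n "frequently"
```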
http://mathhelpforum.com/advanced-algebra/172420-fixed-point-theorem-p-groups.html
Thread: 1. Fixed point theorem p-groups I have no idea where to even start. 2. Hint: By Burnside's Lemma (aka, the Lemma Which Is Not Burnside's), the order of G must divide the sum of the orders of the orbits. 3. If G acts on a finite set S, then S can be written as a disjoint union (see Hungerford's Algebra, p. 93) $S=S_0 \cup \bar{x_1} \cup \bar{x_2} \cup \cdots \cup \bar{x_n}$, where $S_0=\{x \in S: g\cdot x=x \text{ for all } g \in G\}$, $|\bar{x_i}|=|G:G_{x_i}|$, and $|\bar{x_i}|>1$ for all i. We see that each $|\bar{x_i}|$ is divisible by p by Lagrange's theorem, since a p-group G acts on S. By assumption, |S| is not divisible by p, so $S_0$ is not empty and $p \nmid|S_0|$. An orbit of $x \in S$, denoted $\bar{x}$, has a single element iff $x \in S_0$. Can you conclude it from here?
http://www.physicsforums.com/showthread.php?p=4152030
## Primacy of conservation laws

Hello, I am hoping for some input concerning, shall we say, the philosophy of mechanics, particularly with relation to the concept of force. Most text books treat force as being something of a primary concept: a fundamental aspect of nature much like energy and mass, and therefore something which cannot be readily defined, merely related to other concepts by rote application of Newton's laws. To me it's not really. To me the conservation laws, and the quantities to which they refer, are the more sensible choice of foundation upon which to lay a conceptualisation of reality. The explanation I have sketched out of force, then, is roughly as follows:

- certain quantities are found by experiment to be conserved. For now, mass-energy, momentum and charge will do
- conserved quantities are really useful because we can think of them as being transferable, and transferable with quantifiable amounts and rates
- force is just a name for the rate of transfer of momentum
- "a" force is an influence upon the rate of transfer of momentum

To me this

- is simple
- makes Newton's laws seem like stating the obvious (2nd by definition, 1st = special case of 2nd, 3rd by definition as every transfer has a source and a sink)
- explains jet force in a way that is much more direct than appeal to Newton's 3rd, and reduces jet "force" to just a way of talking about straightforward momentum transfer
- is consistent with the standard model of particle physics and more broadly (by conservation of 4-momentum) with general relativity

To me the primacy of force is a long-obsolete hangover from the days when "contact force" was still considered a viable proposition. Recasting contact forces in terms of the Coulomb interaction to me kind of forces the issue towards the above conceptualisation. My only quibble is I'm kind of making this up as I go, wilfully ignorant of accepted views of the matter. I have not heard of such a conceptualisation clearly stated from any other source, although to me it seems implicit in the Standard Model -- or perhaps explicit, if you read the right books? But then again I've just picked up Feynman Lectures on Physics Vol 1, which includes a chapter on Characteristics of Force. Feynman -- who, of all people, you might expect to view force as nothing more than a transfer of momentum by gauge bosons or otherwise -- says nothing of the sort and just really seems to throw up his hands and say it's all too complicated(!) So I guess I would like to know whether a) this view makes any sense and b) where are some good places to start reading about this approach as used by smarter people than me?

Recognitions: Science Advisor

Your reasoning is essentially correct, and that kind of thinking is part of what got us from Newtonian Mechanics to more advanced physics. If you look at Lagrangian Mechanics, Classical Field Mechanics, or Quantum Field Theory, you deal primarily with energy and momentum - the conserved quantities. Forces can be extracted from the solution if you need them, but the concept of a "force" is not directly used in solving a problem. If you look deeper, into things like Quantum Electrodynamics and Quantum Chromodynamics, you see that even conservation laws aren't fundamental. What's fundamental are certain symmetries.
From each fundamental symmetry arises a conserved current and gauge fields that carry forces. So you end up with both conservation laws and forces as a consequence of symmetries. The mathematical formulation of this comes from Noether's Theorem. In classical physics, invariance of the laws of physics under translation gives you conservation of momentum. Under time - energy. Under rotation - angular momentum. Since only global symmetries are considered, you don't really get a gauge field. In Quantum Electrodynamics, the U(1) local symmetry gives you both the conservation of electrical charge and the electromagnetic forces that couple to this charge. If you conceptualize force as rate of transfer of momentum then you run into difficulty in scenarios where force is present but no transfer of momentum is taking place. An example would be leaning against a wall. There is a force between me and the wall, and an equal and opposite force between my shoes and the floor. The net force on me, as a whole, is zero but the forces are not applied in the same place which creates strain on both me and the floor/wall. Recognitions: Science Advisor ## Primacy of conservation laws And there is a corresponding momentum flow as a result. You->wall->floor->you. And there is a corresponding momentum flow as a result. You->wall->floor->you. Would you like to put up some maths to back up this proposal? Quote by K^2 And there is a corresponding momentum flow as a result. You->wall->floor->you. Lovely -- I started to draft a reply but got bogged down in detail. You've said it in a nutshell. Balanced forces are balanced momentum flows. As with flows of mass or charge, you can have flows occurring without any net accumulation or depletion in any body. As for the strain - it's essentially Hooke's Law. The (directed) rate of momentum flow by the Coulomb interaction between molecules depends on their separation. Their equilibrium separation (strain) therefore depends on the (directed) rate of momentum flow they are required to sustain (tension or compression). Thanks K^2 for your earlier post, too -- yet more impetus for me to get around to learning about symmetry in physics. Nice to know I'm intuitively on the right track (if only in a tiny way) Any suggestions for introductory books or materials, suited to a time-poor person, would be gratefully received! Quote by K^2 And there is a corresponding momentum flow as a result. You->wall->floor->you. True, but modeling strain in terms of momentum flow is awkward. At least it is to me. Modeling inelastic collisions in terms of force would be equally awkward. It's best not to get stuck on one idea. Be familiar with a wide variety of conceptual tools so you can pick the best one for any given job. Recognitions: Science Advisor Quote by Studiot Would you like to put up some maths to back up this proposal? So dp/dt = F isn't enough? I understand that you might be tempted to point out that dp/dt = ƩF, and ƩF=0. But do keep in mind that d/dt is a linear operator, and we need to decompose it if we want to look at individual flows rather than the net change. Or do you want me to go deeper and write out some Feynman diagrams for a virtual photon exchange, showing that any application of force results in momentum flow? Or do you need me to go a step deeper and derive relation between stress tensor and momentum flow? 
If you want me to be creative, I can do that using the fact that stress tensor is a conserved quantity under coordinate transformation, while angular momentum is a conserved quantity under rotation specifically. When you are asking about something so fundamental as momentum flow due to an interaction, I really don't know what sort of level of math you are looking for. Well, I don't think you need go as far as a person leaning. Just a 1Kg mass sitting on a table in a gravitational field will do, although the analysis presumably also applies to any body force. And granted that the table initially deforms minutely when first placed, but considering only the subsequent steady state. And please look again at the title of the subforum and ask yourself if at least some of your responses would be more appropriate if this were one of the more advanced physics subforums. Recognitions: Science Advisor Valid point on section. I'll stick to classical. For a mass supported by a normal force in a gravitational field, again, you are looking at two flows. Body receives upwards momentum to normal force and looses it to gravitational force. That lost momentum is transferred to Earth via gravity and is returned back via forces of deformation into the support surface. There is still a net flow, even if you consider this classically. but considering only the subsequent steady state. That lost momentum is transferred to Earth via gravity and is returned back via forces of deformation into the support surface. Are you implying that there can only be continuous deformation without limit? I fail to see what this extra layer of complication adds. Quote by Studiot I fail to see what this extra layer of complication adds. It simplifies, not complicates. Define momentum, posit that it is conserved, and that's all you need. Newton's laws become 1) stating the obvious, 2) a redundant definition and 3) stating the obvious. Where it doesn't seem to help, just substitute the word "force" where you see "flow of momentum". They mean the same thing. The word "force", while redundant, is still a useful shorthand for sources of momentum flow, since many problems can be analysed in terms of forces (discrete sources of momentum flow) where an expression can be found relating the size and direction of force (i.e. rate and direction of flow) to other quantities. It's also more than just a simplifying framework. It says that momentum transfer is primary, and force is just a name for it. Compare to the usual Newtonian view that force is primary and that force causes the transfer of momentum. You all know a lot more about modern physics than I do, but I figure this is closer to the current understanding ... isn't that why we now speak of "fundamental interactions" and not "fundamental forces"? Recognitions: Science Advisor Quote by Studiot I fail to see what this extra layer of complication adds. difficulty in scenarios where force is present but no transfer of momentum is taking place. It resolves this "difficulty". If there is an interaction force, there is a transfer of momentum taking place. Just like net force can be zero, net transfer can be zero. Naturally, if the net transfer is zero, we don't need to actually worry about each individual interaction. And that's precisely how it would be formulated in, say, Hamiltonian mechanics. $\dot{p}=\{p,H\}$. No need for concept of force as such. And yeah, no need for complication of tracking momentum transfer due to each interaction individually. 
Thank you both for telling me that at some underlying and fundamental level that the use of a self consistent system of mechanics, developed over several centuries, is either somehow wrong or can be simplified. There are other subforums here for such discussion. This subforum is all about the classical system I just described. I do not know of any text on structural mechanics or bridge design that follows this modern route to success. In fact I cannot recall any bridge design text using the word momentum at all. Many fluid mechanics texts use the phrases "destruction of horizontal momentum" and "appearance of vertical momentum" to descibe what happens when a flowing fluid is directed at a wall. I ask for a mathematical description, equivalent to the classical analysis, using the modern theory to compare and see if it is actually simpler and easier, as claimed. Quote by russell2pi Hello, I am hoping for some input concerning shall we say the philosophy of mechanics, particularly with relation to the concept of force. Most text books treat force as being something of a primary concept. A fundamental aspect of nature much like energy and mass and therefore something which cannot be readily defined, merely related to other concepts by rote application of Newton's laws. [..] I don't understand your "not readily defined". In classical mechanics these things except for energy are (were?) rather well defined, with balances, scales and rulers - thus there were not so long ago the kg (for mass) and the kgf (the force by one kg on Earth). Perhaps that was the reason for the primacy of those in classical mechanics. "Energy" is a hard to measure concept, it started out as a bookkeeping value. Quote by K^2 [..] For a mass supported by a normal force in a gravitational field, [..] you are looking at two flows. Body receives upwards momentum to normal force and looses it to gravitational force. [..] There is still a net flow, even if you consider this classically. Momentum ("quantity of motion"): p=mv "The Quantity of Motion is the measure of the same, arising from the velocity and quantity of matter conjuctly." https://en.wikisource.org/wiki/The_M...finitions#Def2 In classical mechanics, stationary v=0 => p=0. Or if in inertial motion, p=constant. No "momentum flow" in classical mechanics: it is a conserved quantity, not a fluid. Quote by russell2pi - conserved quantities are really useful because we can think of them as being transferable, and transferable with quantifiable amounts and rates - force is just a name for the rate of transfer of momentum - "a" force is an influence upon the rate of transfer of momentum [..] Not force in general, but what Newton called "impressed motive force" was defined just as you propose to do: F=dp/dt "The alteration of motion is ever proportional to the motive force impressed". Of course, the conservation laws that were developed from classical mechanics played an increasingly important role in history; indeed one can turn things around in a convenient way, based on modern knowledge. To me the primacy of force is a long-obsolete hangover from the days when "contact force" was still considered a viable proposition. Recasting contact forces in terms of the Colomb interaction to me kind of forces the issue towards the above conceptualisation. Please elaborate: how does the Coulomb interaction affect Hooke's law? [..] I've just picked up Feynman Lectures on Physics Vol 1, which includes a chapter on Characteristics of Force. 
Feynman -- who, of all people, you might expect to view force as nothing more than a transfer of momentum by gauge bosons or otherwise -- says nothing of the sort and just really seems to throw up his hands and say it's all too complicated(!) [..] where are some good places to start reading about this approach as used by smarter people than me? I don't have that book at hand, it will be interesting to look up. And I did not look into that myself more than standard textbooks, but perhaps this is a good starter ("what else"!): http://en.wikipedia.org/wiki/Conservation_law Recognitions: Science Advisor Lagrangian and Hamiltonian Mechanics are topics in Classical Mechanics. I'm not sure what your complaint is. You are trying to artificially limit discussion to a static case. First of all, yes, any structural mechanics problem can be solved using Lagrange Multipliers without talking about forces. Of course, what you are actually analyzing is stress, so you have no choice but to involve forces at some point, and you might as well start balancing forces from the beginning. Dynamics problems, however, are greatly simplified by use of Lagrangian and Hamiltonian Mechanics in generalized coordinates. That's kind of why you usually learn them in a Classical Mechanics course. But hey, if you want Lagrangian analysis of a mass supported by the floor, here it is. Lagrangian and constraint. $$L = \frac{1}{2}m\dot{y}^2 - mgy + \lambda f(y)$$ $$f(y) = y-H = 0$$ Equations of motion. $$\frac{\partial L}{\partial y} - \frac{d}{dt}\frac{\partial L}{\partial \dot{y}} = \lambda - mg - m\ddot{y} = 0$$ $$\ddot{y} = 0$$ Solution. $$\lambda = mg$$ Why is this simpler? Because I did not have to stop and ask what forces are acting on the body and what I need to do to have them add up to zero. Sure, with one body it's easier to just balance forces. What if you have a dozen different bodies with different constraints between them? Still feel like setting up all of these equations? In contrast, I can just write down the Lagrangian, write down the list of constraints, and just feed the whole mess into Mathematica to be solved. This method completely eliminates the need to go through each degree of freedom by hand. And by the way, this is how you solve a problem with constraints. That is the standard approach used for the past 200 years. I don't know if that qualifies as "modern theory" in your books. Maybe you're still going by Principia as the only text on classical mechanics. Quote by K^2 [..] Why is this simpler? [..] I can just write down the Lagrangian, write down the list of constraints, and just feed the whole mess into Mathematica to be solved.[..] (sorry couldn't help it!)
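As a supplement to the worked Lagrangian above, here is a sketch (assuming Python with SymPy, standing in for the Mathematica workflow mentioned) that grinds out the same multiplier λ = mg mechanically:

```python
import sympy as sp

t = sp.symbols('t')
m, g, H, lam = sp.symbols('m g H lambda', positive=True)
y = sp.Function('y')(t)

# Lagrangian with the holonomic constraint f(y) = y - H = 0 adjoined via a multiplier.
L = sp.Rational(1, 2) * m * sp.diff(y, t)**2 - m * g * y + lam * (y - H)

# Euler-Lagrange expression: dL/dy - d/dt (dL/dydot)
EL = sp.diff(L, y) - sp.diff(sp.diff(L, sp.diff(y, t)), t)

# On the constraint y(t) = H the acceleration vanishes; solve for the multiplier.
print(sp.solve(sp.Eq(EL.subs(sp.diff(y, t, 2), 0), 0), lam))   # [g*m]
```

The point is only that the constraint enters once, through f(y); nothing about the supporting (normal) force has to be guessed in advance.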
http://math.stackexchange.com/questions/132572/how-do-i-solve-x-1x-2x-3x-4-3/132584
# How do I solve $(x-1)(x-2)(x-3)(x-4)=3$

How to solve $$(x-1) \cdot (x-2) \cdot (x-3) \cdot (x-4) = 3$$ Any hints? -

## 3 Answers

Look at $(x-1)(x-4)$ and $(x-2)(x-3)$; they multiply out as $(x^{2}-5x+4)$ and $(x^{2}-5x+6)$. Now put $t= x^{2}-5x$ and reduce it to a quadratic equation. -

7 Alternately, $z = x-\frac{5}{2}$, then the equation becomes $(z^2-\frac{1}{4})(z^2-\frac{9}{4})=3$. But this is basically the same solution.... – N. S. Apr 16 '12 at 16:31

Note that solving said equation for $t$ yields roots $t = -3, -7.\:$ Then you solve $\:x^2-5x = t,\:$ which amounts to solving the two quadratics I gave. Rather than this roundabout way, it's simpler to notice that subtracting $3$ preserves the difference of squares form, as in my answer. – Gone Apr 16 '12 at 17:13

1 @N.S.: Equivalent it may be, but your substitution $z=x-\frac{5}{2}$ is a natural move, in that it brings out the symmetry. – André Nicolas Apr 16 '12 at 17:30

1 Several solution methods (and some motivation behind them) for the equation $(x-r)(x-2r)(x-3r)(x-4r) = a$ are given in the following sci.math thread: groups.google.com/group/sci.math/browse_thread/thread/… (Google) mathforum.org/kb/message.jspa?messageID=6344054 (Math Forum) – Dave L. Renfro Apr 16 '12 at 18:11

1 To elaborate on what N.S. and André are saying: that substitution depresses the quartic, in that it kills the cubic term, and very luckily, the linear term as well... – J. M. Apr 17 '12 at 11:31

Hint: The LHS is a difference of squares $\rm\:y^2\!-\!1,\:$ hence so too is $\rm\:(y^2\!-\!1)-3\: =\: y^2\!-\!2^2,\:$ viz.

$\rm\qquad\ \:\! (x\!-\!1)(x\!-\!4) (x\!-\!2)(x\!-\!3)\ =\ (x^2\!-\!5x+4)(x^2\!-\!5x+6)\ =\ (x^2\!-\!5x+5)^2 \!-\! 1^2$

$\rm\ \ \Rightarrow\ (x\!-\!1)(x\!-\!4) (x\!-\!2)(x\!-\!3)\!-\!3\ =\ (x^2\!-\!5x+5)^2 \!-\! 2^2\ =\ (x^2\!-\!5x+3)(x^2\!-\!5x+7)$ -

I assume the hints would already have given you the answer. If not, here is the full answer: Let $y = x-2.5$, so $(y+1.5)(y-1.5)(y+0.5)(y-0.5) = 3$, i.e. $(y^2-2.25)(y^2-0.25) = 3$. Let $z = y^2-1.25$. Then $(z-1)(z+1) = 3$, so $z^2-1 = 3$. Hence $z^2 = 4$.

$z = -2$ gives $y^2 = z + 1.25 = -0.75$. So $y = \pm \sqrt{0.75}i$. Clearly, this should be ignored if you only want real roots. $x = 2.5 + y = 2.5 \pm \sqrt{0.75}i$.

$z = 2$ gives $y^2 = z + 1.25 = 3.25$. So $y = \pm \sqrt{3.25}$; these give the real roots $x = 2.5 + y = 2.5 \pm \sqrt{3.25}$. If you want all the terms in the product to be positive, then obviously $x = 2.5 + \sqrt{3.25}$ is the only one that works. This is roughly 4.30277564. -

1 In writing mathematics, one should be encouraged to use rationals 3/2 and not decimals 1.5 ... but of course rationals are less convenient for on-line writing. – GEdgar Apr 17 '12 at 12:19
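For a numerical sanity check of the answers above, a short sketch (Python/NumPy assumed) that expands the quartic and recovers the same roots:

```python
import numpy as np

coeffs = np.poly([1, 2, 3, 4])   # x^4 - 10x^3 + 35x^2 - 50x + 24
coeffs[-1] -= 3                  # ... = 3  ->  move the 3 to the left-hand side
print(np.roots(coeffs))
# roots ≈ 4.30277564, 0.69722436, 2.5 ± 0.8660254j,
# i.e. 2.5 ± sqrt(3.25) and 2.5 ± i*sqrt(0.75), as derived above.
```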
http://mathoverflow.net/questions/42940?sort=votes
## The growth rate of the homology $H_*(M^{\otimes_A n})$ for a DG-bimodule $M$

Suppose you have a DG-algebra $A$, and a DG-bimodule $M$ over $A$. Under which conditions will the rank of the bimodules $H_*(M^{\otimes_A n})$ grow exponentially in terms of $n$? Here $M^{\otimes_A n}$ is the A-bimodule $M\otimes_A M\otimes_A\cdots\otimes_A M$ (with $n$ terms). Anything, even a counterexample, would be interesting. -

## 1 Answer

I don't know in general, but in Heegaard Floer theory, there are bimodules naturally associated to a mapping class of a surface self-homeomorphism. The rank of $H_*(M^{\otimes_A n})$ grows exponentially iff the underlying mapping class group element is pseudo-Anosov. See our paper at http://front.math.ucdavis.edu/1012.1032. In particular, it's easy to give examples where the growth rate is linear, although I'm sure there are also more elementary constructions. (There are also other, earlier constructions for the braid group, by Khovanov-Seidel and Khovanov-Thomas. I don't know if they explicitly stated the fact about growth rates, but it follows directly from their results.) -
http://physics.stackexchange.com/questions/54052/is-it-possible-to-reproduce-double-slit-experiment-by-myself-at-home
# Is it possible to reproduce Double-slit experiment by myself at home?

I want to reproduce this experiment by myself. What do I need for this? What parameters of the slits and the laser (or another light source) does it need? Is it possible to make a DIY detector? -

The detector could be a whole separate question, especially if you are thinking of building a photomultiplier or something. Honestly, a successful experiment will show itself qualitatively upon visual inspection. If you want a more quantitative analysis, perhaps taking a RAW picture and looking at pixel values will suffice. – Chris White Feb 15 at 19:24

## 3 Answers

It's actually quite easy to perform the experiment in the comfort of your own home. The simplest setup I have seen (as depicted in this, and other youtube videos) is to use a laser pointer and pencil lead, but you can certainly be more systematic and cut slits in some opaque material as well. I would encourage you to experiment to answer the question of how far apart the slits need to be etc., but some basic math behind this is as follows: If the slits are a distance $d$ apart, if the light has wavelength $\lambda$, and if the distance between the slits and the screen is $L$, then the spacing $\Delta y$ between successive fringes on the wall will approximately be $$\Delta y \approx \frac{\lambda L}{d}$$ So let's say the laser is red so that $\lambda\approx 700 \mathrm{nm}$, the slits are $1\,\mathrm{mm}$ apart, and the screen is $1.5\,\mathrm m$ away from the slits, then we have $$\Delta y \approx \frac{(700\,\mathrm{nm})(1.5\,\mathrm{m})}{1\,\mathrm{mm}} = 1.05\,\mathrm{mm}$$ So you can actually try this and see if your results agree! (I might actually try this myself come to think of it; thanks for the question!) Cheers! -

2 Gah. You beat me by about a minute. – Emilio Pisanty Feb 15 at 17:05

Haha...apologies Emilio. – joshphysics Feb 15 at 17:05

What about a detector near one slit? – Robotex Feb 15 at 18:06

I would add from my own experience that cutting slits in an opaque material often produces slits that are not entirely negligible in width (compared to $d$), so there may be a broad single-slit pattern modulating the double-slit pattern that is sought. – Chris White Feb 15 at 19:21

Laser pointer, nit comb, bit of cardboard from a cereal box to control the number of slits. Works perfectly! -

Absolutely, though the result does depend somewhat upon your definition of "at home". Simply seeing the interference pattern is as simple as a laser pointer and a few narrow apertures (see the other answer(s)). People have successfully even done single-photon interference at home! -
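If you want to play with the numbers before cutting any slits, here is a tiny sketch (Python assumed) of the fringe-spacing estimate used above:

```python
def fringe_spacing(wavelength_m, slit_separation_m, screen_distance_m):
    """Approximate spacing between adjacent bright fringes: dy = lambda * L / d."""
    return wavelength_m * screen_distance_m / slit_separation_m

# Red laser pointer (700 nm), 1 mm slit separation, screen 1.5 m away.
print(fringe_spacing(700e-9, 1e-3, 1.5))   # ~0.00105 m, i.e. about 1 mm
```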
http://mathoverflow.net/questions/63470?sort=oldest
## Third bordism group of BG, where G is an arbitrary compact Lie group.

Is anything known about $\Omega_3(BG)$, where $G$ is an arbitrary compact Lie group; i.e., is it possible to describe the structure of $\Omega_3(BG)$ for any compact Lie group? I know that $H_3(BG)$ consists completely of torsion when $G$ is compact and I would like (if possible) a similar type of statement for $\Omega_3(BG)$. -

## 1 Answer

If you think about oriented bordism, the answer is that $\Omega_3 (BG) \cong H_3 (BG)$. This is true for any space $X$ instead of $BG$, because of the Atiyah-Hirzebruch spectral sequence and because $\Omega_i (pt)=0$ for $i=1,2,3$. -

Thank you! Can you say anything as nice for the unoriented case? – Kevin Wray Apr 29 2011 at 22:30

Is it obvious that Atiyah-Hirzebruch applies to the associated homology of a spectrum in the same way as the associated cohomology? – Dylan Wilson Apr 30 2011 at 2:54

@Dylan: I think the exact couple set up makes it reasonably clear. Have you looked at Adams? – Sean Tilson Apr 30 2011 at 3:30

The unoriented case is even simpler as unoriented bordism is just homology with coefficients in the (unoriented) bordism ring. – Torsten Ekedahl Apr 30 2011 at 6:57

Wait, I thought I'd read somewhere that $\Omega_3(BG)$ was equivalent to $H_3(BG)$ up to torsion. – Kevin Wray Apr 30 2011 at 19:09
http://www.onemathematicalcat.org/Math/Algebra_II_obj/parabolas.htm
PARABOLAS

DEFINITION: parabola

A parabola is the set of points in a plane that are the same distance from a fixed point (called the focus) and a fixed line (called the directrix). To explore this definition, investigate the diagram at right. Pick any point on the parabola (say, $\,\text{P}1\,$). Travel to the focus, and record this distance ($\,3.28 \text{ cm}\,$). Go back to $\,\text{P}1\,$, and now travel to the directrix ($\,3.28 \text{ cm}\,$). (Note that the distance from a point to a line is the shortest distance from the point to the line, which is measured along the perpendicular.) These two distances are the same! Pick a different point on the parabola (say, $\,\text{P}2\,$). Travel to the focus, and record this distance ($\,6.50 \text{ cm}\,$). Go back to $\,\text{P}2\,$, and now travel to the directrix ($\,6.50 \text{ cm}\,$). Again, these two distances are the same! No matter what point you choose on a parabola, its distances to the focus and the directrix will be equal. You can play with parabolas here. (Drag the focus and directrix around; watch the parabola change!) Parabolas have some beautiful geometric properties that make them very important in real-life applications.

REFLECTING PROPERTY OF PARABOLAS

Rays emanating from the focus will always be reflected perpendicular to the directrix. To explore this property, investigate the diagram at right. Place a light at the focus. The beams will go out and hit the parabola. They will always be reflected perpendicular to the directrix, as shown. This ability of parabolas to generate straight, focused beams of light makes them valuable in applications as varied as laser surgery and the Hollywood beams of light! This same geometric property also allows parabolas to act as collectors:

COLLECTING PROPERTY OF PARABOLAS

Rays entering the parabola perpendicular to the directrix are always reflected so that they pass through the focus. Think about the satellite dish on the outside of a house. Beams enter the dish at all possible angles, but those that come in perpendicular to the directrix are all focused on a single point, where a device collects the signal, amplifies it, and sends it into the house. Thus, the name focus is appropriate for this special point!

VERTEX OF A PARABOLA

The vertex of a parabola is the point that is exactly halfway between the focus and the directrix. It is the point where the parabola turns (i.e., changes direction). The distance between the focus and the vertex affects the shape of the parabola, as shown below. As the focus moves farther away from the vertex, the parabola gets wider (flatter). As the focus moves closer to the vertex, the parabola gets narrower (steeper). In any orientation of a parabola, the focus is always inside the parabola. When the focus is above the vertex, then the parabola holds water (i.e., is concave up). When the focus is below the vertex, the parabola sheds water (i.e., is concave down).

(Figure captions: wide parabola, focus far from vertex; narrow parabola, focus close to vertex; concave down.)
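To make the focus-directrix definition concrete, here is a small numerical check (a Python sketch; the parabola y = x^2/4, with focus (0, 1) and directrix y = -1, is just an example choice):

```python
import math

# Parabola y = x**2 / 4: focus at (0, 1), directrix y = -1 (an example choice).
focus = (0.0, 1.0)
directrix_y = -1.0

for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    y = x**2 / 4
    dist_to_focus = math.hypot(x - focus[0], y - focus[1])
    dist_to_directrix = abs(y - directrix_y)
    print(f"x={x:5.1f}  to focus: {dist_to_focus:.6f}  to directrix: {dist_to_directrix:.6f}")
# The two distances agree at every sampled point, as the definition requires.
```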
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9035437703132629, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/96415/polyline-averaging
## Polyline Averaging

I'm trying to find a method that can take a collection of polylines, each described by a list of connected points on a plane, and find an "average" path through them. The input lines do not loop. Essentially, I want to do something similar to that of taking multiple GPS routes which are individually subject to noise and producing a single smoothed average of them. Are there any existing algorithms which can do this? - By polyline I presume you mean a piecewise linear curve? Or a more general marked curve? And you want a path through what, precisely? – David Roberts May 9 2012 at 8:46 Yes, a piecewise linear curve although not monotonically increasing on either axis. I'm aware that an "average" line is not well defined but I'm looking for a single path which is representative of the input lines. At present I have a scheme in place which finds the start and end points for the "average" line and then iteratively bisects the line and fits the mid point to the input lines. This works well for situations when the lines are straight or 'L' shaped but doesn't work for 'S' type shapes or anything more complex. – Chris May 9 2012 at 9:10

## 1 Answer

One of the most attractive distance measures between two curves is the Fréchet distance, which is the smallest leash length between a dog on one curve and its owner on the other. Algorithms for computing it have been studied since the mid-90's, perhaps starting with this paper: H. Alt and M. Godau. Computing the Fréchet distance between two polygonal curves. Intl. J. Computational Geometry and Applications, 5:75-91, 1995. [Image from Wouter Meulemans' web page.] Once you have committed to this distance measure, it is natural to define a median curve as that which minimizes the maximum Fréchet distance between it and the curves in your collection. And indeed this has just been explored in a recent M.S. thesis: Benjamin Raichel and Sariel Har-Peled. "The Frechet Distance Revisited and Extended." 2012. (conference paper link). The exact median curve of $k$ $n$-vertex polygonal chains can be computed in $O(n^k)$ time. But under a natural restriction that the curves are "$c$-packed", the exponential time complexity is reduced to $O(n \log n)$ for a $(1+\epsilon)$-approximation. All of this is detailed in Raichel's thesis. I doubt there are existing implementations (because this is so new), but examining this literature should at the least provide you with one natural model of an "average" curve. -
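In practice the continuous Fréchet distance discussed in the answer is often approximated by its discrete variant, which only compares the polyline vertices. Below is a minimal sketch of that discrete version following the standard Eiter–Mannila dynamic-programming recurrence; the two sample polylines are made-up stand-ins for noisy GPS traces, not data from the question. A median curve would then be any candidate polyline minimizing the maximum of this distance over the collection.

```python
# Discrete Fréchet distance between two polylines (Eiter & Mannila style DP).
import math

def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    ca = [[0.0] * m for _ in range(n)]          # ca[i][j] = coupling distance for prefixes
    ca[0][0] = d(P[0], Q[0])
    for i in range(1, n):
        ca[i][0] = max(ca[i - 1][0], d(P[i], Q[0]))
    for j in range(1, m):
        ca[0][j] = max(ca[0][j - 1], d(P[0], Q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]),
                           d(P[i], Q[j]))
    return ca[n - 1][m - 1]

P = [(0, 0.0), (1, 0.2), (2, -0.1), (3, 0.0)]   # hypothetical noisy traces
Q = [(0, 0.3), (1, 0.1), (2, 0.4), (3, 0.2)]
print(discrete_frechet(P, Q))
```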
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9465939998626709, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/7579/representation-of-quantum-transformations-as-matrices
# Representation of quantum transformations as matrices I was reading Quantum Computation explained to my mother, which makes the following claim Postulate 2 A closed physical system in state V will evolve into a new state W , after a certain period of time, according to W = UV where U is a n × n unit matrix of complex numbers. Here, `V` is a column matrix with `n` rows. Can anybody justify this assumption? - ## 2 Answers Dear Casebash, the statement above is a "postulate of quantum mechanics" (note that it is even called in this way) - a fundamental "axiom" or "assumption" of any theory that wants to be called "a quantum mechanical theory". There are about 5-10 general postulates of quantum mechanics and the logic of quantum mechanics only starts to make sense to someone once he understands all of them. So it is counterproductive to take individual postulates out of the context. This particular postulate, referred to as the "linearity of the wave functions", guarantees that the wave functions can only be constrained by linear equations. The set of allowed values of the wave functions is known as the "Hilbert space" and the postulate above guarantees that the set (the Hilbert space) must be a linear vector space and all physically meaningful operations, including the values of observables and evolution, have to be given by linear operators on this Hilbert space. For infinitesimal evolution in time, the evolution of a wave function or, more generally, a "state vector" is dictated by Schrödinger's equation, $$i\hbar \frac{d}{dt} |\psi\rangle = H | \psi\rangle.$$ If the linear equation above is right, one can also mathematically prove that $$|\psi(t_1)\rangle = U | \psi(t_0)\rangle.$$ In fact, we may exactly calculate what $U$ is in terms of the Hamiltonian $H$: it is $$U = \exp(H(t_1-t_0) / i \hbar)$$ where we have to exponentiate an operator - which is possible. More generally, if the norm $\langle \psi| \psi\rangle$ is conserved, and it should be because it may be interpreted as the total probability (100 percent) that the physical system is found in any of the mutually excluding states, it must be true that the operator $H$ has to be Hermitian and, equivalently, $U$ has to be unitary or antiunitary. (Only the unitary option is possible if $U$ is obtained from $H$ by a continuous evolution.) There doesn't exist a single experiment in which the linearity of quantum mechanics would be violated. In fact, by pure thought, one may see that such a deviation from linearity would require one to readjust the equations for $|\psi\rangle$ or their interpretations so that the total probability is returned back to 100 percent. Such readjustments would inevitably violate locality. Moreover, they would make the "collapse" of a wave function (and its precise moment) observable, at least in principle. These are de facto physical inconsistencies because they contradict the special theory of relativity. At any rate, as of 2011, the postulates of quantum mechanics including linearity of the Hilbert space (of allowed wave functions) seem to be 100 percent valid and there doesn't even exist a logically plausible proposed way to "deform" them that wouldn't lead to other problems that are viewed as physical inconsistencies. - If we assume time evolution preserves the Hilbert space norm, then Wigner had shown it can only be a linear unitary transformation, or an antilinear antiunitary transformation. 
If time evolution is continuous in time, it can't possibly be antiunitary as there's no continuous deformation of the identity operator to an antiunitary transformation. - 3 I don't know anything about Hilbert spaces. I guess that means that I have more reading to do – Casebash Mar 26 '11 at 6:48
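To make the formula $U = \exp(H(t_1-t_0)/i\hbar)$ from the first answer concrete, here is a small numerical sketch using a made-up $2\times 2$ Hermitian Hamiltonian (the matrix, time step and state are arbitrary illustrative choices, with $\hbar$ set to 1): for Hermitian $H$ the resulting $U$ is unitary and therefore preserves the norm $\langle\psi|\psi\rangle$, exactly as the answers require.

```python
# Unitary time evolution from a Hermitian Hamiltonian: U = exp(H*dt/(i*hbar)).
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                      # Hermitian by construction
dt = 0.7
U = expm(-1j * H * dt / hbar)                    # same as exp(H*dt/(i*hbar))

print(np.allclose(U @ U.conj().T, np.eye(2)))    # True: U is unitary
psi0 = np.array([1.0, 1.0j]) / np.sqrt(2)        # normalized state vector
psi1 = U @ psi0
print(np.vdot(psi0, psi0).real, np.vdot(psi1, psi1).real)  # both 1.0: norm preserved
```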
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9542981386184692, "perplexity_flag": "head"}
http://torus.math.uiuc.edu/cal/math/cal?year=2012&month=03&day=06&interval=day
Seminar Calendar for events the day of Tuesday, March 6, 2012. . events for the events containing Questions regarding events or the calendar should be directed to Tori Corkery. ``` February 2012 March 2012 April 2012 Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa 1 2 3 4 1 2 3 1 2 3 4 5 6 7 5 6 7 8 9 10 11 4 5 6 7 8 9 10 8 9 10 11 12 13 14 12 13 14 15 16 17 18 11 12 13 14 15 16 17 15 16 17 18 19 20 21 19 20 21 22 23 24 25 18 19 20 21 22 23 24 22 23 24 25 26 27 28 26 27 28 29 25 26 27 28 29 30 31 29 30 ``` Tuesday, March 6, 2012 Ergodic Theory 11:00 am   in 347 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by jathreya. Ilya Gekhtman (University of Chicago)Dynamics of Convex Cocompact Subgroups of Mapping Class GroupsAbstract: Convex cocompact subgroups of mapping class groups are subgroups of the mapping class group whose orbits in Teichmueller space are quasi-convex. We develop an analogue of Patterson-Sullivan theory for the action of subgroups G of Mod(S) on Teichmuller space and its boundary the space of projective measured foliations and use it to compute multiplicative asymptotics for the number of orbit points of G in a ball of radius R in Teichmueller space and the number of pseudo-Anosovs in G with dilatation at most R. Topology Seminar 11:00 am   in 243 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by franklan. Ayelet Lindenstrauss (Indiana University)K-theory of formal power seriesAbstract: (Joint with Randy McCarthy.) We study the algebraic K-theory of parametrized endomorphisms of a unital ring R with coefficients in a simplicial R-bimodule M, and compare it with the algebraic K-theory of the ring of formal power series in M over R. Waldhausen defined an equivalence from the suspension of the reduced Nil K-theory of R with coefficients in M to the reduced algebraic K-theory of the tensor algebra TR(M). Extending Waldhausen's map from nilpotent endomorphisms to all endomorphisms, our map has to land in the ring of formal power series rather than in the tensor algebra, and is no longer in general an equivalence (it is an equivalence when the bimodule M is connected). Nevertheless, the map shows a close connection between its source and its target: it induces an equivalence on the Goodwillie Taylor towers of the two (as functors of M, with R fixed), and allows us to give a formula for the suspension of the invariant W(R;M) (which can be thought of as Witt vectors with coefficients in M, and is what the Goodwillie Taylor tower of the source functor converges to) as the inverse limit, as n goes to infinity, of the reduced algebraic K-theory of TR(M)/ (Mn). Harmonic analysis and differential equations 1:00 pm   in 347 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by vzh. Taras Lakoba, (University of Vermont, Math)Unusual properties of numerical instability of the split-step method applied to NLS solitonAbstract: The split-step method (SSM) is widely used for numerical solution of nonlinear evolution equations. Its idea and implementation are simple. Namely, it is common that the evolution of variable $u$ is governed by: $u_t = A(u,t) + B(u,t)$ where both individual'' evolutions $u_t = A(u,t) \qquad \mbox{and} \qquad u_t = B(u,t)$ can be solved exactly (or at least easily''). Then the numerical approximation of the full solution is sought in steps that alternatingly solve each equation. 
The SSM has long been used to simulate the NLS: $$i \, u_t - \beta u_{xx} + \gamma u|u|^2 = 0$$ ( so here $A=-\beta u_{xx}$ and $B=\gamma u|u|^2$). However, until recently, a possible development of numerical instability of the SSM has been studied only in one simplest case, which does not include the soliton or multi-soliton solutions of the NLS. In this talk I will present recent results concerning the development of the numerical instability of the SSM when it is used to simulated a near-soliton solution of NLS. Properties of this instability are stunningly different from instability properties of most other numerical schemes. I will not assume prior familiarity of the audience with instabilities of numerical methods and therefore will first review a couple of basic examples of such instabilities. This will set a benchmark for the subsequent exposition of the instability properties of the SSM. I will show how those properties, and --- more importantly --- their analysis, are different from the instability properties and analysis for most other numerical schemes. Logic Seminar 1:00 pm   in 345 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by ssolecki. Lou van den Dries (Department of Mathematics, University of Illinois at Urbana-Champaign)The structure of approximate groups according to Breuillard, Green, TaoAbstract: Roughly speaking, an approximate group is a finite symmetric subset A of a group such that AA can be covered by a small number of left-translates of A. Last year the authors mentioned in the title established a conjecture of H. Helfgott and E. Lindenstrauss to the effect that approximate groups are finite-by-nilpotent''. This may be viewed as a sweeping generalisation of both the Freiman-Ruzsa theorem on sets of small doubling in the additive group of integers, and of Gromov's characterization of groups of polynomial growth. Among the applications of the main result are a finitary refinement of Gromov's theorem and a generalized Margulis lemma conjectured by Gromov. Prior work by Hrushovski on approximate groups is fundamental in the approach taken by the authors. They were able to reduce the role of logic to elementary arguments with ultra products. The point is that an ultraproduct of approximate groups can be modeled in a useful way by a neighborhood of the identity in a Lie group. This allows arguments by induction on the dimension of the Lie group. I will give two talks: the one on Tuesday will describe the main results, and the sequel on Friday will try to give a rough idea of the proofs. Probability Seminar 2:00 pm   in Altgeld Hall 347,  Tuesday, March 6, 2012 Del Edit Copy Submitted by kkirkpat. Jonathon Peterson   [email] (Purdue)Large deviations and slowdown asymptotics for excited random walksAbstract: Excited random walks (also called cookie random walks) are self-interacting random walks where the transition probabilities depend on the number of previous visits to the current location. Although the models are quite different, many of the known results for one-dimensional excited random walks have turned out to be remarkably similar to the corresponding results for random walks in random environments. For instance, one can have transience with sub-linear speed and limiting distributions that are non-Gaussian. In this talk I will prove a large deviation principle for excited random walks. 
The main tool used will be what is known as the "backwards branching process" associated with the excited random walk, thus reducing the problem to proving a large deviation principle for the empirical mean of a Markov chain (a much simpler task). While we do not obtain an explicit formula for the large deviation rate function, we will be able to give a good qualitative description of the rate function. While many features of the rate function are similar to the corresponding rate function for RWRE, there are some interesting differences that highlight the major difference between RWRE and excited random walks. Graph Theory and Combinatorics 3:00 pm   in 241 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by west. Andrew Treglown (Charles University, Prague)Embedding spanning bipartite graphs of small bandwidthAbstract: A graph $H$ on $n$ vertices has bandwidth at most $b$ if there exists a labelling of the vertices of $H$ by the numbers $1,\ldots,n$ such that $|i-j|\le b$ for every edge $ij$ of $H$. Boettcher, Schacht, and Taraz gave a condition on the minimum degree of a graph $G$ on $n$ vertices to ensure that $G$ contains every $r$-chromatic graph $H$ on $n$ vertices having bounded degree and bandwidth $o(n)$, thereby proving a conjecture of Bollobás and Komlós. We strengthen this result in the case where $H$ is bipartite. Indeed, we give an essentially best-possible condition on the degree sequence of a graph $G$ on $n$ vertices that forces $G$ to contain every bipartite graph $H$ on $n$ vertices having bounded degree and bandwidth $o(n)$. This also implies an Ore-type result. In fact, we prove a much stronger result where the condition on $G$ is relaxed to a certain robust expansion property. (Joint work with Fiachra Knox.) Mathematical Biology 3:00 pm   in 345 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by zrapti. Spencer Hall   [email] (Indiana University, Department of Biology )Five reasons why resources matter for diseaseAbstract: We could produce more powerful theory to predict disease outbreaks if we took an approach rooted in community ecology. I want to argue this point by focusing on resources of hosts and a case study of fungal disease in a planktonic grazer (Daphnia). Using this system and a combination of observations of epidemics in lakes, experiments, and mathematical (differential equation) models, I will show five reasons why resources matter for disease. (1) Key epidemiological traits (think transmission rate, or yield of parasites from infected host) vary plastically with resources - and how hosts acquire and use them. (2) If we embrace this plasticity, we can better predict variation in disease in time and space. I'll illustrate with a case study of potassium as the resource. A model will also reveal some counter-intuitive predictions that stem from interactions of disease with a dynamic resource. (3) Resources can strongly influence how other species (predators, competitors) inhibit or fuel epidemics. For example, I'll show how a predator might spread disease through a trophic cascade. (4) Variation in feeding rate (i.e., resource acquisition) among clonal genotypes of hosts can create key tradeoffs in life history vs. epidemiological traits (e.g., transmission rate vs. fecundity). (5) This tradeoff can then help us understand how hosts might evolve to become more resistant or more susceptible to their parasites during epidemics of different sizes. 
All of these ecological and evolutionary outcomes for disease hinge on explicitly thinking about resources of hosts. As a result, host-resource interactions should play a much more prominent role in rapidly growing theory for disease ecology. Mathematics Colloquium --Trjitzinsky Memorial Lectures 4:00 pm   in 314 Altgeld Hall,  Tuesday, March 6, 2012 Del Edit Copy Submitted by kapovich. Robert Ghrist (University of Pennsylvania)Sheaves and the Global Topology of Data, Lecture IAbstract: This lecture series concerns Applied Mathematics -- the taming and tuning of mathematical structures to the service of problems in the sciences. The Mathematics to be harnessed comes from algebraic topology -- specifically, sheaf theory, the study of local-to-global data. The applications to be surveyed are in the engineering sciences, but are not fundamentally restricted to such. Beginning with a gentle introduction to algebraic topology and its modern applications, the series will focus on sheaves and their recent utility in sensing, coding, optimization, and inference. No prior exposure to sheaves required. Robert Ghrist is the Andrea Mitchell Penn Integrating Knowledge Professor in the Departments of Mathematics and Electrical/Systems Engineering at the University of Pennsylvania. A reception will be held in AH 314 immediately following the lecture.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 1, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.907640278339386, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/7792/why-does-an-equiangular-spiral-become-logarithmic-intuitively?answertab=oldest
Why does an equiangular spiral become logarithmic (intuitively)? One of the most famous 2D-curves are logarithmic spirals (or Spira mirabilis). They can be constructed by using a machinery that ensures a constant angle between the tangent and the radial lines all the time while plotting it. My question I can see it in the picture that the spiral arms are getting bigger each turn and I see the math. What I don't understand intuitively is where the logarithm comes from - or put differently: Why does a geometric progression arise just by holding an angle constant. Could you give me some hints? Thank you, - 1 Answer Much is explained by looking at the polar equation of the spiral: $$r=\exp(\theta\cot\alpha)$$ Here, $\alpha$ is the constant angle any tangent to the curve makes with the radius vector (a line segment joining the origin and the point of tangency). This explains the adjective equiangular (the verification of this property from the defining equation is left as an exercise). As an aside, insects flying towards a point light source like a candle or a light bulb follow the path of an equiangular spiral, since the usual strategy of an insect flying at the daytime to get their bearing is to fly at a constant angle from the sun's rays, and this strategy works against them when encountering man-made light. Now, suppose we have an arithmetic progression of angles $\theta,\theta+\Delta\theta,\theta+2\Delta\theta,\dots$; if we get the corresponding values of the radius vector using the defining equation for the logarithmic spiral (geometrically speaking, this corresponds to a clockwise rotation by $0,\Delta\theta,2\Delta\theta,\dots$ radians), we get $$\exp(\theta\cot\alpha),\exp((\theta+\Delta\theta)\cot\alpha),\exp((\theta+2\Delta\theta)\cot\alpha),\dots$$ which can be re-expressed as $$\exp(\theta\cot\alpha),\exp(\theta\cot\alpha)\cdot\exp(\Delta\theta\cot\alpha),\exp(\theta\cot\alpha)\cdot\exp(\Delta\theta\cot\alpha)^2,\dots$$ which as you can see is a geometric progression; that is to say, the logarithms of the members of this sequence form an arithmetic progression. This is where the logarithmic adjective arises from. - M.: Thank you (also for the book tip!) To be honest with you: Now I get the part with the geometric progression - but I still don't see the connection between the const. angle and the logarithm :-( – vonjd Oct 25 '10 at 16:35 1 @vonjd: I'd chalk it up as a coincidence... the spiral happens to both have the constant angle property and radius vectors in a geometric progression. One way to proceed would be to derive the equation of the equiangular spiral from one property and then use it to derive the other. – J. M. Oct 25 '10 at 22:27
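A quick numerical illustration of the answer's two claims — radii sampled at an arithmetic progression of angles form a geometric progression, and the tangent makes a constant angle $\alpha$ with the radius vector — assuming an arbitrary pitch angle of $75^\circ$ chosen only for the example:

```python
import numpy as np

alpha = np.deg2rad(75)           # hypothetical constant angle of the spiral
b = 1 / np.tan(alpha)            # cot(alpha)
r = lambda t: np.exp(b * t)      # polar equation r = exp(theta * cot(alpha))

thetas = np.arange(0, 5) * 0.4   # arithmetic progression of angles
radii = r(thetas)
print(radii[1:] / radii[:-1])    # constant ratio exp(0.4*b): a geometric progression

def angle_tangent_radius(t):
    # angle between the radius vector and the tangent of (r cos t, r sin t)
    pos = np.array([r(t) * np.cos(t), r(t) * np.sin(t)])
    tan_vec = np.array([b * r(t) * np.cos(t) - r(t) * np.sin(t),
                        b * r(t) * np.sin(t) + r(t) * np.cos(t)])
    cosang = pos @ tan_vec / (np.linalg.norm(pos) * np.linalg.norm(tan_vec))
    return np.degrees(np.arccos(cosang))

print([round(angle_tangent_radius(t), 6) for t in thetas])  # always ~75 degrees
```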
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9251977801322937, "perplexity_flag": "head"}
http://mathoverflow.net/questions/25625?sort=oldest
## Reference request: representation theory of the hyperoctahedral group ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I was wondering if someone knows a good reference for the representation theory of the hyper-octahedral group $G$. The hyper-octahedral group $G$ is defined as the wreath product of $C_2$ (cyclic group order $2$) with $S_n$ (symmetric group on $n$ letters). I understand that the representations of $G$ are in bijection with bi-partitions of $n$. I am looking for a reference which explains the details of why the representations of $G$ are in bijection with bi-partitions of $n$, and constructs the irreducible representations of $G$ (I imagine this is vaguely similar to the construction of Specht modules for $S_n$). So far, the only reference I have is an Appendix of MacDonald's "Symmetric functions and Hall polynomials" (2nd version), which deals with the representation theory of the wreath product of $H$ with $S_n$ (for $H$ being an arbitrary group, not $C_2$). - 3 "MacDonald" here (and often elsewhere) refers to Ian G. Macdonald, whose work has been highly influential in representation theory, combinatorics... His contemporary Ian D. MacDonald (sometimes I.D. Macdonald) had a quirky professional career and did much less influential work in conventional group theory. Anyway, I.G. Macdonald wrote an interesting note Some irreducible representations of Weyl groups (Bull. LMS, 1972) with reference to W. Specht's 1937 paper Darstellungstheorie der Hyperoktaedergruppe. But more recent references are suggested in the answers here. – Jim Humphreys May 23 2010 at 13:20 ## 3 Answers I liked the references of Kerber listed in the wikipedia article. The most relevant chapter is available online, along with both volumes which were quite useful. Kerber's presentation focusses on the idea that H is going to be cyclic and specifically handles H of order 2, but like MacDonald handles general H abstractly. GAP handles the hyper-octahedral group this way too, using generic code for wreath products written more or less solely for the hyper-octahedral group. The "bi" in bi-partitions just refers to the two conjugacy classes of C2, and the general theory replaces "bi" by however many conjugacy classes H has. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The theory extends to the wreath product of a cyclic group $C$ with the symmetric group $S_n$. Then we are looking at a list of $|C|$ partitions with a total of $n$ boxes. This is in MacDonald so I expect you know this. Now we can deform these group algebras analogously to deforming the group algebras of $S_n$ to Hecke algebras. The group algebra of the cyclic groups is deformed to $K[x]/p(x)$ where the degree of $p$ is $|C|$ where originally we had $x^{|C|}=1$. These are known as Ariki-Koike algebras and there is an extensive literature on these. Even if you are only interested in hyperoctahedral groups you may find papers on Ariki-Koike algebras which give you what you want by specialising. For example, I have seen the semi-normal form for Ariki-Koike algebras but not for hyperoctahedral groups. - As Bruce Westbury suggested at another question, the following book might be mentioned here. A. V. 
Zelevinsky, Representations of Finite Classical Groups A Hopf algebra approach; Lecture Notes in Mathematics 869 (1981) It is unfortunately probably hard to get anymore, and its typography is not very attractive, but I find the treatment very elegant. The representation theory of $S_n$ gives rise to an algebraic structure called Positive Self-Adjoint Hopf-algebra, whose simplest nontrivial model is the ring of symmetric functions with a comultiplication. Its combinatorial structure is entirely deduced from the axioms, and it provides models for among other things the representations rings for finite linear groups and for wreath products of the symmetric groups. Thus the hyperoctahedral groups are treated nicely as a simple special case. - Would you be so kind to comment - what is comultiplication on symmetric functions ? and where does it come from ? – Alexander Chervov Nov 26 2011 at 18:11 I like Zelevinsky's approach very much, but it diverges early from the usual treatments of the representation theory of the symmetric group, so I think it would be better to look at this after understanding one of the more straightforward treatments. – Tom Church Nov 26 2011 at 20:32 Symmetric functions are in infinitely many variables, and order doesn't matter. Now rename the variables $x_0,y_0,x_1,y_1,x_2,\ldots$ and decompose as a sum of products of a symmetric function in the $x$'s and one in the $y$'s. For instance $e_k$ gives $\sum_{i+j=k}e_i\otimes e_j$ since the monomials can be arbitrarily spread across the $x$'s and $y$'s, while $p_k$ gives $p_k\otimes1+1\otimes p_k$ since the monomials involve an $x_i$ or an $y_i$, but the two cannot mix in a power sum. – Marc van Leeuwen Nov 26 2011 at 20:33 @Tom: Yes I agree, this might not be the best introduction to the representations of the symmetric groups (especially if these are among the first groups you study representations of); a more concrete approach would be in place. But once you've seen a bit of that and you wonder if there is any higher perspective that explains why the details fall in place as they do, Zelevinsky's approach is a real revelation. – Marc van Leeuwen Nov 26 2011 at 20:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391786456108093, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/23307/list
## Return to Answer

2 minor correction

If a group $G$ acts on an affine variety $X$, and $W$ is a $G$-module, a covariant on $X$ with values in $W$ is a regular function $X\to W$ which is $G$-equivariant. In the special case in which $G=\mathrm{SL}(V)$ is the special linear group on a vector space $V$, $X=\mathrm{Pol}_{d_1}(V)\otimes\cdots\otimes\mathrm{Pol}_{d_s}(V)$ and $W=\mathrm{Pol}_{d}(V)$, with natural actions of $G$, a covariant $X\to W$ is called a concomitant of degree $d$. The canonical example of a concomitant is the resultant $R(f_1,\dots,f_s)$ of $s$ homogeneous polynomial functions $f_1\in\mathrm{Pol}_{d_1}(V), \dots, f_s\in\mathrm{Pol}_{d_s}(V)$ of degrees $d_1,\dots,d_s$, which has degree $0$. A simpler example is the Jacobian of $n$ homogeneous forms in $n$ variables.

1 The original revision is identical except that its first sentence reads "a regular function $X\to V$", the typo corrected in revision 2 above.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8988919854164124, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/265155/confused-about-the-answer-of-ap-bc-2010-3a
# Confused about the answer of AP BC 2010 # 3a

A particle is moving along a curve so that its position at time $t$ is $(x(t),y(t)),$ where $x(t) = t^2-4t+8$ and $y(t)$ is not explicitly given. Both $x$ and $y$ are measured in meters, and $t$ is measured in seconds. It is known that $\frac{\mathrm{d}y}{\mathrm{d}t} = te^{t-3} - 1$. (a) Find the speed of the particle at time $t = 3$ seconds. Answer given: Speed = $\sqrt{\left(x^{\prime}\left(3\right)\right)^2 + \left(y^{\prime}\left(3\right)\right)^2} = 2.828$ meters per second. Why is the answer not $$\frac{\mathrm{d}y}{\mathrm{d}x}\Bigg|_{x=3}$$ - The formatting in your answer messed up a \$. – Amzoti Dec 26 '12 at 1:12 ## 3 Answers Definition: Let the position vector for a particle in the xy-plane be $r(t) = xi + yj = f(t)i + g(t)j,$ where $t$ is the time, and the scalar functions $f$ and $g$ have first and second derivatives. The velocity, speed and acceleration of the particle at time $t$ are as follows: Velocity: $v(t) = r'(t) = \frac{dx}{dt}i + \frac{dy}{dt}j$ Speed: $s(t) = \|v(t)\| = \|r'(t)\| = \sqrt{\mathstrut\left(\frac{dx}{dt}\right)^{2} + \left(\frac{dy}{dt}\right)^{2}}$ Acceleration: $a(t) = v'(t) = r''(t) = \frac{d^{2}x}{dt^{2}}i + \frac{d^{2}y}{dt^{2}}j$ In your problem, you are already given $\frac{dy}{dt}$, so you need to compute $\frac{dx}{dt}$ and use the formula for speed that is given above at time $t = 3$. You will get the answer you cited in the problem. Clear? Regards - +1 Hello!! TGIF!! $\;\;$ :$\tiny\nabla$)$\;\;$ – amWhy May 11 at 0:15 @amWhy: Indeed! Got home about an hour ago and it is good being home! This reminded me how much I miss the University environment as a place of exploration, learning and growth. I feel warped, but just answered a question and another one is in my purview. Hope you are having a brilliant day, as I currently feel warped from 350 miles of driving! Regards – Amzoti May 11 at 0:17 Because speed at time $t=3$ is given by $$\sqrt{[x'(t)]^2+[y'(t)]^2}\Bigg\vert_{t=3}=\sqrt{(2t-4)^2+(te^{t-3}-1)^2}\Bigg\vert_{t=3}=2\sqrt{2}\approx 2.82843.$$ Since speed is inherently a time derivative, why would you take ${dy\over dx}$? This would be used if you were interested, for example, in the slope of the parametric curve $(x(t),y(t))$ in the $x$-$y$ plane. - Just to make sure I understand: $dy/dx$ would give you the velocity at a point, but in this regard, we have the factor of time, so we must use the head-tail method to find the length of the derivative vectors. – yiyi Dec 26 '12 at 1:25 Just remember $dx/dt$ and $dy/dt$ represent the rate of change of the $x$-coordinate and $y$-coordinate with respect to $t$. Then the slope of the curve in the $x$-$y$ plane is $dy\over dx$, but we can get at this via ${dy\over dx}={dy/dt\over dx/dt}$, for $dx/dt\not=0$. Finally, I'm not sure in what sense you mean velocity here. Recall that the speed of a particle along the parametric curve $(x(t),y(t))$ was defined as ${ds\over dt}$ where $s$ was arc length. This is where the proof that speed equals $\sqrt{[x'(t)]^2+[y'(t)]^2}$ came from. – JohnD Dec 26 '12 at 1:41 Yes, it was. I just forgot which "space" I was in: parametric, not the normal $x$-$y$ plane. – yiyi Jan 2 at 12:56 Note: You do not need $y(t)$. $$x' = \dfrac{dx}{dt}\quad \text{and} \quad y' = \dfrac{dy}{dt}.$$ You are given $y'$; you need only compute $x'$ and use the formula for speed: $$\text{speed}\;= \sqrt{(x')^{2} + (y')^{2}}.\tag{1}$$ Then evaluate at time $t = 3$.
$(1)\;$ Recall $\dfrac{dx}{dt}$ and $\dfrac{dy}{dt}$ represent, respectively, the rate of change of the $x$-coordinate and $y$-coordinate with respect to time $t$. The slope of the curve in the $x$-$y$ plane is $\dfrac{dy}{dx}$, and this can be computed as $\dfrac{dy}{dx}=\dfrac{dy/dt}{dx/dt}$, when $dx/dt\ne 0$. But recall that the speed of a particle along the parametric curve $(x(t),y(t))$ is defined by $\dfrac{ds}{dt},\;$ where $\;s= \textrm{arc length}$. This is how we can get that $\text{speed} =\dfrac{ds}{dt} = \sqrt{[x'(t)]^2+[y'(t)]^2} = \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}$. -
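As a quick sanity check of the accepted value, here is a short computation; nothing is assumed beyond the formulas already given above, namely $x'(t)=2t-4$ and $dy/dt = te^{t-3}-1$:

```python
# Evaluate speed = sqrt(x'(3)^2 + y'(3)^2) for the AP problem.
import math

xprime = lambda t: 2 * t - 4                    # derivative of x(t) = t^2 - 4t + 8
yprime = lambda t: t * math.exp(t - 3) - 1      # dy/dt as given in the problem

t = 3
speed = math.hypot(xprime(t), yprime(t))
print(xprime(t), yprime(t), round(speed, 3))    # 2, 2.0, 2.828 (i.e. 2*sqrt(2))
```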
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552783966064453, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/32029/why-must-psi-x-t-go-to-zero-faster-than-frac1-sqrtx
# Why must $\Psi (x,t)$ go to zero faster than $\frac{1}{\sqrt{|x|}}$?

Why must $\Psi (x,t)$ go to zero faster than $\frac{1}{\sqrt{|x|}}$ as $|x|$ goes to $\infty$? According to Griffiths' Introduction to Quantum Mechanics, it must. I don't understand why, and this is in his footnote (while talking about normalizability), so there's no explanation as to why this must be so. - 1 $\Psi(x,t)$ should be such that the integral of its modulus squared over the real line (which represents the probability of finding the particle on the real line) is 1. If the condition which Griffiths mentions is not satisfied the integral would diverge. – user10001 Jul 14 '12 at 11:34 1 Isn't this simply because the integral of $1/x$ diverges, whereas the integral for $1/x^{1+\delta}$ converges? As the probability must be finite, this sets the limit for $\Psi$. – Alexander Jul 14 '12 at 11:34 But the integral of $\frac{1}{\sqrt{|x|}}$ puts an $x^{1/2}$ term in the numerator, and the integral still blows up as $x$ goes to $\infty$. So, I assume he could have been more stringent if he had wanted to be. – Joebevo Jul 14 '12 at 12:12 1 You can have a square integrable wave function $\Psi (x)$ such that $\Psi (x) \sqrt{|x|}$ doesn't converge for $|x| \rightarrow \infty$: Choose $\Psi (x) = 1$ for $n \leq x \leq n+1/n^{2}, n \in \mathbb{N}$ and $0$ otherwise. – jjcale Jul 14 '12 at 15:39 ## 1 Answer Otherwise one could eventually bound the integral by $1/x$, which diverges to infinity as $\ln(x)$, and the function could not be normalized. Of course, there is nothing special about $1/\sqrt{|x|}$; he could equally well have chosen $1/\sqrt{|x\ln(x)|}$. And in case you were wondering, there is no function such that all eventually slower growing functions converge and all faster growing functions diverge. - Thanks. You read my mind with that last sentence. As an aside, I plugged your $1/\sqrt{|x\ln(x)|}$ function into Wolfram Alpha and it says "no result found in terms of standard mathematical functions" for the integral. How can you tell about its convergence properties? – Joebevo Jul 14 '12 at 12:40 I based it on the $n$-th prime being $\sim n\ln(n)$, and their reciprocals diverge. – Holowitz Jul 14 '12 at 12:47 Griffiths' condition is the most general possible. $1/|x\ln|x||$ goes to zero somewhat faster than $1/|x|$ – user10001 Jul 14 '12 at 13:08
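A small symbolic check of the divergence argument in the answer; the exponent $-11/10$ is just an arbitrary example of "slightly faster than $1/x$", and the last line illustrates the counterexample from jjcale's comment, whose total support length is bounded by $\sum 1/n^2$:

```python
# If |psi| ~ 1/sqrt(x), then |psi|^2 ~ 1/x and its tail integral diverges;
# any strictly faster power decay makes the tail finite.
import sympy as sp

x, n = sp.symbols('x n', positive=True)
print(sp.integrate(1 / x, (x, 1, sp.oo)))                     # oo  -> not normalizable
print(sp.integrate(x**sp.Rational(-11, 10), (x, 1, sp.oo)))   # 10  -> finite

# jjcale's counterexample: psi = 1 on [n, n + 1/n^2]; the total length of these
# intervals is sum 1/n^2 = pi^2/6, so psi is square integrable even though
# psi(x)*sqrt(x) does not tend to 0.
print(sp.summation(1 / n**2, (n, 1, sp.oo)))                  # pi**2/6
```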
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454789161682129, "perplexity_flag": "head"}
http://mathoverflow.net/questions/44047/localizing-an-arbitrary-additive-category/44155
## Localizing an arbitrary additive category ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Under which conditions localizing an additive category by some class S of morphisms yields and additive category? It seems easy to define certain addition on morphisms if we fix their representatives as zig-zags (i.e. compositions of 'old' morphisms with inverses of morphisms in S; here I use the fact that 'my' S is closed with respect to direct sums of morphisms), but I am not sure at all that this addition will not depend on the choice on representatives. Is there any reasonable condition that will ensure this? I definitely do not want to restrict myself to abelian or triangulated categories. It seems that in the situations I am interested in, any morphism is a composition of the embedding of a direct summand, an inverse of a morphism from S, and an 'old' morphism (i.e. it is 'almost a fraction'). The Ore conditions are not fulfilled (in general, probably); yet some weakening of them could hold. I would be deeply grateful for any associations here! My examples are: For an additive (pseudo-abelian) category B consider some full triangulated (thick) subcategory D of $K^b(B)$; then my S for B is the set of morphisms in B that yield objects of D (if considered as complexes of length 1). In particular, S is always closed with respect to compositions and direct sums of morphisms. In fact, I am interested in all aspects of this setup! - Presumably Ore conditions are not satisfied in your example? It would help (me, at least) if this is made explicit. – Sheikraisinrollbank Oct 29 2010 at 11:45 I have updated the question. – Mikhail Bondarko Oct 29 2010 at 14:35 ## 3 Answers This is a very elementary problem. To solve it, it is better not to try to understand the localization explicitely, but to work only with the universal property of the localization (there is no need for any calculus of zig-zags of any kind). You should also think of finite sums not as something defined on each family of objects, but as a left adjoint to the diagonal functor $C\to C^n$ for each $n\geq 0$. Let us look at a more general situation first. Let $C$ and $C'$ be categories, and $S$ and $S'$ be a class of maps in $C$ and $C'$ respectively, which contains all the identities. Then, it is an easy exercise to check that the canonical functor $$(S\times S')^{-1}(C\times C')\to S^{-1}C\times {S'}^{-1}C'$$ is an equivalence of categories (or, if you prefer, an isomorphism, depending on whether you prefer to consider the localized category $S^{-1}C$ as the solution of a universal problem in the $2$-category of categories, or in the $1$-category of categories, respectively). Hint: just check that the two categories have the same universal property. Another elementary exercise is that, given any adjunction $$L:C\rightleftarrows D:R$$ if $S$ (resp. $T$) is a class of maps in $C$ (resp. in $D$), such that $L(S)\subset T$ and $R(T)\subset S$, then we get a canonical adjunction $$L:S^{-1}C\rightleftarrows T^{-1}D:R$$ It follows rather immediately from this that, if $C$ admits finite sums (resp. finite products), and if the class $S$ is closed under finite sums (resp. finite products), then the localized category $S^{-1}C$ admits finite sums (resp. finite products), and the canonical functor $\gamma: C\to S^{-1}C$ commutes with them. 
It is now obvious that, if $C$ is an additive category, and if $S$ is a class of maps which contains the identities and which is closed under finite sums (hence also under finite products), then the category $S^{-1}C$ is additive, and the canonical functor $\gamma$ is additive. Indeed, an additive category is nothing but a category with finite sums as well as finite products, such that, the initial and terminal object coincide, such that $X\amalg Y\simeq X\times Y$ for any objects $X$ and $Y$, and such that any object has the structure of an internal group object (which is necessarily unique). As the functor $\gamma$ preserves finite products, it preserves group objects, and, as $\gamma$ is essentially surjective, any object of $S^{-1}C$ has a canonical structure of group object... - Thank you! My only question is: if this is so easy, why nobody had formulated this before?:) – Mikhail Bondarko Oct 29 2010 at 17:53 1 Well, even if I don't know any precise reference, I cannot believe that no one has formulated this before! Did you try the books/papers of Peter Gabriel for instance? – Denis-Charles Cisinski Oct 29 2010 at 20:37 Not yet; I will try, thank you! – Mikhail Bondarko Oct 29 2010 at 23:10 At least, in "Gabriel, M. Zisman, Calculus of Fractions and Homotopy Theory" the additivity of the localization is proved only when $S$ satisfies the Ore condition. – Mikhail Bondarko Oct 29 2010 at 23:58 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. In a particular context, I have given a criterion in section 6 of "On the 3-arrow calculus for homotopy categories" (available at http://www.math.rwth-aachen.de/~Sebastian.Thomas/publications/). In this context, every morphism is represented by a 3-arrow, that is, a formal inverse followed by an morphism in the original category followed by a formal inverse. Could you give more details of your example? - Thank you; that's interesting! I am not sure that this paper will help, but this is quite possible. Actually, I am interested in a rather general family of examples. An important part of them can be constructed as follows: for an additive (pseudo-abelian) category B consider some full triangulated (thick) subcategory $D$ of $K(B)$; then my S for B is the set of morphisms in B that yield objects of D (if considered as complexes of length 1). In particular, S is always closed with respect to compositions and direct sums of morphisms (so there exists a certain addition of 'zig-zags'). – Mikhail Bondarko Oct 29 2010 at 14:27 See the Book "CAtegories" by Hors Shubert (Springer 1972) Prop.19.5 pag. 272. - The Ore conditions is not fulfilled – Mikhail Bondarko Nov 18 2010 at 18:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9332922101020813, "perplexity_flag": "head"}
http://mathoverflow.net/questions/27399/when-is-a-stack-not-geometric
## When is a stack (NOT) geometric? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Following the terminology of $n$-Lab, a geometric stack $\mathcal{X}$ on a site $\mathcal{(C,J)}$ is a stack for which there exists a representable epimorphism $X \to \mathcal{X}$ from an object $X$ of $\mathcal{C}$ (viewed as a representable presheaf). Equivalently, $\mathcal{X}$ is geometric if and only if there exists a (nice enough) groupoid object $\mathcal{G}$ in $\mathcal{C}$ such that $\mathcal{X}$ is (2-iso to) the stackification of the strict presheaf of groupoids $Hom(blank,\mathcal{G})$ (where nice enough essentially means that you can take enough iterated pullbacks in $\mathcal{C}$ to form a $\mathcal{C}$-enriched nerve). My question is, is there a more intrinsic definition of geometric stack? By "more-intrinsic" I mean a definition that does not use the existential quantifier. For example, if our site is topological spaces, we know a presheaf is representable if and only if it sends colimits in $Top$ to limits in $Set$. Since geometric stacks are in some sense a natural generalization of representable presheaves, it would seem natural to expect a similar characterization of geometric stacks (at least in the case when our site is nice enough, like $Top$). I ask this mostly because, although in some circumstances there is a natural atlas or a natural choice of representing groupoid object around to try to prove that something is a geometric stack, proving that a stack is NOT geometric becomes very difficult when the definition involves the EXISTENCE of a nice atlas. If someone only knows the answer for certain sites, this is still interesting to me. - 1 Let $\mathcal{X}$ be any stack that isn't fibered in groupoids. – Harry Gindi Jun 7 2010 at 22:30 3 @Harry, stack means stack of groupoids. – David Carchedi Jun 7 2010 at 22:32 3 Harry, are you kidding? Just about whenever the phrase "algebraic stack" is used, it means stack of groupoids. While stacks are obviously used in other contexts, I'm fairly confident that this (algebraic stacks) is their most common context. – Mike Skirvin Jun 8 2010 at 0:18 3 Under reasonable finiteness hypotheses (quasi-sept'd stacks locally of finite presentation over an excellent noetherian ring), Artin gave sufficient criteria in terms of deformation/obstruction theory, in his paper "Versal deformations..." (with base ring of finite type over field or excellent Dedekind domain; can take it to be any excellent ring). This is needed to give substance to the theory (i.e., to make it something other than semantic nonsense). Olsson more recently proved that Artin's conditions are also necessary, not just sufficient (again, under some mild finiteness hypotheses). – BCnrd Jun 8 2010 at 1:41 4 @Harry, please stop arguing in my comments. If you have an idea for the answer to the question, then feel free to post it, but please stop arguing with people. – David Carchedi Jun 8 2010 at 10:38 show 9 more comments ## 1 Answer One problem -- or at least one characteristic aspect -- of the notion of geometric stack is of course that it makes explcit reference to a fixed chosen site. Different sites may give rise to equivalent toposes and still to different notions of geometric stacks. One approach is to make that extra information an explicit extra piece of data in a controlled way. This is effectively what is achieved by the notion of geometry for a structured topos. 
In terms of this one can then characterize geometric stacks fairly intrinsically. For instance in Structured Spaces it is shown that with a standard choice for "geometry" a Deligne-Mumford stack is precisely a "2-scheme" in a suitable sense. Going beyond that, one could ask which "geometries" in this sense are naturally associated to a given topos, without choosing them by hand, such that the corresponding 2-schemes are the natural notion of geometric stack. I think a big step in that direction is achieved in Bertrand Toen's work Champs affine. As reviewed at rational homotopy theory in an (oo,1)-topos, Toen there shows that for stacks or higher stacks on the algebraic site, one can characterize "affine stacks" intrinsically, as the objects of the reflective sub-(oo,1)-category on objects that are local with respect to morphisms that induce isomorphisms in "rational cohomology", where "rational" is as seen by the ground field. Using that intrinsic notion of "affine stack", Toen then gives in section 4 a definition of geometric oo-stacks. This may or may not be exactly what you are asking for, but I think it does provide some noteworthy indications of the kind of approach that one should think about. - Thanks for the link Urs. I'll have a look. Before I look, one comment: I did of course realize I can't expect to have a characterization that does not involve the site in some shape or form, but the same goes for characterizing representable functors. I was hoping knowing that $\mathcal{X}$ is not just an object of a 2-topos, but that it's actually a weak functor $\mathcal{X}:C^{op} \to Gpd$, would do this trick, just as in case of representable functors. – David Carchedi Jun 8 2010 at 18:38 No, that is simply a pseudofunctor into groupoids (i.e. a category fibered in groupoids). Obviously these are not all representable. – Harry Gindi Jun 9 2010 at 1:08 Harry, you misunderstood me. I meant that you might be able to characterize necessary conditions on such a pseudofunctor (such as it satisfying descent, behaving well with limits etc) to guarantee it is a geometric stack. – David Carchedi Jun 9 2010 at 16:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381123185157776, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/10983-trig-proofing-print.html
# trig proofing

• February 1st 2007, 04:46 AM 24680 trig proofing Hello, I am currently trying to make a difficult trig proof for my maths lesson. If someone could help me make one, that would be very helpful. I have been given a left hand side, ((cosec(x))^2)/(2+cot(x)). I need it to be changed into something as different as possible from the original, with as many steps as possible, without a very long right hand side. Feel free to use all trig rules. If someone could do this for me it would be great. Thanks in advance, James • February 1st 2007, 05:18 AM ThePerfectHacker Tell your class to show that, $\tan \frac{x}{2} = \frac{2\sin x}{1-\cos x}$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9611977338790894, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78228?sort=votes
Are cubic fourfolds containing a quartic scroll Pfaffian?

Let $X\subset \mathbb{P}^5$ be a smooth Pfaffian cubic fourfold hypersurface. It is easy to see that such a hypersurface must contain a quartic scroll surface. I wonder about the converse question: if a cubic fourfold $X$ contains a quartic scroll, is it Pfaffian?

- How do the (possibly naïve) parameter counts compare? – Noam D. Elkies Oct 15 2011 at 21:28
- I believe this question is answered in Beauville-Donagi (although I don't have the article at the moment to verify). – Jason Starr Oct 16 2011 at 1:41
- Thank you for your comments. @Noam: in fact, given a quartic scroll in $P^5$, I haven't computed the dimension of the space of cubics in the ideal of the scroll... is that what you meant? @Jason: in fact the question came up while I was reading Beauville-Donagi. I understand that one implication is easy, but they don't seem to prove the other. Am I wrong? – IMeasy Oct 16 2011 at 14:11
- If you look in Hassett's thesis, he carefully does the parameter counts that Noam is suggesting. It follows that the Pfaffian cubic fourfolds form a dense Zariski open subset of the moduli space of all smooth cubic fourfolds containing a quartic scroll. But Hassett does not seem to discuss whether every smooth cubic fourfold containing a quartic scroll is Pfaffian. – Jason Starr Oct 16 2011 at 14:15

1 Answer

Part (a) of Proposition 9.2 in Beauville's "Determinantal Hypersurfaces" paper (Michigan Mathematical Journal 48, 2000) says that a cubic fourfold is linear Pfaffian precisely when it contains a quintic del Pezzo surface. One path to settling your question is to determine whether every cubic fourfold $X$ containing a quartic scroll $Q$ also contains a 2-plane $P$ for which $Q \cup P$ is a degeneration of a quintic del Pezzo surface in $X$.

- Joe Harris used to have an (unpublished) atlas of cubic fourfolds: which surfaces imply the existence of which other surfaces, etc. When I dig it up, I will let you know if this is discussed in his atlas. – Jason Starr Oct 16 2011 at 18:58
- Thank you for your help! @Jason: if you happen to take a look at that atlas, it would be of great help, thank you. – IMeasy Oct 17 2011 at 9:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9040163159370422, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/142008/how-do-you-solve-the-following-trigonometric-equation
# How do you solve the following trigonometric equation?

$$\tan x - 1 = -\frac{1}{\sqrt{3}} + \frac{1}{\sqrt{3}} \cot x.$$

## 2 Answers

(You solve an equation, not a function.) Perhaps the easiest approach is to let $u=\tan x$ and use the fact that $\cot x=\frac1{\tan x}$ to rewrite the equation as $$u-1=-\frac1{\sqrt3}+\frac1{\sqrt3\,u}=\frac{1-u}{\sqrt3\,u}\;.$$ Now solve for $u$; at that point you'll have $\tan x$, and the rest is plain sailing.

We have $\cot(x) = \frac1{\tan(x)}$. Hence the equation gives $\tan(x) - 1 = - \frac1{\sqrt{3}} + \frac1{\sqrt{3}} \frac1{\tan(x)}$. Multiplying by $\tan(x)$, we get $$\tan^2(x) - \tan(x) = - \frac{\tan(x)}{\sqrt{3}} + \frac1{\sqrt{3}}.$$ Setting $m = \tan(x)$, we get the quadratic $$m^2 + \left( \frac1{\sqrt{3}} - 1\right)m - \frac1{\sqrt{3}} = 0,$$ which gives $$m = \frac{-\left( \frac1{\sqrt{3}} - 1\right) \pm \sqrt{\left(\frac1{\sqrt{3}} - 1\right)^2 + \frac4{\sqrt{3}}}}{2} = \frac{-\left( \frac1{\sqrt{3}} - 1\right) \pm \left(\frac1{\sqrt{3}} + 1\right)}{2} = -\frac1{\sqrt{3}},\ 1.$$ Now $m = 1$ gives $x = n \pi + \frac{\pi}{4}$, while $m = - \frac1{\sqrt{3}}$ gives $x = n \pi - \frac{\pi}{6}$, where $n \in \mathbb{Z}$.

- A hint or incomplete solution would have been preferable, since this is homework. – Brian M. Scott May 7 '12 at 1:13
- @BrianM.Scott I used to give hints, but I have now become indifferent to homework problems. – user17762 May 7 '12 at 1:14
- @Brian: Bluntly, you should probably just accept it. I fought the "pedagogical" style of spoon-feeding for a long time... maybe I should still be in the fight, but it seems we're outnumbered! – The Chaz 2.0 May 7 '12 at 1:25
- @BrianM.Scott, Dear Brian, please consider deleting your last comment. I can understand your not sharing Marvis' new point of view, but I understand his having it... Since there is no hard policy on the matter, it is acceptable :) – Mariano Suárez-Alvarez♦ May 7 '12 at 1:33
- @Marvis: Bluntly, I consider that indefensible. It ill serves the people asking the questions (and their teachers, for that matter). – Brian M. Scott May 7 '12 at 1:42
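As a quick editorial sanity check on the algebra above (not part of the original thread, and assuming SymPy is available), one can confirm both the roots of the quadratic in $m=\tan x$ and that $x=\pi/4$ and $x=-\pi/6$ satisfy the original equation:

```python
# Editorial sanity check of the solution above (assumes SymPy).
import sympy as sp

x, m = sp.symbols('x m')

# The substitution m = tan(x) turns the equation into the quadratic derived above.
roots = sp.solve(sp.Eq(m**2 + (1/sp.sqrt(3) - 1)*m - 1/sp.sqrt(3), 0), m)
print(roots)  # expect 1 and -sqrt(3)/3 (= -1/sqrt(3)), in some order

# Plug the corresponding angles back into the original equation.
residual = sp.tan(x) - 1 - (-1/sp.sqrt(3) + sp.cot(x)/sp.sqrt(3))
for val in (sp.pi/4, -sp.pi/6):
    print(val, sp.simplify(residual.subs(x, val)))  # expect 0 both times
```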
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512874484062195, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/45284/examples-of-sequences-whose-asymptotics-cant-be-described-by-elementary-function/45299
## Examples of sequences whose asymptotics can’t be described by elementary functions

It is somewhat miraculous to me that even very complicated sequences $a_n$ which arise in various areas of mathematics have the property that there exists an elementary function $f(n)$ such that $a_n = \Theta(f(n))$ or, even better, $a_n \sim f(n)$. Examples include

• Stirling's approximation $n! \sim \sqrt{2\pi n} \left( \frac{n}{e} \right)^n$ (and its various implications),
• The asymptotics of the partition function $p_n \sim \frac{1}{4n \sqrt{3}} \exp \left( \pi \sqrt{ \frac{2n}{3} } \right)$,
• The prime number theorem $\pi(n) \sim \frac{n}{\log n}$,
• The asymptotics of the off-diagonal Ramsey numbers $R(3, n) = \Theta \left( \frac{n^2}{\log n} \right)$.

What are examples of sequences $a_n$ which occur "in nature" and which provably don't have this property (either the weak or strong version)? Simpler examples preferred. (I guess I should mention that I am not interested in sequences which don't have this property for computability reasons, e.g. the busy beaver function. I am more interested in, for example, natural examples of sequences with "half-exponential" asymptotic growth.)

- Let us not forget that something like $\pi(x) \sim \mathrm{Li}(x)$ is a much more accurate statement than $\pi(x) \sim x/\log x$ (as the error term is smaller), where $\mathrm{Li}(x) = \int_{2}^{x}{\frac{dt}{\log t}}$. Now $\mathrm{Li}(x)$ isn't an elementary function, but it is asymptotically equivalent to $x/\log x$ (via integration by parts), so it is worth noting that often a complicated sequence may be more naturally asymptotic to a sum or integral involving elementary functions, and then one can in turn show that such a sum or integral is also asymptotic to some elementary function. – Peter Humphries Nov 8 2010 at 10:25
- The inverse Ackermann function arises as such an $f$ in various counting problems in combinatorics. It seems likely to me that it cannot be expressed using elementary functions, though I can't seem to find a reference for this. – Mark Schwarzmann Nov 8 2010 at 10:51
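(Editorial aside, not part of the original question or answers.) A short self-contained Python script, using only the standard library, illustrates how closely the first two elementary expressions above track the exact sequences; the partition numbers are generated with Euler's pentagonal-number recurrence:

```python
# Editorial illustration (standard library only): how well do the elementary
# asymptotic formulas above approximate n! and the partition numbers p(n)?
import math

def partition_numbers(N):
    """Return [p(0), ..., p(N)] via Euler's pentagonal-number recurrence."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2          # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(1000)
for n in (100, 500, 1000):
    # Stirling: compare log(n!) with log(sqrt(2 pi n) * (n/e)^n) to avoid overflow.
    stirling_log = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
    hardy_ramanujan = math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))
    print(n,
          "n!/Stirling:", round(math.exp(math.lgamma(n + 1) - stirling_log), 6),
          "p(n)/HR:", round(p[n] / hardy_ramanujan, 6))
```

Both ratios tend to 1: the Stirling ratio behaves like $1+\frac{1}{12n}$, while the Hardy-Ramanujan ratio approaches 1 much more slowly and is still noticeably below 1 at $n=1000$.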
## 8 Answers

Since this is community wiki, I won't feel that bad about not offering any crisp answers, but more of an abstract point of view which might suggest another avenue of pursuit. To me the basic subject matter goes by the name "Hardy fields": fields of germs of real-valued functions at infinity. One of the basic examples is the field of germs of all functions that can be built up from polynomials, exponentials, and logarithms, and closed under the four arithmetic operations and composition. A wonderful fact is the trichotomy law: such "log-exp functions" are either eventually positive, eventually zero, or eventually negative. This is perhaps along the lines of the monotonicity assumptions Qiaochu mentioned. It guarantees that the germs at infinity of such functions do indeed form a field $K$. Sitting inside the field is a valuation ring consisting of germs of bounded functions, $O$. There is a corresponding valuation on $K$ whose value group is $K^\ast/O^\ast$. The elements of $K^\ast/O^\ast$ may be called "rates of growth". Indeed, two germs of functions $[f]$, $[g]$ are equivalent mod $O^\ast$ iff both $f/g$ and $g/f$ are bounded, which is to say they are asymptotic (up to a constant) -- remember that by trichotomy, these bounded functions do not oscillate but tend to a definite limit.

From this point of view, we are interested in naturally arising Hardy fields whose value groups strictly contain the value group of the log-exp class mentioned above (so that we get "intermediate rates of growth"). The reason I mention all this is that I think there is a pretty big literature on constructions of Hardy fields (which unfortunately I am not very familiar with). One area of research is via model theory and particularly o-minimal structures, where o-minimality guarantees the trichotomy law above. There are experts out there (for example, David Marker, if he's listening) who may be able to give us "natural" examples of o-minimal structures with such intermediate rates of growth at infinity. I'd be interested in this myself.

Edit: And now that I've written this, I have a memory that there are theorems in o-minimal structure theory which tend to rule out such intermediate rates of growth! That in itself would be interesting, and anyhow maybe someone like David Marker would know some interesting examples anyway, whether from o-minimal structure theory or not.

- Thanks, Todd! This is very interesting. – Qiaochu Yuan Nov 9 2010 at 0:13
- "o-minimality guarantees the trichotomy law above": I know almost nothing about o-minimal structures, but I'd be interested to learn. Could you say in a few words what the definition of "o-minimality" is? – André Henriques Jul 18 2011 at 20:42
- @André: it's short for "order-minimal". Let $T$ be a first-order theory that includes a predicate $\lt$ satisfying the axioms for a linear order. Let $R$ be a model of $T$. For example, $T$ might be the theory of ordered fields, and $R$ might be the real numbers. Call a subset of some $R^n$ "definable" if it can be defined by an $n$-ary predicate expressed in the language of $T$. Then $R$ is o-minimal if every definable subset of $R$ is a finite union of points and (possibly half-infinite) intervals. This condition has surprisingly strong consequences for the structure of definable sets! – Todd Trimble Jul 18 2011 at 22:09
- A very readable book (for those who aren't model theorists), which goes into the remarkable consequences of o-minimality, is the book by Lou van den Dries, Tame Topology and O-Minimal Structures. For example, it is shown that if the real numbers $R$ are o-minimal with respect to some theory, such as the theory of ordered exponential fields, then every definable subset of $R^n$ possesses a nice tame (Whitney) stratification into finitely many smooth pieces, of the type one expects from the theory of real algebraic varieties and real semi-algebraic varieties. – Todd Trimble Jul 18 2011 at 22:20

Several arithmetical functions don't have an asymptotic equivalent in terms of an elementary function, but only have one "in the mean". For instance $d(n)$, the number of divisors of $n$, is quite irregular, but satisfies $$\frac1n(d(1)+\cdots+d(n))\sim\log n.$$ Likewise, the sum $\sigma(n)$ of the divisors of $n$ is irregular (though a little less so than $d(n)$) but satisfies $$\frac1n(\sigma(1)+\cdots+\sigma(n))\sim\frac{\pi^2n}{12}.$$ Finally, the average order of the Euler indicator $\phi(n)$ is $\frac{3n}{\pi^2}$.

- Good point. Perhaps I should impose some monotonicity hypotheses... – Qiaochu Yuan Nov 8 2010 at 13:58
- In a similar vein, one can consider things like the number of equivalence classes of positive definite binary quadratic forms of discriminant $d$. Another example would be $\tau(n)$, the Fourier coefficients of the Ramanujan $\Delta$ function; if you prefer positive numbers, then take $|\tau(n)|^2$. Perhaps the best example is simply the characteristic function of the primes. We tend to think of these sequences as "random", so they don't have asymptotics with elementary functions. – Matt Young Nov 8 2010 at 23:00
- What unifies $\tau$ and the examples I gave is that they are multiplicative functions: if $n\wedge m=1$, then $f(nm)=f(n)f(m)$. Such functions are unlikely to have an asymptotics, unless each restriction $f_p$ of $f$ to $\{1,p,p^2,p^3,\ldots\}$ has the same asymptotics. – Denis Serre Nov 9 2010 at 6:39

Davenport–Schinzel sequences are related to the complexity of arrangements of various geometric shapes (e.g. envelopes of line segments). Their asymptotics are described in terms of the inverse Ackermann function.

The Bell numbers have asymptotics related to the Lambert W function. EDIT: I was poking around on MathWorld today and found that the Gram points also have W-function-related asymptotics.

- Is it known whether the asymptotic behavior of the W function can be described using elementary functions? – Qiaochu Yuan Nov 8 2010 at 12:12
- It definitely may be described using elementary functions with any prescribed accuracy. (And in general, I would call inverses of elementary functions also "expressible by elementary functions".) – Fedor Petrov Nov 8 2010 at 12:18
- @Qiaochu Yuan: Yes, it can, as Fedor wrote. But it is not helpful at all in many cases, as the usual series expansion for the W function converges very slowly. – zhoraster Nov 8 2010 at 12:49
- W(x) is asymptotic to log(x/log x), which is elementary. So W is in the same boat as Li(x), already mentioned. – Gerald Edgar Nov 12 2010 at 21:33

Consider the Cantor staircase function $f:\ [0,1]\rightarrow [0,1]$ and the moment function $F(x):=\int_0^1 (f(t))^x dt$. When $x$ tends to $+\infty$, it behaves like $x^{-\sigma}$, $\sigma:=\ln 3/\ln 2$. But the limit of $F(x)\cdot x^{\sigma}$ does not exist: this value oscillates very slowly around a constant $1.9967\dots$ See more in Russian here: http://www.math.spbu.ru/analysis/f-doska/lap_can.pdf

- I like this example, but again I think the objection can be raised that this is not very natural. – Thierry Zell Nov 8 2010 at 13:32
- Well, it is just the Laplace transform asymptotics of the Cantor function (of course, it is just an example, but for similar functions you get similar effects). I would say that it is not much less natural than the Cantor function itself. – Fedor Petrov Nov 8 2010 at 22:15

Some algorithms have a running time which involves $\log^* n$, the number of iterations of $\log$ before the result is at most $1$. This is essentially the inverse of tetration base $e$. For example, the Fredman-Tarjan algorithm for finding a minimal weight spanning tree has run time $E \log^* V$, and the randomized algorithm by Clarkson et al. for triangulating a simple polygon with $n$ vertices has expected running time $n \log^* n$. (In both cases, there are asymptotically faster algorithms by Bernard Chazelle.)

Suppose $F$ is a field of finite characteristic and $u,v,w$ lie in $A=F[x,y]$. Let $a_n$ be the dimension of the vector space $A/(u^n,v^n,w^n)$. Suppose $a_1$ is $>0$ and finite.
Then as $n$ grows, $a_n/n^2$, though bounded above and bounded away from $0$, generally has an oscillating "fractal-like" behavior.

Iterated exponentials $\exp^{[n]}(x)$ grow faster than any elementary function. It is possible to construct many functions which grow even faster.

- Yes, but in what sense are these natural? – Qiaochu Yuan Nov 8 2010 at 12:12
- See sci.tech-archive.net/Archive/sci.math.research/… for a thread Physical Applications of Tetration. In brief, iterated exponentials have no known natural occurrence. – Daniel Geisler Nov 8 2010 at 13:40
- "In addition, it is worth to note that these numbers appear in combinatorial physics, in the problem of the normal ordering of quantum field theoretical operators." arxiv.org/abs/0812.4047v1 But in what sense do you want it to be natural? – Anixx Nov 8 2010 at 22:55
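(Editorial aside, not from the thread.) To make the growth rates in the last two answers concrete, here is a minimal Python sketch of $\log^* n$, the iterated-logarithm count mentioned above, together with the first few iterated exponentials $\exp^{[k]}(1)$; the function name `log_star` is just an illustrative choice:

```python
# Editorial sketch (standard library only): log*(n) and iterated exponentials.
import math

def log_star(n, base=math.e):
    """Number of times log must be applied before the value drops to <= 1."""
    count, x = 0, float(n)
    while x > 1.0:
        x = math.log(x, base)
        count += 1
    return count

for n in (2, 16, 10**6, 10**80):
    print("log*(", n, ") =", log_star(n))
# Even for 10**80 the answer is only 4 (with natural logarithms):
# log* grows more slowly than any fixed number of iterated logarithms.

# The inverse picture: iterated exponentials exp^[k](1) outgrow any
# elementary function.  exp^[4](1) is already about 10**1656520, far
# beyond double precision, so we stop at k = 3.
x = 1.0
for k in range(1, 4):
    x = math.exp(x)
    print("exp^[", k, "](1) =", x)
```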
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9291821122169495, "perplexity_flag": "head"}