url (stringlengths 17-172) | text (stringlengths 44-1.14M) | metadata (stringlengths 820-832)
---|---|---
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177705049
|
### Sampling Moments of Means from Finite Multivariate Populations
D. W. Behnken
Source: Ann. Math. Statist. Volume 32, Number 2 (1961), 406-413.
#### Abstract
A method is described for deriving the sampling moments of means of random vectors obtained by sampling without replacement from a finite $k$-variate population of $n$ vector members. A table of results is presented listing the moments of order less than or equal to six as a function of the population moments. These moments were originally derived, in a less general form, in the course of developing the Simplex-Sum Designs discussed in [1]. Their possible wider applicability to sampling problems, however, motivated the extension of the work to the general formulas given here.
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aoms/1177705049
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8847929239273071, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/201457-root.html
|
# Thread:
1. ## Root
Ok, I just discovered that I am not allowed to use a calculator on my exam.
Now, I have no idea how to calculate roots and sin and cos values manually. And my textbook does not say.
So. If I am left with this:
$\cos^{-1}\left(-4/(\sqrt{11}\,\sqrt{14})\right)$, how can I solve this?
(I understand that this is not a place to provide a complete explanation so a hint to what I can google for or a website link will be enough to get me started)
Any help is appreciated.
I tried to google a little for myself but I did not find a relatively simple explanation.
2. ## Re: Root
Originally Posted by mariusg
Ok, I just discovered that I am not allowed to use a calculator on my exam.
Now, I have no idea how to calculate roots and sin and cos values manually. And my textbook does not say.
You don't! There are ways to find those "manually" but they are very complicated, tedious, and subject to error. Even in the years "BC" (before calculators) we did not do those (except possibly square roots); we looked values up in tables. I recommend you talk to your teacher, or whoever is in charge of the exam, about this.
So. If I am left with this:
$\cos^{-1}\left(-4/(\sqrt{11}\,\sqrt{14})\right)$, how can I solve this?
You don't! Not by hand. There are a few "special cases" you could be expected to know, such as $\sin(\pi/3)=\frac{\sqrt{3}}{2}$ or $\cos(\pi/4)=\frac{\sqrt{2}}{2}$, but it would not be reasonable to expect a person to do that by hand in any restricted time. I recommend, again, that you talk with whoever is in charge about exactly what you will be expected to do on this exam. Perhaps they are planning to give you the values you need. Often, part of the test will be "without calculators" and part "with calculators".
(I understand that this is not a place to provide a complete explanation so a hint to what I can google for or a website link will be enough to get me started)
Any help is appreciated.
I tried to google a little for myself but I did not find a relatively simple explanation.
That's because there is NO "relatively simple explanation"!
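For reference, here is what a quick numerical evaluation of that expression gives. This is a minimal Python sketch (my own, purely illustrative); it is exactly the kind of computation the replies above say you would normally hand to a calculator or a table:

```python
# Numerically evaluate cos^-1(-4 / (sqrt(11) * sqrt(14))).
import math

value = -4 / (math.sqrt(11) * math.sqrt(14))   # about -0.3223
angle = math.acos(value)                        # principal value, in [0, pi]
print(f"arccos({value:.5f}) = {angle:.5f} rad = {math.degrees(angle):.2f} degrees")
```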
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407914876937866, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/91600/list
|
This is an answer to the first question.
Let's indicate with $r(K)$ the ribbon number of a knot, i.e. the minimum number of ribbon singularities needed to realize a ribbon disc spanning $K$. We have $$r(K)\geq g(K)$$ This is shown by Fox here: http://ir.library.osaka-u.ac.jp/metadb/up/LIBOJMK01/ojm10_01_08.pdf
Mizuma has shown that under certain conditions on the Alexander and Jones polynomials you can assume that $r(K)\geq 3$. This is Theorem 1.5 here: http://ir.library.osaka-u.ac.jp/metadb/up/LIBOJMK01/1782ojm.pdf It is a very special situation and I don't think that much more is known in the general case.
Maybe it is worth noting that given a band diagram for a ribbon disc one can add a fake ribbon singularity near each singularity and then eliminate both with a tubing operation. This produces a Seifert surface whose genus equals the number of the original ribbon singularities in the band diagram. The definition of a band diagram and a picture of this trick can be found here: http://etd.adm.unipi.it/theses/available/etd-07062011-061816/unrestricted/Polynomial_invariants_of_ribbon_links_and_symmetric_unions.pdf (p. 27)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8674957156181335, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/93310?sort=votes
|
## Is there a general result that theorems about finite structures proved in ZFC can be proved in ZF?
The title question is too vague so let me be specific.
Much of modern finite semigroup theory uses profinite semigroups and properties of profinite semigroups that depend on the existence of prime ideals in Boolean algebras, which is a choice principle weaker than AC. Often results are proved first via profinite methods and then explicit proofs using finite means are found later.
To narrow my question down, let me consider the following situation. Let V be what is sometimes called a variety of finite semigroups, that is, a class of finite semigroups closed under finite products, subsemigroups and homomorphic images. A pro-V semigroup is an inverse limit of semigroups in V. It is known, using Tychonoff's theorem for products of finite spaces, that every continuous finite quotient of a pro-V semigroup belongs to V. So if you want to show a variety W of finite semigroups is contained in a variety V it suffices to show each element of W is a continuous quotient of a free pro-V semigroup. Sometimes one has good information on the structure of free pro-V semigroups and can exploit this.
For example, Almeida proved that the smallest variety of finite semigroups containing all finite commutative semigroups and all finite groups is the variety of finite semigroups with central idempotents using the approach sketched above. Later Auinger gave a proof using only finite semigroups.
Question. Is there some general result in logic or set theory that would imply that the existence of a proof in ZF+Boolean prime ideal theorem that a variety W of finite semigroups is contained in a variety V of finite semigroups implies the existence of a proof in ZF that W is contained in V?
Many results of this sort in finite semigroup theory were motivated by questions in automata theory and in principle one would like to avoid choice in this context.
Caveat: I know very little set theory or model theory so please take that into account in your answer.
-
## 2 Answers
As long as your varieties have reasonable definitions, the axiom of choice will be eliminable from proofs like these. The point is that any finite semigroup has an isomorphic copy whose underlying set consists of natural numbers, and that copy will be in Gödel's constructible universe $L$, where the axiom of choice holds. So if there were a semigroup in $W$ that isn't in $V$, then the same would be true in $L$.
Now about that proviso "reasonable definitions": You can sneak a lot of set theory into the definition of a variety. Consider the variety that consists of all groups if the continuum hypothesis holds and consists of all commutative semigroups if the continuum hypothesis fails. With such "cheating" you can surely arrange for $W$ to be included in $V$ iff the axiom of choice holds. My argument in the preceding paragraph tacitly assumed that the definitions of the varieties were absolute, in the sense that the constructible universe $L$ and the whole universe agree as to whether any particular finite semigroup is in the variety. By working harder, one can get by with weaker absoluteness requirements, but one can't get rid of them completely.
-
Most varieties I care about are generated by applying nice operations to already understood varieties. I think your unreasonable definitions are not a problem because it is somehow hidden in the meta theory. There is the variety of finite groups and the variety of finite commutative semigroups. The undecidable issue is which one we are talking about if we define V that way. Thanks for your nice answer. – Benjamin Steinberg Apr 6 2012 at 14:29
Would an inverse limit of finite semigroups indexed by a countable set still live in Godel's constructible universe? – Benjamin Steinberg Apr 6 2012 at 14:36
1
@Benjamin: Not as such, unless all reals are constructible. However, since it is $\Pi^0_1$-definable, I guess you can get around it with tools like Shoenfield absoluteness theorem. – Emil Jeřábek Apr 6 2012 at 16:01
Thanks Emil. One of these days I will know logic well enough to recognize where a particular construction lives. Until then let us be thankful for MO:) – Benjamin Steinberg Apr 7 2012 at 2:14
There is a standard method to handle this kind of situation which uses the absoluteness of certain statements over models of set theory.
The basic idea is this. If you're investigating the properties of a certain "well-behaved" set $A$, then instead of working in the full set-theoretic universe $V$, you can work in the smallest universe $L[A]$ that contains $A$ and all the ordinal numbers. By classical results of Gödel and others, this smaller universe $L[A]$ satisfies some very strong forms of choice and many other nice properties. You can then freely use these facts to show that "$A$ has property $X$" holds in $L[A]$. As is this doesn't really mean much, but if "property $X$" is upward absolute, then you can conclude that "$A$ has property $X$" holds in the full set-theoretic universe $V$.
There are several tricks to analyze when a property is absolute or not. Many of them are syntactic in nature, for example Shoenfield's absoluteness theorem states that $\Sigma^1_3$ properties are always upward absolute. Assuming the existence of some large cardinals, a whole lot of statements can be shown to be absolute.
For the situation you describe, I believe everything is absolute for even simpler reasons. The constructible universe $L$ already contains isomorphic copies of all finite objects in the real universe $V$. Since the definition of a variety is finitary, varieties of finite semigroups are essentially the same in $L$ as they are in $V$. Therefore, inclusions of varieties of finite semigroups are absolute between $L$ and $V$. So you can freely use the axiom of choice in $L$ and the conclusions drawn automatically transfer from $L$ to $V$.
-
I looked once at Shoenfield's absoluteness theorem on Wiki but my logical incompetence made it unclear if it helps with my question. Nonetheless your answer does seem to help. – Benjamin Steinberg Apr 6 2012 at 14:25
I'll ask you the same question I asked Andreas. Do inverse limits of finite semigroups indexed by countable sets stay in the constructible universe? – Benjamin Steinberg Apr 6 2012 at 15:08
1
$L$ is a model of ZFC so all the usual constructions work in there as usual. The inverse limits may not be the same in $V$ and in $L$ - $V$ may be able to see more points than $L$ does. However, this doesn't matter since $L$ believes that what it sees is the inverse limit and all facts about inverse limits hold true in $L$. The only thing you can't do is transfer the inverse limit itself from $L$ to $V$, so if your final conclusion doesn't mention the inverse limit itself then you're safe even if you use the inverse limit profusely in getting to that conclusion in $L$. – François G. Dorais♦ Apr 6 2012 at 15:45
I would like to accept both answers but since Andreas was first I have to take his. But your comments were very helpful. – Benjamin Steinberg Apr 6 2012 at 19:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427682757377625, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/87941/if-f-x-to-y-is-a-finite-flat-morphism-of-schemes-g-y-to-z-is-a-proper-mo/87945
|
## If $f: X \to Y$ is a finite flat morphism of schemes, $g: Y \to Z$ is a proper morphism of relative dimension one, $Z$ is affine and $E$ is a vector bundle on $Y$ with $R^1g_*E=0$ then $H^1(X,f^*E)=0$?
Let $f: X \to Y$ and $g: Y \to Z$ be morphisms of schemes* such that $f$ is flat and finite, $g$ is proper, $R^{> 1}g_*E=0$ for all sheaves $E$, and $Z$ is affine.
Let E be a vector bundle on Y such that $R^1 g_* E=0$. Can we say anything about $H^1 (X,f^* E)$? By the projection formula this is the same as $R^1 g_* ( E \otimes f_* \mathcal{O} )$ and as f is flat and finite $f_* \mathcal{O}_X$ is a vector bundle. But I can't seem to be able to say much else.
`*` There are many hypotheses I'd be happy to make: everything is finite type over a field, the field is algebraically closed of characteristic zero, all the schemes involved are integral, Y is regular, g is actually the restriction of a morphism of projective varieties $g': Y' \to Z'$ to an open affine patch $Z$ of $Z'$. The dual of $E$ is globally generated. $X,Y,Z$ have all the same dimension (equal to 3). g is birational. $Rg_* \mathcal{O}_Y = \mathcal{O}_Z$ and $Z$ is Gorenstein.
-
1
Do you assume that all higher direct images under $g$ are $0$? I don't understand your question, then. – Keerthi Madapusi Pera Feb 9 2012 at 0:30
1
Keerthi, I think the point is, he assumes that higher direct images $R^ig_*=0$ for $i>1$, but not $i=1$, e.g., a family of curves. – Sándor Kovács Feb 9 2012 at 0:46
Ah, yes. I was willfully misreading the first sentence, it seems! – Keerthi Madapusi Pera Feb 9 2012 at 1:23
Could you please rewrite question and title to make them consistent? – a-fortiori Feb 9 2012 at 12:28
sorry I fixed the title: is it still confusing you? – Yosemite Sam Feb 9 2012 at 14:10
## 2 Answers
Actually, for any such projective $g$ and any $E$ there exists an $f$ such that this fails. In fact you may even assume that $f$ is a double cover. (I suspect that it also fails without the projective assumption, but this seems convincing enough).
Let $g:Y\to Z$ be a projective morphism, $Y$ smooth, $Z$ quasi-projective and $E$ a coherent sheaf on $Y$. Assume that $R^ig_*(E\otimes N)=0$ for all $N$ line bundles and $i>1$. Then there exists a finite, flat $f:X\to Y$ such that $X$ is smooth and $H^1(X,f^*E)\neq 0$. Notice that this is less than assumed and probably even less is enough for the above claim.
Claim Let $L$ be a $g$-ample line bundle on $Y$. Then `$g_*(E\otimes L^{-m}) = 0$`, but `$Rg_*(E\otimes L^{-m})\neq 0$` for $m\gg 0$.
(Here $Rg_*$ stands for the total push-forward and this statement is equivalent to saying that there exists an $i$ such that `$R^ig_*(E\otimes L^{-m})\neq 0$`).
Proof The first claim is obvious, we can "kill" every section in $E$ by "dividing" with sections of $L$ enough times. The second one is relatively easy using Grothendieck duality and observing that $g_*(F\otimes L^m)\neq 0$ for $m\gg 0$ and any $F$ (and you just have to use an appropriate $F$ that comes from GD). I am sure this can be proved by alternative means. $\square$
So, now take $N=L^m$ such that `$g_*(E\otimes N^{-1}) = 0$` and `$Rg_*(E\otimes N^{-1})\neq 0$`. By the assumption that $R^ig_*(E\otimes N)=0$ for all $i>1$ it follows that `$R^1g_*(E\otimes N^{-1})\neq 0$`. We may assume that $N^2$ is very ample and choose a general section $s\in H^0(Y,N^2)$. Let $\mathscr A=\mathscr O_Y\oplus N^{-1}$ and make it an $\mathscr O_Y$ algebra using $s$ to "wrap back" from $N^{-2}$ to $\mathscr O_Y$ as usual. Finally, let $X=\mathrm{Spec}_Y\mathscr A$ and $f$ the associated morphism. From the construction it is clear that $f$ is flat and finite of degree $2$. Again by the construction `$f_*\mathscr O_X\simeq \mathscr A$` and then by the above, `$R^1g_*(E\otimes f_*\mathscr O_X)=R^1g_*E\oplus R^1g_*(E\otimes N^{-1})\neq 0$`.
-
This doesn't seem to be true without additional hypotheses. Here is a counterexample:
Take $Z = \mathrm{Spec}(\mathbb{C})$ , $X = Y = \mathbb{P}^1_\mathbb{C}$, $f:X\to Y$ the map sending $[x: y]$ to $[x^2:y^2]$ (then $f_* \mathcal{O}_X = \mathcal{O}_Y\oplus\mathcal{O}_Y(-1)$, in particular $f$ is finite flat). Take $E = \mathcal{O}_Y(-1)$. Then the dual of $E$ is globally generated and $H^1(Y, E) = 0$. We have $f^* E = \mathcal{O}_X(-2)$ which has one-dimensional $H^1$.
-
thanks, I've added some extra assumptions (which I forgot when I first asked the question) – Yosemite Sam Feb 9 2012 at 8:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 80, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553120136260986, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2008/05/26/matrices-iii/?like=1&source=post_flair&_wpnonce=5b12459335
|
The Unapologetic Mathematician
Matrices III
Given two finite-dimensional vector spaces $U$ and $V$, with bases $\left\{e_i\right\}$ and $\left\{f_j\right\}$ respectively, we know how to build a tensor product: use the basis $\left\{e_i\otimes f_j\right\}$.
But an important thing about the tensor product is that it’s a functor. That is, if we have linear transformations $S:U\rightarrow U'$ and $T:V\rightarrow V'$, then we get a linear transformation $S\otimes T:U\otimes V\rightarrow U'\otimes V'$. So what does this operation look like in terms of matrices?
First we have to remember exactly how we get the tensor product $S\otimes T$. Clearly we can consider the function $S\times T:U\times V\rightarrow U'\times V'$. Then we can compose with the bilinear function $U'\times V'\rightarrow U'\otimes V'$ to get a bilinear function from $U\times V$ to $U'\otimes V'$. By the universal property, this must factor uniquely through a linear function $U\otimes V\rightarrow U'\otimes V'$. It is this map we call $S\otimes T$.
We have to pick bases $\left\{e_k'\right\}$ of $U'$ and $\left\{f_l'\right\}$ of $V'$. This gives us matrix coefficients $s_i^k$ for $S$ and $t_j^l$ for $T$. To calculate the matrix for $S\otimes T$ we have to evaluate it on the basis elements $e_i\otimes f_j$ of $U\otimes V$. By definition we find:
$\left[S\otimes T\right](e_i\otimes f_j)=S(e_i)\otimes T(f_j)=\left(s_i^ke_k'\right)\otimes\left(t_j^lf_l'\right)=s_i^kt_j^le_k'\otimes f_l'$
that is, the matrix coefficient between the index pair $(i,j)$ and the index pair $(k,l)$ is $s_i^kt_j^l$.
It’s not often taught anymore, but there is a name for this operation: the Kronecker product. If we write the matrices (as opposed to just their coefficients) $\left(s_i^k\right)$ and $\left(t_j^l\right)$, then we write the Kronecker product $\left(s_i^k\right)\boxtimes\left(t_j^l\right)=\left(s_i^kt_j^l\right)$.
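As a quick numerical check of this correspondence, here is a small NumPy sketch (my own example; the particular matrices are arbitrary) verifying that the Kronecker-product matrix acts on $u\otimes v$ the way $S\otimes T$ should, sending it to $S(u)\otimes T(v)$:

```python
# The tensor product of linear maps corresponds to the Kronecker product of
# their matrices: (S (x) T)(u (x) v) = S(u) (x) T(v).  Illustrative sketch only.
import numpy as np

S = np.array([[1.0, 2.0],
              [3.0, 4.0]])           # a map U -> U'
T = np.array([[0.0, 1.0, -1.0],
              [2.0, 0.0,  5.0]])     # a map V -> V'

u = np.array([1.0, -2.0])            # a vector in U
v = np.array([3.0,  0.0, 1.0])       # a vector in V

lhs = np.kron(S, T) @ np.kron(u, v)  # (S (x) T) applied to u (x) v
rhs = np.kron(S @ u, T @ v)          # S(u) (x) T(v)
assert np.allclose(lhs, rhs)
print(np.kron(S, T))                 # the Kronecker-product matrix itself
# Note: np.kron fixes one particular ordering of the index pairs (i, j).
```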
Posted by John Armstrong | Algebra, Linear Algebra
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9233265519142151, "perplexity_flag": "head"}
|
http://leepike.wordpress.com/2009/08/10/finding-boole/?like=1&_wpnonce=07995c002f
|
Finding Boole
Posted: August 10, 2009 | Author: Lee Pike | Filed under: Hardware, Verification | Tags: model checking, SAL |
Here’s a simple challenge-problem for model-checking Boolean functions: Suppose you want to compute some Boolean function $spec :: B^k \rightarrow B$, where $B^k$ represents 0 or more Boolean arguments.
Let $f_0$, $f_1$, $\ldots$, $f_j$ range over 2-ary Boolean functions, (of type $(Bool, Bool) \rightarrow Bool$), and suppose that $f$ is a fixed composition of $f_0$, $f_1$, $\ldots$, $f_j$. (By the way, I’m going to talk about functions, but you can think of these as combinatorial circuits, if you prefer.)
Our question is, “Do there exist instantiations of $f_0$, $f_1$, $\ldots$, $f_j$ such that for all inputs, $f = spec$?”
What is interesting to me is that our question is quantified and of the form, “exists a forall b …”, and it is “higher-order” insofar as we want to find whether there exist satisfying functions. That said, the property is easy to encode as a model-checking problem. Here, I’ll encode it into SRI’s Symbolic Analysis Laboratory (SAL) using its BDD engine. (The SAL file in its entirety is here.)
To code the problem in SAL, we’ll first define for convenience a shorthand for the built-in type, BOOLEAN:
`B: TYPE = BOOLEAN;`
And we’ll define an enumerated data type representing the 16 possible 2-ary Boolean functions:
```B2ARY: TYPE = { False, Nor, NorNot, NotA, AndNot, NotB, Xor, Nand
, And, Eqv, B, NandNot, A, OrNot, Or, True};```
Now we need an application function that takes an element f from B2ARY and two Boolean arguments, and depending on f, applies the appropriate 2-ary Boolean function:
```app(f: B2ARY, a: B, b: B): B =
IF f = False THEN FALSE
ELSIF f = Nor THEN NOT (a OR b)
ELSIF f = NorNot THEN NOT a AND b
ELSIF f = NotA THEN NOT a
ELSIF f = AndNot THEN a AND NOT b
ELSIF f = NotB THEN NOT b
ELSIF f = Xor THEN a XOR b
ELSIF f = Nand THEN NOT (a AND b)
ELSIF f = And THEN a AND b
ELSIF f = Eqv THEN NOT (a XOR b)
ELSIF f = B THEN b
ELSIF f = NandNot THEN NOT a OR b
ELSIF f = A THEN a
ELSIF f = OrNot THEN a OR NOT b
ELSIF f = Or THEN a OR b
ELSE TRUE
ENDIF;```
Let’s give a concrete definition to f and say that it is the composition of five 2-ary Boolean functions, f0 through f4. In the language of SAL:
```f(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B):
[[B2ARY, B2ARY, B2ARY, B2ARY, B2ARY] -> B] =
LAMBDA (f0: B2ARY, f1: B2ARY, f2: B2ARY, f3: B2ARY, f4: B2ARY):
app(f0, app(f1, app(f2, b0,
app(f3, app(f4, b1, b2),
b3)),
b4),
b5);```
Now let’s define the spec function that f should implement:
```spec(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
(b0 AND b1) OR (b2 AND b3) OR (b4 AND b5);```
Now, we’ll define a module m; modules are SAL’s building blocks for defining state machines. However, in our case, we won’t define an actual state machine since we’re only modeling function composition (or combinatorial circuits). The module has variables corresponding to the function inputs, function identifiers, and a Boolean stating whether f is equivalent to its specification (we’ll label the classes of variables INPUT, LOCAL, and OUTPUT, to distinguish them, but for our purposes, the distinction doesn’t matter).
```m: MODULE =
BEGIN
INPUT b0, b1, b2, b3, b4, b5 : B
LOCAL f0, f1, f2, f3, f4 : B2ARY
OUTPUT equiv : B
DEFINITION
equiv = FORALL (b0: B, b1: B, b2: B, b3: B, b4: B, b5: B):
spec(b0, b1, b2, b3, b4, b5)
= f(b0, b1, b2, b3, b4, b5)(f0, f1, f2, f3, f4);
END;```
Notice we’ve universally quantified the free variables in spec and the definition of f.
Finally, all we have to do is state the following theorem:
`instance : THEOREM m |- NOT equiv;`
This asks whether equiv is false in module m. Issuing
`$ sal-smc FindingBoole.sal instance`
asks SAL’s BDD-based model-checker to solve theorem instance. In a couple of seconds, SAL says the theorem is proved. So spec can’t be implemented by f, for any instantiation of f0 through f4! OK, what about
```spec(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
TRUE;```
Issuing
`$ sal-smc FindingBoole.sal instance`
we get a counterexample this time:
```f0 = True
f1 = NandNot
f2 = NorNot
f3 = And
f4 = Xor```
which is an assignment to the function symbols. Obviously, to compute the constant TRUE, only the outermost function, f0, matters, and as we see, it is defined to be TRUE.
By the way, the purpose of defining the enumerated type B2ARY should be clear now—if we hadn’t, SAL would just return a mess in which the value of each function f0 through f4 is enumerated:
```f0(false, false) = true
f0(true, false) = true
f0(false, true) = true
f0(true, true) = true
f1(false, false) = true
f1(true, false) = true
f1(false, true) = false
f1(true, true) = true
f2(false, false) = false
f2(true, false) = true
f2(false, true) = false
f2(true, true) = false
f3(false, false) = false
f3(true, false) = false
f3(false, true) = false
f3(true, true) = true
f4(false, false) = false
f4(true, false) = true
f4(false, true) = true
f4(true, true) = false```
OK, let’s conclude with one more spec:
```spec(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
(NOT (b0 AND ((b1 OR b2) XOR b3)) AND b4) XOR b5;```
This is implementable by f, and SAL returns
```f0 = Eqv
f1 = OrNot
f2 = And
f3 = Eqv
f4 = Nor```
Although these assignments compute the same function, they differ from those in our specification. Just to double-check, we can ask SAL if they’re equivalent:
```spec1(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
((b0 AND ((NOT (b1 OR b2)) <=> b3)) OR NOT b4) <=> b5;```
specifies the assignments returned, and
`eq: THEOREM m |- spec(b0, b1, b2, b3, b4, b5) = spec1(b0, b1, b2, b3, b4, b5);`
asks if the two specifications are equivalent. They are.
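For comparison, the exists/forall search is small enough to brute-force outside a model checker. Below is a rough, unoptimized Python sketch of the first query (my own re-encoding; the function names and helpers are not part of the SAL model). It simply enumerates all $16^5$ assignments of 2-ary Boolean functions over the same composition shape:

```python
# Brute-force: do f0..f4 exist so that f equals spec on every input?
# Composition shape, as in the SAL model:
#   f0(f1(f2(b0, f3(f4(b1, b2), b3)), b4), b5)
from itertools import product

def make_fn(tt):
    # tt is a 4-bit truth table; bit (2*a + b) is the value on inputs (a, b).
    return lambda a, b: (tt >> (2 * a + b)) & 1

FUNCS = [make_fn(tt) for tt in range(16)]        # all 16 binary Boolean functions

def spec(b0, b1, b2, b3, b4, b5):
    return int((b0 and b1) or (b2 and b3) or (b4 and b5))

def f(fs, b0, b1, b2, b3, b4, b5):
    f0, f1, f2, f3, f4 = fs
    return f0(f1(f2(b0, f3(f4(b1, b2), b3)), b4), b5)

def find_assignment():
    inputs = list(product((0, 1), repeat=6))
    for fs in product(FUNCS, repeat=5):          # 16**5 = 1,048,576 candidates
        if all(f(fs, *bs) == spec(*bs) for bs in inputs):
            return fs
    return None                                  # no assignment realizes spec

print("realizable" if find_assignment() else "not realizable by this composition")
```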
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.861630916595459, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/46144/what-if-only-control-variables-are-significant-in-a-differences-in-differences-a/46153
|
# What if only control variables are significant in a differences-in-differences analysis?
Regarding the standard DID model: $$y=\alpha+\beta_1\text{treat}+\beta_2\text{post}+\beta_3\text{treat⋅post}+u$$ What exactly does it mean if say $\beta_3$ is not statistically significant, but $\beta_1$ is? Does the significance of $\beta_1$ just mean that my control group and my experimental group differed in the very beginning in a statistically significant manner?
In addition, say that you add in additional control variables. Does it matter if the coefficients on any of those control variables is significant, or do we only care about the control variables in how they affect the value and significance of $\beta_3$?
Essentially: how do you interpret the coefficients on your control variables if they end up being significant in a DID regression?
-
## 1 Answer
Graphically, this means that you cannot reject the null that your two groups can be represented with two distinct parallel lines. In the graph below, all the $\beta$s are negative. If $\beta_1$ is significant, you know that the treatment group had a significantly lower outcome to start with. If $\beta_3$ is not significantly different from zero, you basically can't tell if there's really a discontinuity in the treatment group line. If $\beta_2$ was not significant, the lines might actually be horizontal, so there's no downward trend.
If some of the coefficients on other explanatory variables are significant, that just tells you that they are associated with $Y$. Depending on the details, it might be possible to give that association a causal interpretation.
Graph taken from The Tarzan blog
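To make the mechanics concrete, here is a minimal simulated sketch in Python with statsmodels (the data, coefficient values, and variable names are invented for illustration; they are not from the question or the answer):

```python
# Minimal difference-in-differences sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)     # 1 = treatment group
post = rng.integers(0, 2, n)      # 1 = after the intervention
x = rng.normal(size=n)            # an extra control variable
# Simulated truth: group gap (beta1), common time shift (beta2), DID effect (beta3)
y = (1.0 - 0.5 * treat - 0.3 * post + 0.4 * treat * post + 0.2 * x
     + rng.normal(scale=1.0, size=n))

df = pd.DataFrame({"y": y, "treat": treat, "post": post, "x": x})
fit = smf.ols("y ~ treat * post + x", data=df).fit()
print(fit.summary())
# 'treat'      -> beta1: pre-existing gap between the groups
# 'post'       -> beta2: common change over time
# 'treat:post' -> beta3: the difference-in-differences (treatment) estimate
# 'x'          -> control; its significance only says x is associated with y
```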
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345021843910217, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/31308/splines-harmonic-analysis-singular-integrals/31310
|
## Splines, harmonic analysis, singular integrals.
Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)
One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,
$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)| = 0.$
What I would like to know is why it is necessary or important for test functions to decay rapidly in this manner, i.e. faster than powers of polynomials? I'd appreciate an explanation of the intuition behind this statement and if possible a simple example.
Thanks.
EDIT: the OP is actually interested in a particular 1994 paper by J. T. Kent and K. V. Mardia, "Link between kriging and thin plate splines", in Probability, Statistics and Optimisation (F. P. Kelly, ed.), Wiley, New York, 1994, pp. 325-339.
Both are in Statistics at Leeds,
http://www.amsta.leeds.ac.uk/~sta6kvm/
http://www.maths.leeds.ac.uk/~john/
http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html
Scanned article: http://www.gigasize.com/get.php?d=90wl2lgf49c
FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields the unintegrable expression) because $f(\omega)$ contains a singularity at the origin, i.e. is of the form $\frac{1}{\omega^2}.$
LATER, ALSO FROM THE OP: I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?
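As a quick numerical sanity check of the replacement quoted above (my own illustration with SciPy, not from the Kent-Mardia paper): multiplying a $\frac{1}{\omega^2}$ singularity by the weight $\frac{|\omega|^{2p+2}}{(1+|\omega|^2)^{p+1}}$ leaves an integrable function for every $p\ge 0$.

```python
# The weight |w|^(2p+2) / (1 + w^2)^(p+1) cancels a 1/w^2 singularity at the
# origin: the product simplifies to |w|^(2p) / (1 + w^2)^(p+1).  Illustrative only.
import numpy as np
from scipy.integrate import quad

def weighted_integrand(w, p):
    return np.abs(w) ** (2 * p) / (1.0 + w ** 2) ** (p + 1)

for p in [0, 1, 2]:
    val, err = quad(weighted_integrand, -np.inf, np.inf, args=(p,))
    print(f"p = {p}: integral = {val:.6f} (error estimate {err:.1e})")
# For p = 0 the value is pi; the unweighted integral of 1/w^2 diverges at w = 0.
```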
-
3
The faster a function decays, the more functions you can integrate it against. – Qiaochu Yuan Jul 10 2010 at 17:19
2
I would say that «because with that condition things work as one wants them to» in some contexts. Notice that there are other spaces of test functions which are useful (and in fact, generalized functions are most generally introduced using not the ones you mention but $C^\infty$ functions of compact support) – Mariano Suárez-Alvarez Jul 10 2010 at 17:23
2
Somewhere below in the comments ( mathoverflow.net/questions/31308/… ) the OP gave a link to the paper he is looking at. Somehow I feel like this rather open-ended question here is getting nowhere. I think it may be more productive if the OP points out (in perhaps a new question) the precise statement that is giving him/her trouble. That way he/she is likely to get a more focused and to the point response. Just my 2 pence. – Willie Wong Jul 10 2010 at 22:57
My primary objective is to understand the paper by Kent and Mardia. But in order to begin to do so, I think need to become familiar with the mathematics they use which I "assumed" was from the area of generalized functions and specifically Schwartz spaces, which is why I've been reading up on both subjects. But it would be helpful if someone would kindly confirm my "suspicions" and point me in the right direction. I apologize for any confusion I may have caused. – Olumide Jul 11 2010 at 1:59
1
Let me put it this way: if you intend to make new theories and construct new theorems, then you do need to understand the intricacies of the details of generalized functions, especially what you are allowed to do and what you are not. If you intend to be a user of results, or are just surveying the literature, all that suffices you know is that for tempered distributions (a subset of generalized functions which grows at most polynomially near infinity; sort of opposite of Schwartz functions), many things that are defined for functions can be done for them... – Willie Wong Jul 11 2010 at 11:48
show 7 more comments
## 7 Answers
From the Fourier analysis point of view, the reason is the property of the Fourier transform to interchange derivatives and multiplications, which you can read more about on Wikipedia. The crucial point is that the smoothness of a function is directly related to the decay rate of its (inverse) Fourier transform. So if you want a family of infinitely differentiable functions whose Fourier transform is also infinitely differentiable, you are necessarily led to consider the Schwartz class.
As a by-product of the definition, you also have that the Schwartz class is closed under pointwise multiplication and under convolutions.
-
And of course, also the Schwartz class is closed under the Fourier transform. – Robin Chapman Jul 10 2010 at 18:13
1
These are good ideas, but I think your concluding sentence is a bit too strong. One could easily define the space of test functions to be the space of compactly supported smooth functions. The Fourier transform of such a function, being a very special kind of Schwartz function, is also smooth with extreme decay at infinity. The problem is that while any locally integrable function defines a linear functional over $C_c^{\infty}$, the formula $\hat{T}(\phi)=T(\hat{\phi})$ does not make sense since $\hat{\phi}$ will never be compactly supported when $\phi$ is compactly supported. – Peter Luthy Jul 10 2010 at 21:13
@Peter: see also Robin's comment. $C^\infty_c$ is not closed under Fourier transform, which is implicitly what I meant. I also thought about giving the fact that $\mathcal{S}$ is invariant under $\mathcal{F}$ implies that the Fourier transform is well defined for $\mathcal{S}'$, but I got lazy (especially since the OP is a computer scientist and I don't know how much background he has). So no, I don't think the concluding sentence is too strong. I just left a gap. Which you nicely filled. Thanks :) – Willie Wong Jul 10 2010 at 21:54
If you want to extend differentiation to all continuous functions, then (provided you have some convenient mathematical properties of the extension) you are FORCED to use distributions or roughly equivalent things; you have no choice! Similarly, to extend the Fourier transform you are forced to consider tempered distributions.
Speaking as a pure mathematician: the main purpose of general distributions is to extend differentiation, not integration (since integration makes things nicer; it is differentiation which is the nastier operation). They are fine as long as you aren't using the Fourier transform.
Thus, every locally integrable function can be regarded as a distribution, and therefore differentiated; so, when you're considering differential equations, this might be all you need (you don't have to worry whether the functions are differentiable or not, because distributions always are). You find distribution solutions, then try to prove that they're actually functions.
It's similar to solving polynomial equations by using complex numbers; even if all the roots are real, it's still sometimes easier to solve them with complex numbers, then try to prove they're real (e.g. by showing they're self-conjugate).
However, if you want to do Fourier Transforms then you have to consider tempered distributions (or Schwartz distributions), since general distributions are sometimes too nasty to have Fourier transforms.
Note that even genuine locally integrable functions need not represent tempered distributions, so general distributions are not appropriate for Fourier transforms even when you only want to consider functions.
But Fourier inversion works perfectly for tempered distributions, no further restrictions are needed, unlike, say, $L^1$. If $f \in L^1$ then $\widehat{f}$ is usually not in $L^1$, so you can't do Fourier inversion theory nicely on $L^1$ (you would have to assume that also $\widehat{f} \in L^1$, which is often not true!)
Extension in mathematics is very powerful; when you don't have to worry about restrictions and annoying details, it is easier! For example, complex numbers are easier than real numbers, complex analysis is easier than real analysis, and Lebesgue integration is easier than Riemann integration!! Students never believe this, but it's true if you actually want to use it (rather than do toy problems in books)...
-
Just a quick clue. The example you want is essentially the Gaussian normal distribution from probability, $$\frac{1}{\sqrt {2 \pi}} \; \; e^{- x^2 / 2}$$ and probably the simplest motivation is that the Fourier transform of this function is just itself (well, up to a constant multiple, depends on whose definition you have).
These are a stand-in for functions of compact support. A function and its Fourier transform cannot both have compact support, that is a fact of life.
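For concreteness, with one common convention for the transform (the constants change with the convention, as noted above):
$$\widehat{\varphi}(\xi)=\int_{-\infty}^{\infty}e^{-x^2/2}\,e^{-ix\xi}\,dx=\sqrt{2\pi}\,e^{-\xi^2/2}.$$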
See:
http://en.wikipedia.org/wiki/Schwartz_space
-
Your statement about functions of compact support is the correct problem with defining the Fourier transform of distributions with smooth compactly supported functions as the test functions. – Peter Luthy Jul 10 2010 at 21:14
The Schwartz space $\mathcal{S}$ is just one space one could use to define distributions. Two other common examples are smooth functions $C^{\infty}$ and smooth functions with compact support $C^{\infty} _c$. Then one has the inclusions $$C^{\infty} _{c} \subseteq \mathcal{S} \subseteq C^{\infty}$$ Now distributions are obtained by taking the topological dual of these spaces, so one then has $$(C^{\infty})' \subseteq \mathcal{S}' \subseteq (C^{\infty} _{c})'$$ So the inclusions get reversed, and imposing a less restrictive decay condition would lead you to a smaller space of distributions. In fact, $(C^{\infty})'$ consists of distributions of compact support.
The other issue mentioned in the other posts is that the Fourier transform takes the Schwartz space into itself. It is much less obvious what the Fourier transform does on $C_c^{\infty}$, and the Fourier transform is not even defined on $C^{\infty}$.
-
While I'm not saying anything new, I feel the responses thus far either miss the point or are not very complete. Generalized functions (aka distributions) are defined as linear functionals on some class of functions, typically referred to as test functions. To begin with, one usually wants any locally integrable function to be a generalized function. If $f$ is any locally integrable function then the generalized function corresponding to $f$ is just the linear functional $\int f\phi$ when $\phi$ is a test function. So the obvious first choice for the space of test functions is the space of compactly supported functions since integrating a locally integrable function against a smooth compactly supported function always makes sense. Then one can define the derivative of a generalized function, say T, to be the functional T' which satisfies $T'(\phi)=-T(d\phi/dx)$ whenever $\phi$ is a smooth, compactly supported function. If T can be represented by a smooth function, then this is just the integration by parts formula, which makes sense since $\phi$ is compactly supported. So the function $e^{e^{e^x}}$ is a perfectly reasonable generalized function in this case.
As said a number of times above, one would also like to define the Fourier transform of a generalized function via the formula $\hat{T}(\phi)=T(\hat{\phi})$. The problem with the space of compactly supported functions is that the Fourier transform of a nonzero compactly supported function is never compactly supported. So $T(\hat{\phi})$ might not make sense if T is allowed to be any locally integrable function. In particular, suppose that $\phi$ was some smooth function of compact support whose Fourier transform goes to zero slower than something like $e^{-x^{10}}$. The function $e^{x^{11}}$ is locally integrable and hence a linear functional on the space of compactly supported smooth functions, but it is easy to see that $\int e^{x^{11}}\hat{\phi}$ isn't going to be a finite number.
The Schwartz space is nice because the Fourier transform of a Schwartz function is a Schwartz function. So given any linear functional T on the Schwartz space (such a T is called a tempered distribution), one can define the Fourier transform $\hat{T}$ of $T$ via the formula $\hat{T}(\phi)=T(\hat{\phi})$ when $\phi$ is a Schwartz function. This formula will always make sense when T is a tempered distribution.
-
Thanks everyone for answers given so far. Now for some really ignorant questions from me. I'm really trying to make sense of generalized functions, so here goes:
It's often said that the concept of generalized functions helps to assign integrals to otherwise non-integrable functions (pardon my phrasing). What confuses me is why multiplying an otherwise unintegrable function with an "arbitrary test function" and then integrating the product is valid. This seems to me to be the reason for the Schwartz class of test functions; namely functions that can "cool down" faster than any polynomial can blow up. Or in other words, given an ill-behaved, ready-to-blow-up function, a test function that can "tame it" can always be chosen ...
Is this right?
-
Here is motivation for my question: I'm trying to understand a paper that replaces an integral $\int f(\omega) d\omega$ with $\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} f(\omega) d\omega$ where $p \ge 0$ ($p = -1$ yields the unintegrable expression) because $f(\omega)$ contains a singularity at the origin, i.e. is of the form $\frac{1}{\omega^2}$. The paper in question is: "The link between kriging and thin-plate splines" by J.T. Kent and K.V. Mardia. The paper however makes no mention of Schwartz spaces. – Olumide Jul 10 2010 at 19:07
Google scholar doesn't show a paper by that name. Do you have a link or a journal reference? – Willie Wong Jul 10 2010 at 19:23
Its a paper in the collection: "Probability, statistics, and optimisation: a tribute to Peter Whittle" : Wiley, 1994. I'll send you a link to the scanned copy. – Olumide Jul 10 2010 at 20:14
Here is a link to the scan of the paper: gigasize.com/get.php?d=90wl2lgf49c – Olumide Jul 10 2010 at 20:34
@Will Jagy: I disagree. Splines sometimes have something to do with harmonic analysis. The linked paper has some rudimentary singular integral type stuff, I wouldn't say that it has nothing to do with Fourier transform. @Olumide: if you are trying to understand Schwartz functions, you are taking the roundabout route to understand all the background material before reading the paper. While I laud such work ethic in general, this is not the way to go if you need to understand the paper in a hurry. – Willie Wong Jul 10 2010 at 22:54
show 1 more comment
I believe I now have the answer to the question. The powers of $\omega$ appear from the Taylor expansion of $e^{i\omega\cdot t_j}$ (in section 2.3 of Kent and Mardia's paper).
Thanks.
(Apologies for the seeming bit of self promotion, but I've tagged this as the correct answer.)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347931742668152, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/41606/convex-sets-and-projections
|
## Convex sets and projections
Hello!
I recently started (it's purely self-education) reading a "Mathematical programming and optimizations" book, did a vast part of the exercises related to the theoretical part and at one moment I got the following question about convex sets:
I'm almost sure this statement is correct, but unfortunately I couldn't find anything similar on the internet, and I tried to prove it but couldn't.
Assume we have some set $S \subset \mathbb{R}^n$ such that $S = \overline{S}$ (the set equals its closure).
Now, suppose that any point $y$ which does not belong to the set $S$ has exactly one projection onto $S$:
$\forall y \in \mathbb{R}^n,\ y \notin S:\ \exists!\ p = \pi_S(y)$
This should mean that $S$ is a convex set.
Could someone please point out whether I'm wrong (or right, perhaps with some limitations on this statement) and help me prove it if I'm right.
I also understand that this question may be too "basic" to post here, but I've just started educating myself in this sphere and hope that someday I'll get smart enough to ask really bright questions :)
Thank you.
-
I think something's missing in your question. What is the projection $\pi_s$? – Thierry Zell Oct 9 2010 at 16:02
My guess is that $\pi_s(y)$ would be the nearest point in $S$ to $y$ (and so would be better notated as $\pi_S(y)$.). – Robin Chapman Oct 9 2010 at 16:05
My fault, sorry. – MasterOfOrion Oct 9 2010 at 16:07
## 1 Answer
I presume what you want to prove is the following. Let $S$ be a nonempty closed subset of $\mathbb{R}^n$. If there is a point $y\in\mathbb{R}^n$ and there are at least two points $p$ and $q$ in $S$ at Euclidean distance $d$ from $y$ (where $d$ is the distance of $y$ from $S$), then $S$ is not convex. To see this, note that the midpoint $r$ of the line segment $pq$ is closer to $y$ than $p$ or $q$ is, and so cannot lie in $S$. Hence $S$ isn't convex.
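To spell out the distance comparison in that last step (a standard identity, included here for clarity): writing $r=\frac{p+q}{2}$ and using $\|y-p\|=\|y-q\|=d$, $$\|y-r\|^2=\tfrac12\|y-p\|^2+\tfrac12\|y-q\|^2-\tfrac14\|p-q\|^2=d^2-\tfrac14\|p-q\|^2<d^2,$$ since $p\neq q$.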
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9615708589553833, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/124763-prove-f-0-a.html
|
Thread:
1. Prove that f=0
Let $f$ be a differentiable function satisfying the conditions $f(0)=0$ and $|f'(x)|\le K|f(x)|$ for some constant $K$.
Prove that $f=0$.
2. Let $g(x)=e^{-Kx}f(x),$ thus the only thing we need to prove is that $g(x)=0.$
We have that $g'(x)=e^{-Kx}\big(f'(x)-Kf(x)\big),$ but given $|f'|\le K|f|$ we get that $g(x)g'(x)\le0$ which is the same as $\left( \frac{g^{2}(x)}{2} \right)'\le 0,$ thus $\big(g(x)\big)^2\le0$ then $g(x)=0$ for $x\ge0.$
We proved a stronger result, given $f:[0,\infty)\to\mathbb R$ differentiable.
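Spelling out the inequality step above (as I read it): since $|f'(x)|\le K|f(x)|$ and $f(x)f'(x)\le |f(x)|\,|f'(x)|$, we have $g(x)g'(x)=e^{-2Kx}f(x)\big(f'(x)-Kf(x)\big)\le e^{-2Kx}\big(|f(x)|\,|f'(x)|-Kf(x)^2\big)\le e^{-2Kx}\big(Kf(x)^2-Kf(x)^2\big)=0.$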
3. Yes, this is it. Congratulations on a nice solution. And a little correction:
from
$\left( \frac{g^{2}(x)}{2} \right)'\le 0,$
you can't conclude
$\big(g(x)\big)^2\le0$
but just, by integration, that $g^2(x)=C=\mathrm{const}$,
and then from $g(0)=0$ that $C=0$.
4. how can we simply assume?
Originally Posted by Krizalid
Let $g(x)=e^{-Kx}f(x),$ thus the only thing we need to prove is that $g(x)=0.$
We have that $g'(x)=e^{-Kx}\big(f'(x)-Kf(x)\big),$ but given $|f'|\le K|f|$ we get that $g(x)g'(x)\le0$ which is the same as $\left( \frac{g^{2}(x)}{2} \right)'\le 0,$ thus $\big(g(x)\big)^2\le0$ then $g(x)=0$ for $x\ge0.$
We proved a stronger result, given $f:[0,\infty)\to\mathbb R$ differentiable.
assuming for just g(x)? Does it not mean that the proof is only for g(x)? Please clarify and comment.
5. Originally Posted by Pulock2009
assuming for just g(x)? Does it not mean that the proof is only for g(x)? Please clarify and comment.
$e^{f(x)}\ne0$
6. Originally Posted by ns1954
Yes, this is it. Congratulations for nice solution. And little correction:
from
$\left( \frac{g^{2}(x)}{2} \right)'\le 0,$
you can't conclude
$\big(g(x)\big)^2\le0$
but just, by integration, that $g^2(x)=C=\mathrm{const}$,
and then from $g(0)=0$ that $C=0$.
Actually by definite integration I conclude that, so there's no problem.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471700191497803, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/260596/whats-wrong-with-this-spectrum-of-a-scalar-product-in-l2
|
# What's wrong with this spectrum of a “scalar product” in $l^2$?
Let $T\in B(l^2)$ be s.t. $Tx=(\alpha_1 x_1, \alpha_2 x_2, \cdots )$, where the set of all $\alpha$ is dense in $[0,1]$.
I've shown that the set of all eigenvalues is $A=(\alpha_j)_1^\infty$. The resolvent operator, where it exists, is bounded. Therefore, the continuous spectrum is empty. The range of $T-\lambda I$ is the entire $l^2$. Therefore, the residual spectrum is empty.
But is this true? I never used the fact that $\alpha$ is dense in $[0,1]$, which leads me to believe that at least one of my conclusions above is false.
(If possible, I prefer hints over solutions.)
-
The difference that "A is dense in [0,1]" makes is that I know $||T|| < 1$. – Belen Dec 17 '12 at 9:49
Haven't you learned yet that the spectrum is always closed in $\Bbb{C}$? – Chris Eagle Dec 17 '12 at 9:53
@ChrisEagle Yes, and on a Banach space, compact. – Belen Dec 17 '12 at 9:54
Then you know your answer is incorrect. So you should examine your argument to find the errors. – Chris Eagle Dec 17 '12 at 9:57
Thanks, I think I've found the flaw. (Can't delete my question, though?) – Belen Dec 17 '12 at 10:08
## 1 Answer
But the resolvent operators $(T-\lambda I)^{-1}$ are not bounded for all $\lambda\notin A$!
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462723135948181, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/26720-question-related-field.html
|
# Thread:
1. ## Question related to field
Why can't a field have exactly 6 elements? I tried to go over the axioms but couldn't figure out anything. Help please
2. The number of elements in a finite field must be $p^k$, where p is a prime and k is a positive integer. 6 is not of the form $p^k$.
3. Originally Posted by namelessguy
Why can't a field have exactly 6 elements? I tried to go over the axioms but couldn't figure out anything. Help please
We can make it even easier than Jane Bennet said. It can be easily proven* that the characteristic of a field (in fact an integral domain) must be $0$ or $p$ (a prime).
*) Define $\phi: \mathbb{Z}\to F$ by $\phi(n) = n\cdot 1$.
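To sketch where that hint leads: if the characteristic of a field were composite, say $n=ab$ with $1<a,b<n$, then $(a\cdot 1)(b\cdot 1)=n\cdot 1=0$ with both factors nonzero, contradicting the fact that a field has no zero divisors. So a finite field has prime characteristic $p$, it is then a vector space over $\mathbb{Z}_p$, and hence has $p^k$ elements; since $6=2\cdot 3$ is not a prime power, no field can have exactly 6 elements.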
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609266519546509, "perplexity_flag": "head"}
|
http://www.cfd-online.com/W/index.php?title=Approximation_Schemes_for_convective_term_-_structured_grids_-_definitions&diff=3607&oldid=2949
|
# Approximation Schemes for convective term - structured grids - definitions
### From CFD-Wiki
## Goals of this section
Here we shall develop common definitions and conventions, because:
• different articles use different definitions and notations
• we are searching for a common approach and generalisation
Please note: as this section is still being developed, some non-unified definitions may remain
## Usual definition for the convected variable
In different references the convected variable is denoted $\boldsymbol{f}$ or $\boldsymbol{\phi}$;
we shall use here $\boldsymbol{\phi}$
## definition of the considered face, upon which the approximation is applied
usually (in most articles) the west face $\boldsymbol{w}$ of the control volume is considered, without loss of generality,
for which the flux is directed from left to right, i.e. $\boldsymbol{U_{f} > 0}$
we shall define this face as $\boldsymbol{f}$
and the convected variable at this face of the CV as $\boldsymbol{\phi_{f}}$
the notation $\boldsymbol{i+1/2}$ can also be found in the literature, but we regard it as less suitable because it complicates the expressions
## indicators of the local velocity direction
an approximation scheme can be written in the following form
$\phi_{w}=\sigma^{+}_{w}\phi_{W} + \sigma^{-}_{w}\phi_{P}$ (1)
where $\sigma^{+}_{w}$ and $\sigma^{-}_{w}$ are the indicators of the local velocity direction such that
$\sigma^{+}_{w} = 0.5 \left( 1 + \frac{\left|U_{w} \right|}{U_{w}} \right)$ (2)
$\sigma^{-}_{w} = 1 - \sigma^{+}_{w}$ (3)
and of course
$\left( U_{w} \neq 0 \right)$ (4)
the notations $U^{+}_{w}$ and $U^{-}_{w}$ are also used;
we propose to use
$U^{+}_{f}$ and $U^{-}_{f}$
therefore the unnormalised form of the approximation scheme can be written
$\phi_{f}=U^{+}_{f}\phi_{W} + U^{-}_{f}\phi_{P}$ (5)
or in a more general form
$\phi_{f}=U^{+}_{f}\phi_{C} + U^{-}_{f}\phi_{D}$ (6)
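To make the indicator notation above concrete, here is a minimal sketch (not part of the original wiki text; the function and variable names are my own) of how equations (1)–(4) evaluate the first-order upwind face value:
```python
def upwind_face_value(phi_W, phi_P, U_w):
    """Upwind estimate of phi at the west face, following eq. (1):
    phi_w = sigma_plus * phi_W + sigma_minus * phi_P."""
    sigma_plus = 0.5 * (1.0 + abs(U_w) / U_w)  # eq. (2): 1 if U_w > 0, else 0
    sigma_minus = 1.0 - sigma_plus             # eq. (3)
    return sigma_plus * phi_W + sigma_minus * phi_P  # requires U_w != 0, eq. (4)
```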
## definitions for NV diagram
in the literature one finds definitions such as
$\boldsymbol{ \hat{\phi}_{i+1/2} }$ is a function of $\boldsymbol{ \hat{\phi}_{i}}$
$\boldsymbol{ \hat{\phi_{w}} }$ is a function of $\boldsymbol{ \hat{\phi}_{W}}$
we shall use here
$\boldsymbol{ \hat{\phi_{f}} }$ is a function of $\boldsymbol{ \hat{\phi}_{C}}$
## node stencil
Bear in mind this stencil
Return to Numerical Methods
Return to Approximation Schemes for convective term - structured grids
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8071142435073853, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/87372-infinite-board.html
|
# Thread:
1. ## infinite board
on another thread
Attached Files
• infinite chessboard.doc (163.0 KB, 40 views)
2. Originally Posted by Aquafina
I have attached the image of the chessboard
The squares of an infinite chessboard are numbered as follows: in the first row and first column we put 0, and then in every other square we put the smallest non-negative integer that does not appear anywhere below it in the same column or anywhere to the left of it in the same row.
What number will appear in the 1000th row and 700th column? Can i generalize this?
I think the number in the 1000th row and 700th column will be 348, but I don't have a complete proof of that.
The pattern that you see from the first few rows and columns makes it clear that things happen in powers of 2.
$\begin{array}{cccc|cccc|cccccccc}
6&7&4&5&2&3&0&1&14&15&12&13&10&11&8&9\\
5&4&7&6&1&0&3&2&13&12&15&14&9&8&11&10\\
4&5&6&7&0&1&2&3&12&13&14&15&8&9&10&11\\ \cline{1-8}
3&\multicolumn{1}{c|}{2}&1&0&7&6&5&4&11&10&9&8&15&14&13&12\\
2&\multicolumn{1}{c|}{3}&0&1&6&7&4&5&10&11&8&9&14&15&12&13\\ \cline{1-4}
1&\multicolumn{1}{c|}{0}&3&2&5&4&7&6&9&8&11&10&13&12&15&14\\
0&\multicolumn{1}{c|}{1}&2&3&4&5&6&7&8&9&10&11&12&13&14&15
\end{array}$
(The TeX compiler used for the Forum won't allow me to display more than the bottom 7 rows. You'll have to add a couple more to make the pattern more obviously visible.)
The bottom left 2×2 block consists of 0s on the southwest–northeast diagonal, and 1s in the off-diagonal positions. The bottom left 4×4 block consists of two copies of the bottom left 2×2 block, on the diagonal; and copies of that same block with 2 added to each element, in the off-diagonal positions. The bottom left 8×8 block consists of two copies of the bottom left 4×4 block, on the diagonal; and copies of that same block with 4 added to each element, in the off-diagonal positions. And so on, building up blocks of size $2^n\times2^n$, each such block consisting of two copies of the previous block, on the diagonal; and two copies of the previous block with $2^{n-1}$ added to each element, in the off-diagonal positions.
So, how do we compute the number in the (m,n)-position? The book-keeping becomes much easier if you start numbering rows and columns from 0 rather than 1. So the bottom left corner of the chessboard is row 0 and column 0. The algorithm that seems to work (though I don't have a proof for it) is to form the binary representation of m and n, and then take their bitwise sum (mod 2). That gives the binary representation of the number in the (m,n)-position.
If it's not clear what that means, here's an example. We want to know the number in the 1000th row and 700th column. But if the numbering starts at 0 rather than 1, that means we should take m=999 and n=699. The first step is to write these numbers in base 2, getting m = 1111100111 and n = 1010111011. Now "add" these numbers together, treating 1+1 as 0 (or using what computer scientists call a XOR gate):
Code:
```1111100111
1010111011
0101011100```
The resulting number 101011100 is the base 2 representation of 348.
As I said, I don't have a proof of that, but try it on some smaller numbers and you'll see that it works.
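A quick numerical check of this rule (an illustrative snippet, not from the original thread — the bitwise "sum mod 2" is just the XOR, or nim-sum, of the zero-based indices):
```python
def board_value(row, col):
    # Value at (row, col) on the infinite board, rows and columns numbered from 0.
    return row ^ col  # bitwise sum mod 2 of the binary digits

print(board_value(999, 699))               # 348 (1000th row, 700th column)
print(bin(999), bin(699), bin(999 ^ 699))  # 0b1111100111 0b1010111011 0b101011100
```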
3. thanks
will i require a formal proof to generalise it or can i work with this to be enough?
4. Originally Posted by Aquafina
will i require a formal proof to generalise it or can i work with this to be enough?
You told me (by PM) that this problem comes from a collection of questions that were set for the UK Senior Mathematics Challenge in the 1990s. The format of this competition has changed over the years, but I believe that at that time some of the later questions on the paper were designed to be open-ended investigations where it was not necessarily expected that competitors would obtain a complete, rigorous solution. I may be wrong, and there may be some way of proving this result that escapes me, but I would be surprised if many of the competitors would have been able to provide a formal proof of the general result.
5. oh okay thanks, yeah it has changed a lot but i think the book has most of its own questions too which are to prepare for general challenges. The 'generalising' part i think is probably added as an extra, but thanks
perhaps the answer may be that no it is not able to be generalised?
6. ## something similiar...
Hi, there is something very similiar with the latin squares:
Nim Sum from Interactive Mathematics Miscellany and Puzzles
Any ideas of generalising it using this?
7. the proposition:
Attached Files
• Proof to Chessboard Problem.doc (247.0 KB, 50 views)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512350559234619, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/168717-grade-ten-math.html
|
# Thread:
1. ## Grade ten math
Factor completely:
40x^5y^2 - 32x^7y^6 + 28x^8y^4 - 36x^7y^5
I thought factor completely meant solve for the unknown variable. However, I have no idea what the answer is for this question. I know the greatest common denominator is 4x^5y^2. But do not know how to get on further. What do I do?
14. A rectangular field is to be enclosed on four sides and divided by another section of fence parallel to the width, using 4800m of fencing. What is the maximum area of the entire field? What are the dimensions of the field that maximize the area?
How would I even start this? I know that A = l x w. But other than that I have no idea. I thought it would be something like x (4800 -x) and then solve for x, but I don't think that will give me the correct answer. How do I do this question?
2. Originally Posted by Barthayn
Factor completely:
40x^5y^2 - 32x^7y^6 + 28x^8y^4 - 36x^7y^5
I thought factor completely meant solve for the unknown variable. However, I have no idea what the answer is for this question. I know the greatest common denominator is 4x^5y^2. But do not know how to get on further. What do I do?
Factor completely means take out common factors to get an expression into its simplest form. You've already found the GCF (F = factor), so take that out front and divide each term by it
$4x^5y^2(10-...)$ - can you finish off?
3. Be careful with your terminology. Highest common factors and lowest common denominators are not the same thing.
$40x^5y^2 - 32x^7y^6 + 28x^8y^4 - 36x^7y^5$
Let's start with the constants. There is clearly a factor of 4 there.
$=4[10x^5y^2 - 8x^7y^6 + 7x^8y^4 - 9x^7y^5]$
As you've said, the highest common term of $x$ is $x^5$
So take this out next:
$=4x^5[10y^2 - 8x^2y^6 + 7x^3y^4 - 9x^2y^5]$
Now try to finish for the y terms. If you're still confused, reply and I'll go into detail.
4. Originally Posted by e^(i*pi)
Factor completely means take out common factors to get an expression into its simplest form. You've already found the GCF (F = factor), so take that out front and divide each term by it
$4x^5y^2(10-...)$ - can you finish off?
I got $4x^5y^2(10-8x^5y^4+7x^3y^2-9x^2y^3)$
However, this cannot be the final answer. As well how do I do the second question that I asked?
Quacky I am still confused :S
5. Your answer is correct. That is as much as you can do with that function in terms of factorization. I'm trying to understand the second question at the moment - the wording is confusing me.
6. Q.14 is a Calculus question, but since this is Year 10 maths, I doubt the OP would know any. Have you been taught any Calculus yet?
7. No, calculus and vectors is taught in grade 12 math. How would I do the question though?
8. Originally Posted by Barthayn
14. A rectangular field is to be enclosed on four sides and divided by another section of fence parallel to the width, using 4800m of fencing. What is the maximum area of the entire field? What are the dimensions of the field that maximize the area?
How would I even start this? I know that A = l x w. But other than that I have no idea. I thought it would be something like x (4800 -x) and then solve for x, but I don't think that will give me the correct answer. How do I do this question?
I assume the "4800 m of fencing" is for the entire field- the way it is placed makes it look like it is the "fence parallel to the width" but that can't be right. Yes, area equals length times width: A= lw. The total length of fencing used is the two lengths, the two widths, and that additional piece of fencing "parallel to the width" and so the same length as the width: 2l+ 2w+ w= 2l+ 3w= 4800.
You can solve that equation for either l or w as a function of the other. For example, 2l= 4800- 3w so l= 2400- (3/2)w. Now put that into A= lw: $A= (2400- (3/2)w)w= 2400w- (3/2)w^2$. That's a quadratic function in w and you can find its maximum value (at the vertex of the parabola) by "completing the square".
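Carrying that last step through (just completing the square on the expression above): $A = 2400w - \tfrac{3}{2}w^2 = -\tfrac{3}{2}(w-800)^2 + 960000$, so the maximum area is $960000\ \text{m}^2$, attained at $w = 800$ m and $l = 2400 - \tfrac{3}{2}(800) = 1200$ m.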
9. Originally Posted by Barthayn
I got $4x^5y^2(10-8x^5y^4+7x^3y^2-9x^2y^3)$
Should be $8x^2y^4$, not $8x^5y^4$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9642978310585022, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/189641-quotient-space-print.html
|
# Quotient Space
• October 5th 2011, 03:14 PM
dwsmith
Quotient Space
Let $f:X\to X$ be a linear transformation, and $V\subset X$ an invariant subspace of f, i.e., $f(V)\subset V$.
Prove that f induces a linear transformation $\hat{f}:X/V\to X/V$
Since f is invariant, we know $f(V)=\text{Im} \ f\subset V$. I am struggling with quotient spaces. I know it means X mod V where $\{x\in X: x+ V\}$.
I need some guidance and a good explanation of what is going on if possible.
• October 5th 2011, 03:37 PM
Deveno
Re: Quotient Space
what we would like to do is define $\hat{f}(x+X) = f(x)+X$.
first we need to be sure that this is well-defined, since we are working with cosets, instead of elements.
so suppose y is in x + X (that is, that y + X = x + X), so that y = x + v, for some vector v in X. then f(y) = f(x + v) = f(x) + f(v) = f(x) + v',
(where v' is some element of X) since X is invariant for f, so f(y) is in f(x) + X, thus f(y) + X = f(x) + X.
hence $\hat{f}(y+X) = f(y) + X = f(x) + X = \hat{f}(x+X)$, so $\hat{f}$ is indeed well-defined.
from here. it's all down-hill, the linearity of $\hat{f}$ is a direct consequence of the linearity of f:
$\hat{f}((\alpha u + \beta v) + X) = f(\alpha u + \beta v) + X = \alpha f(u) + \beta f(v) + X$
$= (\alpha f(u) + X) + (\beta f(v) + X) = \alpha \hat{f}(u) + \beta \hat{f}(v)$
*******
it might be helpful to see a simple concrete example.
let $V =\mathbb{R}^2$, the Euclidean plane, with the usual vector operations, and suppose $X = \{(x,0) \in \mathbb{R}^2\}$. what do the elements of V/X look like?
well anything in (x,y) + X has a 2nd coordinate of y, so the elements of V/X are all horizontal lines (we get one for each different real number y).
so suppose f(x,y) = (3x+y, 2y). it should be clear X is an invariant subspace for f.
then $\hat{f}$ is the mapping that takes the line going through y, to the line going through 2y. in other words, $\hat{f}$ acts "just like" the function a-->2a (of one real variable).
the reason being, when we act "mod X", we are "shrinking" the entire x-dimension down to 0. so what f does on the first coordinate becomes irrelevant, as far as $\hat{f}$ is concerned.
• October 5th 2011, 04:41 PM
dwsmith
Re: Quotient Space
Quote:
Originally Posted by Deveno
what we would like to do is define $\hat{f}(x+V) = f(x)+V$.
first we need to be sure that this is well-defined, since we are working with cosets, instead of elements.
so suppose y is in x + V (that is, that y + V = x + V), so that y = x + v, for some vector v in V. then f(y) = f(x + v) = f(x) + f(v) = f(x) + v',
(where v' is some element of V) since V is invariant for f, so f(y) is in f(x) + V, thus f(y) + V = f(x) + V.
hence $\hat{f}(y+V) = f(y) + V = f(x) + V = \hat{f}(x+V)$, so $\hat{f}$ is indeed well-defined.
from here. it's all down-hill, the linearity of $\hat{f}$ is a direct consequence of the linearity of f:
$\hat{f}((\alpha u + \beta v) + V) = f(\alpha u + \beta v) + V = \alpha f(u) + \beta f(v) + V$
$= (\alpha f(u) + V) + (\beta f(v) + V) = \alpha \hat{f}(u) + \beta \hat{f}(v)$
*******
it might be helpful to see a simple concrete example.
let $V =\mathbb{R}^2$, the Euclidean plane, with the usual vector operations, and suppose $X = \{(x,0) \in \mathbb{R}^2\}$. what do the elements of V/X look like?
well anything in (x,y) + X has a 2nd coordinate of y, so the elements of V/X are all horizontal lines (we get one for each different real number y).
so suppose f(x,y) = (3x+y, 2y). it should be clear X is an invariant subspace for f.
then $\hat{f}$ is the mapping that takes the line going through y, to the line going through 2y. in other words, $\hat{f}$ acts "just like" the function a-->2a (of one real variable).
the reason being, when we act "mod X", we are "shrinking" the entire x-dimension down to 0. so what f does on the first coordinate becomes irrelevant, as far as $\hat{f}$ is concerned.
How did you know how to define $\hat{f}\mbox{?}$
I also don't really understand your example either.
• October 5th 2011, 04:56 PM
Deveno
Re: Quotient Space
that is the usual definition of an "induced map". the map V--->V/X given by v-->v + X is "canonical", in that it does not depend on the choice of a basis for V.
you can also easily show it is linear, let's say we call it T. thus f (via T) "induces" the map $\hat{f}$ by setting:
$\hat{f} \circ T = T \circ f$
the reason we need X to be invariant for f, is because if x is in X, but f(x) is not in X, then f(v + X) might not be in the coset f(v) + X (although it will be in the coset f(v) + f(X), but this is not "the same" canonical map T).
in this problem, all we have to work with is the map v-->v+X, and the map f, so our answer must only depend on those.
as for the example, the situation is like this: linear spaces are very well-behaved, and have natural geometrical interpretations. a 1-dimensional space is a line, a 2-dimensional space is a plane, a 3-dimensional space is like the world we live in (ignoring the curvature introduced by the minkowski metric). higher-dimensional spaces are hard to "visualize" but their behavior is "the same" as lower dimensional spaces, just...more basis vectors to specify.
so 3-space mod a line, would be like partitioning space into a set of parallel "straws" (really skinny ones). the result is 2-dimensional, by specifying a point in a plane which cuts through all the straws, we can tell "which" straw we're at...that is, which coset of the line (straw) that goes through the origin.
similarly, 3-space mod a plane, is like partitioning space into sheets parallel to a plane that goes through the origin. if we have a line that crosses all the "sheets", specifying a point on that line, tells us which "sheet" (that is coset of the plane going through the origin) we are in.
in vector spaces, "modding" via a subspace is analogous to "modding mod n" in the integers. when we take an integer mod n, we are essentially declaring "all multiples of n are 0" which just leaves a cycle between 0 and n-1. when we take V/X, we are essentially setting the entire subspace X as {0}, leaving only the remaining dim(V) - dim(X) dimensions as relevant.
in fact, what the rank-nullity theorem says is: dim(V) = dim(X) + dim(V/X). this is just the first isomorphism theorem of group theory in the context of vector spaces, and can be proved the same way.
• October 5th 2011, 05:18 PM
dwsmith
Re: Quotient Space
So $\hat{f}$ isn't monic then, correct?
What would be an example that shows $\hat{f}$ isn't monic?
• October 5th 2011, 06:16 PM
Deveno
Re: Quotient Space
it might be, it might not be. in the example i gave, $\hat{f}$ was monic, but if f is the 0-map, then so is $\hat{f}$.
even if f isn't monic, $\hat{f}$ might be: suppose f(x,y,z) = (x,x,z) in $\mathbb{R}^3$.
now f certainly isn't monic, f(1,2,3) = f(1,4,3), for example.
but if our subspace X was all vectors of the form (0,y,0), (which is 1-dimensional, with basis {(0,1,0)},
then certainly f(0,y,0) = (0,0,0) is in X.
so $\hat{f}((x,y,z)+X) = (x,x,z) + X = (x,0,z) + X$ (since (x,y,z) - (x,0,z) = (0,y,0) is in X).
i claim $\hat{f}$ is monic. suppose $\hat{f}((x,y,z) + X) = \hat{f}((x',y',z') + X)$.
then (x,0,z) + X = (x',0,z') + X, so we must have x' = x, and z' = z.
thus (x,y,z) - (x',y',z') = (x,y,z) - (x,y',z) = (0,y-y',0) is in X, so
(x,y,z) + X = (x',y',z') + X.
• October 6th 2011, 05:36 AM
dwsmith
Re: Quotient Space
For this problem then, $\hat{f}$ is monic.
$\hat{f}(x+V)=\hat{f}(y+V)\Rightarrow f(x)+V=f(y)+V\Rightarrow (f(x)-f(y))+V=0$
So $f(x)-f(y)\in X/V$
$f(x)-f(y)=0\Rightarrow f(x)=f(y)\Rightarrow x=y$
Is this correct?
• October 6th 2011, 06:17 AM
Deveno
Re: Quotient Space
NO!
how can you deduce that x = y from f(x) = f(y)?
IF (and that's a big if, at high decibel levels) f is monic, then $\hat{f}$ will be, too (you can't make a map "less monic" by factoring out information).
also f(x) + X = f(y) + X, merely implies f(x) and f(y) are in the same coset of V, that is:
(f(x) - f(y)) + X = X, not "0" (of course 0 + X is the 0-vector (coset) in the coset space V/X, but it is unwise to denote this by simply "0", it's not the same type of vector
as a vector in V, it's a vector composed of a SET of vectors in V (just as a residue class in the integers mod n isn't an integer, but an equivalence class of a SET of integers)).
saying f(x) - f(y) is in X/V is meaningless....what is X/V? even if you meant f(x) - f(y) is in V/X, this is still wrong, f(x), f(y) are elements of V,
NOT elements of V/X.
part of your confusion is due to a bad typo in my original post, which i have edited.
the cosets in V/X are cosets of X in V. i am more used to the notation U/V (which is more common), so most of the "stuff + V" expressions in my original post should have been "stuff + X". embarrassing o.o
• October 6th 2011, 04:57 PM
dwsmith
Re: Quotient Space
$\hat{f}$ isn't monic. If I define f as $f:X\to X$ by $f(x)$, then f is a linear transformation.
But $\hat{f}$ isn't monic since every element in the domain is mapped to $0+V$
Is this correct?
• October 6th 2011, 05:44 PM
Deveno
Re: Quotient Space
again, no. i don't know why you're fixated on whether or not $\hat{f}$ is monic. sometimes it is, sometimes it isn't, it depends on f and X.
look all $\hat{f}$ is, is a linear transformation "as much like f as possible" on V/X. "modding by X" could take some of the domain values where multiple
values in V get mapped by f to the same point, to a single point in V/X, but it might not.
as far as the example you gave, how do you propose to extend f to a map defined on all of V?
but sure, if X = V, and f is the identity on V, f(x) = x, then of course $\hat{f}$ takes everything to the "0" of V/V,
because V/V has just ONE coset, with all of V in it! V/V = {0+V}, so $\hat{f}$ is the 0-map AND monic!
but seriously, i think you are making this be far more complicated than it needs to be.
a vector space is just an abelian group with some "extra structure" (namely, the scalar multiplication).
and the quotient space is just the quotient group, as with any abelian group, along with an "induced" scalar multiplication:
a(v + X) = av + X. $\hat{f}$ is just f "ignoring what happens in X" (since f(X) is a subset of X, which all just gets dropped in the + X part of the coset).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 55, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9420861601829529, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/201904-eight-grade-math-honors-help-please-help.html
|
# Thread:
1. ## Eight Grade Math Honors Help - Please Help!
Hello,
I am in eight grade Math Honors and I would like some help on the following questions:
1. 5x + 9 = 2
2. 7 - 6d = 3
3. 1/3y - 2 = 4
4. 7 (10-3w) = 5(15 -4w)
This may look easy for you guys, but I don't have a lot of time and I don't know which place to post this.
Thanks, Help is appreciated.
2. ## Re: Eight Grade Math Honors Help - Please Help!
1. Subtract 9 from both sides of the equation:
$5x = -7$
Divide both sides by 5 to get $x = -\frac{7}{5}$.
2. is done the same way.
3. is a little unclear -- do you mean $\frac{1}{3y} - 2 = 4$ or $\frac{1}{3y-2} = 4$?
4. Expand both sides to get $70 - 21w = 75 - 20w$. Bring all the $w$ terms to one side, solve for $w$.
3. ## Re: Eight Grade Math Honors Help - Please Help!
Thanks. For the second one, if it is done the same way as the first one, would the answer be: D = -4/6? Also, the third one is the first one on your post. 1/3y - 2 = 4, and the two is not under the one.
4. ## Re: Eight Grade Math Honors Help - Please Help!
Originally Posted by AbhiKap
Thanks. For the second one, if it is done the same way as the first one, would the answer be: D = -4/6? Also, the third one is the first one on your post. 1/3y - 2 = 4, and the two is not under the one.
no, that is incorrect on the second one. you should always check your work:
if d = -4/6, then:
7 - 6d = 7 - 6(-4/6) = 7 + 4 = 11, which is not 3.
i would proceed as follows (since you appear to have trouble with "signs"):
7 - 6d = 3
7 - 6d + 6d = 3 + 6d
7 + 0 = 3 + 6d
7 = 3 + 6d
7 = 6d + 3
7 - 3 = 6d + 3 - 3
...can you continue?
by the way, this *is* the wrong section, this is "pre-college algebra" not "college algebra" (as you might have guessed from being in the 8th grade, which is not college, hmm?)
5. ## Re: Eight Grade Math Honors Help - Please Help!
Yeah, second one's not -4/6. Apparently you put this in the "advanced algebra" forum. Algebra can be waaaay more abstract than what you're learning right now.
6. ## Re: Eight Grade Math Honors Help - Please Help!
As mentioned before, I am aware of the fact that this is in the wrong section. However, the reason I put it here is because I couldn't find a Super Basic Algebra section, if there is one.
7. ## Re: Eight Grade Math Honors Help - Please Help!
Deveno, I don't get your last steps, can you explain them please?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9556048512458801, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/fourier-transform?sort=unanswered&pagesize=15
|
# Tagged Questions
The fourier-transform tag has no wiki summary.
2answers
148 views
### Energy stored in space/frequency electric field
I've come across a problem with finding the energy stored in time/frequency electric field. In space/time we have (taking $\epsilon = 1$) Energy = \frac{1}{2} \int_V |\mathbf{E}(\mathbf{x},t)|^2 ...
1answer
140 views
### What's the average position of oscillating particles in a box with periodic boundary conditions?
Imagine an open box repeating itself in a way that a if a particle crossing one of the box boundary is "teleported" on the opposite boundary (typical periodic boundary position in 3D). Now put a ...
1answer
36 views
### Why pulse waves results in wave packets?
I was doing experiments of measuring sonic velocity and I generate pulse waves from sensor 1, but when they are received by sensor 2, I saw wave packets on the oscilloscope, can you explain why? I was ...
0answers
97 views
### Discrete sum over an exponential with imaginary argument, considering only every second lattice site?
Let's say I sum an exponential function $e^{\imath \left(k-k^{\prime}\right) x_{i}}$ over a chain system where every member of the chain is of the same type, e.g., A-A-A-...-A-A (total of N sites) ...
0answers
46 views
### Fourier Transform of ribbon's beam Electric Field
I have a monochromatic ribbon beam with $E(x)e^{i(kz-\omega t)}$ being the electric field's amplitude. I want to show that the lowest order approximation in terms of plane waves is ...
0answers
147 views
### Splitting light into colors, mathematical expression (fourier transforms)
I am trying to solve a problem that includes a function of the light hitting a certain area. My question is, how would I change a function $G(x)$ of photons hitting a certain area to include just ...
0answers
51 views
### Definition of frequency domain coordinates
I am using the Fourier Transform in Optics to perform differentiation with a filter by making use of the relation \$\frac {\partial}{\partial x} f(x)=2\pi i \int^{\infty}_{-\infty} u F(u) \exp (2i\pi ...
0answers
53 views
### How to solve following equation (Yukawa field)?
By using Lagrangian for Yukawa interaction, L = -\frac{1}{c}A_{\alpha}j^{\alpha} + \frac{1}{8 \pi c}(\partial_{\alpha}A_{\beta})(\partial^{\alpha}A^{\beta}) + ...
0answers
89 views
### What should the amplitude be when plotting 1-sided Amplitude Spectrum?
I have a continuous signal $x(t)$ such that $$x(t)=12\cos(6\pi t)+6\cos(24\pi t)+3\cos(30 \pi t)$$ and is asked to sketch a 1-sided Amplitude Spectrum of the signal $x(t)$ if sampled above the ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.889723002910614, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/35480/is-there-a-difference-between-these-integral-notations?answertab=active
|
# Is there a difference between these integral notations?
I've come across these two notations for calculating an indefinite integral but I'm not sure whether or not they are equal:
• $f(x)dx$
• $\int f(x)dx$
When calculating an indefinite integral, the first notation is used in my textbook, but isn't that the same as the second notation?
Thanks for any clarification.
-
Could you please provide a reference where the notation appears? The context might make it easier for people to explain the role of the notation. – Jonas Meyer Apr 27 '11 at 18:18
2
...the first is just a differential. – J. M. Apr 27 '11 at 18:18
As J.M. says, if $\int f(x)dx = F(x)+C$, then $dF=f(x)dx$ is the differential. At least, this is the usual notation. – Jonas Meyer Apr 27 '11 at 18:19
## 2 Answers
The first is not an integral; it's a differential. The second is an integral.
When doing substitution or integration by parts, one considers differentials. For example, to do integration by parts on $$\int x\sin x\,dx$$ one can say "let $u=x$ and $dv = \sin x\,dx$." Then you want to find a function whose differential is $dv$, so you are trying to find $\int dv = \int \sin x\,dx$; we usually don't actually write this, and simply write "then $v=-\cos x$", which may be what is confusing you and leading you to believe that "$\sin x\,dx$" is some kind of integral. But it's not an integral.
I'm just guessing, mind you, since you have not yet provided explicit examples of the use you believe refers to integrals.
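Finishing that integration-by-parts example for completeness: with $u=x$, $dv=\sin x\,dx$, so $du=dx$ and $v=-\cos x$, one gets $$\int x\sin x\,dx = -x\cos x+\int\cos x\,dx = -x\cos x+\sin x+C.$$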
-
This makes sense. It looks like I was overlooking something. Thanks. – pimvdb Apr 27 '11 at 18:27
The first one usually denotes a differential 1-form, whereas the second denotes the indefinite integral of $f$, i.e. its antiderivative.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398564100265503, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?s=cd79260dce1bc057844bd82696bc1d59&p=3963393
|
Physics Forums
## Calculate foci of planetary orbit given known parameters
Hello everyone. I am trying to calculate the focal points of the ellipse traced by a planet in orbit around a star, given the following known parameters:
• $M_{sun}$ Mass of the sun
• $r_{planet}$ Position vector of the planet from the sun
• $m_{planet}$ Mass of the planet
• $v_{planet}$ Velocity of the planet
• $G$ Gravitational constant
• $F_{grav} = G\frac{Mm}{r^{2}}$
I would like to find (if possible) the orbital information of the planet (including eccentricity, focal points, and axes, if possible) based on these pieces of data. Is it possible to calculate the focal points of its orbit based on these parameters? If more parameters are necessary, I might be able to add them. Any help would be greatly appreciated! (The focal points of an ellipse are shown below.)
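As an illustrative sketch (not from the original thread; the names and structure are my own, using NumPy), the standard two-body relations give exactly what is asked for: the vis-viva equation yields the semi-major axis $a$, the eccentricity vector yields $e$ and the apsidal direction, and the empty focus lies at $-2a\,\vec{e}$ from the sun:
```python
import numpy as np

def orbital_elements(r_vec, v_vec, GM):
    """Semi-major axis, eccentricity and both foci of a bound two-body orbit
    from one position/velocity sample, with the sun at the origin.
    GM is the gravitational parameter G*M_sun (planet mass neglected)."""
    r_vec = np.asarray(r_vec, dtype=float)
    v_vec = np.asarray(v_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    v2 = v_vec.dot(v_vec)
    a = 1.0 / (2.0 / r - v2 / GM)                                    # vis-viva
    e_vec = ((v2 - GM / r) * r_vec - r_vec.dot(v_vec) * v_vec) / GM  # eccentricity vector
    e = np.linalg.norm(e_vec)
    focus_sun = np.zeros_like(r_vec)   # the sun occupies one focus
    focus_empty = -2.0 * a * e_vec     # the empty focus, on the apsidal line
    return a, e, focus_sun, focus_empty
```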
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8492564558982849, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/tagged/isometries
|
## Tagged Questions
0answers
45 views
### Surface locally isometric to a sphere.
If for any two points p,q in a regular, compact surface $S\subseteq R^3$, there exists an isometry $f:S\rightarrow S$ s.t. f(p)=q. How to prove that $S$ is locally isometric to the …
3answers
229 views
### Which metric spaces have this superposition property?
Let $A \subset X$ and $B \subset X$ be two isometric subsets of a metric space $X$. So there is an isometry $f: A \to B$. Say that a metric space $X$ has the superposition property …
0answers
86 views
### Isometric decomposition
Any progress on the following: Can a closed disc in the plane be partitioned into three disjoint sets which are pair-wise isometric, i.e. each set is an image of the others an isom …
1answer
94 views
### Properties of $S_2$ and the plane and $[-1,1]^2$ [closed]
Is the sphere $S_2$ isometric / isomorphic / diffeomorphic / homeomorphic to the plane? Is the sphere $S_2$ minus a point isometric / isomorphic / diffeomorphic / homeomorphic to …
1answer
358 views
### Possible isometries of a positively curved $S^2\times S^2$
Just to put things in perspective, recall that the Hopf Conjecture asks whether $S^2\times S^2$ admits a metric of positive sectional curvature. By the work of Hsiang-Kleiner, it i …
1answer
219 views
### Open problems about CMC hypersurfaces with symmetries?
Recently, Andrews and Li announced a complete classification of CMC ($H=const.$) tori in $S^3$, confirming a conjecture of Pinkall and Sterling. Their main result is that any such …
1answer
111 views
### Partial isometries making families of linearly independent vectors orthogonal
Suppose I have a family of $n$ linearly-independent elements $v_i$ of the Hilbert space $\mathbb{C}^m$, which are not necessarily orthogonal. Can I always find a partial isometry \$ …
2answers
315 views
### All the isometries of $\mathbb{C}^n$ into itself are made like these
This is again a request for references. I'd appreciate a pointer to any published proof of the following: Proposition. Given $n \in \mathbb{N}^+$, let $\Phi$ be a function \$\ …
0answers
427 views
### Homometric $\Rightarrow$ isometric?
Suppose you know that there is a mapping between two Riemmanian manifolds $M_1$ and $M_2$ such that, for each $x_1 \in M_1$, the (codimension-1) measure of the set of points at dis …
1answer
499 views
### Isometry groups of Riemannian submersions with totally geodesic fibers
Suppose $F\to M\stackrel{\pi}{\to} B$ is a Riemannian submersion with totally geodesic fibers, all manifolds compact. In general, unless $M=B\times F$ is a Riemannian product, the …
2answers
375 views
### “Measuring” how far is one Banach space from being surjectively isometric to another
Bonjour/bonsoir à toutes et à tous. Assume that $\mathbf{V} \equiv (V, \|\cdot\|_V)$ and $\mathbf{W} \equiv (W, \|\cdot\|_W)$ are Banach spaces (over the real or complex field). …
1answer
435 views
### When do 0-preserving isometries have to be linear?
Let $\langle \mathbf{V},+,\cdot,||.|| \rangle$ be a normed vector space over $\mathbb{R}$. Let $f : \mathbf{V} \to \mathbf{V}$ be an isometry that satisfies \$f(\mathbf{0}) = \math …
0answers
308 views
### Surjectively isometric normed spaces: Hamel vs (extended) Schauder dimension
Bonjour/bonsoir à toutes et à tous. This may really be a very basic question, but... Let $\mathbf{X} \equiv (X, \|\cdot\|_X)$ and $\mathbf{Y} \equiv (Y, \|\cdot\|_Y)$ be surjectiv …
2answers
614 views
### Terminology: “cocompact”
Let $M$ be a Riemannian manifold such that its isometry group $G=\textrm{Iso}(M)$ is a Lie group, and let $\Gamma$ be a subgroup of $G$. 1) What does the phrase "$\Gamma$ is a …
1answer
361 views
### Must a surjective isometry on a dual space have a pre-adjoint?
Background: Let $X$ be a Banach space. We know a linear map $h$ is a surjective isometry of $X$ if and only if its adjoint $h^*$ is a surjective isometry of $X^*$. In general, …
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8530188202857971, "perplexity_flag": "head"}
|
http://mathhelpforum.com/trigonometry/194925-sum-difference-formulas-print.html
|
# Sum or Difference Formulas
• January 4th 2012, 10:27 PM
Nickod777
Sum or Difference Formulas
This is not for homework, nor for a teacher to see, just a test correction so that I know what to do, or what I am doing wrong.
Use the sum or Difference formulas to find the exact value of each expression.
1. sin (165 degrees)
2. cos (5π/12)
I tried to use my calculator, a TI-89, and ended up with the same answer for both; unfortunately it was wrong, and I was wondering what I could do to get the correct answer.
my TI-89 gave me this:
((√3-1)*√(2))/(4)
Radical 3 - 1 * radical 2 all over 4.
And my best guess as the answer for both is [√6 + √2] / 4. Just not sure how to get that.
Edit: this might be my only trig question on here, as the 1st semester is almost gone, and I just wanted to figure these problems out.
• January 4th 2012, 10:43 PM
MarkFL
Re: Sum or Difference Formulas
1.) $\sin\left(165^{\circ}\right)$
Use the reflection $\sin\left(180^{\circ}-x\right)=\sin(x)$
$\sin\left((180-165)^{\circ}\right)=\sin\left((45-30)^{\circ}\right)$
Now apply the angle difference identity for sine.
Or alternately:
$\sin\left(165^{\circ}\right)=\sin\left(15^{\circ}\right)=\cos\left(75^{\circ}\right)=\cos\left((45+30)^{\circ}\right)$
Now apply the angle sum identity for cosine.
2.) $\cos\left(\frac{5\pi}{12}\right)$
Apply the identity: $\cos(x)=\sin\left(\frac{\pi}{2}-x\right)$ to get:
$\cos\left(\frac{5\pi}{12} \right)=\sin\left(\frac{\pi}{2}-\frac{5\pi}{12}\right)$
$\cos\left(\frac{5\pi}{12} \right)=\sin\left(\frac{\pi}{12}\right)$
$\cos\left(\frac{5\pi}{12} \right)=\sin\left(\frac{\pi}{4}-\frac{\pi}{6}\right)$
Now apply the angle difference identity for sine.
Or, alternately:
$\cos\left(\frac{5\pi}{12}\right)=\cos\left(\frac{(3+2)\pi}{12}\right)=\cos\left(\frac{\pi}{4}+\frac{\pi}{6}\right)$
Now apply the angle sum identity for cosine.
• January 5th 2012, 05:44 AM
Soroban
Re: Sum or Difference Formulas
Hello, Nickod777!
Quote:
Use the Sum or Difference formulas to find the exact value of each expression.
. . $1.\;\sin (165^o) \qquad 2.\;\cos\left(\tfrac{5\pi}{12}\right)$
Did it occur to you to use the Sum or Difference Formulas?
Do you know the Sum or Difference Formulas?
$1.\;\sin(165^o) \;=\;\sin(45^o + 120^o)$
. . . . . . . . . $=\; \sin(45^o)\cos(120^o) + \cos(45^o)\sin(120^o)$
. . . . . . . . . $=\;\left(\frac{1}{\sqrt{2}}\right)\left(-\frac{1}{2}\right) + \left(\frac{1}{\sqrt{2}}\right)\left(\frac{\sqrt{3}}{2}\right)$
. . . . . . . . . $=\; -\frac{1}{2\sqrt{2}} + \frac{\sqrt{3}}{2\sqrt{2}} \;=\;\frac{\sqrt{3}-1}{2\sqrt{2}} \quad\begin{array}{c}^{\text{Usually this is an acceptable answer,}} \\ ^{\text{but often they rationalize the denominator.}}\end{array}$
. . . . . . . . . $=\;\frac{\sqrt{2}}{\sqrt{2}}\cdot \frac{\sqrt{3}-1}{2\sqrt{2}} \;=\; \frac{\sqrt{2}\left(\sqrt{3}-1\right)}{4} \;=\;\frac{\sqrt{6} - \sqrt{2}}{4}$
$2.\;\cos\left(\frac{5\pi}{12}\right) \;=\;\cos\left(\frac{\pi}{6} + \frac{\pi}{4}\right)$
. . . . . . . . . $=\;\cos\left(\frac{\pi}{6}\right)\cos\left(\frac{\pi}{4}\right) - \sin\left(\frac{\pi}{6}\right)\sin\left(\frac{\pi}{4}\right)$
. . . . . . . . . $=\;\left(\frac{\sqrt{3}}{2}\right)\left(\frac{1}{\sqrt{2}}\right) - \left(\frac{1}{2}\right)\left(\frac{1}{\sqrt{2}}\right)$
. . . . . . . . . $=\;\frac{\sqrt{3}-1}{2\sqrt{2}}$
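(A side note on the first post: this agrees with the TI-89 output, since $\frac{\sqrt{3}-1}{2\sqrt{2}} = \frac{(\sqrt{3}-1)\sqrt{2}}{4} = \frac{\sqrt{6}-\sqrt{2}}{4}\approx 0.2588$, so the calculator's answer was already correct; the guessed value $\frac{\sqrt{6}+\sqrt{2}}{4}$ is $\cos 15^{\circ}$ rather than $\sin 15^{\circ}$.)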
• January 5th 2012, 04:29 PM
Nickod777
Re: Sum or Difference Formulas
Quote:
Originally Posted by Soroban
Hello, Nickod777!
Did it occur to you to use the Sum or Difference Formulas?
Do you know the Sum or Difference Formulas?
$1.\;\sin(165^o) \;=\;\sin(45^o + 120^o)$
. . . . . . . . . $=\; \sin(45^o)\cos(120^o) + \cos(45^o)\sin(120^o)$
. . . . . . . . . $=\;\left(\frac{1}{\sqrt{2}}\right)\left(-\frac{1}{2}\right) + \left(\frac{1}{\sqrt{2}}\right)\left(\frac{\sqrt{3}}{2}\right)$
. . . . . . . . . $=\; -\frac{1}{2\sqrt{2}} + \frac{\sqrt{3}}{2\sqrt{2}} \;=\;\frac{\sqrt{3}-1}{2\sqrt{2}} \quad\begin{array}{c}^{\text{Usually this is an acceptable answer,}} \\ ^{\text{but often they rationalize the denominator.}}\end{array}$
. . . . . . . . . $=\;\frac{\sqrt{2}}{\sqrt{2}}\cdot \frac{\sqrt{3}-1}{2\sqrt{2}} \;=\; \frac{\sqrt{2}\left(\sqrt{3}-1\right)}{4} \;=\;\frac{\sqrt{6} - \sqrt{2}}{4}$
$2.\;\cos\left(\frac{5\pi}{12}\right) \;=\;\cos\left(\frac{\pi}{6} + \frac{\pi}{4}\right)$
. . . . . . . . . $=\;\cos\left(\frac{\pi}{6}\right)\cos\left(\frac{\pi}{4}\right) - \sin\left(\frac{\pi}{6}\right)\sin\left(\frac{\pi}{4}\right)$
. . . . . . . . . $=\;\left(\frac{\sqrt{3}}{2}\right)\left(\frac{1}{\sqrt{2}}\right) - \left(\frac{1}{2}\right)\left(\frac{1}{\sqrt{2}}\right)$
. . . . . . . . . $=\;\frac{\sqrt{3}-1}{2\sqrt{2}}$
Unfortunately I do not know the formulas.
Thanks for the help though.
• January 5th 2012, 06:01 PM
pickslides
Re: Sum or Difference Formulas
Quote:
Originally Posted by Nickod777
Unfortunately I do not know the formulas.
Thanks for the help though.
Here are the formulas MarkFL2 & Soroban used.
Trigonometric Formulas: Sum and Difference
• January 5th 2012, 06:07 PM
Prove It
Re: Sum or Difference Formulas
Quote:
Originally Posted by Nickod777
Unfortunately I do not know the formulas.
I find it hard to believe that you were asked to use formulae if you haven't been taught them...
• January 5th 2012, 06:53 PM
Nickod777
Re: Sum or Difference Formulas
Quote:
Originally Posted by Prove It
I find it hard to believe that you were asked to use formulae if you haven't been taught them...
in my class, we are given the formulas, not learning how to use them, instead we have to learn them ourselves.
Anyway, let us just end this gentlemen.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8588155508041382, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/291893/congruent-division-of-a-shape-in-euclidean-plane
|
# Congruent division of a shape in euclidean plane
Any triangle can be divided into 4 congruent shapes:
http://www.math.missouri.edu/~evanslc/Polymath/WebpageFigure2.png
An equilateral triangle can be divided into 3 congruent shapes.
Questions:
1) a triangle can be divided into 3 congruent shapes. Is it equilateral?
2) a shape in the plane can be divided into n congruent shapes for any positive integer n. What can it be? (e.g. it can be the interior of a circle or a rectangle)
Let the shapes be connected, open and convex. In fact I cannot define a shape exactly, but its definition is intuitively clear.
-
3
Lots of shapes satisfy (2). Parallelograms and "wedges" of circles come to mind. – mjqxxxx Feb 1 at 4:09
You're right. Is there any other shape? I'll edit 2. – CutieKrait Feb 1 at 4:14
I think if a shape is divided into convex shapes, the only possible divisions are by straight lines. So convexity limits the problem. – CutieKrait Feb 1 at 20:46
## 2 Answers
Here's a large class of valid shapes. It's easiest to explain with a picture of a representative example:
(figure not reproduced: the blue curve $\gamma$ and its copies swept along the red arcs described below)
Formally, let $\gamma:[a,b]\to\mathbb R^2$ be a curve (the blue one). Let $T:[0,1]\to(\mathbb R^2\to\mathbb R^2)$ be a constant-speed rotation or translation of the plane. That is, either there is a fixed vector $x$ such that $T(\alpha)$ is a translation by $\alpha x$, or there is a fixed point $x$ and a fixed scalar $\theta$ such that $T(\alpha)$ is a rotation by $\alpha\theta$ about $x$ (the red arcs). Let us also require that the map $$s:[a,b]\times[0,1]\to\mathbb R^2,\\s(u,v)=T(v)\big(\gamma(u)\big)$$ is injective, so the transformed copies of $\gamma$ never overlap. Then the range of $s$, i.e. $s([a,b],[0,1])$, is a shape that can be divided into any number $n$ of congruent pieces, namely $$s([a,b],[0,\tfrac1n]),\quad s([a,b],[\tfrac1n,\tfrac2n]),\quad \ldots,\quad s([a,b],[\tfrac{n-1}n,1]).$$
This violates a few of your conditions (openness, convexity), but those seem pretty arbitrary to me. I don't know whether this is a complete solution, i.e. whether this includes all the possible shapes that are divisible into arbitrarily many congruent parts.
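To make the construction concrete, here is a small numerical sketch of the simplest rotational case, with the illustrative choices $\gamma(u)=(1+u,0)$ on $[0,1]$ and $T(v)=$ rotation by $v\theta$ about the origin; piece $k$ is then piece $0$ rotated by $k\theta/n$, so all $n$ pieces are congruent:

```python
import numpy as np

def rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def piece(k, n, theta=np.pi / 3, samples=50):
    """Point samples of the k-th piece s([0,1] x [k/n,(k+1)/n]) for
    gamma(u) = (1+u, 0) and T(v) = rotation by v*theta about the origin."""
    us = np.linspace(0.0, 1.0, samples)
    vs = np.linspace(k / n, (k + 1) / n, samples)
    gamma = np.stack([1.0 + us, np.zeros_like(us)])   # the "blue" curve
    return np.concatenate([rotation(v * theta) @ gamma for v in vs], axis=1)

n, theta = 5, np.pi / 3
first = piece(0, n, theta)
for k in range(1, n):
    # piece k equals piece 0 rotated by k*theta/n, hence the pieces are congruent
    assert np.allclose(piece(k, n, theta), rotation(k * theta / n) @ first)
print("all", n, "pieces are rotated copies of the first")
```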
-
good idea. if convex it looks be disk wedges or parallelograms. – CutieKrait Feb 1 at 13:32
my idea is using isometric homeomorphisms of a shape (eg triangle) and considering the automorphism group of these isometric homeomorphisms. – CutieKrait Feb 1 at 14:41
Not sure what you mean by that last comment. If you have a different answer, you should post it as an answer. – Rahul Narain Feb 1 at 21:25
needs more work. – CutieKrait Feb 1 at 21:26
+1 A good example. I find it hard to define what parts of this space have every piece convex. Certainly if the red and blue curves are both straight you are there. You can't have the red curves arcs of circles as one will be concave inward or the pieces won't be congruent. – Ross Millikan Feb 2 at 6:09
The answer to 1) is, not necessarily. A 30-60-90 triangle can be divided into three congruent 30-60-90 triangles. If you can't see how to do this, see Figure 1 in this paper. Theorem 2 in that paper proves that this and the equilateral are the only triangles that can be divided into three congruent triangular shapes.
EDIT: If you don't insist on convexity, you can divide any triangle into three congruent shapes. First divide it into four congruent shapes by drawing line segments joining the midpoints of the sides. Then divide the one in the middle into four congruent shapes the same way, and then divide the middle one of those four into four congruent shapes the same way, and so on, ad infinitum. Now make a shape by taking one triangle of each size in the final diagram --- you can easily choose them in such a way as to make the shape connected. The remaining triangles form two shapes congruent to the first one, if you are careful when you decide which triangle goes to which of the two.
-
great paper. thanks. – CutieKrait Feb 1 at 13:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269428849220276, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/65160/when-two-ring-a-b-are-flat-then-are-they-isomorphic
|
## when two ring A,B are flat, then are they isomorphic? [closed]
Let $A$ and $B$ be rings that are flat over $k[t]/(t^{2})$.
If there is a morphism $f$ from $A$ to $B$, must $f$ be an isomorphism?
If so, why?
-
6
There is no reason that $f$ should be an isomorphism. – Kevin Buzzard May 16 2011 at 18:01
3
It's hard to answer this question because nothing like this is true. For example, $k[t]/(t^2)$ is flat over itself and $k[t, x]/(t^2)$ is also flat over $k[t]/(t^2)$, and there are plenty of maps between them. Could you give us some context about why you think this? – David Speyer May 16 2011 at 19:10
Since your question is about to get closed: I suggest that you follow David Speyer's advice, edit your question to clarify what it is you think is true, and then use the 'flag' option to contact a moderator asking if the question can be re-opened. – Yemon Choi May 16 2011 at 22:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462308287620544, "perplexity_flag": "middle"}
|
http://cs.stackexchange.com/questions/6842/what-are-the-effects-of-the-alphabet-size-on-construct-algorithms-for-suffix-tre
|
# What are the effects of the alphabet size on construct algorithms for suffix trees?
For what size alphabet does it take longer to construct a suffix tree - for a really small alphabet size (because it has to go deep into the tree) or for a large alphabet size? Or is it dependent on the algorithm you use? If it is dependent, how does the alphabet size affect Ukkonen's algorithm?
-
## 1 Answer
A larger alphabet is usually a drawback. However there are algorithms that can deal with this as long as the alphabet size is $n^{O(1)}$.
Ukkonen's algorithm runs only in $O(n)$ if the alphabet size is a constant but it is $O(n \log n)$ without this assumption. However, there are alternatives. You can compute the suffix-array of a text in linear time with the DC-3 Algorithm. This is a super-cool fancy algorithm that can be implemented in 50 lines of readable C++ code - one of my all-time favorites. If you can compare two characters in constant time and the alphabet size is $n^{O(1)}$, then the DC3 algorithm runs in $O(n)$ time.
Notice that you can get the suffix tree out of the suffix array in $O(n)$ time, when you have the LCP-array. Basically, you compute the Cartesian tree of the LCP-array and use the suffix-array to label the nodes. The LCP-array can be also computed with the DC3-algorithm.
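To make the suffix-array route concrete, here is a minimal Python sketch (the function names are just illustrative): the suffix array is built by naively sorting the suffixes — fine for small strings, and deliberately *not* the linear-time DC3 algorithm — and the LCP array is then obtained in linear time with Kasai's algorithm.

```python
def suffix_array(s):
    # naive O(n^2 log n) construction by sorting the suffixes directly;
    # for large inputs one would use DC3 or prefix doubling instead
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s, sa):
    # Kasai's algorithm: lcp[i] = length of the longest common prefix of
    # the suffixes sa[i-1] and sa[i]; runs in O(n)
    n = len(s)
    rank = [0] * n
    for i, suf in enumerate(sa):
        rank[suf] = i
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1
        else:
            h = 0
    return lcp

s = "banana"
sa = suffix_array(s)
print(sa)               # [5, 3, 1, 0, 4, 2]
print(lcp_array(s, sa)) # [0, 1, 3, 0, 0, 2]
```

From the suffix array and the LCP array one can then build the suffix tree, e.g. via the Cartesian tree of the LCP array as described above.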
-
+1 for answering the question and also mentioning suffix arrays, and the super-cool fancy algorithm! – Paresh Nov 23 '12 at 9:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9175559282302856, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/28237/dimension-of-central-simple-algebra-over-a-global-field-built-using-class-field/28264
|
## Dimension of central simple algebra over a global field “built using class field theory”.
If $F$ is a global field then a standard exact sequence relating the Brauer groups of $F$ and its completions is the following:
$$0\to Br(F)\to\oplus_v Br(F_v)\to\mathbf{Q}/\mathbf{Z}\to 0.$$
The last non-trivial map here is "sum", with each local $Br(F_v)$ canonically injecting into $\mathbf{Q}/\mathbf{Z}$ by local class field theory.
In particular I can build a class of $Br(F)$ by writing down a finite number of elements $c_v\in Br(F_v)\subseteq \mathbf{Q}/\mathbf{Z}$, one for each element of a finite set $S$ of places of $v$, rigging it so that the sum $\sum_vc_v$ is zero in $\mathbf{Q}/\mathbf{Z}$.
This element of the global Brauer group gives rise to an equivalence class of central simple algebras over $F$, and if my understanding is correct this equivalence class will contain precisely one division algebra $D$ (and all the other elements of the equiv class will be $M_n(D)$ for $n=1,2,3,\ldots$).
My naive question: is the dimension of $D$ equal to $m^2$, with $m$ the lcm of the denominators of the $c_v$? I just realised that I've always assumed that this was the case, and I'd also always assumed in the local case that the dimension of the division algebra $D_v$ associated to $c_v$ was the square of the denominator of $c_v$. But it's only now, in writing notes on this stuff, that I realise I have no reference for it. Is it true??
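To fix ideas, the simplest instance I have in mind is the classical one (though I may be garbling the Hilbert symbols): take $F=\mathbf{Q}$, $S=\{2,3\}$ and $c_2=c_3=\tfrac{1}{2}$, so that $c_2+c_3=0$ in $\mathbf{Q}/\mathbf{Z}$. Here the lcm of the denominators is $m=2$, and the division algebra in the corresponding class should be the quaternion algebra $\left(\tfrac{-1,\,3}{\mathbf{Q}}\right)$, ramified exactly at $2$ and $3$, of dimension $4=2^2$ over $\mathbf{Q}$.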
-
1
See Theorem 2.6, Chapter VIII, of my class field theory notes, but the proof uses the Grunwald-Wang theorem which is not (yet) proved in the notes. – JS Milne Jun 15 2010 at 16:12
## 4 Answers
To paraphrase Igor Pak: OK, this I know. It is remarkable how difficult it is to track down a reference which gives an actual proof for this fact (moreover applicable to all global fields). The notes of Pete Clark don't give a proof or a reference for a proof, and its omission in Cassels-Frohlich is an uncorrected error. :)
But here is a reference: Theorem 3.6 in the notes on Honda-Tate theory on Kirsten Eisentraeger's webpage. The assertion is even stronger: one can find a cyclic splitting field of the expected minimal degree. A moment's reflection leads one to realize what is actually going on: in the non-archimedean local theory we know that one can always arrange the splitting field to be the unramified one of the expected minimal degree (already in Serre's Local Fields, and part of the story of the "local invariant"), so in particular it is cyclic in that case. Taking into account the real case, and using the exactness at the left of the global-to-local sequence for Brauer groups, the global problem reduces to making a global cyclic extension inducing specified local ones at finitely many places and having a predicted degree which is lcm of local degrees (in the local theory the degree is actually all that really matters, not the unramifiedness).
Enter Grunwald-Wang... and since all that matters locally is the degree, if we don't care about global cyclicity but just global degree and some local degrees then weak approximation & Krasner's Lemma suffice to do the job (so for the question as asked, in which there's no cyclicity, the global problem is actually very elementary once the local case is settled!). Note that in Cassels-Frohlich the global cyclic splitting field is addressed, but not its degree (since Grunwald-Wang is not addressed in Cassels-Frohlich).
Historically the existence of a global cyclic splitting field, moreover of the expected degree, was regarded as one of the real triumphs of global class field theory, and the early attempts at class field theory by the German school were intimately tied up with this problem of the cyclic splitting field. This is why it was such a shock to Artin when Wang discovered that Grunwald's proof of local-to-global for cyclic extensions was not true (but fortunately Wang's fix was sufficient); see Roquette's historical notes on CFT.
Finally, to put this in perspective, it should be noted (as remarked in Eisentraeger's notes) that there are examples of complex function fields in transcendence degree 3 admitting nontrivial 2-torsion Brauer classes not represented by a quaternion division algebra! (The appearance of trdeg 3 is reasonable, as the period-index problem for surfaces over an algebraically closed field was proved by deJong.)
-
Thanks a lot for this nice reference. Here's a clickable link to it: math.psu.edu/eisentra/hondatate.pdf . Theorem 3.6, as you say. – Kevin Buzzard Jun 15 2010 at 14:03
@Boyarsky: which notes are you talking about? So far as I can recall, this statement is true for all of my posted lecture notes, but since I don't have any notes on Brauer groups per se, this is not so surprising. – Pete L. Clark Jun 15 2010 at 14:49
Never mind -- D. Savitt's answer explains it all. (You know you have a lot of notes on your webpage when...) – Pete L. Clark Jun 15 2010 at 14:53
Isn't this the period-index problem in the Brauer group of a number field? In which case the answer is yes, it's true -- see e.g. Fact 4(c) on page 4 of these notes by Pete Clark (as well as Example 1.1.2 and the definition following it, both on the same page):
http://math.uga.edu/~pete/periodindexnotes.pdf
-
@D. Savitt: that my question is the period-index question for local and global fields does indeed seem to be the case, although I don't understand why the definition of I(eta) and M(eta) in Pete's example 1.1.2 coincide with his earlier definitions. But Pete is only claiming that period=index for F the rationals, and I asked about the case of a general global field. It's comforting to see it in print for these cases though ;-) because of course the moment you realise you don't know a proof and the standard references you pick up don't give one either, you begin to think it might be wrong... – Kevin Buzzard Jun 15 2010 at 14:04
Isn't Pete's property Br(1) for a field K the property that period=index holds for all finite extensions of K? – D. Savitt Jun 15 2010 at 14:59
@D: Yes, that's right. But Boyarsky is also right: I don't give any reference to a proof, not even a [?]. In my defense, this is because for me, Brauer groups are sort of a prelude to what I really want to talk about. (I do think that a standard reference on CSA's should give a proof: for instance, I would be surprised if the result cannot be found in Pierce or Gille-Szamuely.) – Pete L. Clark Jun 15 2010 at 15:08
@Pete: The book of Gille-Szamuely punts to elsewhere for getting the cyclic splitting field of correct degree: see Remarks 6.5.5 and 6.5.6. Amusingly, for global function fields they refer to Weil's "Basic Number Theory" without saying where in that dense tome it is to be found, and my recollection is that Weil's book does not handle the degree of the cyclic splitting field. But they also refer the reader to Pierce's "Associative Algebras" for proofs for global fields (or at least number fields). But that isn't freely available on the Internet, and so by modern standards it does not exist. – Boyarsky Jun 15 2010 at 15:25
2
@Pete: I stand corrected: the Chinese webpage ishare.iask.sina.com.cn/f/6403052.html provides a free copy of Pierce's book as a .djvu file (click on the downwards-pointing green arrow there, not the upwords one), so the book exists after all. The Theorem in section 18.6 is the desired result (stated only for number fields solely because Pierce only reviews the basics of number theory in the char. 0 case; the method of proof via Grunwald-Wang is the same as in the general case). – Boyarsky Jun 15 2010 at 15:40
I think you mean "lcm of the denom. of the $c_v$" rather than "gcd".
In http://front.math.ucdavis.edu/0507.5070, Colliot-Thélène attributes the result that "exponent = index" in the Brauer group of a global (and local) field to Brauer-Hasse-Noether and Albert and gives references to:
R. BRAUER, H. HASSE et E. NOETHER, Beweis eines Hauptsatzes in der Theorie der Algebren, J. reine angew. Math. (Crelle) 167 (1932) 399–404.
M. DEURING – Algebren, Zweite, korrigierte Auflage. Ergebnisse der Mathematik und ihrer Grenzgebiete 41, Springer-Verlag, Berlin-New York 1968.
and
P. ROQUETTE – The Brauer-Hasse-Noether theorem in historical perspective. Schriften der Math.-Phys. Klasse der Heidelberger Akademie der Wissenschaften 15 (2004).
The result seems to be proved in Reiner's book (Theorem (32.19):
I. REINER – Maximal orders, Corrected reprint of the 1975 original. With a foreword by M. J. Taylor. London Mathematical Society Monographs. New Series 28, The Clarendon Press, Oxford University Press, Oxford, 2003.
-
gcd->lcm fixed; thanks. – Kevin Buzzard Jun 15 2010 at 13:58
PS I'd like to accept your answer as well as Boyarsky's but I think I can only accept one :-( – Kevin Buzzard Jun 15 2010 at 14:04
I agree that Boyarsky's answer was nice! I note that Reiner's book states without proof the Hasse Norm Theorem and the Grunwald-Wang Theorem (see intro to chapter 8). But excepting that, I think Reiner's account is pretty thorough. – George McNinch Jun 15 2010 at 14:23
The fact that the Schur index of an element of the Brauer group of a number field equals its order in the Brauer group (and also is a cyclic algebra) is Theorem 18.6 in Richard Pierce's Associative Algebras (GTM88 Springer-Verlag 1982). The proof uses the Grunwald-Wang theorem.
-
Thanks Robin. Nice to have a reference from the "standard literature". I wish I knew my way around Pierce better. – Kevin Buzzard Jun 16 2010 at 6:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9058400392532349, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/160597/number-of-prime-divisors-of-the-order-of-e-8q
|
# Number of prime divisors of the order of $E_8(q)$.
I am trying to compute the number of prime divisors of the order of $E_8(q)$. I am interested in the general solution, but in particular, my problem calls for $q=p^{15}$ (for prime $p$) and $q\equiv 0,1,$ or $4 \mod 5$, if this helps at all.
So, the order is $|E_8(q)|=q^{120}(q^{30}-1)(q^{24}-1)(q^{20}-1)(q^{18}-1)(q^{14}-1)(q^{12}-1)(q^{8}-1)(q^{2}-1)$ (ref: Wilson, The Finite Simple Groups). Is there any more efficient algorithm than the standard to factorize integers of this form? I am primarily interested in knowing the number of prime divisors, but the divisors themselves would also be very useful.
-
## 1 Answer
So, among other things, you want to know the (number of) prime factors of $p^{450}-1$ for various primes $p$. Of course the polynomial $x^{450}-1$ factors into lots of smaller pieces, but there's still an irreducible part of degree 120. I can't imagine there would be a general formula for the number of prime factors of $f(p)$ for such a polynomial $f$, nor even much chance of computing them for $p$ with more than 2 digits.
You may find some useful information in the book, Brillhart, Lehmer, Selfridge, Tuckerman, Wagstaff, Factorizations of $b^n\pm1$.
Also, I'm not certain what algorithm you refer to when you write, "the standard." The standard algorithm for factoring that kind of number is probably the Special Number Field Sieve; is that what you had in mind?
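As a practical aside (a sketch only, with illustrative function names): since $q^d-1=\prod_{e\mid d}\Phi_e(q)$, one can collect the prime divisors by factoring each cyclotomic value $\Phi_e(q)$ separately rather than the huge numbers $q^d-1$ themselves. The SymPy sketch below runs instantly for a tiny value like $q=2$; for $q=p^{15}$ the same decomposition applies, but the individual cyclotomic values are exactly the hard factorizations discussed above.

```python
from sympy import cyclotomic_poly, divisors, primefactors
from sympy.abc import x

# exponents d appearing in |E8(q)| = q^120 * prod_d (q^d - 1)
exponents = [2, 8, 12, 14, 18, 20, 24, 30]

def prime_divisors_of_E8_order(q, p):
    """Primes dividing |E8(q)| for q = p^k, via q^d - 1 = prod_{e|d} Phi_e(q)."""
    primes = {p}                                       # from the q^120 factor
    needed = set()
    for d in exponents:
        needed.update(divisors(d))
    for e in sorted(needed):
        value = int(cyclotomic_poly(e, x).subs(x, q))  # Phi_e evaluated at q
        primes.update(primefactors(value))
    return sorted(primes)

print(prime_divisors_of_E8_order(2, 2))   # fast for q = 2; slow for large q
```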
-
If I am remembering correctly, then there are no special methods for factorizing numbers of the form $p^n-1$, but such factorizations are frequently needed, so a lot of computing power has been devoted to storing them once they have been computed. The book you mention is probably the best place to look! – Derek Holt Jun 20 '12 at 9:25
Yes, the special number field sieve is what I was talking about. I will look into this book and post if I come up with anything - thank you! – Alexander Gruber Jun 24 '12 at 2:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548945426940918, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Vi%c3%a8te's_formulas
|
# Vieta's formulas
For a method for computing π, see Viète's formula.
In mathematics, Vieta's formulas are formulas that relate the coefficients of a polynomial to sums and products of its roots. Named after François Viète (more commonly referred to by the Latinised form of his name, Franciscus Vieta), the formulas are used specifically in algebra.
## The Laws
### Basic formulas
Any general polynomial of degree n
$P(x)=a_nx^n + a_{n-1}x^{n-1} +\cdots + a_1 x+ a_0 \,$
(with the coefficients being real or complex numbers and $a_n \neq 0$) is known by the fundamental theorem of algebra to have $n$ (not necessarily distinct) complex roots $x_1, x_2, \ldots, x_n$. Vieta's formulas relate the polynomial's coefficients $\{a_k\}$ to signed sums and products of its roots $\{x_i\}$ as follows:
$\begin{cases} x_1 + x_2 + \dots + x_{n-1} + x_n = -\tfrac{a_{n-1}}{a_n} \\ (x_1 x_2 + x_1 x_3+\cdots + x_1x_n) + (x_2x_3+x_2x_4+\cdots + x_2x_n)+\cdots + x_{n-1}x_n = \frac{a_{n-2}}{a_n} \\ {} \quad \vdots \\ x_1 x_2 \dots x_n = (-1)^n \tfrac{a_0}{a_n}. \end{cases}$
Equivalently stated, the $(n-k)$th coefficient $a_{n-k}$ is related to a signed sum of all possible subproducts of roots, taken $k$ at a time:
$\sum_{1\le i_1 < i_2 < \cdots < i_k\le n} x_{i_1}x_{i_2}\cdots x_{i_k}=(-1)^k\frac{a_{n-k}}{a_n}$
for $k = 1, 2, \ldots, n$ (where we wrote the indices $i_k$ in increasing order to ensure each subproduct of roots is used exactly once).
The left hand sides of Vieta's formulas are the elementary symmetric functions of the roots.
### Generalization to rings
Vieta's formulas are frequently used with polynomials with coefficients in any integral domain R. In this case the quotients $a_i/a_n$ belong to the ring of fractions of R (or in R itself if $a_n$ is invertible in R) and the roots $x_i$ are taken in an algebraically closed extension. Typically, R is the ring of the integers, the field of fractions is the field of the rational numbers and the algebraically closed field is the field of the complex numbers.
Vieta's formulas are useful in this situation, because they provide relations between the roots without having to compute them.
For polynomials over a commutative ring which is not an integral domain, Vieta's formulas may be used only when the $a_i$'s are computed from the $x_i$'s. For example, in the ring of the integers modulo 8, the polynomial $x^2-1$ has four roots 1, 3, 5, 7, and Vieta's formulas are not true if, say, $x_1=1$ and $x_2=3$.
## Example
Vieta's formulas applied to quadratic and cubic polynomial:
For the second degree polynomial (quadratic) $P(x)=ax^2 + bx + c$, roots $x_1, x_2$ of the equation $P(x)=0$ satisfy
$x_1 + x_2 = - \frac{b}{a}, \quad x_1 x_2 = \frac{c}{a}.$
The first of these equations can be used to find the minimum (or maximum) of P. See second order polynomial.
For the cubic polynomial $P(x)=ax^3 + bx^2 + cx + d$, roots $x_1, x_2, x_3$ of the equation $P(x)=0$ satisfy
$x_1 + x_2 + x_3 = - \frac{b}{a}, \quad x_1 x_2 + x_1 x_3 + x_2 x_3 = \frac{c}{a}, \quad x_1 x_2 x_3 = - \frac{d}{a}.$
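As a quick numerical illustration of the cubic case, one can compare the three relations against the roots returned by a numerical solver. The following Python sketch uses the arbitrarily chosen cubic $2x^3-3x^2-11x+6$, whose roots are $-2$, $\tfrac12$ and $3$:

```python
import numpy as np

# coefficients of 2x^3 - 3x^2 - 11x + 6 (roots: -2, 1/2, 3)
a, b, c, d = 2.0, -3.0, -11.0, 6.0
r = np.roots([a, b, c, d])

e1 = r.sum()                            # x1 + x2 + x3
e2 = r[0]*r[1] + r[0]*r[2] + r[1]*r[2]  # x1 x2 + x1 x3 + x2 x3
e3 = r.prod()                           # x1 x2 x3

print(np.isclose(e1, -b/a), np.isclose(e2, c/a), np.isclose(e3, -d/a))
# True True True
```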
## Proof
Vieta's formulas can be proved by expanding the equality
$a_nx^n + a_{n-1}x^{n-1} +\cdots + a_1 x+ a_0 = a_n(x-x_1)(x-x_2)\cdots (x-x_n)$
(which is true since $x_1, x_2, \dots, x_n$ are all the roots of this polynomial), multiplying the factors on the right-hand side, and identifying the coefficients of each power of $x.$
Formally, if one expands $(x-x_1)(x-x_2)\cdots(x-x_n),$ the terms are precisely $(-1)^{n-k}x_1^{b_1}\cdots x_n^{b_n} x^k,$ where $b_i$ is either 0 or 1, according to whether $x_i$ is included in the product or not, and $k$ is the number of $x_i$ that are excluded, so the total number of factors in the product is $n$ (counting $x^k$ with multiplicity $k$) – as there are $n$ binary choices (include $x_i$ or $x$), there are $2^n$ terms – geometrically, these can be understood as the vertices of a hypercube. Grouping these terms by degree yields the elementary symmetric polynomials in $x_i$ – for $x^k$, all distinct $k$-fold products of $x_i$.
## History
As reflected in the name, these formulas were discovered by the 16th century French mathematician François Viète, for the case of positive roots.
In the opinion of the 18th century British mathematician Charles Hutton, as quoted in (Funkhouser), the general principle (not only for positive real roots) was first understood by the 17th century French mathematician Albert Girard; Hutton writes:
...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation.
## References
• Hazewinkel, Michiel, ed. (2001), "Viète theorem", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• Funkhouser, H. Gray (1930), "A short account of the history of symmetric functions of roots of equations", American Mathematical Monthly (Mathematical Association of America) 37 (7): 357–365, doi:10.2307/2299273, JSTOR 2299273
• Vinberg, E. B. (2003), A course in algebra, American Mathematical Society, Providence, R.I, ISBN 0-8218-3413-4
• Djukić, Dušan, et al. (2006), The IMO compendium: a collection of problems suggested for the International Mathematical Olympiads, 1959–2004, Springer, New York, NY, ISBN 0-387-24299-6
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8986667394638062, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/189498-finding-inifnite-group-not-cyclic.html
|
# Thread:
1. ## Finding an infinite group that is not cyclic
Problem: Find an infinite group that is not cyclic.
I was looking at the set $\mathbb{Q}$ for this.
I said that if $b=\frac{x}{y}$ so that $b \in \mathbb{Q}$, then the generator $<a> = b^n$ cannot generate all of $\mathbb{Q}$, and therefore, since there is no generator for $\mathbb{Q}$, it is not cyclic.
For all $n$ with $b^n$, $b$ must be zero in order to generate the zero element that is present in $\mathbb{Q}$. However, if $b=0$, then ONLY the zero element is generated, the rest of $\mathbb{Q}$ is not generated, and so $\mathbb{Q}$ is not cyclic.
In the other case, if $b \neq 0$, $b$ could potentially generate $\mathbb{Q}$ but without the zero element since b is not zero.
I have the nagging suspicion that I am incorrect.
Any input is appreciated.
2. ## Re: Finding an infinite group that is not cyclic
How about something like... if $|b|<1$ then for all $n$, $b^n$ ...?
and hence $b$ didn't generate which rationals ?
Similarly,
if $|b|=1$, then
and if $|b|>1$ then for all $n$, $b^n$ ...?
Hence b cannot generate all of $\mathbb{Q}$
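For completeness, here is the usual way to finish this off when the group is $(\mathbb{Q},+)$, which is the standard example (a sketch, assuming that is the intended group): suppose $\mathbb{Q}=\left< \frac{a}{b} \right>$ with $a, b$ integers, $b>0$ and $\frac{a}{b} \neq 0$. Every element of this subgroup has the form $n \cdot \frac{a}{b} = \frac{na}{b}$ for an integer $n$. But $\frac{a}{2b}$ is not of this form, since $\frac{na}{b} = \frac{a}{2b}$ would force $n = \frac{1}{2}$, which is not an integer. Since $\left< 0 \right> = \{0\}$ is not all of $\mathbb{Q}$ either, no single element generates $\mathbb{Q}$, so $(\mathbb{Q},+)$ is not cyclic.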
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913998544216156, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?s=2dc43e83ae92351806e6da5562769274&p=4201853
|
Physics Forums
## Non Linear PDE in 2 dimensions
Hi all. I'm trying to solve this PDE but I really can't figure how. The equation is
[tex]
f(x,y) + \partial_x f(x,y) - 4 \partial_x f(x,y) \partial_y f(x,y) = 0
[/tex]
As a first approximation I think it would be possible to consider $$\partial_y f$$ a function of only y and $$\partial_x f$$ a function of only x but even in this case I couldn't find a general solution.
Any idea?
Quote by L0r3n20 Hi all. I'm trying to solve this PDE but I really can't figure how. The equation is $$f(x,y) + \partial_x f(x,y) - 4 \partial_x f(x,y) \partial_y f(x,y) = 0$$ As a first approximation I think it would be possible to consider $$\partial_y f$$ a function of only y and $$\partial_x f$$ a function of only x but even in this case I couldn't find a general solution. Any idea?
Your equation needs to be supplemented by a boundary condition: say $f(x,g(x)) = h(x)$ for suitable $g(x)$.
The method of characteristics looks like a good bet.
By the chain rule,
[tex]
\frac{\mathrm{d}f}{\mathrm{d}t} = \frac{\partial f}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t} + \frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t}
[/tex]
which by comparison with your equation gives the following system:
[tex]
\frac{\mathrm{d}x}{\mathrm{d}t} = 1 \\
\frac{\mathrm{d}y}{\mathrm{d}t} = -4\frac{\partial f}{\partial x} \\
\frac{\mathrm{d}f}{\mathrm{d}t} = - f
[/tex]
subject to the initial conditions $f(0) = f_0 = h(x_0)$, $x(0) = x_0$, $y(0) = y_0 = g(x_0)$ so that $f(x_0,g(x_0)) = h(x_0)$.
Solving the first equation gives $x = t + x_0$, and the third gives $f = f_0e^{-t} = f_0e^{x_0-x}$. Substituting these into the second gives
[tex]
\frac{\mathrm{d}y}{\mathrm{d}t} = 4f \\
[/tex]
so that $y = y_0 + 4f_0(1 - e^{-t})$.
Therefore given a characteristic starting at $(x_0,g(x_0))$, the value of the function at $(x,y) = (x_0, g(x_0) + 4h(x_0)(1 - e^{-t}))$ is $h(x_0)e^{-t}$.
It is of vital importance that the curve $(x,g(x))$ on which the boundary condition is given is not a characteristic (i.e. a curve $(x(t),y(t))$ for some $(x_0,y_0)$). There may also be a problem if characteristics intersect.
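As a numerical cross-check of the closed-form expressions above, one can integrate the characteristic system directly and compare (a minimal Python/SciPy sketch; the initial data $x_0$, $y_0$, $f_0$ are arbitrary illustrative values):

```python
import numpy as np
from scipy.integrate import solve_ivp

x0, y0, f0 = 0.0, 1.0, 2.0      # arbitrary starting data on the boundary curve

def rhs(t, state):
    x, y, f = state
    return [1.0, 4.0 * f, -f]   # dx/dt = 1, dy/dt = 4f, df/dt = -f as above

sol = solve_ivp(rhs, (0.0, 3.0), [x0, y0, f0], dense_output=True,
                rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 3.0, 7)
x, y, f = sol.sol(t)
print(np.allclose(x, x0 + t))                              # True
print(np.allclose(f, f0 * np.exp(-t)))                     # True
print(np.allclose(y, y0 + 4.0 * f0 * (1.0 - np.exp(-t))))  # True
```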
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8616871237754822, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=159154
|
Physics Forums
## Equivalency of functions/graphs
I'm not sure where I'm supposed to put this, but I guess that at the core of my problem is the coordinate function, which I've seen in algebra-related courses.
I need some help with the concept of equivalency of functions or their graphs.
I need to know if two functions or their graphs are equivalent at a point m ( f1(0)=f2(0)=m ). And I know the following about the situation:
Graphs f1 and f2 on a manifold M are equivalent at a point m (where m lies in an open set U) if, for some chart $f_c = (q^1,\ldots,q^n): U \to \mathbb{R}^n$ of the manifold M,
$$\frac{d}{dt}q^i(f_1(t))=\frac{d}{dt}q^i(f_2(t))\ |_{t=0}$$
I also know that q is a coordinate function:
$$q^i:=pr^i \circ f_c$$ where the projection $$pr^i:R^n \rightarrow R, (x^1,...,x^n) \mapsto x^i$$
I'm also told that if $$q^i:=pr^i \circ f_c$$ then f can be written as: $$f_c=(q^1,...,q^n)$$
How can fc be defined using the coordinate function q, if the coordinate function q was defined using the function fc?
Also, I do not know what the projection function looks like, I only know about its mapping.
Based on this info I should be able to find out if the two graphs f1 and f2 are equivalent at the point m. I can see how graphs can be considered equivalent, but I do not understand the meaning of the coordinate function in all this.
And I suppose that f1 and f2 are charts of M, and therefore the coordinate function can be written as:
$$q^i:=pr^i \circ f_1$$
Addition:
After defining the equivalency using:
$$\frac{d}{dt}q^i(f_1(t))=\frac{d}{dt}q^i(f_2(t))\ |_{t=0}$$
the material states that by using the chain rule it can be seen that this is an equivalence relation and doesn't depend on the chosen chart. If I use the chain rule I get
$$\frac{dq^i}{df_1} \frac{df_1}{dt} = \frac{dq^i}{df_2} \frac{df_2}{dt}$$
and I do not see how the chart chosen does not matter.
I'd appreciate the help.
(I posted this on other forum earlier. I'm posting this here too, in the hope that someone here could help me. I hope you don't take me as a hard-core spammer.)
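Not a full answer, but here is a sketch of the chart-independence argument, under the assumption that $f_c$ and $\tilde f_c$ are two charts whose domains contain $m$, with coordinate functions $q^i$ and $\tilde q^i$. Let $F=\tilde f_c\circ f_c^{-1}$ be the (smooth) transition map between the two charts. By the multivariable chain rule,
$$\frac{d}{dt}\tilde q^i(f_1(t))\Big|_{t=0}=\sum_{j=1}^{n}\frac{\partial F^i}{\partial x^j}\big(f_c(m)\big)\,\frac{d}{dt}q^j(f_1(t))\Big|_{t=0}$$
and the same identity holds with $f_2$ in place of $f_1$. The matrix $\partial F^i/\partial x^j$ evaluated at $f_c(m)$ depends only on the two charts, not on the curve, so if the derivatives of $f_1$ and $f_2$ agree in the chart $f_c$, they agree in the chart $\tilde f_c$ as well; since $F$ is invertible, the converse also holds. That is why the equivalence relation does not depend on the chosen chart.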
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9068059325218201, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/301245/how-many-square-tiles-can-fit-in-an-area-of-1080-by-1920
|
# How many square tiles can fit in an area of 1080 by 1920
I'm trying to calculate how large 35 tiles must be in order to fit in an area of 1080 by 1920.
-
Tiles of equal size, apparently? – rschwieb Feb 12 at 15:48
## 1 Answer
If you accept rectangular tiles, you can fill the area by making tiles $\frac {1080}5 \times \frac {1920}7$ or $\frac {1080}7 \times \frac {1920}5$. These are $216 \times 274\frac 27$ and $154\frac 27 \times 384$. As you can see, in one direction the tile has a fractional dimension. You could also use $1080 \times \frac {1920}{35}$ or $\frac {1080}{35} \times 1920$ tiles, but I suspect that is not what you are looking for. For similar problems, you decide on the grid you want, starting by factoring the number of tiles, then divide the linear dimension by the corresponding factor.
In your title, you call for square tiles. You can't fill that area with $35$ square tiles. The best you can do is to use tiles with a side of $216$. Five of them will fill the $1080$ dimension, but the other direction will be $7 \times 216=1512$ and you will have $408$ left uncovered.
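A tiny brute-force check of that last claim — a Python sketch that only considers integer side lengths and axis-aligned grid placements (so it illustrates the grid argument rather than proving anything about arbitrary packings):

```python
def max_square_side(width, height, count):
    """Largest integer side s so that `count` s-by-s squares fit in a
    width-by-height rectangle when placed on an axis-aligned grid."""
    best = 0
    for s in range(1, min(width, height) + 1):
        if (width // s) * (height // s) >= count:
            best = s
    return best

print(max_square_side(1920, 1080, 35))  # 216
```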
-
I see nowhere in the question where the tiles are required to "fill". It just says "fit". I think the OP is trying to ask "What is the largest size that 35 square tiles can be, which still fit in this area?" – rschwieb Feb 12 at 15:41
And by that I meant only to draw attention that readers can skip over the first part of the answer to the last part of the answer, which addresses that. – rschwieb Feb 12 at 16:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473085999488831, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/72120/cardinality-of-the-fibre-of-a-constantly-branched-finite-morphism-over-the-branc
|
cardinality of the fibre of a constantly branched, finite morphism over the branch locus
Let $\pi:Y\to X$ be a Galois cover, i.e. a finite morphism of nonsingular varieties over an algebraically closed field $\Bbbk$ such that $K(X)\hookrightarrow K(Y)$ is Galois. Let $H\subset X$ be the branch locus of $\pi$. We assume that each component of $H$ is nonsingular1. I seek your help in proving the following statement:
Proposition Let $P\in X$ be a regular point of $H$ and let $\eta$ be the generic point of the component $Z$ of $H$ which contains $P$. Assume that every point $\tau$ of codimension one with $\pi(\tau)=\eta$ has ramification index $n$. Then, $\left|\pi^{-1}(P)\right|=\deg(\pi)/n$.
Of course, if you can refer me to a text where this or a more general result is proven, that'd be great, since I couldn't find anything. Also, feel free to provide a proof which is different from my approach, but this is what I did so far:
1. Restrict to the case where $X=\mathrm{Spec}(A)$ and $Y=\mathrm{Spec}(B)$ are affine (duh).
2. Use the primitive element theorem and further localization to get $B=A[t]$ for some $t$ which is integral over $A$ and the minimal polynomial $F\in A[x]$ of $t$ has degree equal to $\deg(\pi)$.
3. Denote by $F_P$ the image of $F$ under $A[x]\twoheadrightarrow (A/\mathfrak{m}_P)[x]=\Bbbk[x]$, then for any $Q\in\pi^{-1}(P)$, we have $0=F(t)(Q)=F_P(t(Q))$. The cardinality of the fiber of $P$ is therefore equal to the number of distinct zeros of $F_P$.
4. I thought I could write $F=\prod_i (x-f_i)$ for certain $f_i\in B$ by possibly localizing further, since $K(Y)$ is normal over $K(X)$. These $f_i$ will all be distinct, but at $P$, exactly $n$ of them should have the same value. I don't know how to prove that, though.
5. I also thought about considering the discriminant of the $(k-1)$-st formal derivative of $F$ - this is a function on $Y$ which 'knows' where $F$ has $k$-uple zeros. I have even less of an idea how to relate that to ramification, though.
1 Of course, if you want to weaken these assumptions anywhere, be my guest.
-
Since $\pi \colon Y \to X$ is a Galois cover, we have $X=Y/G$, where the finite group $G$ (with $|G|=\deg \pi$) is the Galois group. Take a generic point $\eta$ of a component of the branch locus with ramification index $n$. You are assuming that the points $\tau$ over $P$ have stabilizer isomorphic to $\mathbb{Z}/n \mathbb{Z}$. Since $|\pi^{-1}(\eta)|$ is the cardinality of the orbit of $\tau$, it must be equal to $|G|$ divided by the order of the stabilizer at $\tau$, and this gives precisely $\deg (\pi) / n$. – Francesco Polizzi Aug 5 2011 at 8:29
By the way, since the stabilizer subgroup of different points in $\pi^{-1}(\eta)$ are conjugate in $G$, all points in $\pi^{-1}(\eta)$ have necessarily the same ramification index. Of course, this is false for non-Galois covers. – Francesco Polizzi Aug 5 2011 at 8:31
That sounds interesting, I'd really like to read more about Galois covers - is there any Book which covers all the stuff you used there? – Jesko Hüttenhain Aug 5 2011 at 12:34
You can read Chapter III, Section 3 "Group action on Riemnann surfaces" of Miranda's book "Algebraic curves and Riemann surfaces". It develops the theory only in dimension 1, but it is enough in order to understand how things work. – Francesco Polizzi Aug 5 2011 at 12:54
@Francesco: The assumption is that the generic stabilizer along $Z$ has order $n$. Why can't the order jump at $P$? – Laurent Moret-Bailly Aug 7 2011 at 10:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460601806640625, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2008/04/10/the-exponential-property/?like=1&_wpnonce=84af3d0b4c
|
The Unapologetic Mathematician
The Exponential Property
We’ve defined the natural logarithm and shown that it is, in fact, a logarithm. That is, it’s a homomorphism from the multiplicative group of positive real numbers to the additive group of all real numbers. Now I assert that this function is in fact an isomorphism.
First off, the derivative of $\ln(x)$ is $\frac{1}{x}$, which is always positive for positive $x$. Thus it’s always strictly increasing. That is, if $0<x<y$ then $\ln(x)<\ln(y)$. So no two distinct numbers ever have the same natural logarithm, and the function is thus injective.
Flipping this around tells us that we definitely have some nonzero values for the function. For example, we know that $0<\ln(2)$. Now, since the real numbers are an Archimedean field, no matter how big a number $y>0$ we pick, there will be some natural number $n$ so that $y<n\ln(2)=\ln(2^n)$, where the latter inequality follows from the logarithmic property.
That is, no matter how large a number we pick, $\ln$ takes values at least that large. But because $\ln$ is continuous on a connected interval there must be some number $x$ with $\ln(x)=y$. Similarly, if $y<0$ then there will be some $x$ with $\ln(x)=-y$, and thus $\ln(\frac{1}{x})=y$. Thus the natural logarithm is surjective.
So, since our function is one-to-one and onto, it has an inverse function. We will call this function the “exponential” (denoted $\exp$), and define it to be the unique function satisfying
$\exp( \ln(x) )=x$
$\ln(\exp(y))=y$
for all positive real $x$ and all real $y$.
From here it’s straightforward to see that $\exp$ must be the inverse homomorphism. That is, given two real numbers $y_1$ and $y_2$ we know there must be (unique!) positive real numbers $x_1$ and $x_2$ with $\ln(x_i)=y_i$. Then we calculate
$\exp(y_1+y_2)=\exp(\ln(x_1)+\ln(x_2))=\exp(\ln(x_1 x_2))=$
$x_1x_2=\exp(\ln(x_1))\exp(\ln(x_2))=\exp(y_1)\exp(y_2)$
And it’s clear from here that $\exp(0)=1$. A homomorphism from the additive reals to the multiplicative positive reals like this is said to satisfy the “exponential property”, which is just the reverse of the logarithmic property from last time.
Posted by John Armstrong | Analysis, Calculus
8 Comments »
1. [...] Exponential Functions The exponential property is actually a rather stringent condition on a differentiable function . Let’s start by [...]
Pingback by | April 10, 2008 | Reply
2. Another parse error.
Comment by | April 16, 2008 | Reply
3. [...] and Powers The exponential function is, as might be expected, closely related to the operation of taking powers. In fact, any of our [...]
Pingback by | April 16, 2008 | Reply
4. [...] run from zero all the way up to infinity. Now the same sort of argument as we used to construct the exponential function gives us an inverse sending any nonnegative number to a unique nonnegative square [...]
Pingback by | August 19, 2008 | Reply
5. [...] one that’s particularly nice is the exponential function . We know that this function is its own derivative, and so it has infinitely many derivatives. In [...]
Pingback by | October 7, 2008 | Reply
6. [...] Exponential Series What is it that makes the exponential what it is? We defined it as the inverse of the logarithm, and this is defined by integrating . But the important thing we [...]
Pingback by | October 8, 2008 | Reply
7. [...] Exponential Differential Equation So we long ago defined the exponential function to be the inverse of the logarithm, and we showed that it satisfied the exponential property. Now [...]
Pingback by | October 10, 2008 | Reply
8. [...] The Circle Group Yesterday we saw that the unit-length complex numbers are all of the form , where measures the oriented angle from around to the point in question. Since the absolute value of a complex number is multiplicative, we know that the product of two unit-length complex numbers is again of unit length. We can also see this using the exponential property: [...]
Pingback by | May 27, 2009 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9043839573860168, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2012/02/17/conservation-of-electromagnetic-energy/?like=1&source=post_flair&_wpnonce=d0199d491e
|
# The Unapologetic Mathematician
## Conservation of Electromagnetic Energy
$\displaystyle\nabla\times B=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}$
Now let’s take the dot product of this with the electric field:
$\displaystyle E\cdot(\nabla\times B)=\mu_0E\cdot J+\epsilon_0\mu_0E\cdot\frac{\partial E}{\partial t}$
On the left, we can run a product rule in reverse:
$\displaystyle B\cdot(\nabla\times E)-\nabla\cdot(E\times B)=\mu_0E\cdot J+\epsilon_0\mu_0E\cdot\frac{\partial E}{\partial t}$
Now, Faraday’s law tells us that
$\displaystyle\nabla\times E=-\frac{\partial B}{\partial t}$
so we can write:
$\displaystyle-B\cdot\frac{\partial B}{\partial t}-\nabla\cdot(E\times B)=\mu_0E\cdot J+\epsilon_0\mu_0E\cdot\frac{\partial E}{\partial t}$
Let’s rearrange this a bit:
$\displaystyle-\frac{1}{\mu_0}B\cdot\frac{\partial B}{\partial t}-\epsilon_0E\cdot\frac{\partial E}{\partial t}=\nabla\cdot\left(\frac{E\times B}{\mu_0}\right)+E\cdot J$
The dot product of a vector field with its own derivative should look familiar; we can rewrite:
$\displaystyle-\frac{\partial}{\partial t}\left(\frac{1}{2\mu_0}B\cdot B+\frac{\epsilon_0}{2}E\cdot E\right)=\nabla\cdot\left(\frac{E\times B}{\mu_0}\right)+E\cdot J$
But now we should recognize almost all the terms in sight! On the left, we’re taking the derivative of the combined energy densities of the electric and magnetic fields:
$\displaystyle U=\frac{\epsilon_0}{2}\lvert E\rvert^2+\frac{1}{2\mu_0}\lvert B\rvert^2$
The second term on the right is the energy density lost to Joule heating per unit time. The only thing left is this vector field:
$\displaystyle u=\frac{E\times B}{\mu_0}$
which we call the “Poynting vector”. It’s really named after British physicist John Henry Poynting, but generations of students remember it because it “points” in the direction electromagnetic energy flows.
To see this, look at the final form of our equation:
$\displaystyle-\frac{\partial U}{\partial t}=\nabla\cdot u+E\cdot J$
On the left we have the rate at which the electromagnetic energy is going down at any given point. On the right, we have two terms; the second is the rate electromagnetic energy density is being lost to heat energy at the point, while the first is the rate electromagnetic energy is “flowing away from” the point.
Compare this with the conservation of charge:
$\displaystyle-\frac{\partial\rho}{\partial t}=\nabla\cdot J$
where the rate at which charge density decreases is equal to the rate that charge is “flowing away” through currents. The only difference is that there is no dissipation term for charge like there is for energy.
One other important thing to notice is what this tells us about our plane wave solutions. If we take such an electromagnetic wave propagating in the direction $k$ and with the electric field polarized in some particular direction, then we can determine that
$\displaystyle u=\frac{E\times B}{\mu_0}=\frac{\lvert E\rvert^2}{\mu_0c}k=\epsilon_0c\lvert E\rvert^2k$
showing that electromagnetic waves carry electromagnetic energy in the direction that they propagate.
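For readers who want to double-check the vector identity behind the reverse product rule used near the top, $E\cdot(\nabla\times B)=B\cdot(\nabla\times E)-\nabla\cdot(E\times B)$, here is a small SymPy sketch that verifies it for generic vector fields (an illustrative check only):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# generic vector fields: each component is an arbitrary function of x, y, z
E = sp.Matrix([sp.Function('E' + c)(x, y, z) for c in 'xyz'])
B = sp.Matrix([sp.Function('B' + c)(x, y, z) for c in 'xyz'])

def curl(F):
    Fx, Fy, Fz = F
    return sp.Matrix([
        sp.diff(Fz, y) - sp.diff(Fy, z),
        sp.diff(Fx, z) - sp.diff(Fz, x),
        sp.diff(Fy, x) - sp.diff(Fx, y),
    ])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

lhs = E.dot(curl(B))
rhs = B.dot(curl(E)) - div(E.cross(B))
print(sp.simplify(lhs - rhs))   # 0, confirming the identity
```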
## 2 Comments »
1. Nice post!
The subtraction sign in the parentheses on the left side of the seventh displayed equation ought to be an addition sign.
Comment by | February 19, 2012 | Reply
2. [...] is called the Killing form, named for Wilhelm Killing and not nearly so coincidentally as the Poynting vector. It will be very useful to study the structures of Lie [...]
Pingback by | September 3, 2012 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 13, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9181628227233887, "perplexity_flag": "middle"}
|
http://nrich.maths.org/1321
|
### Whole Number Dynamics I
The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases.
### Whole Number Dynamics II
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
### Whole Number Dynamics III
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
# Whole Number Dynamics IV
##### Stage: 4 and 5
Article by Alan Beardon
This is the fourth in a series of five articles and in this article we look at a dynamic system which ends in zero, or a cycle of four numbers, and investigate why this is the case.
Every whole number has a remainder $R$ when divided by $10$, for example, $53$ has remainder $3$, $67$ has remainder $7$ and so on.
Of course a whole number can be negative and we have to agree what we mean by the remainder in this case; for example, what is the remainder when $-12$ is divided by $10$? There are two possible answers here and both are equally valid:
First we could say that $-12 = (-1 \times10) + (-2)$ so the remainder is $-2$ or second, we could say that $-12 = (-2 \times 10) + (8)$ so that the remainder is $8$.
In this article the remainder will always be positive (or, of course, zero) so that, for example, the remainder of $-137$ is $3$, and the remainder of $-58$ is $2$. The remainder of $120$, and of $-120$, is $0$.
The general rule, then, is that any whole number can be written as a multiple of $10$ plus the remainder, where the remainder is always one of the numbers: $0$, $1$, $2$, $\ldots$, $9$.
Now let us consider the following rule. Given a whole number $N$, let us write $N$ as a multiple of $10$, say $M \times 10$, plus a remainder $R$; where (as always) $0 \leq R \leq 9$.
Starting with $N$ let's produce a new whole number $N\prime$ which is given by $N\prime = 10 R - M$
Be careful here, for this is not saying that $-M$ is the remainder of $N\prime$ (indeed, it cannot be if it is negative, and even if it is positive it might still be greater than $9$). An example will make this clearer.
Starting with $N =58$, we have $M =5$ and $R =8$ so that $N\prime= (8 \times 10) - 5 = 75$.
As another example, if $N = -123$ then $M = -13$ and $R =7$ so that in this case, $N\prime = (7 \times 10) - (-13) = 83$
You should now start with the whole number $68$ and keep on applying the rule: $N \to N\prime$. For example, starting with $68$ (which is $(6 \times 10) + 8$) and applying the rule gives $74$; now apply the rule to $74$, and so on until you notice something special.
When you have done this, repeat the whole process again, but this time starting with $28$; what do you notice now? Again, repeat the whole process, this time starting with $49$; what do you notice now?
Now start with $3164$.
What is (1) different, and (2) the same, about carrying out the process with this starting point? Now choose your own starting point, and keep on repeating the process until you again notice something special; then try again several times, each with different starting points. I suggest that you keep a careful record of your results and then try and write down what you have discovered. Ask your teacher, or a friend (who likes arithmetic!) to check your results for you.
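(A computational aside, not part of the original article.) The rule is easy to experiment with in a few lines of Python; conveniently, Python's divmod already returns a remainder between 0 and 9 for negative numbers, which matches the convention used above. The function names below are just illustrative choices.

```python
# Apply the rule N -> 10*R - M, where N = 10*M + R and 0 <= R <= 9.
def step(n):
    m, r = divmod(n, 10)        # e.g. divmod(-12, 10) == (-2, 8)
    return 10 * r - m

def orbit(n, steps=12):
    seq = [n]
    for _ in range(steps):
        seq.append(step(seq[-1]))
    return seq

print(orbit(68))      # settles into a cycle of four numbers
print(orbit(28))
print(orbit(3164))
```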
***
You should have discovered by now that whatever number you start with, you will either end up at the number zero (and stay there thereafter), or find yourself circulating repeatedly around a cycle of four numbers. For example, one such cycle is:
$$13 \to (3 \times 10) - 1 = 29 \to (9 \times 10) - 2 = 88 \to (8 \times 10) - 8 = 72 \to (2 \times10) - 7 = 13$$
We shall now begin to try to analyse what is happening here.
First we notice that given any whole number $K$ there are exactly $10$ whole numbers which go to $K$ when we apply the rule; for example, the numbers $-530$, $-429$, $-328$, $-227$, $-126$, $-25$, $76$, $177$, $278$, $379$ are all the numbers that map to $53$.
To see this, we start with any whole number $N$, which we may write as $N = 10M+R$, and note that this ends up at $K$ if $10R - M = K$ or, equivalently, if $M= 10R- K$.
This means to say that if $N$ goes to $K$, then $$N = 10 \times M + R = 10(10R - K) + R = 101 R - 10 K.$$ As $R$ can take any value between $0$ and $9$ inclusive (and only these values), the numbers which go to $K$ when we apply the rule once are precisely the numbers: $$-10 K,\ 101 - 10 K,\ 202 - 10 K,\ 303 - 10 K,\ \ldots,\ 909 - 10K \qquad (1)$$ Let us now look at a special case of this general result.
Obviously $0$ goes to $0$ when we apply the rule (for if $N =0$ then $M =0$ and $R =0$).
What else goes to $0$ in one step?
According to the list (1) above, there are exactly ten numbers which go to $0$ in one step, and putting $K =0$ in the list, we see that these are: $$0, 101, 202, 303, \ldots,909$$ Notice that these are all multiples of $101$. Pick one of these, say $303$ and now ask what goes to $303$ in one step. Of course, if $N$ goes to $303$ in one step, then it goes to $0$ in two steps. Again using the list above, the set of numbers that go to $303$ in one step is $$-10 \times 303, 101-(10 \times 303), 202-(10 \times 303), \ldots,808-(10 \times 303), 909-(10 \times 303)$$ and these too are all multiples of $101$.
This argument shows much more than this. It is clear from (1) that if $K$ is a multiple of $101$, then all ten numbers that go to $K$ in one step are also multiples of $101$. As $0$ is a multiple of $101$ ($0 = 0 \times 101$), we see that every number that goes to $0$ in any number of applications of the rule must also be a multiple of $101$.
Here is a question: does the number $123456$ end up at $0$ or in a cycle of four numbers? Of course you can carry out the process many times, but is obviously much less work to see whether or not $123456$ is a multiple of $101$. Try it. What about the number $987654321$?
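(Another aside, not part of the article.) Instead of iterating, the question can be settled with a single divisibility test, since a starting value can only reach 0 if it is a multiple of 101:

```python
# Which of the two numbers could possibly end up at 0?
for n in (123456, 987654321):
    print(n, "is a multiple of 101:", n % 101 == 0)
```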
We have seen that every number that goes (after a number of steps) to $0$ is a multiple of $101$. This is not the same thing as saying that every multiple of $101$ does go to $0$ (after a certain number of steps) but this is true and in the next article we shall show this is so.
http://mathoverflow.net/questions/61520?sort=votes
## Bundle Gerbes as Characteristic Classes
Perhaps this is a bit naïve, but I was wondering if it is possible to (at least formally) represent bundle gerbes as characteristic classes. Disclaimer: My understanding of bundle gerbes is limited to this paper of Hitchin, so perhaps I'm not thinking of this correctly. Just for reference, a bundle gerbe is defined by specifying an open cover $\{U_i\}$ of a manifold $M$ together with maps $g_{ijk} : U_i \cap U_j \cap U_k \rightarrow S^1$ that satisfy the cocycle-like condition $g_{jkl} g_{ikl}^{-1} g_{ijl} g^{-1}_{ijk} = 1$. One can define connective structures with $3$-form curvatures $H$ on bundle gerbes that define principal circle bundles on the loop space of $M$ (see Hitchin, page 4). These connective structures are classified by their curvatures, $[H / 2\pi] \in H^3(M,\mathbb{Z})$, just as the curvature $2$-form of a line bundle generates the first Chern class. Explicitly, my question is the following:
Can we expand the definition of a bundle gerbe on a manifold $M$ to an arbitrary compact, finite-dimensional Lie group $G$ by considering a bundle gerbe to instead be the set of maps $g_{ijk} : U_i \cap U_j \cap U_k \rightarrow G$? If $\dim G = n$, will $H^{n}(M,\mathbb{Z})$ classify the principal $G$-bundles on $\Omega M$?
Again, my understanding of gerbes is quite insufficient, so perhaps this is "obvious" in some other literature. If this is the case, could you please cite a reference?
Thanks!
PS: I'm not sure if the compactness is truly necessary; I just added it in the hope that the result is more likely to hold in the compact case.
So is your question whether there is a notion of nonabelian gerbe? There is an obvious problem with this: one can make sense of $H^1(M,G)$ for nonabelian $G$, but not of $H^{n>1}(M,G)$. On the other hand, there are reasons (coming from Physics) to believe that nonabelian gerbes have to exist. In some talks at the Newton Institute in 1996, Edward Witten asked that very question. I am not sure if there has been any progress since then. – José Figueroa-O'Farrill Apr 13 2011 at 12:26
There is a notion of nonabelian bundle gerbe. It can be found in "Nonabelian bundle gerbes, their differential geometry and gauge theory" by Aschieri, Cantini and Jurco. There is also a notion of nonabelian cohomology with values in a crossed module. – Ulrich Pennig Apr 13 2011 at 13:41
Thanks for the responses. I suppose that if $G$ is non-abelian it definitely fit the bill and I think that Ulrich's answer (at least from a cursory reading) looks pretty good. However, is the classification still cohomological? It seems as if we now need K-Theory to classify non-Abelian gerbes – Tarun Chitra Apr 13 2011 at 15:42
## 1 Answer
The fact that you are dealing with compact and/or finite-dimensional Lie groups is completely irrelevant. The fact that these groups are Lie is also partially irrelevant (unless you care about putting connections on your bundle gerbes, in which case it becomes very relevant). More relevant is whether the groups are abelian or not. A priori, the cocycle relation only makes sense for abelian groups.
But there is also a theory of non-abelian (bundle) gerbes, where you allow non-abelian groups. The cocycles have two kinds of data: Maps
$\alpha_{ij}:U_i\cap U_j\to \mathrm{Inn}(G)$ and maps
$g_{ijk}:U_i\cap U_j\cap U_k \to G$,
where $\mathrm{Inn}(G)$ denotes the group of inner automorphisms of $G$.
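[Editorial addition, not part of the original answer.] For concreteness, in one common convention (used, up to ordering and indexing differences, in the Aschieri–Cantini–Jurčo paper mentioned in the comments) these data are required to satisfy

$$\alpha_{ij}\,\alpha_{jk} = \mathrm{Ad}_{g_{ijk}}\,\alpha_{ik} \quad \text{on } U_i\cap U_j\cap U_k, \qquad g_{ijk}\,g_{ikl} = \alpha_{ij}(g_{jkl})\,g_{ijl} \quad \text{on } U_i\cap U_j\cap U_k\cap U_l,$$

which reduce to the usual abelian cocycle condition when $G$ is abelian and the $\alpha_{ij}$ are trivial.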
These non-abelian gerbes are classified by $H^2(-,Z(G))$, the second Cech cohomology group with coefficients in the sheaf of $Z(G)$-valued functions. [that's a non-trivial theorem]
That was the case of a trivial band.
A band is the same thing as an $\mathrm{Out}(G)$-principal bundle. Say you are given an $\mathrm{Out}(G)$ principal bundle $P$, described by transition functions $b_{ij}:U_i\cap U_j\to \mathrm{Out}(G)$. Then you can twist the above definition as follows: The cocycles now consist of maps
$\alpha_{ij}:U_i\cap U_j\to \mathrm{Aut}(G)$ and maps
$g_{ijk}:U_i\cap U_j\cap U_k \to G$,
where the $\alpha_{ij}$ are lifts of the $b_{ij}$.
The gerbes with band $P$ are classified by a set that is either empty, or isomorphic to $H^2(-,Z(G)\times_{\mathrm{Out}(G)} P)$, the second Cech cohomology group with coefficients in the sheaf of sections of $Z(G)\times_{\mathrm{Out}(G)} P$.
Whether or not that set is empty depends on the value of an obstruction class that lives in $H^3(-,Z(G)\times_{\mathrm{Out}(G)} P)$. It's non-empty iff that obstruction vanishes.
Finally, to answer your last question. If $G$ is a Lie group and you have a bundle gerbe with connection (trivialized over the base point), then you get a $G$-principal bundle, but only on a subspace of the based loop space $\Omega M$. It's the subspace consisting of those loops over which the band $P$ and its connection trivialize.
Thanks a lot! Is there any chance you know of a reference where I could find the proofs of these facts? – Tarun Chitra Apr 13 2011 at 16:15
The standard reference is Giraud's book Cohomologie non-abelienne. This book is unreadable in the strongest possible meaning of the word "unreadable". So... no. – André Henriques Apr 13 2011 at 16:37
Depends on taste, André, I find most of the contemporary articles in this area, which are often nonsystematic in terminology and notation, plus wave hands and use jargon on most issues, much less readable than Giraud's book. – Zoran Škoda Apr 13 2011 at 17:03
Actually, is there any chance that either of you know of an English book? Or is there an English translation around of *Cohomologie non-abelienne*? – Tarun Chitra Apr 13 2011 at 17:14
I'm currently reading Brylinski's Loop Spaces, Characteristic Classes, and Geometric Quantization which might be helpful. – Cheyne Jun 27 at 22:40
http://stats.stackexchange.com/questions/tagged/discriminant-analysis?sort=votes&pagesize=15
# Tagged Questions
Given multivariate data split into several subsamples (classes) the analysis finds linear combinations of variables, called discriminant functions, which discriminate between classes and are uncorrelated. The functions are applied then to assign old or new observations to the classes. Discriminant ...
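(Editorial illustration, not part of the original tag page.) A minimal scikit-learn sketch of what the analysis does; the iris data and the choice of two discriminant functions are arbitrary:

```python
# Fit an LDA, project onto the discriminant functions, and classify.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

lda = LinearDiscriminantAnalysis(n_components=2)  # at most (n_classes - 1) functions
scores = lda.fit_transform(X, y)   # uncorrelated linear combinations of the variables
print(scores.shape)                # (150, 2)
print(lda.predict(X[:5]))          # assign observations to classes
```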
1answer
620 views
### Treatment of outliers produced by Kurtosis
I was wondering if anyone could help me with information about Kurtosis (i.e. is there any way to transform your data to reduce it?) I have a questionnaire dataset with a large number of cases and ...
1answer
677 views
### Cluster Analysis followed by Discriminant Analysis
What is the rationale, if any, to use Discriminant Analysis (DA) on the results of a clustering algorithm like k-means, as I see it from time to time in the literature (essentially on clinical ...
2answers
431 views
### Why prediction of a predicted variable from a discriminant analysis is imperfect
I am puzzled by something I found using Linear Discriminant Analysis. Here is the problem - I first ran the Discriminant analysis using 20 or so independent variables to predict 5 segments. Among the ...
2answers
592 views
### How does linear discriminant analysis reduce the dimensions?
There are words from "The Elements of Statistical Learning" on page 91: The K centroids in p-dimensional input space span at most K-1 dimensional subspace, and if p is much larger than K, this ...
1answer
157 views
### Variant of discriminant analysis for known multiple independent classifications?
I have a large data set: over 100,000 data points, each with 60 dimensions. I want to display the data in 2D to visibly maximize the separation between classes, which I know for each point. I asked a ...
1answer
217 views
### Classifying clusters using discriminant analysis
Suppose I've data for 100 individuals for 5 variables, say Var1, Var2,...Var5. I run the cluster analysis using these 5 variables on these 100 rows & got 3 clusters. Now, I want to differentiate ...
3answers
1k views
### What is the relationship between regression and linear discriminant analysis?
Is there a relationship between regression and linear discriminant analysis? What are their similarities and differences? Does it make any difference for two classes and more than two classes?
1answer
405 views
### Multi class LDA vs 2 class LDA
The problem of designing a multi-class classifier using LDA can be expressed as a 2 class problem(one vs everything else) or a multi-class problem. Why is it that in certain cases Multi-class LDA ...
2answers
162 views
### How do you identify the variables that separate several groups?
I don't have much background on statistics. I am working on multivariate morphometrics of a sample of frogs. I have a data matrix of 19 variables (continuous characteristics) for around 250 samples. ...
0answers
134 views
### Using QDA for Non-Gaussian distributions
I am evaluating a Quadratic Discriminant Analysis (QDA) classifier on a high-dimensionality feature set. The features come from highly non-Gaussian distributions. However, when I transform the ...
2answers
567 views
### Why are Gaussian “discriminant” analysis models called so?
Gaussian discriminant analysis models learn $P(x|y)$ and then apply Bayes rule to evaluate $$P(y|x) = \frac{P(x|y)P_{prior}(y)}{\Sigma_{g \in Y} P(x|g) P_{prior}(g) }.$$ Hence, they are generative ...
1answer
209 views
### What do “real values” refer to in supervised classification?
I'm using supervised classification algorithms from mlpy to classify things into two groups for a question-answering system. I don't really know how these algorithms work, but they seem to be doing ...
2answers
488 views
### Linear discriminant analysis and Bayes rule
What is the relation between Linear discriminant analysis and Bayes rule? I understand that LDA is used in classification by trying to minimize the ratio of within group variance and between group ...
1answer
84 views
### Is discriminant analysis supervised learning?
Is linear discriminant analysis, specifically Linear Programming Discriminant Analysis (LPDA), supervised learning? Can you provide a valid reference that states so if possible. My study supervisor ...
2answers
667 views
### How to interpret a predictor with a positive structure coefficient and a negative standardised coefficient in discriminant function analysis?
I am doing a discriminant function analysis and I have four continous independent variables and one categorical dependent variable (that has 3 groups). I have chosen to do this analysis to see how ...
1answer
260 views
### Plotting a discriminant as line on scatterplot
Given a data scatterplot I can plot the data's principal components on it, as axes tiled with points which are principal components scores. You can see an example plot with the cloud (consisting of 2 ...
1answer
1k views
### Deriving total (within class + between class) scatter matrix
I was fiddling with PCA and LDA methods and I am stuck at a point, I have a feeling that it is so simple that I can't see it. Within-class ($S_W$) and between-class ($S_B$) scatter matrices are ...
2answers
312 views
### Theory on discriminant analysis in small sample size conditions
I see a similarity between a problem I'm working on and Linear (or Quadratic) Discriminant Analysis when the sample size is smaller than $p+1$. I'm interested in theory bounding the generalization ...
1answer
166 views
### Choosing variables for Discriminant Analysis
I've 110 variables & 200 data points. Of these 110 variables, one is a group variable (say "brown eye","blue eye"). I want to use discriminant analysis to classify the groups based on the remaining 109 ...
0answers
129 views
### LDA, Significance of orthonormality- Trace Ratio Maximization
The objective of fisher linear discriminant analysis can be formulated as maximizing $\frac{Tr[X^TAX]}{Tr[X^TBX]}$ over $X$ where $A$ and $B$ are positive semi-definite with orthonormality constraints ...
2answers
125 views
### Where does the definition of the hyperplane in a simple SVM come from?
I'm trying to figure out support vector machines using this resource. On page 2 it is stated that for linearly separable data the SVM problem is to select a hyperplane such that \$\vec{x}_i\vec{w} + b ...
2answers
191 views
### Can you use discriminant analysis to classify new observations into categories generated by a previous $k$-means clustering?
After doing k-means clustering on a set of observations, I would like to construct a discriminant function so as to classify new observations into the categories I found after k-means. Is this at all ...
1answer
149 views
### What exactly the ROC curve can tell us or can be inferred?
(I post this originally at http://stackoverflow.com/questions/15477282/what-exactly-the-roc-curve-can-tell-us-or-can-be-inferred, but people directed me to here. Sorry about posting this twice.) I ...
3answers
243 views
### Does PCA followed by LDA make sense?
This is a question about classification. I am a neuroscience student with little experience of classification methods and I'd be grateful for any advice about the best way to implement a linear ...
1answer
306 views
### Non-parametric discriminant analysis in R
I want to use Discriminant Analysis between two non normal populations in R. Can anybody tell me the name of the R function to do so? Could also anybody tell me how accurate my results will be if I ...
3answers
301 views
### Usage of LDA with more than two classes
I'm reading about the Linear Discriminant Analysis by Fisher and I have a couple of questions about its usage. If you have k>2 classes in a two-dimensional space you find k−1 vectors that you need ...
1answer
121 views
### Fisher discrimination power of a variable and Discriminant analysis
Apparently, the Fisher analysis aims at simultaneously maximising the between-class separation, while minimising the within-class dispersion. A useful measure of the discrimination power of a ...
1answer
222 views
### Decision boundaries from coefficients of linear discriminants?
I have a data set with four variables and 3000+ observations on which I performed an LDA. I was wondering how I can use the scaled coefficients of linear discriminants (output of R shown below as ...
1answer
154 views
### How to estimate the deposit mix of a bank using interest rate as the independent variable?
Let's say a bank has 5 different types of deposits. One type is certificates of deposits (CD), and the other 4 types are different checking and savings account products with various interest rates ...
0answers
85 views
### How do you detect if a given dataset has multivariate normal distribution?
I'm looking at Fisher's LDA on various datasets on UCI ML repository and trying to see where LDA might perform badly. One reason I can think of is if the data distribution is not a multi-variate ...
1answer
102 views
### Sample size and documentation for discriminant analysis
Does anybody have good documentation for discriminant analysis? I have 9 variables (measurements), 60 patients and my outcome is good surgery, bad surgery. Also, is my sample size too small? Thank ...
1answer
42 views
### Inconsistency in cross-validation results
I have a set of dataset recorded from subjects as they perform some particular cognitive task. The data consists of 16 channels and a number of sample points per channel and I want to classify this ...
1answer
184 views
### Dealing with high dimension in principal component analysis
For very extreme high dimensions in PCA, the number of dimensions $p$ is larger than the sample size $N$, does PCA work well or does it work at all? By 'work' I mean does it work mathematically If ...
1answer
122 views
### How do I test whether I can properly apply LDA?
I have some data which works nicely with JMP's canned linear discriminant analysis (LDA), but after reading about LDA I'm not sure if the analysis is valid. The Wiki article notes a fundamental ...
1answer
40 views
### Why is there a sharp elbow in my ROC curves?
I have some EEG data sets that I am testing against two classes. I can get a decent error rate from LDA (the class-conditional distributions aren't Gaussian, but have similar tails and good enough ...
1answer
170 views
### How does Fisher LDA work?
Intuitively, how does Fisher LDA work? From this Linear discriminant analysis and Bayes rule I completely understood the Bayesian approach but I'm not able to relate it to the Fisher's one described ...
1answer
182 views
### LDA projection for classification
I am dealing with 2 class LDA classification problem. During a test phase (after training), I'm trying to project a feature vector to lower dimensional space. How do we get the projected test ...
0answers
58 views
### Using linear discriminant analysis to validate the cluster groups resulting from kmeans
I'm currently working on a cluster analysis project and ran kmeans on the data for k=2. I was reading similar articles on similar experiments, and the investigators used discriminant analysis to ...
0answers
73 views
### How to interpret this output in Dimension Reduction?
I'm running Linear Discriminant Analysis on a dataset and then performing clustering on it. I'm reducing it to dimensions 2,6,10. On comparing metrics like Accuracy and Normalized Mutual Information, ...
0answers
83 views
### How to overcome singularity problem in Linear Discriminant Analysis?
I've code for LDA which is failing as the matrices passed to calculate eigen values are not singular and hence lead to infinite eigen values? Can anyone recommend what can be done to fix this? I've ...
0answers
47 views
### Overfitting a linear Linear Discriminant Function
I am estimating a Linear Discriminant function with 250 input variables over 4000 data records. Should I consider feature selection, am I over fitting the model? How do I know when feature selection ...
0answers
88 views
### Discriminant analysis with random effects
Is it possible to do discriminant analysis with random effects? Is there an R package for this? Context: I have habitat use data for two species of frogs from radio telemetry, but nested within ...
0answers
93 views
### Different approaches of linear discriminant analysis
What are the differences in three different approaches of classification of LDA for two groups? The three are: 1. Fisher's approach 2. regression approach 3. Bayes's approach They will be exactly ...
0answers
196 views
### How to combine features extracted by PCA, LDA and LBP?
What I'm thinking is to combine PCA features, LDA features and LBP features together to get a higher accuracy, since I think the three features are all kind of histogram vectors and when we decide the ...
0answers
127 views
### LDA with a categorical predictor variable
I have a dataset with a categorical treatment (1 if treated, 0 if control) and many columns (34) of response variables. Each column represents a species and its response (some measured abundance) to ...
0answers
55 views
### Difference and connection between generative learning, discriminative learning and max-margin learning
I once heard that, generative learning, discriminative learning and max-margin learning can be separated in terms of their respective definition of loss function. I am not sure how to achieve that?
0answers
359 views
### How to classify new cases in discriminant analysis exactly as SPSS does?
What I have: There is one base with already classified cases. There are 23 independent variables that were used in this classification and 10 groups. Another base has new unclassified cases. There ...
0answers
106 views
### Range of standardized coefficients in a discriminant analysis
I want to run a discriminant analysis on different motion capture measures to see which of the measures distinguishes best between my two conditions. The problem is that some of the standardized ...
2answers
126 views
### How can you make linear discriminant analysis reduce dimensions to the number of dimensions you are looking for?
Let's say I have a $m \times n$ matrix where $m$ is the number of points and $n$ is the number of dimensions. I would like to give a target dimension parameter which is let's say d. d can be a set of ...
1answer
60 views
### Is “discriminant function” a synonym for “classification function”
"discriminant function" and "classification function" are two terms used in literature to denote a a function that maps a feature vector into a discrete class variable. I presume "discriminant ...
http://mathhelpforum.com/advanced-algebra/122487-lagranges-theorem.html
# Thread:
1. ## Lagrange's theorem
I have tried to understand this using various websites and my notes but I still don't understand what's going on.
Could someone please explain what I'm meant to do to verify Lagrange's theorem for a given set? Thanks.
2. Originally Posted by adam_leeds
I have tried to understand this using various websites and my notes but I still don't understand what's going on.
Could someone please explain what I'm meant to do to verify Lagrange's theorem for a given set? Thanks.
What is it you are wanting us to explain how to prove?
Lagrange's Theorem states that the order of every subgroup $H$ of a finite group $G$ divides the order of $G$: $H \leq G \Rightarrow |H| \mid |G|$.
This is, perhaps, a result which is easier to understand once you have played around with it a bit. So, pick a few of your favourite groups and verify the result.
For instance, the Klein 4-group has order 4 and subgroups of order 1, 2, 2, 2 and 4, and cyclic groups of prime order (for instance, $\mathbb{Z}/7\mathbb{Z}$) have no non-trivial proper subgroups.
3. Originally Posted by Swlabr
What is it you are wanting us to explain how to prove?
Lagrange's Theorem states that the order of every subgroup $H$ of a finite group $G$ divides the order of $G$: $H \leq G \Rightarrow |H| \mid |G|$.
This is, perhaps, a result which is easier to understand once you have played around with it a bit. So, pick a few of your favourite groups and verify the result.
For instance, the Klein 4-group has order 4 and subgroups of order 1, 2, 2, 2 and 4, and cyclic groups of prime order (for instance, $\mathbb{Z}/7\mathbb{Z}$) have no non-trivial proper subgroups.
What I don't understand is how we calculate g and h.
4. Originally Posted by adam_leeds
What I don't understand is how we calculate g and h.
Do you mean how do you calculate the orders of the group and the subgroup? This is just the number of elements in your group.
For instance, the Klein 4-group has order 4 as it has precisely 4 elements, and the cyclic group of order 7 has, well, order 7. It is just the elements $\{0, 1, 2, 3, 4, 5, 6\}$ under addition modulo 7.
5. Originally Posted by adam_leeds
What I don't understand is how we calculate g and h.
We don't!
G is a given group and H is a given subgroup of G. If by "g" and "h" you mean the number of elements in G and H, respectively, when we are given G and H, we are given g and h.
I think you should look closely at the concept of "left cosets" of H. They are crucial in Lagrange's theorem and important for other things as well. For any x in G, its left coset is the set {xy | y in H}. (A "right" coset would be of the form {yx | y in H}.) You should look at the proofs that
1) Every left coset contains the same number of elements.
(And since {1y |y in H} is just H itself, that is just h.)
2) Every member of G is in exactly one such left coset.
If there are, say, n left cosets, each containing h members, and each member of G is in exactly one, we must have g= nh so h divides g.
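(Editorial aside, not part of the thread.) The coset argument is easy to see concretely on the group G = {1, -1, i, -i} under multiplication that comes up later in this thread, with H = {1, -1}; this is only a sketch:

```python
# Left cosets of H = {1, -1} in G = {1, -1, i, -i} under complex multiplication.
G = [1, -1, 1j, -1j]
H = [1, -1]

cosets = {frozenset(x * y for y in H) for x in G}

for coset in cosets:
    print(sorted(coset, key=lambda z: (z.real, z.imag)))
print(len(cosets), "cosets of size", len(H), "partition a group of order", len(G))
```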
6. Originally Posted by Swlabr
Do you mean how do you calculate the orders of the group and the subgroup? This is just the number of elements in your group.
For instance, the Klein 4-group has order 4 as it has precisely 4 elements, and the cyclic group of order 7 has, well, order 7. It is just the elements $\{0, 1, 2, 3, 4, 5, 6\}$ under addition modulo 7.
In my notes it says: If G is a finite group of order g = |G| and H is a subgroup of order h = |H|, then h must be a factor of g.
So if the set was G = {1, -1, i, -i}, i = ((-1)^0.5), I know how to show it's a group by the axioms.
But then the question says to obtain a non-trivial subgroup H, which I have no idea how to do, and to use it to illustrate Lagrange's theorem for a finite group.
Would g and h both be 4 in this case?
7. Originally Posted by adam_leeds
In my notes it says: If G is a finite group of order g = |G| and H is a subgroup of order h = |H|, then h must be a factor of g.
So if the set was G = {1, -1, i, -i}, i = ((-1)^0.5), I know how to show it's a group by the axioms.
But then the question says to obtain a non-trivial subgroup H, which I have no idea how to do, and to use it to illustrate Lagrange's theorem for a finite group.
Would g and h both be 4 in this case?
A non-trivial subgroup is a subgroup which is equal neither to the group itself nor to the trivial group. Thus, it must have order strictly greater than one and not equal to that of the group. The group you have been given is cyclic of order 4 (it is generated by $i$), so it has exactly one non-trivial subgroup, which has order 2. Can you find it? (Hint: pick an element, other than 1, which is its own inverse...)
8. Originally Posted by Swlabr
A non-trivial subgroup is a subgroup which is equal neither to the group itself nor to the trivial group. Thus, it must have order strictly greater than one and not equal to that of the group. The group you have been given is cyclic of order 4 (it is generated by $i$), so it has exactly one non-trivial subgroup, which has order 2. Can you find it? (Hint: pick an element, other than 1, which is its own inverse...)
Why can't it be one?
Thanks for your help by the way, it is much appreciated.
9. Originally Posted by adam_leeds
Why can't it be one?
Thanks for your help by the way, it is much appreciated.
By definition, if $H$ is a non-trivial subgroup of $G$, then $H \neq G ~ \text{and} ~ H \neq \{1_G\}$ where $1_{G}$ is the identity element of G and the group $\{1_G\}$ is called the trivial subgroup of G.
10. Originally Posted by Defunkt
By definition, if $H$ is a non-trivial subgroup of $G$, then $H \neq G ~ \text{and} ~ H \neq \{1_G\}$ where $1_{G}$ is the identity element of G and the group $\{1_G\}$ is called the trivial subgroup of G.
Well, -1, i and -i can't be in it as well then?
11. Originally Posted by adam_leeds
Well, -1, i and -i can't be in it as well then?
Why not?
12. edit: answered by swlabr
13. Originally Posted by Swlabr
Why not?
because they are in G
14. Originally Posted by adam_leeds
because they are in G
A subgroup is a set of elements from G which form a group under the operation of G.
So, a subgroup will always contain the identity, in this case denoted 1. If it is non-trivial it will contain other elements too, for instance -1.
So, take the set $\{-1, 1\}$. Firstly, you should note that you have associativity as you inherit this from the group itself. You also have the identity element, this is just 1.
Can you find an inverse for -1? Is this in the set?
What about closure? If you multiply two elements from this set are we still in the set? The only non-trivial product is $(-1)^2$, but this is still easy...
15. Originally Posted by Swlabr
A subgroup is a set of elements from G which form a group under the operation of G.
So, a subgroup will always contain the identity, in this case denoted 1. If it is non-trivial it will contain other elements too, for instance -1.
So, take the set $\{-1, 1\}$. Firstly, you should note that you have associativity as you inherit this from the group itself. You also have the identity element, this is just 1.
Can you find an inverse for -1? Is this in the set?
What about closure? If you multiply two elements from this set are we still in the set? The only non-trivial product is $(-1)^2$, but this is still easy...
The inverse of -1 is -1, and it is in the set.
1 times -1 = -1, which is in the set, which shows closure.
But I thought you said 1 couldn't be in the subset?
http://www.physicsforums.com/showthread.php?t=609507
## Do nation models exist?
This might seem like a naive question from someone who doesn't know a whole lot about economics/government: do governments have 'unified' computer models of the countries they govern, including for example energy generation, agriculture, transport etc.? A bit like SimCity or Civilisation games, but incredibly more realistic and complex.
I have tried a few search terms in google but I feel I don't quite know what I'm looking for.
The thought occurred to me after thinking about sustainable energy, and how you might convert to a sustainable energy economy. If there existed a model, you could simply play around with different scenarios and see the results. Or else you could search through possible parameter space to find the best configurations.
Recognitions: Homework Help I'd be very surprised if this were the case. Some arms of governments use a range of models for the bits of what makes the country that fall under their purview.
So no one has attempted to couple the different models together? It does seem infeasible for very large countries such as China, but what about for smaller ones (or just less populated), like Sweden or Switzerland, or maybe just individual towns and cities? If the economics component is too complicated, what about a coupled energy-infrastructure model? My thought was that if you then included the geography/geology of the country you could search all possible combinations of energy sources and the geographic locations of those energy sources. Then I suppose that you'd have to assess whether the results were economically/politically possible afterwards, but that could also be partly parameterised into the model.
Recognitions: Homework Help
So give it a go and see...
Ha, I might just since I have a bit of free time this summer (how sad is that!).
Recognitions: Gold Member JesseC, Here is an example of a model that sounds similar to what you are asking about: “Abstract: Better understanding the nature, origin and popularity of varying levels of popular religion versus secularism, and their impact upon socioeconomic conditions and vice versa, requires a cross national comparison of the competing factors in populations where opinions are freely chosen. Utilizing 25 indicators, the uniquely extensive Successful Societies Scale reveals that population diversity and immigration correlate weakly with 1st world socioeconomic conditions, and high levels of income disparity, popular religiosity as measured by differing levels of belief and activity, and rejection of evolutionary science correlate strongly negatively with improving conditions." www.epjournal.net/filestore/EP07398441_c.pdf
Thanks Bobbywhy, but that link has 404'd for me! Apologies for the late reply. From the abstract it seems a bit more like an analysis than a simulation.
Recognitions: Gold Member
JesseC, I know nothing about modeling or simulating dynamics like energy production/usage, food production/consumption, et cetera. But I did a little searching and found these items that may be of interest, especially the last one on "system dynamics":

The Limits to Growth is a 1972 book about the computer modeling of unchecked economic and population growth with finite resource supplies. In 2008 Graham Turner at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia published a paper called "A Comparison of The Limits to Growth with Thirty Years of Reality".[8][9] It compared the past thirty years of reality with the predictions made in 1972 and found that changes in industrial production, food production and pollution are all in line with the book's predictions of economic and societal collapse in the 21st century. http://en.wikipedia.org/wiki/The_Limits_to_Growth

The Global 2000 Report to the President was released in 1980 by the Council on Environmental Quality and the United States Department of State. It was commissioned by President Jimmy Carter on May 23, 1977, and was directed by Gerald O. Barney. It was based on data collected by different institutions. This data and information was used in computer models to make projections for the future based on trends for the next decades. http://en.wikipedia.org/wiki/The_Glo..._the_President

System dynamics is an approach to understanding the behaviour of complex systems over time. It deals with internal feedback loops and time delays that affect the behaviour of the entire system.[1] What makes using system dynamics different from other approaches to studying complex systems is the use of feedback loops and stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity. http://en.wikipedia.org/wiki/System_dynamics
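(Editorial aside, not part of the thread.) For readers who have not met the stock-and-flow idea before, here is a deliberately toy loop in that spirit; every number in it is arbitrary and it is not a model of any real economy:

```python
# Toy system-dynamics loop: one resource stock, one population stock,
# and a feedback (population growth stalls when the resource runs short).
resource, population = 1000.0, 10.0

for year in range(101):
    demand = 0.5 * population            # desired consumption this year
    consumption = min(demand, resource)  # cannot consume more than exists
    resource += 5.0 - consumption        # constant regeneration minus use
    population *= 1.02 if consumption >= demand else 0.97
    if year % 20 == 0:
        print(f"year {year:3d}: resource = {resource:7.1f}, population = {population:6.1f}")
```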
Hey JesseC. I wouldn't be surprised if some elements of some governments were moving towards this or have partially completed this kind of thing, but in terms of using the system to automate, even in any small way, such global forms of national management, I'm doubtful. However, simulations of various things happen all the time. The Pentagon, for example, runs simulations of war scenarios that include economic simulations, which are important to consider in modern warfare (bringing down economies is one of the best ways to bring down a nation state).

There are many problems currently with this kind of thing that are not necessarily technological (computer power, infrastructure, etc.); they relate more to understanding, algorithmically and intuitively and at all levels, how a system will actually function regardless of the intention of the people behind it. You only need to look at one area, the economy and the study of economics, to see the lack of understanding. Even with the 'whiz-bang' models produced by professional economists, none of them can really produce anything of value. The real economic principles are still largely qualitative, not quantitative, and this applies not just to economics but to many sciences in general, where turning the qualitative aspects into quantitative ones is non-trivial.

The other thing is that, regardless of the intention of whoever is creating it, people won't just use a system like this without a lot of credibility and faith in the system, which usually comes from many years of observation and testing. To give you a specific example, look at cryptography: the whole ATM, net-banking, remote-transaction framework didn't suddenly come into place the moment, or even a year after, things like RSA were developed; they had to be scrutinized, tested and evaluated by lots and lots of people before they actually came into the world in the form of remote financial transactions. The kind of thing you are talking about would take a very long time, if for nothing else then for this reason.

Remember that it is a good guideline that the more integrated something is, and the more potential it has to harm entire systems or groups of people (in finance they call this systemic risk), the more precaution, care, evaluation and testing goes into its design, construction and evaluation at all levels. This is why, for example, being tried for murder is not the same as being caught with an ounce of marijuana or going to a tribunal for not paying a week's rent, and why lots of people had to investigate things like RSA and other prime-factoring algorithms before they were used across the board. It's the same reason you get lots of red tape in law, engineering, medicine and every other area; although it may be excessive, and many parts overcomplicated and unnecessary, the precedent was set for a very logical reason.
Recognitions: Gold Member
We used to hear in the UK moderately often about a thing called the "Treasury Model", which I understood to be a model of the entire UK economy with massive inputs. I think it was used for predictive purposes. But there were other models around, e.g. in university and other research departments.

I would expect such models to be of some limited predictive use for analysing the effects of small 'marginal' changes, fine tuning, in other words to have 'local stability', and even then to be dependent on the quality of data input and no doubt some value judgements. But I am sceptical about whether they could have predicted the large shocks, or be more than marginally useful in policies to deal with these, where such non-marginal factors as vision, courage, ideology, elections, diplomacy and other political considerations have to come into it. Well, these shocks were not predicted. Except insofar as, of course, after anything has happened, whatever it is, you can always find an economist who did predict it. Unsurprisingly we have heard less of this Model in recent years than before.

Here is some stuff (but this is definitely not my field).
http://en.wikipedia.org/wiki/ITEM_club
http://www.bized.co.uk/virtual/economy/model/info3.htm
Bobby, I stumbled across 'The Limits To Growth' a few weeks back and have been meaning to read it. Seems like they used a relatively simple model with only something like 6 major parameters. Must get around to reading it soon; perhaps additional complexity isn't necessary for long-term prediction, but I was originally interested in quite accurate short-term models.

Chiro, I get your point. I wasn't supposing that simulating a nation would somehow result in automation of decision making, but rather that it could aid decision making.

epenguin, I'm from the UK so I'm surprised I haven't heard of that one. Doesn't seem to get mentioned on the news any more? Interestingly, I read in one of your links that the Treasury refused to publish long-term results (such as long-term predictions of interest rates or house prices) in case they then influenced people's decision making. Maybe they worried that it may become a self-fulfilling prophecy or, perhaps, even the opposite. That is certainly something that hadn't occurred to me! I'm not sure how big an issue this is since people are often dismissive and sceptical of, for example, numerical weather model predictions.

gmax137, I believe Psychohistory requires a larger population than currently exists on earth to be statistically accurate. :(
Mentor
Blog Entries: 1
One of the flagship EU Future Emerging Technologies Projects (FET) aims to produce something very similar to what you are proposing
http://www.futurict.eu/
Quote by Project Overview Data from our complex globe-spanning ICT system will be leveraged to develop models of techno-socio-economic systems. In turn, insights from these models will inform the development of a new generation of socially adaptive, self-organized ICT systems. FuturICT as a whole will act as a Knowledge Accelerator, turning massive data into knowledge and technological progress. In this way, FuturICT will create the scientific methods and ICT platforms needed to address planetary-scale challenges and opportunities in the 21st century. Specifically, FuturICT will build a sophisticated simulation, visualization and participation platform, called the Living Earth Platform. This platform will power Exploratories, to detect and mitigate crises, and Participatory Platforms, to support the decision-making of policy-makers, business people and citizens, and to facilitate a better social, economic and political participation.
http://www.citizendia.org/Exclusive_or
"XOR" redirects here. For other uses, see XOR (disambiguation).
For the corresponding concept in combinational logic, see XOR gate.
The logical operation exclusive disjunction, also called exclusive or (symbolized XOR or EOR), is a type of logical disjunction on two operands that results in a value of “true” if and only if exactly one of the operands has a value of “true”. [1]
Put differently, exclusive disjunction is a logical operation on two logical values, typically the values of two propositions, that produces a value of true just in those cases where the truth values of the operands differ.
### Truth table
The truth table of $p\, \mathrm{XOR}\, q$ (also written as $p \oplus q$, or $p \neq q$) is as follows:
| $p$ | $q$ | $p \oplus q$ |
|---|---|---|
| F | F | F |
| F | T | T |
| T | F | T |
| T | T | F |
Note the three-way symmetry of the outcomes: the roles of $p$, $q$, and $p \oplus q$ in this table could be arbitrarily re-assigned, and the table would still be correct.
### Venn diagram
The Venn diagram of $A \oplus B$ (red part is true)
## Equivalencies, elimination, and introduction
The following equivalents can then be deduced, written with logical operators, in mathematical and engineering notation:
$$\begin{aligned}p \oplus q & = (p \land \lnot q) \lor (\lnot p \land q) = p\overline{q} + \overline{p}q \\ & = (p \lor q) \land (\lnot p \lor \lnot q) = (p+q)(\overline{p}+\overline{q}) \\ & = (p \lor q) \land \lnot (p \land q) = (p+q)\,\overline{pq}\end{aligned}$$
Generalized or n-ary XOR is true when the number of 1-bits is odd.
The exclusive disjunction $p \oplus q$ can be expressed in terms of the logical conjunction ($\land$), the disjunction ($\lor$), and the negation ($\lnot$) as follows:
$\begin{matrix}p \oplus q & = & (p \land \lnot q) \lor (\lnot p \land q)\end{matrix}$
The exclusive disjunction $p \oplus q$ can also be expressed in the following way:
$\begin{matrix}p \oplus q & = & \lnot (p \land q) \land (p \lor q)\end{matrix}$
This representation of XOR may be found useful when constructing a circuit or network, because it has only one $\lnot$ operation and a small number of $\land$ and $\lor$ operations. The proof of this identity is given below:
$$\begin{aligned}p \oplus q & = (p \land \lnot q) \lor (\lnot p \land q) \\ & = ((p \land \lnot q) \lor \lnot p) \land ((p \land \lnot q) \lor q) \\ & = ((p \lor \lnot p) \land (\lnot q \lor \lnot p)) \land ((p \lor q) \land (\lnot q \lor q)) \\ & = (\lnot p \lor \lnot q) \land (p \lor q) \\ & = \lnot (p \land q) \land (p \lor q)\end{aligned}$$
It is sometimes useful to write $p \oplus q$ in the following way:
$\begin{matrix}p \oplus q & = & \lnot ((p \land q) \lor (\lnot p \land \lnot q))\end{matrix}$
This equivalence can be established by applying De Morgan's laws twice to the fourth line of the above proof.
The exclusive or is also equivalent to the negation of a logical biconditional, by the rules of material implication (a material conditional is equivalent to the disjunction of the negation of its antecedent and its consequent) and material equivalence.
## Relation to modern algebra
Although the operators $\land$ (conjunction) and $\lor$ (disjunction) are very useful in logic systems, they fail to give a more generalizable algebraic structure, in the following way:
The systems $(\{T, F\}, \land)$ and $(\{T, F\}, \lor)$ are monoids, but neither is a group, since neither operation admits inverses. This unfortunately prevents the combination of these two systems into larger structures, such as a mathematical ring.
However, the system using exclusive or $(\{T, F\}, \oplus)$ is an abelian group. The combination of the operators $\land$ and $\oplus$ over the elements $\{T,F\}$ produces the well-known field $\mathbb{F}_2$. This field can represent any logic obtainable with the system $(\land, \lor)$ and has the added benefit of the arsenal of algebraic analysis tools for fields.
## Exclusive “or” in natural language
The Oxford English Dictionary explains “either … or” as follows:
The primary function of either, etc. , is to emphasize the indifference of the two (or more) things or courses … but a secondary function is to emphasize the mutual exclusiveness, = either of the two, but not both.
Following this kind of common-sense intuition about “or”, it is sometimes argued that in many natural languages, English included, the word “or” has an “exclusive” sense. The exclusive disjunction of a pair of propositions, (p, q), is supposed to mean that p is true or q is true, but not both. For example, it is argued, the normal intention of a statement like “You may have coffee or you may have tea” is to stipulate that exactly one of the conditions can be true. Certainly under many circumstances a sentence like this example should be taken as forbidding the possibility of one's accepting both options. Even so, there is good reason to suppose that this sort of sentence is not disjunctive at all. If all we know about some disjunction is that it is true overall, we cannot be sure that either of its disjuncts is true. For example, if a woman has been told that her friend is either at the snack bar or on the tennis court, she cannot validly infer that he is on the tennis court. But if her waiter tells her that she may have coffee or she may have tea, she can validly infer that she may have tea. Nothing classically thought of as a disjunction has this property. This is so even given that she might reasonably take her waiter as having denied her the possibility of having both coffee and tea.
There are also good general reasons to suppose that no word in any natural language could be adequately represented by the binary exclusive “or” of formal logic. First, any binary or other n-ary exclusive “or” is true if and only if it has an odd number of true inputs. But it seems as though no word in any natural language that can conjoin a list of two or more options has this general property. Second, as pointed out by Barrett and Stenner in the 1971 article “The Myth of the Exclusive ‘Or’” (Mind, 80 (317), 116–121), no author has produced an example of an English or-sentence that appears to be false because both of its inputs are true. Certainly there are many or-sentences such as “The light bulb is either on or off” in which it is obvious that both disjuncts cannot be true. But it is not obvious that this is due to the nature of the word “or” rather than to particular facts about the world.
## Alternative symbols
The symbol used for exclusive disjunction varies from one field of application to the next, and even depends on the properties being emphasized in a given context of discussion. In addition to the abbreviation “XOR”, any of the following symbols may also be seen:
• A plus sign ( + ). This makes sense mathematically because exclusive disjunction corresponds to addition modulo 2, which has the following addition table, clearly isomorphic to the one above:
Addition Modulo 2

| p | q | p + q |
|---|---|-------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
• The use of the plus sign has the added advantage that all of the ordinary algebraic properties of mathematical rings and fields can be used without further ado. However, the plus sign is also used for inclusive disjunction in some notation systems.
• A plus sign that is modified in some way, such as being encircled ($\oplus$). This usage faces the objection that this same symbol is already used in mathematics for the direct sum of algebraic structures.
• An inclusive disjunction symbol ($\lor$) that is modified in some way, such as being underlined ($\underline\lor$) or with dot above ($\dot\vee$).
• In several programming languages, such as C, C++, Python and Java, a caret (`^`) is used to denote the bitwise XOR operator. This is not used outside of programming contexts because it is too easily confused with other uses of the caret.
• The symbol .
• In IEC symbology, an exclusive or is marked “=1”.
## Properties
This section uses the following symbols:
$\begin{matrix}0 & = & \mbox{false} \\1 & = & \mbox{true} \\\lnot p & = & \mbox{not}\ p \\p + q & = & p\ \mbox{xor}\ q \\p \land q & = & p\ \mbox{and}\ q \\p \lor q & = & p\ \mbox{or} \ q\end{matrix}$
The following equations follow from logical axioms:
$\begin{matrix}p + 0 & = & p \\p + 1 & = & \lnot p \\p + p & = & 0 \\p + \lnot p & = & 1 \\\\p + q & = & q + p \\p + q + p & = & q \\p + (q + r) & = & (p + q) + r \\p + q & = & \lnot p + \lnot q \\\lnot (p + q) & = & \lnot p + q & = & p + \lnot q \\\\p + (\lnot p \land q) & = & p \lor q \\p + (p \land \lnot q) & = & p \land q \\p + (p \lor q) & = & \lnot p \land q \\\lnot p + (p \lor \lnot q) & = & p \lor q \\p \land (p + \lnot q) & = & p \land q \\p \lor (p + q) & = & p \lor q\end{matrix}$
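For instance, a few of these identities can be checked by brute force over all truth assignments. The following short sketch is added for illustration and is not part of the original article; Python's `^`, `and`, `or`, `not` stand in for the logical operators:

```python
from itertools import product

# Check several of the identities above for every assignment of p, q.
for p, q in product([False, True], repeat=2):
    assert (p ^ q) == (q ^ p)                     # commutativity
    assert (p ^ (not p)) == True                  # p + (not p) = 1
    assert (p ^ ((not p) and q)) == (p or q)      # p + (¬p ∧ q) = p ∨ q
    assert (p ^ (p and (not q))) == (p and q)     # p + (p ∧ ¬q) = p ∧ q
    assert (p ^ (p or q)) == ((not p) and q)      # p + (p ∨ q) = ¬p ∧ q
print("all checked identities hold")
```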
### Associativity and commutativity
In view of the isomorphism between addition modulo 2 and exclusive disjunction, it is clear that XOR is both an associative and a commutative operation. Thus parentheses may be omitted in successive operations and the order of terms makes no difference to the result. For example, we have the following equations:
$\begin{matrix}p + q & = & q + p \\\\(p + q) + r & = & p + (q + r) & = & p + q + r\end{matrix}$
### Other properties
• falsehood preserving: The interpretation under which all variables are assigned a truth value of ‘false’ produces a truth value of ‘false’ as a result of exclusive disjunction.
• linear
## Computer science
Traditional symbolic representation of an XOR logic gate
### Bitwise operation
Main article: Bitwise operation
Exclusive disjunction is often used for bitwise operations. Examples:
• 1 xor 1 = 0
• 1 xor 0 = 1
• 1110 xor 1001 = 0111 (this is equivalent to addition without carry)
As noted above, since exclusive disjunction is identical to addition modulo 2, the bitwise exclusive disjunction of two n-bit strings is identical to the standard vector addition in the vector space $(\Z/2\Z)^n$.
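A small illustrative sketch of the bitwise examples above (not part of the original article), using Python's `^` operator:

```python
# 1110 xor 1001 = 0111, i.e. per-bit addition without carry.
a, b = 0b1110, 0b1001
print(format(a ^ b, "04b"))          # 0111

# The same result, computed bit by bit as addition modulo 2.
bits = [((a >> i) + (b >> i)) % 2 for i in range(3, -1, -1)]
print("".join(str(bit) for bit in bits))   # 0111
```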
In computer science, exclusive disjunction has several uses:
• It tells whether two bits are unequal.
• It is an optional bit-flipper (the deciding input chooses whether to invert the data input).
• It tells whether there is an odd number of 1 bits ($A \oplus B \oplus C \oplus D \oplus E$ is true iff an odd number of the variables are true).
In logical circuits, a simple adder can be made with an XOR gate to add the numbers, and a series of AND, OR and NOT gates to create the carry output.
On some computer architectures, it is more efficient to store a zero in a register by xor-ing the register with itself (bits xor-ed with themselves are always zero) instead of loading and storing the value zero.
In simple threshold-activated neural networks, modeling the ‘xor’ function requires a second layer because ‘xor’ is not a linearly separable function.
Exclusive-or is sometimes used as a simple mixing function in cryptography, for example, with one-time pad or Feistel network systems.
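A minimal sketch of XOR as a mixing function (added for illustration, not from the original article): XORing with the same pad twice recovers the plaintext. The pad bytes below are made up for the example and, unlike a real one-time pad, are neither random nor secret.

```python
plaintext = b"ATTACK AT DAWN"
pad = bytes((37 * i + 11) % 256 for i in range(len(plaintext)))  # illustrative pad only

ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))

assert recovered == plaintext   # (m xor k) xor k = m
print(ciphertext.hex(), recovered.decode())
```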
XOR is used in RAID 3–6 for creating parity information. For example, RAID can “back up” bytes `10011100` and `01101100` from two (or more) hard drives by XORing them (giving `11110000`) and writing the result to another drive. Under this method, if any one of the three hard drives is lost, the lost byte can be re-created by XORing bytes from the remaining drives. If the drive containing `01101100` is lost, `10011100` and `11110000` can be XORed to recover the lost byte.
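A sketch of that recovery in Python (added for illustration), using the exact bytes from the example:

```python
drive_a = 0b10011100
drive_b = 0b01101100

parity = drive_a ^ drive_b            # written to the parity drive
assert parity == 0b11110000

# Suppose the drive holding drive_b fails; XOR the survivors to rebuild it.
rebuilt_b = drive_a ^ parity
assert rebuilt_b == drive_b
print(format(rebuilt_b, "08b"))       # 01101100
```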
XOR is also used to detect overflow in the result of a signed binary arithmetic operation. If the leftmost retained bit of the result is not the same as the (discarded) digits to its left, then an overflow has occurred; equivalently, the carry into the sign bit differs from the carry out of it. XORing those two bits gives a “one” exactly when there is an overflow.
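A sketch of this check for 8-bit two's-complement addition (added for illustration; the helper name and the 8-bit width are assumptions of the example, not from the original article):

```python
def add8_overflows(a: int, b: int) -> bool:
    """Return True if adding two 8-bit two's-complement values overflows.

    Overflow is detected by XORing the carry into the sign bit with the
    carry out of the sign bit, as described above.
    """
    carry_in = ((a & 0x7F) + (b & 0x7F)) >> 7   # carry into bit 7
    carry_out = ((a & 0xFF) + (b & 0xFF)) >> 8  # carry out of bit 7
    return bool(carry_in ^ carry_out)

print(add8_overflows(100, 100))        # True:  200 is outside [-128, 127]
print(add8_overflows(100, 0xCE))       # False: 100 + (-50) = 50 fits
```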
XOR can be used to swap two numeric variables in computers, using the XOR swap algorithm; however this is regarded as more of a curiosity and not encouraged in practice.
In computer graphics, XOR-based drawing methods are often used to manage such items as bounding boxes and cursors on systems without alpha channels or overlay planes.
## Notes
1. ^ See Stanford Encyclopedia of Philosophy, article Disjunction
## exclusive or
### -noun
1. (logic, computing) Exclusive disjunction: the use of to indicate that of two predicates, one is true and one is false (without specifying which is which);
2. (logic, computing, more generally) Exclusive disjunction: the use of to indicate that of two or more predicates, an odd number are true (without specifying which or how many);
3. (logic, computing) An exclusive disjunction; the result of applying the above-described exclusive or to two or more predicates;
http://math.stackexchange.com/questions/tagged/sage+polynomials
# Tagged Questions
### Help needed in Mathematica or Sage
I want to find the common positive solutions of two polynomials $f_{a,b}(x,y)$, $g_{a,b}(x,y)$ where $a,b$ runs from 0 to 1 with an interval 0.01. Let $(x_0,y_0)$ be a common positive solution. Then ...
http://mathhelpforum.com/discrete-math/190479-abstract-reasoning-natural-deduction-fitch.html
# Thread:
1. ## Abstract reasoning/natural deduction with Fitch
I'm practicing natural deduction, using a system similar to Fitch. I'm wondering if what I'm doing is legal, especially the => intro @ 5.
I've to prove $((P \implies Q) \implies P) \implies ((P \implies Q) \implies Q)$ is a tautology. I've set my reasoning up as follows:
Code:
```1, H: (P => Q) => P
2, H: P
3, => elim on 1 & 2 P => Q
4, => elim on 2 & 3 Q
5, => intro on 3 & 4 (P => Q) => Q
6, => intro on 1 & 5 ((P => Q) => P) => ((P => Q) => Q)```
Is this approach correct? I think not, because I'm not actually using the hypothesis (2) in my conclusion (5).
Am I allowed to do this?
Code:
```1, H: P
2, H: Q
3, ^I 1,2 P ^ Q
4, =>I 3,2 (P ^ Q) => Q
|| ||```
2. ## Re: Abstract reasoning/natural deduction with Fitch
I've a new way to solve the first one:
Code:
```1, H: (P => Q) => P
2, H: P => Q
3, =>E 1,2: P
4, =>E 2,3: Q
5, =>I 2,4: (P => Q) => Q
6, =>I 1,5: ((P => Q) => P) => ((P => Q) => Q)```
I think this is correct, any advice?
Regarding the second part, the question was to prove P => ((P ^ Q) => Q) is a tautology, now I've set up my reasoning as follows
Code:
```1, H: P
2, H: Q
3, ^I 1,2: P ^ Q
4, ^E 3: Q
5, =>I 3,4: (P ^ Q) => Q
6, =>I 1,5: P => ((P ^ Q) => Q```
Furthermore I'm trying to prove (P => Q) => ((P ^ R) => (Q ^ R)) and having some trouble introducing the R, my current train of thought leaves me with an assumption i can't get rid of.
Code:
```1, H: P => Q
2, H: P
3, =>E 1,2: Q
4, H: R
5, ^I 2,4: P ^ R
6, ^I 3,4: Q ^ R
7, =>I 5,6: (P ^ R) => (Q ^ R)
8, =>I 1,7: (P => Q) => ((P ^ R) => (Q ^ R))```
As you see, I'm left out with an assumption too many. I guess I'm introducing the R (4) incorrect, any pointers?
3. ## Re: Abstract reasoning/natural deduction with Fitch
Originally Posted by Lepzed
Code:
```1, H: (P => Q) => P
2, H: P
3, => elim on 1 & 2 P => Q
4, => elim on 2 & 3 Q
5, => intro on 3 & 4 (P => Q) => Q
6, => intro on 1 & 5 ((P => Q) => P) => ((P => Q) => Q)```
Step 3 is wrong because MP gives you the conclusion of an implication, not the premise. Step 5 is also wrong because =>I can only close an assumption and never a formula that is a conclusion of a subderivation, like formula 3.
Originally Posted by Lepzed
Code:
```1, H: P
2, H: Q
3, ^I 1,2 P ^ Q
4, =>I 3,2 (P ^ Q) => Q
|| ||```
=>I cannot close P ^ Q since it's not an assumption.
Originally Posted by Lepzed
I've a new way to solve the first one:
Code:
```1, H: (P => Q) => P
2, H: P => Q
3, =>E 1,2: P
4, =>E 2,3: Q
5, =>I 2,4: (P => Q) => Q
6, =>I 1,5: ((P => Q) => P) => ((P => Q) => Q)```
This is correct.
Originally Posted by Lepzed
Regarding the second part, the question was to prove P => ((P ^ Q) => Q) is a tautology, now I've set up my reasoning as follows
Code:
```1, H: P
2, H: Q
3, ^I 1,2: P ^ Q
4, ^E 3: Q
5, =>I 3,4: (P ^ Q) => Q
6, =>I 1,5: P => ((P ^ Q) => Q```
Cannot close P ^ Q: not an assumption. You need to assume P ^ Q (along with a vacuous assumption P) and deduce Q by ^E.
Originally Posted by Lepzed
Furthermore I'm trying to prove (P => Q) => ((P ^ R) => (Q ^ R)) and having some trouble introducing the R, my current train of thought leaves me with an assumption i can't get rid of.
Code:
```1, H: P => Q
2, H: P
3, =>E 1,2: Q
4, H: R
5, ^I 2,4: P ^ R
6, ^I 3,4: Q ^ R
7, =>I 5,6: (P ^ R) => (Q ^ R)
8, =>I 1,7: (P => Q) => ((P ^ R) => (Q ^ R))```
As you see, I'm left out with an assumption too many. I guess I'm introducing the R (4) incorrect, any pointers?
Same mistake: you need to assume P ^ R instead of proving it. From it you get P and (using another assumption P => Q) Q. From P ^ R you also get R and hence Q ^ R.
4. ## Re: Abstract reasoning/natural deduction with Fitch
Hmm, I see. Is there some sort of 'algorithm' I should follow when trying to solve questions like this?
Currently I try to work from bottom to top and fill in blanks when needed, and apparently I'm not quite sure what's allowed to be used as a hypothesis.
5. ## Re: Abstract reasoning/natural deduction with Fitch
At least when the formula has only conjunctions and implications and its proof uses only the rules for those connectives (no reasoning by contradiction, for example), then the principle is pretty simple. Each branch in the shortest derivation starts with a series of elimination rules and ends with a series of introduction rules. So, going bottom up, you introduce all assumptions and split all conjunctions, and then see what you can get with the assumptions you collected.
6. ## Re: Abstract reasoning/natural deduction with Fitch
Code:
```1, H: P
2, H: Q
3, ^I 1,2: P ^ Q
4, ^E 3: Q
5, =>I 3,4: (P ^ Q) => Q
6, =>I 1,5: P => ((P ^ Q) => Q```
Cannot close P ^ Q: not an assumption. You need to assume P ^ Q (along with a vacuous assumption P) and deduce Q by ^E.
I'm not sure if I follow, instead of assuming Q, I should've assumed P ^ Q as follows:
Code:
```1, H: P
2, H: P ^ Q
3, ^E 2: Q
4, =>I 2,3: (P ^ Q) => Q
5, =>I 1,4: P => ((P ^ Q) => Q)```
Is this correct? If so then I think I understand this part now.
I'm going to spend the evening practicing (at work now, a 40-hour job next to uni is hard work, especially if you can't attend lectures because of it :<) and then ND with negation, disjunction and quantifiers is next. Thanks again!
7. ## Re: Abstract reasoning/natural deduction with Fitch
Originally Posted by Lepzed
Code:
```1, H: P
2, H: P ^ Q
3, ^E 2: Q
4, =>I 2,3: (P ^ Q) => Q
5, =>I 1,4: P => ((P ^ Q) => Q)```
Yes, this is correct.
8. ## Re: Abstract reasoning/natural deduction with Fitch
I can't help but trying to work on (P => Q) => ((P ^ R) => (Q ^ R)) during lunch break
Code:
```1, H: P => Q
2, H: P ^ R
3, ^E 2: P
4, =>E 1,3: Q
5, ^E 2: R
6, ^I 4,5: Q ^ R
7, =>I 2,6: (P ^ R) => (Q ^ R)
8, =>I 1,7: (P => Q) => ((P ^ R) => (Q ^ R))```
I'm having some doubts about the long list of conclusions starting at 3, is this allowed?
9. ## Re: Abstract reasoning/natural deduction with Fitch
Yes, you are absolutely right. Good work.
10. ## Re: Abstract reasoning/natural deduction with Fitch
I've been working on some examples and exercises, and I think I'm getting there. There are however 4 exercises I'm having some difficulties with; I was hoping I could get some feedback on them, since they're a bit harder than the rest. I think I've done 2 of them reasonably well, the other 2 not so much. Still, just implication and conjunction.
1) (P => Q) => (P => (P ^ Q))
Code:
```1, H: P => Q
2, H: P
3, => E 1,2: Q
4, ^I 2,3: P ^ Q
5, =>I 2,4: P => (P ^ Q)
6, =>I 1,5: (P => Q) => (P => (P ^ Q))```
Seems sound, right?
2) ((P => Q) ^ (R => S)) => ((P ^ R) => (Q ^ S))
Code:
``` 1, H: (P => Q) ^ (R => S)
2, H: (P ^ R)
3, ^E 1: P => Q
4, ^E 1: R => S
5, ^E 2: P
6, ^E 2: R
7, =>E 3,5: Q
8, =>E 4,6: S
9, ^I 5,6: P ^ R
10, ^I 7,8: Q ^ S
11, =>I 9, 10: (P ^ R) => (Q ^ S)
12, =>I 1, 11: ((P => Q) ^ (R => S)) => ((P ^ R) => (Q ^ S))```
I think I did this okay, I have no idea how to do this otherwise.
3) (P => Q) => ((R => (P => Q)) ^ ((P ^ R) => Q))
Code:
``` 1, H: P => Q
2, H: P ^ R
3, ^E 2: P
4, ^E 2: R
5, =>E 1,3: Q
6, =>I 4,1: R => (P = > Q)
7, H: P ^ R
8, ^E 7: P
9, =>E 1,8 Q
10, =>I 7,9: (P ^ R) => Q
11, ^I 6, 10: (R => (P => Q)) ^ ((P ^ R) => Q)
12, =>I 1, 11: (P => Q) => ((R => (P => Q)) ^ ((P ^ R) => Q))```
This one I'm not so sure about. Seems a bit redundant to assume P ^ R twice, for example, but I'm not sure how else I can get to the ^I I need to connect the two parts.
And lastly:
4) (P => (Q => R)) => ((P => Q) => (P => R))
And on this one I get stuck, making too many assumptions.
I tried the following
Code:
``` 1, H: P => (Q => R)
2, H: P ^ Q
3, ^E 2: P
4, ^E 2: Q
5, =>I 2,3: P => Q
6, H: P ^ Q
7, ^E 6: P
8, ^E 6 Q
9, =>E 1,7: Q => R
10, =>E 8,9: R
11, =>I 7, 10: P => R```
I'm stuck here, again: assuming the same thing twice just seems wrong. I can't really wrap my head around it, any tips? I seem to miss an assumption as well, but if I would assume P, well, I think I'm being twice as redundant :P Maybe it's super easy, but I think this is rather harder than the other exercises hehe
11. ## Re: Abstract reasoning/natural deduction with Fitch
Originally Posted by Lepzed
1) (P => Q) => (P => (P ^ Q))
Code:
```1, H: P => Q
2, H: P
3, => E 1,2: Q
4, ^I 2,3: P ^ Q
5, =>I 2,4: P => (P ^ Q)
6, =>I 1,5: (P => Q) => (P => (P ^ Q))```
Correct.
Originally Posted by Lepzed
2) ((P => Q) ^ (R => S)) => ((P ^ R) => (Q ^ S))
Code:
``` 1, H: (P => Q) ^ (R => S)
2, H: (P ^ R)
3, ^E 1: P => Q
4, ^E 1: R => S
5, ^E 2: P
6, ^E 2: R
7, =>E 3,5: Q
8, =>E 4,6: S
9, ^I 5,6: P ^ R
10, ^I 7,8: Q ^ S
11, =>I 9, 10: (P ^ R) => (Q ^ S)
12, =>I 1, 11: ((P => Q) ^ (R => S)) => ((P ^ R) => (Q ^ S))```
Line 9 is not needed, and line 11 should close line 2.
Originally Posted by Lepzed
3) (P => Q) => ((R => (P => Q)) ^ ((P ^ R) => Q))
Code:
``` 1, H: P => Q
2, H: P ^ R
3, ^E 2: P
4, ^E 2: R
5, =>E 1,3: Q
6, =>I 4,1: R => (P = > Q)
7, H: P ^ R
8, ^E 7: P
9, =>E 1,8 Q
10, =>I 7,9: (P ^ R) => Q
11, ^I 6, 10: (R => (P => Q)) ^ ((P ^ R) => Q)
12, =>I 1, 11: (P => Q) => ((R => (P => Q)) ^ ((P ^ R) => Q))```
Line 6 closes line 4, which is not an assumption. Should be like this.
Code:
``` 1, H: P => Q
2, H: R
3, =>I 1,2: R => (P => Q)
4, H: P ^ R
5, ^E 4: P
6, =>E 1,5: Q
7, =>I 4,6: P ^ R => Q
8, ^I 3, 7: (R => (P => Q)) ^ ((P ^ R) => Q)
9, =>I 1, 8: (P => Q) => ((R => (P => Q)) ^ ((P ^ R) => Q))```
I am not sure if the hypothesis 1 should be repeated between lines 2 and 3 (probably not). In a tree-form derivation, the hypothesis P => Q occurs twice, but in Fitch style it should probably be declared just once.
Originally Posted by Lepzed
4) (P => (Q => R)) => ((P => Q) => (P => R))
And this one I get stuck, doing too many assumptions.
I tried the following
Code:
``` 1, H: P => (Q => R)
2, H: P ^ Q
3, ^E 2: P
4, ^E 2: Q
5, =>I 2,3: P => Q
6, H: P ^ Q
7, ^E 6: P
8, ^E 6 Q
9, =>E 1,7: Q => R
10, =>E 8,9: R
11, =>I 7, 10: P => R```
In the shortest derivation, all formulas should be subformulas of open assumptions or of the conclusion. (If I remember correctly, this is true when double-negation elimination is not used.) So, P ^ Q is not needed; one can do with implication only.
There are three levels. You assume P => (Q => R), (P => Q), and P, and then you derive in turn (a full derivation is sketched after this list):
Q => R
Q
R
and close everything.
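Spelled out in the same notation as the derivations above (an added sketch of the derivation just described, with its own line numbering):
Code:
```1, H: P => (Q => R)
2, H: P => Q
3, H: P
4, =>E 1,3: Q => R
5, =>E 2,3: Q
6, =>E 4,5: R
7, =>I 3,6: P => R
8, =>I 2,7: (P => Q) => (P => R)
9, =>I 1,8: (P => (Q => R)) => ((P => Q) => (P => R))```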
Also, note that rule #8 in this sticky thread says not to ask more than two questions in one thread.
http://planetmath.org/SylvestersLaw
# Sylvester’s law
Let $V$ be an $n$-dimensional real vector space and let $Q\colon V\times V\to\mathbb{R}$ be a symmetric quadratic form. Then there exists a basis of $V$ such that $Q(u,v)=u^{T}Mv$ where the matrix $M$ is diagonal.
Furthermore, for every choice of basis such that $Q(u,v)=u^{T}Mv$ with $M$ diagonal, the number of positive diagonal entries, the number of negative diagonal entries, and the number of zeros on the diagonal will be the same. The number of non-zero entries on the diagonal is known as the rank of the quadratic form.
To account for the fact that not all the entries on the diagonal may be positive, one defines a quantity known as the signature. However, there is more than one definition of the signature in use. Some define the signature as the number of positive diagonal entries minus the number of negative ones. Others define the signature as the number of strictly positive entries on the diagonal.
Among some people, as for instance in pseudo-Riemannian geometry and relativity theory, the term signature is used to refer to a symbolic display like $[++-0]$ which shows the number of positive, negative, and zero diagonal entries, or to a pair of numbers $(n,m)$ which means that there are $n$ positive entries and $m$ negative ones.
The rank and signature (whichever definition; pick your favorite) of a quadratic form are invariant under change of basis.
This implies that two quadratic forms over the same finite-dimensional real vector space are related by a change of basis if and only if they have the same rank and signature.
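As a concrete illustration (an added sketch, not part of the original entry), the basis-independent counts can be read off numerically from the eigenvalues of any symmetric matrix representing the form; the matrix below is an arbitrary example:

```python
import numpy as np

# A symmetric bilinear form on R^3, written in some arbitrary basis.
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

# Sylvester's law: the numbers of positive, negative and zero eigenvalues
# do not depend on the chosen basis, so they give the rank and signature.
eig = np.linalg.eigvalsh(M)
pos = int(np.sum(eig > 1e-12))
neg = int(np.sum(eig < -1e-12))
zero = M.shape[0] - pos - neg
print(pos, neg, zero)   # 2 1 0 -> rank 3; signature (2, 1), or 2 - 1 = 1
```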
Type of Math Object:
Theorem
Major Section:
Reference
## Mathematics Subject Classification
15A03 Vector spaces, linear dependence, rank
## Info
Owner: rspuzio
Added: 2002-08-25 - 19:18
Author(s): rspuzio
## Versions
(v9) by rspuzio 2013-03-22
http://mathoverflow.net/questions/19190/category-groupoid-x-poset/19198
## Category = Groupoid x Poset?
Is it possible to split a given category $C$ up into its groupoid of isomorphisms and a category that resembles a poset?
"Splitting up" should be that $C$ can be expressed as some kind of extension of a groupoid $G$ by a poset $P$ (or "directed category" $P$ the only epimorphisms in $P$ are the identities, all isomorphisms in $P$ are identities).
-
3
I don't see why you should, without some kind of "acyclicity" condition on your category. Consider the monoid of natural numbers as a one-object category, for instance -- what should this splitting be? – Dan Petersen Mar 24 2010 at 14:58
The requirement that the only epimorphisms in $P$ are identities is not satisfiable for general categories; consider the following counter example: f 1 -----> 2 ^ -----> ^ |\ g |\ - - The morphisms f,g are both epi, but not identities. If you replace 'epimorphisms' with isomorphisms, then the construction I outlined below should work. – Mikola Mar 24 2010 at 15:06
2
I'm no category theorist but -- consider the special case of the monoid $M = R - {0}$ under multiplication, where R is an integral domain. Then we have $U = R^{\times}$, the group of units of the monoid, and $M/U$ has a natural partial ordering induced by the divisibility relation. I wonder whether there is a generalization of this to (some more, not all) categories? – Pete L. Clark Mar 24 2010 at 21:10
Perhaps this should be posted as a separate question, but will this splitting up work if we allow monoids? That is, can a category be split up into posets, groupoids and monoids? – Colin Tan Apr 20 2012 at 16:04
## 4 Answers
I am also looking forward to answers to your question. Meanwhile here is something pointing roughly into that direction:
One can study a category $C$ through its set-valued functor category $Set^C$. By the Yoneda lemma, $C$ sits as a full subcategory inside this functor category, and from it one can reconstruct something close to $C$ (I think the idempotent completion of $C$). But non-equivalent categories can give rise to equivalent functor categories, e.g. category $C$ in which not every idempotent splits and its idempotent completion, i.e. the category made from $C$ by adjoining objects such that each idempotent becomes a composition of projection to and inclusion of a subobject and thus splits. One calls such categories Morita-equivalent.
Now $Set^C$ is a Grothendieck topos (:=category of sheaves on a site, in this case with trivial topology) and there is the following theorem about those:
A locale is a distributive lattice closed under meets and finite joins, just like the lattice of open sets of a topological space, so it is a particular poset. The theorem of Joyal and Tierney, from their monograph "An extension of the Galois theory of Grothendieck", states that every Grothendieck topos is equivalent to the category of $G$-equivariant sheaves on a groupoid object in locales - see e.g. here.
Well at least it is a statement which separates a category into a groupoid and a poset part. So if you look from very far and take it with a boulder of salt you could read this as saying that every category is "Morita-equivalent" (not really!) to a groupoid internal to posets (it makes some intuitive sense to see this as an extension).
-
1
I agree that considering idempotents is important. To focus the issue: take the category C with only one object and only one non-identity morphism, which is idempotent. The original question seems to founder on this example; how do you "split it up"? Peter reaches a positive answer by blurring this issue out (which probably has to be done if you do want a positive answer). – Tom Leinster Mar 25 2010 at 4:00
One type of category that factors nicely is called an EI category. The definition is that every Endomorphism is an Isomorphism. After taking the quotient by the groupoid, every endomorphism is the identity. But it is still not a poset. It could be something like the category of two parallel arrows, where Mor(A,B) has two elements, Mor(B,A) none, and the endomorphisms of each object are only the identity. This has a further poset quotient $A\to B$, but it isn't there yet.
So groupoid and poset are only two kinds of behavior in categories. Monoids are a third that have been mentioned before. In particular, idempotents, as in the monoid {0,1} under multiplication, do not embed in any group. And the two parallel arrows category is yet a fourth.
-
this reminds me of a characterization of the graphs of dynamical systems – Joey Hirsh Jul 31 at 5:23
Given any locally small category, $C$, the collection of all isomorphisms forms a subgroupoid, $G \subseteq C$, where $Ob(G) = Ob(C)$ and $Hom_G(A,B) = \left\{ f \in Hom_C(A,B) : \exists g, h \in Hom_C(B,A),\ g \circ f = id_A,\ f \circ h = id_B \right\}$.
Because $G$ is a groupoid, it determines an equivalence relation, $R$ on the objects and morphisms of $C$ such for $A, B \in Ob(C)$:
$A \equiv_R B \Longleftrightarrow Hom_G(A,B) \neq \emptyset$
And for $f, g \in Hom_C(A,B)$:
$f \equiv_{R_{A,B}} g \Longleftrightarrow \exists h_B \in Hom_G(B,B), h_A \in Hom_G(A,A) : h_B \circ f = g \circ h_A$
If I understand what you are asking, then the quotient $C/R$ should be the 'poset' you want.
-
*Subject to the substitution epi -> iso as clarified in the comments, as otherwise this is not possible. – Mikola Mar 24 2010 at 16:27
Unless I totally misunderstand the question, this doesn't even work for categories with one object, i.e. monoids (which are not groups), does it?
-
http://mathhelpforum.com/calculus/135624-integrating-area-formed-three-points.html
# Thread:
1. ## Integrating Area Formed By Three Points
Use integration to find the area of the triangle having given the vertices
(0,0), (a,0),(b,c)
So I drew out my graph and I got 2 lines
y = (c/b)x and y = (c/(b-a))(x-a)
and I know you have to integrate (c/b)x from 0 to b and add that to the integral of (c/(b-a))(x-a) from b to a
I'm just really having trouble integrating and getting the answer [which is .5(c)(a)]
Please Help
2. Rather than adding the integrals, I think you actually need to use an iterated integral since you're concerned with both the x and y directions simultaneously. In this case, it will obviously be a double integral. When setting it up, you should make y the inner integral since the general horizontal line across the region goes between 2 functions. The x part makes a good candidate for the outer integral because you can then just integrate from 0 to the constant b.
This page has a number of examples involving triangles.
3. I already talked to my teacher about this & he wants me to add them since we're just learning integration
4. Whoops, sorry about that.
5. It's much easier to integrate with respect to y from 0 to c. Have you tried this approach?
6. $\int_0^b \frac{c}{b}x \, dx + \int_b^a \frac{c}{b-a}(x-a) \, dx$
$= \frac{c}{b} \int_0^b x \, dx + \frac{c}{b-a} \int_b^a (x-a) \, dx<br />$
$= \frac{c}{b} \left[ \frac{x^2}{2} \right]_0^b + \frac{c}{b-a} \left[ \frac{x^2}{2}-ax \right]_b^a$
Can you follow what I am doing here?
yes, I've done this part already, I just get an answer that looks nothing like .5ac
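For what it's worth, a symbolic check of this setup (an added sketch, not part of the original thread; it assumes 0 < b < a so the vertex (b, c) lies between the other two x-coordinates) does reduce to .5ac:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
a, b = sp.symbols('a b', positive=True)   # assume 0 < b < a for this triangle

area = sp.integrate(c/b * x, (x, 0, b)) + sp.integrate(c/(b - a) * (x - a), (x, b, a))
print(sp.simplify(area))   # a*c/2
```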
http://mathoverflow.net/questions/63943?sort=oldest
## Why do sl(2) and so(3) correspond to different points on the Vogel plane?
Vogel assigns to every simple metric Lie algebra (and more generally to every simple metric Lie algebra object in a symmetric monoidal category) a point in the orbifold $\mathbb{P}^2/S_3$ (where $S_3$ acts by permuting the 3 projective coordinates) based on the value of the Casimir on the various summands of the symmetric square of the adjoint representation. These three numbers are only defined up to permutation (as there's no natural way to specify which rep is which) and rescaling (because the Casimir itself is only well-defined up to rescaling).
Under this assignment we have $\mathfrak{sl}_2$ and $\mathfrak{so}_3$ going to different points. How is this possible? My best guess is that $\mathfrak{sl}_2$ and $\mathfrak{so}_3$ are different as metric Lie algebras, but that also seems weird.
In the conventions of this paper $\mathfrak{sl}_2$ corresponds to the point $(-1:1:1)$ while $\mathfrak{so}_3$ corresponds to the point $(-1:2:-1)$, and these are different points in $\mathbb{P}^2/S_3$. You can also easily check that the points are different in other conventions.
This question was originally asked in comments by Scott Carnahan, but I wanted to move it up to the main page.
-
Could it have something to do with the fact that over R, they are different? – Ben Webster♦ May 4 2011 at 19:33
1
But they are different as metric Lie algebras: the inner product is definite in $\mathfrak{so}_3$ and lorentzian in $\mathfrak{sl}_2$, assuming I'm understanding the question. – José Figueroa-O'Farrill May 4 2011 at 19:36
Perhaps I'm using the word "metric" wrong here. Vogel says "pseudo-quadratic" rather than "metric." The point is that you have (in addition to the bracket and the crossing) an adjoint pair of maps of representations $\mathfrak{g} \otimes \mathfrak{g} \rightarrow \mathbf{1}$ and $\mathbf{1} \rightarrow \mathfrak{g} \otimes \mathfrak{g}$. But at any rate, its a bilinear form, not a Hermitian one. – Noah Snyder May 4 2011 at 19:57
Even more surprising: $\mathfrak{sl}_3$ and $\mathfrak{so}_8$ appear twice (they also appear in the exceptionnal series in math.tamu.edu/~jml/LMunivpub.pdf). – DamienC May 4 2011 at 20:20
@DamienC: Those two "different" points for $\mathfrak{sl}_3$ are actually the same in the quotient $\mathbb{P}^2/S_3$. So that's a different issue. – Noah Snyder May 5 2011 at 0:04
show 3 more comments
## 1 Answer
I take it you mean $\mathfrak{sl}_2$ is on the line $\mathfrak{sl}_n$ and $\mathfrak{so}_3$ is on the line $\mathfrak{so}_n$. There is no contradiction because there is a whole line for this metric Lie algebra. The symmetric square of the adjoint decomposes as the trivial representation (which is accounted for by the Killing form) and one other irreducible (of dimension 5). This means you only have one of the three coordinates. Usually the coordinates are the values of the quadratic Casimir on the three non-trivial factors of the symmetric square of the adjoint representation. However looking at the adjoint representation you also know the sum of the three coordinates. This determines a line in the Vogel plane.
-
If I understand correctly the point is that if a certain projection is negligible (the projections corresponding to the two irreps that don't occur for $\mathfrak{sl}_2$) it doesn't necessarily follow that the Casimir acts by 0 on those projections. But there's one point I'm still confused about, do you actually get different finite type invariants (or, I think equivalently, different values on closed diagrams) for different points on the $\mathfrak{sl}_2$ line? Or is the point that the real configuration space is some sort of blow-down along this line? – Noah Snyder May 4 2011 at 23:46
I have not worked it through in detail but my understanding is that they are different. – Bruce Westbury May 5 2011 at 6:08
I thought about this some more and talked to Dylan, and I still think that every point on this line will give the same value on closed diagrams, and you need to look at what they do to diagrams with boundaries before you see a difference. (This is because the reps which differ over different points on this line are all negligible.) – Noah Snyder May 20 2011 at 20:11
There is something I have never cleared up. What does it mean to "evaluate a diagram at a point in the Vogel plane". The way I understand it you evaluate a diagram at a point of $\mathrm{Spec}(\Lambda)$ where $\Lambda$ is Vogel's ring; this is not finitely generated and has zero divisors. The Vogel plane is $\mathrm{Spec}(R)$ for some other ring $R$. These are related but I think the connection is somewhat mysterious. – Bruce Westbury May 20 2011 at 20:39
What happens to the dimension formulae on this line? – Bruce Westbury May 20 2011 at 20:40
show 3 more comments
http://math.stackexchange.com/questions/254380/limit-of-left-fracn28n-1n2-4n-5-rightn-is-the-following-true?answertab=votes
# Limit of $\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)^{n}$, is the following true?
Using the previous question*, arithmetics of borders, and the sandwich theorem, find the limits of the following sequences:
b. $\left(\dfrac{n^{2}+8n-1}{n^{2}-4n-5}\right)^{n}$
(*) the previous question is to prove that $\lim_{n\to \infty }\left(1+\dfrac{x}{n}\right)^n = e^x$
Looking at the question it looked quite easy. Since we proved in class that $\lim (a_n)^k = (\lim a_n)^k$ I can easily say the limit of the inside is $1$, therefore the limit of everything is $1^n=1$ (This is a good time to notice that I don't know how to use anything but inline equations here):
$$\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)^{n}=\left(\dfrac{1+\dfrac{8}{n}-\dfrac{1}{n^{2}}}{1-\dfrac{4}{n}-\dfrac{5}{n^{2}}}\right)^{n}$$ due to arithmetics of borders:
$$\begin{align*} \lim\left(\frac{1+\dfrac{8}{n}-\dfrac{1}{n^{2}}}{1-\dfrac{4}{n}-\dfrac{5}{n^{2}}}\right)^{n}&=\left(\frac{\lim\left(1+\dfrac{8}{n}-\dfrac{1}{n^{2}}\right)}{\lim\left(1-\dfrac{4}{n}-\dfrac{5}{n^{2}}\right)}\right)^{n}\\\\ &=\left(\frac{\lim1+\lim\dfrac{8}{n}-\lim\dfrac{1}{n^{2}}}{\lim1-\lim\dfrac{4}{n}-\lim\dfrac{5}{n^{2}}}\right)^{n}\\\\ &=1^{n}\\\\ &=1 \end{align*}$$
But for some reason this feels wrong to me. It doesn't use the previous question or the sandwich theorem, and the solution feels to trivial to be true. Is there anything wrong here?
-
5
With this logic, $e = \lim_{n \to \infty} (1+\frac{1}{n})^n=1^n=1$ – The Substitute Dec 9 '12 at 8:49
lol yeah... that's right... So I guess it's wrong that $\lim \left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)^{n} = (\lim \left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right))^n$ huh?... Oh after thinking about this for a while (using the answer given after I started typing as well) I think I see why it's wrong. Thanks! – Nescio Dec 9 '12 at 8:58
The problem is that you can't take the $n$ outside of the limit. It is a dummy variable - it only has meaning in the context of a limit. Similarly, in calculus you can't write $\frac{d}{dx} x = x \frac{d}{dx} 1 = 0.$ – Jair Taylor Dec 9 '12 at 9:01
## 3 Answers
$$\lim_{n\to\infty}\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)^{n}$$ $$=\lim_{n\to\infty}\left(1+\frac{12n+4}{n^{2}-4n-5}\right)^{n}$$ $$=\lim_{n\to\infty}\left\{\left(1+\frac{12n+4}{n^{2}-4n-5}\right)^{\frac{n^{2}-4n-5}{12n+4}}\right\}^{\frac{(12n+4)n}{n^{2}-4n-5}}$$ $$=e^{12}$$ as $\lim_{n\to\infty}\frac{(12n+4)n}{n^{2}-4n-5}=\lim_{n\to\infty}\frac{12+\frac4n}{1-\frac4n-\frac5{n^2}}=12$
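A quick numerical sanity check (an added sketch, not part of the original answer) agrees with $e^{12}$:

```python
import math

# Evaluate the expression for increasing n and compare against e^12 ≈ 162754.79.
for n in (10**3, 10**5, 10**7):
    val = ((n**2 + 8*n - 1) / (n**2 - 4*n - 5)) ** n
    print(n, val)
print("e^12 =", math.exp(12))
```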
-
Note that for any $\epsilon > 0$, $$1 + \dfrac{8-\epsilon}{n} \le 1 + \dfrac{8}{n} - \dfrac{1}{n^2} \le 1 + \dfrac{8}{n}$$ for sufficiently large $n$. Therefore $$\eqalign{e^{8-\epsilon} &= \lim_{n \to \infty} \left(1 + \dfrac{8-\epsilon}{n}\right)^n \le \liminf_{n \to \infty} \left(1 + \dfrac{8}{n} - \dfrac{1}{n^2}\right)^n \cr&\le \limsup_{n \to \infty} \left(1 + \dfrac{8}{n} - \dfrac{1}{n^2}\right)^n \le \lim_{n \to \infty} \left(1 + \dfrac{8}{n}\right)^n = e^8}$$ and taking $\epsilon \to 0+$ we get $$\lim_{n \to \infty} \left(1 + \dfrac{8}{n} - \dfrac{1}{n^2} \right)^n = \lim_{n \to \infty} \left(1 + \frac{8}{n} \right)^n = e^8$$ Similarly, $$\lim_{n \to \infty} \left(1 - \frac{4}{n} - \frac{5}{n^2} \right)^n = e^{-4}$$ So $$\lim_{n \to \infty} \left(\frac{n^2 - 8 n - 1}{n^2 - 4 n - 5}\right)^n = \dfrac{\displaystyle \lim_{n \to \infty} \left(1 + \dfrac{8}{n} - \dfrac{1}{n^2} \right)^n} {\displaystyle \lim_{n \to \infty} \left(1 - \dfrac{4}{n} - \dfrac{5}{n^2} \right)^n} = e^{12}$$
-
Write \begin{equation}\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)^{n}=e^{\ln\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)n} \end{equation} and \begin{equation}\ln\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)n=\frac{\ln\left(\frac{n^{2}+8n-1}{n^{2}-4n-5}\right)}{\frac{1}{n}} \end{equation} Finally use L'Hôpital's Rule to get $e^{12}$
-
http://quant.stackexchange.com/questions/3894/why-does-the-price-of-a-derivative-not-depend-on-the-derivative-with-which-you-h
# Why does the price of a derivative not depend on the derivative with which you hedge volatility risk?
I'm trying to derive the valuation equation under a general stochastic volatility model. What one can read in the literature is the following reasoning:
One considers a replicating self-financing portfolio $V$ with $\delta$ underlying and $\delta_1$ units of another derivative $V_1$. One writes Ito on one hand, and the self-financing equation on the other hand, and then one identifies the terms in front of the two Brownian motions and in front of $dt$.
The first two identifications give $\delta$ and $\delta_1$, and the last identification gives us a PDE in $V$ and $V_1$. Then what is commonly done is to write it with a left hand side depending on $V$ only, and a right hand side depending on $V_1$ only. So you get $f(V) = f(V_1)$. We could have chosen $V_2$ instead of $V_1$ so one gets $f(V) = f(V_1) = f(V_2)$. Thus $f(W)$ does not depend on the derivative $W$ one chooses, and is called the market price of the volatility risk.
What I cannot understand in this reasoning is why $V$ does not depend on the derivative $V_1$ you choose to hedge the volatility risk in your portfolio with. As far as I see it, one should write $V(V_1)$ instead of $V$. Then one has $f(V(V_1)) = f(V_1)$ and $f(V(V_2)) = f(V_2)$ so one gets no unique market price of the volatility risk.
Does anyone know why the price of a derivative does not depend on the derivative you choose to hedge against the volatility risk?
-
## 1 Answer
Consider the following analogy: you can hedge a derivative in a deterministic-volatility model using either futures, or spot underlying. The hedge ratio will change, but all the mathematics to effectively eliminate stochastic portfolio PL is the same, and must work out to be equivalent.
A similar situation applies here: any triangle of (nontrivial) derivative securities can be shown to have an equivalent set of hedge ratios from any two of them (assumed observable, liquid etc) to form a price of the third. Basically the hedge ratio for $V_2$ in terms of $V_1$ is perfectly symmetric to the one for $V_1$ in terms of $V_2$.
-
http://mathematica.stackexchange.com/questions/4051/integration-with-vector-coefficients/4053
# Integration with vector coefficients
I asked this same question in Mathematics, and it was suggested I might try here. I'm more comfortable with Maple, but if I can get Mathematica to do what I'm after, so much the better.
Basically I'm trying to symbolically integrate something like this:
$\displaystyle\int \frac{a\mu-b}{||a\mu-b||^3} \mathrm{d}\mu$
where $a,b$ are vectors and $\mu$ is a scalar. The denominator is the cube of the 2-norm of the vector, and can be found by taking the dot product of a vector with itself, and raising it to the power of $\frac{3}{2}$.
Right now in Maple I'm explicitly multiplying out the denominator and making substitutions so that the denominator, at least, is only in terms of scalars ($a \cdot a = C$, etc. ), but I hate doing it this way, because it adds a lot of bookkeeping. Basically I'd like the computer to understand that $a * (b \cdot a)$ is not the same thing as $b * a^2$, but that $a \cdot b * c \cdot d = c \cdot d * a \cdot b$.
What's the most kosher way to do this integration in Mathematica?
UPDATE
This is the full integral I'm trying to do. I'm not sure it even has an answer, but the first integral is similar to what I have above. So I was hoping I could take any techniques that work on the simpler one above and apply them to the full problem below.
Let: $\vec{f} = (a - c) \mu_1 + (b - c) \upsilon_1 - (x - z) \mu_2 - (y-z) \upsilon_2 - (z - c)$
where $a, b, c, x, y, z$ are vectors representing positions, and $\mu_1, \nu_1, \mu_2, \nu_2$ are scalars.
I want to find:
$\vec{F_G} = \displaystyle\int_0^1 \int_0^{1-v_2} \int_0^{1} \int_0^{1-v_1} \! \frac{f}{||{f}||^3} \, \mathrm{d} \mu_1 \mathrm{d} \upsilon_1 \mathrm{d} \mu_2 \mathrm{d} \upsilon_2$
-
– belisarius Apr 9 '12 at 14:28
Using a tensor plugin is an interesting idea. I'll explore it a bit and see if I can get what I'm after. – Jay Lemmon Apr 10 '12 at 16:38
– Jay Lemmon Apr 12 '12 at 22:37
Don't forget to post an answer if you solve it. Good luck! – belisarius Apr 13 '12 at 1:29
– helen May 7 '12 at 6:04
## 2 Answers
It helps to do a little analysis to simplify the problem. This expression is integrating over a line through $\mathbf{b}$ in the direction of $\mathbf{a}$. By choosing a suitable coordinate system you can arrange for $\mathbf{a} = (x,0,0)$ where, to assure a unit Jacobian, $x = \|\mathbf{a}\|$ (and you can even make $\mathbf{b} = (0,b,0)$ if you like, but let's just stop here and generically take $\mathbf{b} = (a,b,c)$). Brute force now succeeds:
````ClearAll[x, a, b, c];
Integrate[{x, 0, 0} \[Mu] / Norm[{x, 0, 0} \[Mu] - {a, b, c}]^3,
{\[Mu], -Infinity, Infinity},
Assumptions -> Im[a] == 0 && Im[b] == 0 && Im[c] == 0 && Im[x] == 0 && a b c != 0]
````
The output, after 5 seconds, is
````{ConditionalExpression[(2 a Abs[x])/((b^2 + c^2) x^2), x != 0], 0, 0}
````
Change back to the original coordinates to obtain the general answer.
The key is to specify the assumptions implicit in the question: namely, that these are real vectors and that the line does not pass through the origin (where the integral diverges).
-
1
The same result can be achieved by assuming that `{a, b, c, x} \[Element] Reals` instead of specifying that their imaginary parts are zero. – rcollyer Apr 9 '12 at 2:01
It's an interesting approach, but the actual problems I'm interested in (specifically integrating across two triangles' barycentric coordinates (so forming a quadruple integral)) could be difficult to set up with all the coordinate transformations. Unless there's a reasonable way to automate those, also? – Jay Lemmon Apr 9 '12 at 2:56
Jay, there probably is a simple way to set up the transformations. Consider posing the problem you want answered, rather than a simplified version. @rcollyer: Thank you for the tip! – whuber Apr 9 '12 at 14:27
@whuber I've updated the problem. Don't know if you have any further insights/ideas? – Jay Lemmon Apr 10 '12 at 16:33
If I understood ...
````av = Table[Subscript[a, i], {i, 3}];   (* symbolic components of the vector a *)
bv = Table[Subscript[b, i], {i, 3}];   (* symbolic components of the vector b *)
i = Integrate[(av mu - bv)/Dot[av mu - bv, av mu - bv]^(3/2), mu]   (* componentwise antiderivative *)
k = i /. {x_ __, x_ __, x_ __} -> x;   (* extract the scalar factor common to all three components *)
````
And then your integral is k * j
Where
$k =\frac{1}{\left(a_1^2 \left(b_2^2+b_3^2\right)-2 a_3 a_1 b_1 b_3+a_3^2 \left(b_1^2+b_2^2\right)-2 a_2 b_2 \left(a_1 b_1+a_3 b_3\right)+a_2^2 \left(b_1^2+b_3^2\right)\right) \sqrt{-2 a_1 b_1 \mu -2 a_2 b_2 \mu -2 a_3 b_3 \mu +a_1^2 \mu ^2+a_2^2 \mu ^2+a_3^2 \mu ^2+b_1^2+b_2^2+b_3^2}}$
and
$j = \frac{i}{k}$
Thanks, but I'm specifically trying to prevent it from breaking the vectors in to their components. It makes the resultant equation messy to put back together. (For instance, you have $b_1^2+b_2^2+b_3^2$, when I'd want it to just be $||b||^2$ or even $b \cdot b$) – Jay Lemmon Apr 9 '12 at 6:46
@belisarius Is the `i /. {x_ __, x_ __, x_ __} -> x` used to extract a common factor of the vector? – tkott Apr 9 '12 at 14:09
@tkott yep. Surely there is a safer way, but in this case it works OK – belisarius Apr 9 '12 at 14:19
http://mathoverflow.net/revisions/43831/list
## Return to Answer
5 fix misstatement
Not from measure theory, alas, but the example that jumps to my mind is Gauss's first proof of Quadratic Reciprocity. It appears in the Disquisitiones Mathematicae. The proof occupies arts. 135 through 144 (five and a half pages in the English edition published by Springer); the proof is by strong induction on $q$ (when $p\lt q$). I don't recall who, but someone once called it a proof by "mathematical revulsion."
The proof is quite messy. Gauss argues by cases, considering the congruence classes of $p$ and $q$ modulo $4$, and whether $p$ is or is not a quadratic residue modulo $q$. He actually casts his proof as if it were a proof by minimal counterexample, so he further assumes in some instances that the result does not hold (e.g., for $p\equiv q\equiv 1 \pmod{4}$, either $p$ is a quadratic residue modulo $q$ and $q$ is not one modulo $p$; or $p$ is not a quadratic residue modulo $q$ and $q$ is a quadratic residue modulo $p$). They fall into eight cases, though some of those cases themselves break into subcases. For example, Gauss looks at the case when $p$ and $q$ are both congruent to $1$ modulo $4$, and $\pm p$ is not a residue modulo $q$; then he takes a prime $\ell\neq p$ less than $q$ for which $q$ is not a quadratic residue, and considers the cases in which $\ell\equiv 1 \pmod{4}$ or $\ell\equiv 3 \pmod{4}$ separately; the first subcase itself breaks into four separate sub-subcases: since $p\ell$ is a quadratic residue modulo $q$, it is the square of some even $e$; then he considers the case when $e$ is not divisible by either $p$ or $\ell$, when it is divisible by $p$ but not $\ell$; when it is divisible by $\ell$ but not $p$; and when it is divisible by $\ell$ and $p$. And so on. By the time Gauss finally gets to the eighth and final case, he is clearly somewhat exhausted, writing merely "The demonstration is the same as in the preceding case."
On the one hand, the proof is pretty much the first proof that one might think to try when encountering the problem. But the different cases are just way too messy, and one quickly loses sight of the forest because one is so intently staring at the beetles in the bark of the tree directly in front.
Plenty of other proofs would follow (including five more by Gauss), ranging from the clever to the almost magical (do this, do that, and oops, quadratic reciprocity falls out).
4 grammar
Not from measure theory, alas, but the example that jumps to my mind is Gauss's first proof of Quadratic Reciprocity. It appears in the Disquisitiones Mathematicae. The proof occupies arts. 135 through 144 (five and a half pages in the English edition published by Springer); the proof is by strong induction on $q$ (when $p\lt q$). I don't recall who, but someone once called it a proof by "mathematical revulsion."
The proof is quite messy. Gauss argues by cases, considering the congruence classes of $p$ and $q$ modulo $4$, and whether $p$ is or is not a quadratic residue modulo $q$. He actually casts his proof as if it were a proof by minimal counterexample, so he further assumes in some instances that the result does not hold (e.g., for $p\equiv q\equiv 1 \pmod{4}$, either $p$ is a quadratic residue modulo $q$ and $q$ is not one modulo $p$; or $p$ is not a quadratic residue modulo $q$ and $p$ is a quadratic residue modulo $q$). They fall into eight cases, though some of those cases themselves break into subcases. For example, Gauss looks at the case when $p$ and $q$ are both congruent to $1$ modulo $4$, and $\pm p$ is not a residue modulo $q$; then he takes a prime $\ell\neq p$ less than $q$ for which $q$ is not a quadratic residue, and considers the cases in which $\ell\equiv 1 \pmod{4}$ or $\ell\equiv 3 \pmod{4}$ separately; the first subcase itself breaks into four separate sub-subcases: since $p\ell$ is a quadratic residue modulo $q$, it is the square of some even $e$; then he considers the case when $e$ is not divisible by either $p$ or $\ell$, when it is divisible by $p$ but not $\ell$; when it is divisible by $\ell$ but not $p$; and when it is divisible by $\ell$ and $p$. And so on. By the time Gauss finally gets to the eighth and final case, he is clearly somewhat exhausted, writing merely "The demonstration is the same as in the preceding case."
On the one hand, the proof is pretty much the first proof that one might think to try when encountering the problem. But the different cases are just way too messy, and one quickly loses sight of the forest because one is so intently staring at the beetles in the bark of the tree directly in front.
Plenty of other proofs would follow (including five more by Gauss), ranging from the clever to the almost magical (do this, do that, and oops, quadratic reciprocity falls out).
3 added 3 characters in body
Not from measure theory, alas, but the example that jumps to my mind is Gauss's first proof of Quadratic Reciprocity. It appears in the Disquisitiones Mathematicae. The proof occupies arts. 135 through 144 (five and a half pages in the English edition published by Springer); the proof is by strong induction on $q$ (when $p\lt q$). I don't recall who, but someone once called it a proof by "mathematical revulsion."
The proof is quite messy. Gauss argues by cases, considering the congruence classes of $p$ and $q$ modulo $4$, and whether $p$ is or is not a quadratic residue modulo $q$. He actually casts his proof as if it were a proof by minimal counterexample, so he further assumes in some instances that the result does not hold (e.g., for $p\equiv q\equiv 1 \pmod{4}$, either $p$ is a quadratic residue modulo $q$ and $q$ is not one modulo $p$; or $p$ is not a quadratic residue modulo $q$ and $p$ is a quadratic residue modulo $q$). They fall into eight cases, though some of those cases themselves break into subcases. For example, Gauss looks at the case when $p$ and $q$ are both congruent to $1$ modulo $4$, and $\pm p$ is not a residue modulo $q$; then he takes a prime $\ell\neq p$ less than $q$ for which $q$ is not a quadratic residue, and considers the cases in which $\ell\equiv 1 \pmod{4}$ or $\ell\equiv 3 \pmod{4}$ separately; the first subcase itself breaks into four separate sub-subcases: since $p\ell$ is a quadratic residue modulo $q$, it is the square of some even $e$; then he considers the case when $e$ is not divisible by either $p$ nor $\ell$, when it is divisible by $p$ but not $\ell$; when it is divisible by $\ell$ but not $p$; and when it is divisible by $\ell$ and $p$. And so on. By the time Gauss finally gets to the eighth and final case, he is clearly somewhat exhausted, writing merely "The demonstration is the same as in the preceding case."
On the one hand, the proof is pretty much the first proof that one might think to try when encountering the problem. But the different cases are just way too messy, and one quickly loses sight of the forest because one is so intently staring at the beetles in the bark of the tree directly in front.
Plenty of other proofs would follow (including five more by Gauss), ranging from the clever to the almost magical (do this, do that, and oops, quadratic reciprocity falls out).
2 spelling, grammar; added 1 characters in body
Not from measure theory, alas, but the example that jumps to my mind is Gauss's first proof of Quadratic Reciprocity. It appears in the Disquisitiones Mathematicae. The proof occupies arts. 135 through 144 (five and a half pages in the English edition published by Springer); the proof is by strong induction on $q$ (when $p\lt q$). I don't recall who, but someone once called it a proof by "mathematical revulsion."
The proof is quite messy. Gauss argues by cases, considering the congruence classes of $p$ and $q$ modulo $4$, and whether $p$ is or is not a quadratic residue modulo $q$. He actually casts his proof as if it were a proof by minimal counterexample, so he further assumes in some instances that the result does not hold (e.g., for $p\equiv q\equiv 1 \pmod{4}$, either $p$ is a quadratic residue modulo $q$ and $q$ is not one modulo $p$; or $p$ is not a quadratic residue modulo $q$ and $p$ is a quadratic residue modulo $q$). They fall into eight cases, though some of those cases themselves break into subcases. For example, Gauss looks at the case when $p$ and $q$ are both congruent to $1$ modulo $4$, and $\pm p$ is not a residue modulo $q$; then he takes a prime $\ell\neq p$ less than $q$ for which $q$ is not a quadratic residue, and considers the cases in which $\ell\equiv 1 \pmod{4}$ or $\ell\equiv 3 \pmod{4}$ separately; the first case itself breaks into four separate *sub*-subcases: since $p\ell$ is a quadratic residue modulo $q$, it is the square of some even $e$; then he considers the case when $e$ is not divisible by either $p$ nor $\ell$, when it is divisible by $p$ but not $\ell$; when it is divisible by $\ell$ but not $p$; and when it is divisible by $\ell$ and $p$. And so on. By the time Gauss finally gets to the eighth and final case, he is clearly somewhat exhausted, writing merely "The demonstration is the same as in the preceding case."
On the one hand, the proof is pretty much the first proof that one might think to try when encountering the problem. But the different cases are just way too messy, and one quickly loses sight of the forest because one is so intently staring at the beetles in the bark of the tree directly in front.
Plenty of other proofs would follow (including five more by Gauss), ranging from the clever to the almost magical (do this, do that, and oops, quadratic reciprocity falls out).
1 [made Community Wiki]
Not from measure theory, alas, but the example that jumps to my mind is Gauss's first proof of Quadratic Reciprocity. It appears in the Disquisitiones Mathematicae. The proof occupies arts. 135 through 144 (five and a half pages in the English edition published by Springer); the proof is by strong induction on $q$ (when $p\lt q$). I don't recall who, but someone once called it a proof by "mathematical revulsion."
The proof is quite messy. Gauss argues by cases, considering the congruence classes of $p$ and $q$ modulo $4$, and the ways in which quadratic reciprocity might fail (e.g., for $p\equiv q\equiv 1 \pmod{4}$, either $p$ is a quadratic residue modulo $q$ and $q$ is not one modulo $p$; or $p$ is not a quadratic residue modulo $q$ and $p$ is a quadratic residue modulo $q$). They fall into eight cases, though some of those cases themselves break into subcases. For example, Gauss looks at the case when $p$ and $q$ are both congruent to $1$ modulo $4$, and $\pm p$ is not a residue modulo $q$; then he takes a prime $\ell\neq p$ less than $q$ for which $q$ is not a qauadratic residue, and considers the cases in which $\ell\equiv 1 \pmod{4}$ or $\ell\equiv 3 \pmod{4}$ separately; the first case itself breaks into four separate *sub*subcases: since $p\ell$ is a quadratic residue modulo $q$, it is the square of some even $e$; then he considers the case when $e$ is not divisible by either $p$ nor $\ell$, when it is divisible by $p$ but not $\ell$; when it is divisible by $\ell$ but not $p$; and when it is divisible by $\ell$ and $p$. And so on. By the time Gauss finally gets to the eighth and final case, he is clearly somewhat exhausted, writing merely "The demonstration is the same as in the preceding case."
On the one hand, the proof is pretty much the first proof that one might think upon when encountering the problem. But the different cases are just way too messy, and one quickly loses sight of the forest because you are so intent on staring at the beetles in the bark of the tree in front of you.
Plenty of other proofs would follow (including five more by Gauss), ranging from the clever to the almost magical (do this, do that, and oops, quadratic reciprocity falls out).
http://mathhelpforum.com/calculus/103768-upper-bound-maclaurin-approximation.html
# Thread:
1. ## Upper Bound For Maclaurin Approximation
There is a homework problem that I don't even understand what they're asking. Can any of you help clarify what they mean?
f(x) = sin(x)
Find an upper bound for |f(x) - p9(x)|
-7 <= x <= 7
p9(x) is the 9th order Maclaurin approximation to f
2. Originally Posted by soma
There is a homework problem that I don't even understand what they're asking. Can any of you help clarify what they mean?
Where should we start? Obviously this problem expects you to know what a "MacLaurin series" is. Do you know that? It is simply the Taylor's series at x= 0. Do you know what that is? If not look up "MacLaurin series" or "Taylor's series" in the index of your book.
And, I strongly suspect that in the section where this problem is given, there is a formula for the error you get when cutting off a MacLaurin or Taylor's series at the nth power (that is, when using the degree-n polynomial in place of the function). It should look something like this:
$E\le \frac{M_{n+1}}{(n+1)!}|x|^{n+1}$ where $M_{n+1}$ is an upper bound on the absolute value of the (n+1)st derivative of the function.
This problem asks about the 9th order MacLaurin polynomial for sin(x) with x between -7 and 7, so you need $M_{10}$. What is the 10th derivative of sin(x)? What is its largest absolute value between -7 and 7? (In radians, $2\pi$ is about 6.28, so this interval runs from below $-2\pi$ to above $2\pi$.)
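For instance, with $M_{10} = 1$ (every derivative of $\sin$ is bounded by $1$ in absolute value) and $|x| \le 7$, the bound reads $E \le \frac{7^{10}}{10!} \approx 77.8$. (One can do a little better by noting that $p_9 = p_{10}$ for the sine series, so the same argument with $M_{11} = 1$ gives $E \le \frac{7^{11}}{11!} \approx 49.5$.)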
3. Ok, so the tenth derivative of sin(x) is -sin(x), and the maximum bound of -sin(x) is when x = -pi/2
-sin(-pi/2) = 1
so
E <= [1/(9+1)!]*[(-pi/2)^(9+1)]
E <= .0000252020
Am I understanding this problem correctly?
http://mathhelpforum.com/statistics/49199-another-roulette-probability.html
# Thread:
1. ## Another Roulette probability
So say I bet \$200 on red. If I win I will quit; if I lose I will bet a second time, \$400. Regardless of the outcome, I will quit after the 2nd bet. Assume I have a probability of 1/2 of winning each bet. What is the probability I go home as a winner? Why is this system not in use?
2. Originally Posted by weakmath
So say i bet \$200 when red comes up. If i win i will quit, if I lose I will bet a second time of \$400. Regardless the outcome, I will quit after 2nd time. Assume I have a probability of 1/2 wining each bet. What is the probabilty I go home as a winner? Why is this system not in use?
The probability that you go home a winner is $\frac{1}{2} + \left( \frac{1}{2} \right) \, \left( \frac{1}{2} \right) = \frac{3}{4}$.
But consider the following .....
Let X be the random variable amount of winnings. The possible values of X are \$200 and -\$600.
1. Calculate E(X).
2. The probability of winning on red is not 1/2 in any casino I've ever visited.
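Worked out (the single-zero numbers are mine, not from the post above): with the idealised $p = \tfrac{1}{2}$, $E(X) = \tfrac{3}{4}(200) + \tfrac{1}{4}(-600) = 0$, so even then the "double up" scheme only breaks even on average. With a single-zero wheel, $p = \tfrac{18}{37}$, the chance of losing both bets is $\left(\tfrac{19}{37}\right)^2$, and $E(X) = 200\left(1-\left(\tfrac{19}{37}\right)^2\right) - 600\left(\tfrac{19}{37}\right)^2 = -\tfrac{15000}{1369} \approx -10.96$, that is, roughly eleven dollars lost per attempt on average.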
3. ## Roulette Probability
There are 37 unique values on a roulette wheel
18 are coloured Red, 18 are coloured Black & 1 Green (Zero)
The house edge is 2.7% (of every \$100 wagered the Casino can expect to keep \$2.70)
The Table Limit (maximum wager by a single player) is typically much less than 36 x minimum wager.
Typical payback is 35:1, so even if you could cover 36 spots your pay back is always -1
A RED / BLACK strategy will never work in the long run (18/37)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Casino's know there are no fool proof systems for winning, but love fools who prove it
4. Since I am new to math, I won't try to describe mathematically why you can't win at roulette, but I can describe a little experiment I tried. I used a computer program and ran all kinds of betting strategies. I found that with a decent stake up front, say 10,000 and a conservative betting strategy, I could consistently make money playing the odds on red or black. So, I decided to check out a local casino (I live right by some very large casinos). I went to the roulette wheel and watched for a while.
Guess what? It doesn't matter what the computer simulation showed me. It left out the fact that I would have to sit for hours around a bunch of grungy losers who are all smoking huge quantities of cigarettes and losing their rent money. It's certainly possible to make a respectable dollar/hr average, but the working conditions in that particular occupation are horrible.
And if you try to beat your system, win fast and get out? You end up a grungy loser gambling your rent money and smoking huge quantities of cigarettes.
This direct observation of the game really needs to be in any math formula related to roulette.
5. Originally Posted by Berkeley
Since I am new to math, I won't try to describe mathematically why you can't win at roulette, but I can describe a little experiment I tried. I used a computer program and ran all kinds of betting strategies. I found that with a decent stake up front, say 10,000 and a conservative betting strategy, I could consistently make money playing the odds on red or black. So, I decided to check out a local casino (I live right by some very large casinos). I went to the roulette wheel and watched for a while.
Guess what? It doesn't matter what the computer simulation showed me. It left out the fact that I would have to sit for hours around a bunch of grungy losers who are all smoking huge quantities of cigarrettes and losing their rent money. It's certainly possible to make a respectable dollar/hr average, but the working conditions in that particular occupation are horrible.
And if you try to beat your system, win fast and get out? You end up a grungy loser gambling your rent money and smoking huge quantities of cigarrettes.
This direct observation of the game really needs to be in any math formula related to roulette.
Go and check:
1. Your simulation, to make sure it is simulating the game as played
2. Your pseudo-random number generator
(Casino owners are pretty confident you cannot do what you claim for your simulations, in fact they are prepared to bet on it, and they don't gamble!)
CB
6. Originally Posted by CaptainBlack
Go and check:
1. You simulation to make sure it is simulating the game as played
2. You psuedo-random number generator
(Casino owners are pretty confident you cannot do what you claim for your simulations, in fact they are prepared to bet on it, and they don't gamble!)
CB
I think you're missing the point.
7. Originally Posted by Berkeley
I think you're missing the point.
I'd say CaptainBlack has scored a direct bullseye hitting the point. The point being that it's very doubtful that your simulation is correctly simulating the real life situation.
Originally Posted by Berkeley
Since I am new to math, I won't try to describe mathematically why you can't win at roulette, but I can describe a little experiment I tried. I used a computer program and ran all kinds of betting strategies. I found that with a decent stake up front, say 10,000 and a conservative betting strategy, I could consistently make money playing the odds on red or black. So, I decided to check out a local casino (I live right by some very large casinos). I went to the roulette wheel and watched for a while.
[snip]
It's certainly possible to make a respectable dollar/hr average [snip] Mr F says: If you mean making it by betting on the roulette wheel (which is what this thread is discussing) then you are wrong. Period.
[snip]
8. ## Casino Games - math
9. Originally Posted by mr fantastic
I'd say CaptainBlack has scored a direct bullseye hitting the point. The point being that it's very doubtful that your simulation is correctly simulating the real life situation.
It's assumed that one is playing either red or black, or odd or even. The wheel, whether in a computer simulation with random number generation or the actual roulette wheel in the casino, is still going to hit one of those 50% of the time, minus the two greens (0 or 00) on an American wheel.
I said he was missing the point, because the point was that in reality, sitting in the casino using a betting strategy that's making say \$40/h is not a pleasant experience, and it's not going to make a person rich. In fact, to play roulette with a strategy that works may be one of the worst jobs in the world, unless one is a complete dolt who doesn't mind literally wasting their life away in a casino. And that kind of person is not the kind of person who possesses the risk-taking personality to gamble in the first place. People gamble to get rich quick.
10. Originally Posted by Berkeley
It's assumed that one is playing either red or black or odd or even. The wheel in either a computer simulation with random generation or the actual roulette wheel in the casino is still going to hit one of those 50% of the time minus the two greens (0 or 00) on an American wheel.
I said he was missing the point, because the point was that in reality, sitting in the casino using a betting strategy that's making say \$40/h is not a pleasant experience, and it's not going to make a person rich. In fact, to play roulette with a strategy that works may be one of the worst jobs in the world, unless one is a complete dolt who doesn't mind literally wasting their life away in a casino. And that kind of person is not the kind of person who posesses the risk-taking personality to gamble in the first place. People gamble to get rich quick.
There is no betting strategy on roulette in a casino such that your expected winnings is greater than or equal to zero. Period. Finito. Your simulation is wrong.
11. Originally Posted by mr fantastic
There is no betting strategy on roulette in a casino such that your expected winnings is greater than or equal to zero. Period. Finito. Your simulation is wrong.
Yep, mr fantastic is right. There is no system for winning at roulette. I should know, I'm in Gamblers Anonymous classes lol. I have lost 4 thousand in 30 minutes, and many more thousands. The only advice I can give for roulette is that numbers tend to repeat, and if you play, let's say, 27, and 3 is above and 35 is below 27 on the wheel, play those numbers as well, because the ball tends to jump out and may land on those numbers. Just to repeat again, there is no system that helps you win at roulette.
12. Originally Posted by Berkeley
I think you're missing the point.
Which point would that be? You make a mathematical claim in your post; that you have a valid simulation of roulette and a "system" that consistently wins in that simulation.
As we know that it is impossible for you to have a winning system for roulette, we conclude that either your simulation is invalid, or you have not run the experiment properly and are just seeing the effects of the enormous variability that you can get with such a simulation, or you are not telling the truth.
And that is the point of my post!
CB
13. Originally Posted by Berkeley
I said he was missing the point, because the point was that in reality, sitting in the casino using a betting strategy that's making say \$40/h is not a pleasant experience, and it's not going to make a person rich. In fact, to play roulette with a strategy that works may be one of the worst jobs in the world, unless one is a complete dolt who doesn't mind literally wasting their life away in a casino. And that kind of person is not the kind of person who posesses the risk-taking personality to gamble in the first place. People gamble to get rich quick.
The same argument would apply to working in McDonald's, or teaching high school, or almost all "normal" jobs.
CB
http://mathoverflow.net/questions/36405/complex-and-elementary-proofs-in-number-theory/83773
## Complex and Elementary Proofs in Number Theory
The Prime Number Theorem was originally proved using methods in complex analysis. Erdos and Selberg gave an elementary proof of the Prime Number Theorem. Here, "elementary" means no use of complex function theory.
Is it possible that every theorem in number theory can be proved without the use of complex numbers?
On the one hand, it seems a lot of the theorems in analytic number theory are about the distribution of primes. Since the Prime Number Theorem has an elementary proof, this might suggest that elementary proofs exist in other cases.
On the other hand, the distribution of primes is intimately related to the zeros of the Riemann Zeta function. Perhaps the proofs of other statements in analytic number theory require more direct references to the Riemann Zeta function.
This topic is more of a fascination for me, as I am not a number theorist. I would be interested if there are other examples of elementary proofs of theorems originally proved with complex analytic methods.
Although you're not a number theorist, can you tell us what you are? This may be useful in giving answers that are meaningful to you. – KConrad Aug 22 2010 at 22:26
Grad student in algebraic topology. Have fond memories of my undergraduate number theory course; maybe I was feeling nostalgic today. – Micah Miller Aug 23 2010 at 0:05
The following answer to a related question on the elementary proof of the PNT may be of interest: math.stackexchange.com/questions/2530/… – Emerton Aug 23 2010 at 0:12
## 7 Answers
Yes, there is a theorem to this effect by Takeuti given in his book "Two applications of logic to mathematics". He shows roughly that complex analysis can be developed in a conservative extension of Peano arithmetic.
Yes - but I don't think that's the kind of thing people have in mind when they talk of replacing complex analysis with elementary arguments. I'm pretty sure that what Erdos and Selberg did had nothing to do with developing Hadamard's proof in a conservative extension of Peano arithmetic. – Gerry Myerson Aug 22 2010 at 23:21
Right, but it's going to be very hard to make mathematically precise the notion of giving an elementary proof which doesn't include the proofs you can construct via Takeuti's results. – Noah Snyder Aug 23 2010 at 4:23
@Noah, of course, you are correct, but in an abstractly similar situation US Supreme Court Justice Potter Stewart said "I know it when I see it." – Gerry Myerson Aug 23 2010 at 5:37
My understanding is that given Hadamard's proof (which uses complex analysis) and Takeuti's result you could posit the existence of, and even construct, an elementary proof - but said elementary proof would not look anything like the Erdos-Selberg proof. Takeuti tells you how to turn an elevator into a staircase, whereas Erdos-Selberg build a staircase from scratch. (Anyone have a better metaphor?) – Gerry Myerson Aug 24 2010 at 0:43
@Gerry: It is my impression that the Erdos-Selberg proof doesn't build the staircase from scratch but rather uses motivations from complex analysis in finding certain formulas expressible in an elementary way. It isn't a mechanical process from the complex to the real, but there is analogy involved. – Pace Nielsen Aug 24 2010 at 18:01
I just heard about a very interesting sounding new approach to analytic number theory by A. Granville and K. Soundararajan, which seems to fit to your question: "Since 1859 the only coherent approach to these problems has been based on Riemann's idea connecting the distribution of prime numbers to the zeros of the Riemann zeta function -- which are the zeros of an analytic continuation. Some might argue that this is "unnatural" and ask for an approach that is less far removed from the original problems. Recently Soundararajan and I have proposed a different approach to the whole subject of analytic number theory, based on our concept of pretentiousness -- recently we have realized our dream of being able to develop the whole subject in a coherent way, without using the zeros of the Riemann zeta function."(link to course website, course notes) Perhaps someone tells us more about it?
Here is a recent paper by Dimitris Koukoulopoulos, which obtains the strongest known form of the Prime Number Theorem without heavy use of complex analysis. One can find the symbol $i$ in the paper, but it does not rely on the analytic continuation of the zeta function.
This is a direct extension of the "pretentious" methods of Granville and Soundararajan, mentioned in an answer by Thomas Riepe.
It depends on what one would count as "number theory". Even if one's interest ultimately lies "only" in algebraic number subfields of the reals, one could argue that there are objects attached to these structures (e.g. L- and zeta-functions) that are number-theoretical and are of interest in themselves; that is, they are more than just a way to prove other things. I think a nice example here is Artin's conjecture on the holomorphy of Artin L-functions -> http://en.wikipedia.org/wiki/Artin_L-function. To elaborate a little on this, there is absolutely no reason why an arbitrary ordinary Dirichlet series should admit meromorphic continuation, let alone be entire.
Assuming that the purely algebraic and real-analytic approach turns out to suffice for almost all number-theoretical purposes, objects like the complex-valued L-functions may indeed not contribute so much more to our mathematical understanding, but they certainly do contribute to our "philosophical" (or, if you want, "meta-mathematical") understanding.
Moreover, in the end one of the things that matter are the interconnections between various fields of mathematics. In that case one a priori has to deal with structures even "higher" than the complex numbers, e.g. automorphic forms -> http://en.wikipedia.org/wiki/Langlands_program.
Although results like the prime number theorem or Dirichlet's theorem on primes in arithmetic progressions are regarded as integral parts of analytic number theory, I have something to say that might interest you. There is a very intuitive way of guessing the asymptotic expression x/log x for pi(x). I was going through the book "What is Mathematics" by Courant, Robbins and Stewart a few days back, which is where I got this idea. It involves expressing log n! in two ways: one by its asymptotic expression n log n, and the other via de Polignac's formula for the greatest power of a prime that divides n!. Then the two expressions are compared. The next step involves dividing the interval [2,x] into a large number of sub-intervals in which the number of primes is approximated by P(x)dx. This P(x) is assumed to be a smooth prime density function whose definite integral gives the value of pi(x). The comparison yields an asymptotic expression for P(x), namely
(x-1)/(x log x), whose integral is approximately x/log x for large x. However, as mentioned there, the primary difficulty behind the proof is really the existence of such a smooth density function P(x). This argument may seem more statistical in nature! You can refer to Hardy's work on the zeros of the Riemann zeta function. Another fact that might interest you: a theorem that states that for any k, there are k collinear points on the prime number graph. You can consult Carl Pomerance's paper for a proof, which uses the concept of convex hull.
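The two expressions for $\log n!$ being compared here are Stirling's asymptotic and de Polignac's formula for the exponent of a prime $p$ in $n!$:
$$\log n! \sim n\log n - n, \qquad \log n! = \sum_{p\le n}\Big(\sum_{k\ge 1}\Big\lfloor \frac{n}{p^{k}}\Big\rfloor\Big)\log p.$$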
There are a lot of interesting facts in mathematics, but we're looking for answers to a specific question here, and I don't see how Pomerance's paper qualifies. – Gerry Myerson Sep 2 at 5:32
accidentally flagged Gerry's comment while intending to +1 it :-( – Yemon Choi Sep 2 at 6:03
I have studied both the "elementary" and "analytic function theoretic" proofs of PNT and I find that the analytic version is more natural and appealing whereas the so called "elementary" version looks more crafted specifically to avoid complex analysis.
I can illustrate my point by a very simple example. While studying calculus I learned the technique of integration by parts and using this one could evaluate the integral $\displaystyle \int e^{ax}\sin bx\,dx$.
But a simple trick using complex numbers would make the evaluation of the integral so easy and at the same time so marvelous:
$\displaystyle A = \int\,e^{ax}\cos bx\,dx, \,\, B = \int\,e^{ax}\sin bx\,dx$
so that $\displaystyle A + iB = \int\,e^{(a + ib)x}\,dx = \frac{e^{(a + ib)x}}{a + ib}$
The answer follows by equating real and imaginary parts. This uses only the algebra of complex numbers.
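Carrying that last step out (up to the constant of integration): $A + iB = \dfrac{e^{ax}(\cos bx + i\sin bx)(a - ib)}{a^{2}+b^{2}}$, so
$$\int e^{ax}\cos bx\,dx = \frac{e^{ax}(a\cos bx + b\sin bx)}{a^{2}+b^{2}}, \qquad \int e^{ax}\sin bx\,dx = \frac{e^{ax}(a\sin bx - b\cos bx)}{a^{2}+b^{2}}.$$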
Cauchy's integral theorem, which lies at the heart of all complex analysis, is way more powerful than it seems at first sight and can make various proofs much simpler and more natural than their elementary counterparts.
This doesn't seem to be an answer to the question. – Gerry Myerson Sep 2 at 5:30
Taking complex analysis from number theorists is like taking away their lives. So-called "elementary" just means relatively simple and direct.
I don't think so. The complex variable proofs of the Prime Number Theorem are generally held to be simpler than the elementary proofs, and I'm not sure the word "direct" can be defined sharply enough to get a consensus on which approach is the more direct. – Gerry Myerson Aug 24 2010 at 6:34
http://math.stackexchange.com/questions/tagged/noncommutative-algebra+examples-counterexamples
# Tagged Questions
### An example of a division ring $D$ that is **not** isomorphic to its opposite ring
I recall reading in an abstract algebra text two years ago (when I had the pleasure to learn this beautiful subject) that there exists a division ring $D$ that is not isomorphic to its opposite ring. ...
http://mathoverflow.net/questions/68086?sort=votes
## odd betti numbers of a projective bundle
I would like to know whether the odd Betti numbers of a projective bundle P(E), for a vector bundle E over, say, a smooth compact complex algebraic variety B, are zero, just as in the case of ordinary projective spaces over Spec(k). More generally, how does one generalize standard calculations of the cohomology of projective space to projective bundles?
In Griffiths-Harris you find a description for the cohomology of a projective bundle. In particular, if h^i(B) is nonzero then h^i(P(E)) is also nonzero. – Remke Kloosterman Jun 17 2011 at 20:13
In a reasonable cohomology theory where one can define Chern classes, one always has this relation between the cohom. of $P(E)$ and of $X$ (see e.g. Grothendieck's paper on Chern classes). For singular cohom. one can apply Kunneth formula. – shenghao Jun 18 2011 at 14:34
In Grothendieck's Chern classes paper, "this specific" property you asked below is built into the axiom A1 (see p.5), and for singular cohom. he said this is well-known (see top of p.9). I don't know a precise reference, but I think it must be in some standard alg. top. book, maybe Bott-Tu? Anyway you may prove it using Leray to the map $f:P(E)\to B,$ which degenerates at $E_2$ by, for instance, Deligne's weight argument. Along the way you may need proper base change in topology. – shenghao Jun 19 2011 at 10:27
## 1 Answer
If $E$ is of rank $r$ then $H^i(P_B(E)) = \sum_{t = 0}^{r-1} H^{i-2t}(B)$ (where the summands with negative $i - 2t$ are omitted). So $H^{odd}(P_B(E)) = 0$ if and only if $H^{odd}(B) = 0$.
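A standard special case, for illustration: if $B$ is a smooth projective curve of genus $g$ and $E$ has rank $2$, then $P_B(E)$ is a ruled surface with $b_0 = b_4 = 1$, $b_2 = 2$ and $b_1 = b_3 = 2g$, so its odd Betti numbers vanish exactly when $g = 0$.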
Thanks. Any sources? – DZN Jun 18 2011 at 5:44
Does anyone know a source for this specific result (I don't see this specific result in GH or Grothendieck's paper on Chern classes)? – DZN Jun 18 2011 at 23:13
Any textbook. Griffiths-Harris, Fulton, etc. – Sasha Jun 19 2011 at 17:28
http://mathoverflow.net/questions/73219?sort=newest
## Hyperbolicity on Riemann Surfaces
For Riemann surfaces there are at least two possible notions of hyperbolicity. The classical one is given by the Uniformization Theorem, or equivalently the type problem, which essentially says that a simply connected Riemann surface is conformally equivalent to one of the following:
• Riemann sphere $\mathbb{C}\cup\{\infty\}$ (elliptic type).
• Complex plane (parabolic type).
• Open unit disk (hyperbolic type).
On the other hand, given a Riemann surface one can ask if it is hyperbolic in Gromov's sense. In other words, does there exist $\delta>0$ such that all the geodesic triangles in the surface are $\delta$-thin?
It seems to me that these two notions of hyperbolicity are not equivalent and one can have counterexamples in both directions. For instance, the two-dimensional torus $\mathbb{T}^2$ is hyperbolic in Gromov's sense (since it is compact), but it is also a quotient of the Euclidean plane by a free action of a discrete group of isometries and therefore of parabolic type.
My questions are: What is a sufficient condition for a surface of hyperbolic type to be Gromov hyperbolic? What is known about the relation between these two notions?
Related Question: Let $G$ be an infinite planar graph with uniformly bounded degree and assume that the simple random walk is transient. Is the graph necessarily Gromov hyperbolic?
"necessary condition to guarantee" makes no sense. Do you mean "sufficient condition"? Please edit the question. – Sam Nead Aug 19 2011 at 21:19
corrected, thanks! – ght Aug 20 2011 at 1:00
In order to talk about hyperbolicity in Gromov's sense, you need to specify a specific metric structure on the Riemann surface, as they only come equipped with a conformal structure (pointed out in R W's answer). A natural choice is the Riemannian metric with canonical constant Gaussian curvature, but do you want to restrict to this case? I take it from some comments below that you don't, and would like to consider Riemann surfaces with some arbitrary choice of Riemannian metric conformally equivalent to this one. It might be worth editing something like this into your question. – jc Aug 24 2011 at 16:37
## 3 Answers
NEW ANSWER:
As there has been much confusion on this point (some of it mine...):
Definition: A Riemannian 2-manifold $S$ is of hyperbolic type if the universal cover of $S$ is conformally equivalent to the open unit disk, $D$.
On the other hand we have
Definition: A hyperbolic surface $S$ is a surface equipped with a complete Riemannian metric of constant curvature minus one.
It is an exercise to show that all hyperbolic surfaces are surfaces of hyperbolic type. On the other hand, a surface of hyperbolic type need not be hyperbolic. As an easy example of this, choose your favorite positive function $f$ on the disk $D$ and use $f$ to scale the Poincare metric. This new metric is (almost surely) not constant curvature but is conformally equivalent to the Poincare metric.
With these definitions in place: the original question is ill-posed. Knowing that a surface $S$ is of hyperbolic type does not suffice to tell us the metric. To be precise, there are conformally equivalent metrics $\rho_0$ and $\rho_1$ on the open disk $D$ so that the first is Gromov hyperbolic and the second is not. (Eg, let $\rho_0$ be the Poincare metric while $\rho_1$ has larger and larger "mushrooms" as you walk to infinity.)
OLD ANSWER (written in terms of the above definitions):
I'll assume that you are asking for a sufficient condition to ensure that a hyperbolic surface $S$ is Gromov hyperbolic. One condition is that $S$ has finite area. In this case $S$ has a compact core (which is of no interest in this setting) and a finite number of cusps. A cusp is obtained by modding out a horodisk by a parabolic isometry. All cusps are quasi-isometric to rays. Thus $S$ is quasi-isometric to a tree having one vertex and one ray per cusp.
A simpler condition is that $\pi_1(S)$ is finitely generated. The allowed surfaces are now somewhat more complicated: in addition to cusps there can be funnels in the complement of the compact core. A funnel is obtained by modding out a half-plane by a hyperbolic isometry. All funnels are quasi-isometric to the hyperbolic plane. (This is a nice exercise!) So, here, $S$ is quasi-isometric to the one-point union of a collection of hyperbolic planes and rays.
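For concreteness (standard models, in one common normalization): a cusp is isometric to $\{\rho \ge 0\}$ with the metric $d\rho^2 + e^{-2\rho}\,dt^2$, $t \in \mathbb{R}/\mathbb{Z}$, whose horocyclic circumference $e^{-\rho}$ shrinks to zero, which is why it is quasi-isometric to a ray; a funnel about a closed geodesic of length $\ell$ is $\{\rho \ge 0\}$ with the metric $d\rho^2 + \cosh^2\!\rho\,dt^2$, $t \in \mathbb{R}/\ell\mathbb{Z}$, whose circumference grows like $e^{\rho}$.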
As for the "opposite direction": When the group is infinitely generated things can be very strange. For example, consider any cubic, connected graph $X$, of infinite diameter, where all edges have length one. (This is a very large class of metric spaces, even after passing to quasi-isometric equivalence classes.) Then, for any such graph $X$ there is a hyperbolic surface $S_X$ quasi-isometric to $X$.
Thanks Sam! This is exactly what I meant. Can you please explain me a little bit more why this condition is sufficient? – ght Aug 20 2011 at 0:17
This might be a stupid question but how do you prove that a simply connected hyperbolic surface (i.e. conformally equivalent to the unit disk) is Gromov's hyperbolic? – Val Aug 20 2011 at 0:56
I think you also need the surface to have finite area to be q.i. to a tree, so that there are rank 1 cusps. Otherwise, the surface (with finitely generated fundamental group) will be geometrically finite, and have finitely many ends which are each q.i. to the hyperbolic plane or cusps. – Agol Aug 21 2011 at 18:37
@ght I think there is some mismatch between your definitions of "surface" here. In his answer and comments, Sam Nead is working with hyperbolic surfaces in the sense of hyperbolic geometry, i.e. those with Riemannian metrics of constant negative Gaussian curvature. You are more concerned with Riemann surfaces of hyperbolic type, which only carry a conformal structure (a whole equivalence class of Riemannian metrics!). His last comment thus points out that a "Riemannian manifold of hyperbolic type" and a "hyperbolic surface" thus are not the same. – jc Aug 24 2011 at 16:27
@All - Reading Val's question more carefully, I see that Val is not asking about hyperbolic surfaces (as he/she wrote) but rather asking about surfaces of hyperbolic type (as he/she also wrote!). So, Val and ght are asking the same question "Is every surface of hyperbolic type in fact Gromov hyperbolic?" The answer to this question is "No". This does not contradict anything I said about hyperbolic surfaces. I'll edit my answer to make this clear. – Sam Nead Aug 24 2011 at 19:48
About the related question: it is a result of Babai that a (connected, locally finite) vertex-transitive, planar graph is isomorphic to the 1-skeleton of an Archimedean tiling of the sphere, or the Euclidean plane, or the hyperbolic plane. So assuming transience singles out the hyperbolic plane, and implies Gromov-hyperbolicity for the graph. See http://www.cs.uchicago.edu/files/tr_authentic/TR-2001-04.ps
Is this Theorem 3.1 of that paper? If so, you need to add the "zero or one endedness" hypothesis. In particular, as far as I understand the definition of Archimedean tilings, some hypothesis is needed to exclude regular trees. – Sam Nead Aug 24 2011 at 19:58
@ Sam: Correct. I quoted in a rush. – Alain Valette Aug 24 2011 at 22:19
I don't quite understand your question about surfaces as the notions of hyperbolicity you are talking about deal with two different structures: a Riemann surface is endowed with a conformal structure, whereas Gromov's hyperbolicity is a property of metric spaces (in particular, of Riemannian manifolds).
There is also some confusion in the torus example: when saying that it is of parabolic type you are talking about its universal cover, whereas when claiming that it is Gromov hyperbolic you are talking about the torus itself (actually it is not really fair to say that a compact metric space is Gromov hyperbolic).
Finally, concerning graphs the answer is no, because you can always attach to your favorite transient planar graph a sequence of circles with increasing radii - it won't change transience and planarity, but will prevent the resulting graph from being hyperbolic.
EDIT: I would still strongly advise against basing any examples or counterexamples on "compact hyperbolic spaces". Although formally they do have the $\delta$-hyperbolicity property, the whole point of developing this theory was to look at the large scale geometry of such spaces, and in particular at their behavior at infinity. If you wish, the notion of a compact hyperbolic space is as rich as that of a compact vector space. This is what I meant by saying that "it is not really fair to say that a compact metric space is Gromov hyperbolic".
As for your revised question, one can formulate it in a more general way: when is a quotient of a Gromov hyperbolic space also Gromov hyperbolic? In order to see its scope, you may first look at the discrete case, where any regular graph is a quotient of the corresponding homogeneous tree.
Seeing that compact metric spaces are Gromov hyperbolic, I think it's perfectly fair to call them Gromov hyperbolic. – Richard Kent Aug 19 2011 at 14:50
Given a Riemann surface you can always put a metric on it and consider it as a metric space. Therefore, both notions of hyperbolicity make sense. A connected metric surface is a quotient of one of the following: the sphere (curvature +1), the Euclidean plane (curvature 0), or the hyperbolic plane (curvature −1), by a free action of a discrete subgroup of isometries. Hence, with this notion, the torus is a Riemann surface and manifold which is of parabolic type. The other thing is that ALL compact metric spaces are indeed Gromov hyperbolic, so the torus is Gromov hyperbolic. – ght Aug 19 2011 at 15:47
http://gilkalai.wordpress.com/2010/10/25/the-simonovits-sos-conjecture-was-proved-by-ellis-filmus-and-friedgut/
Gil Kalai’s blog
## The Simonovits-Sos Conjecture was Proved by Ellis, Filmus and Friedgut
Posted on October 25, 2010 by
Let ${\cal F}$ be a family of graphs with N={1,2,…,n} as the set of vertices. Suppose that every two graphs in the family have a triangle in common. How large can ${\cal F}$ be?
(We talked about it in this post.)
One of the most beautiful conjectures in extremal set theory is the Simonovits-Sos Conjecture, the proposed answer to the question above:
Let ${\cal F}$ be a family of graphs with N as the set of vertices. Suppose that every two graphs in the family have a triangle in common. Then $|{\cal F}| \le 2^{{{n}\choose {2}}-3}$.
A few weeks ago David Ellis, Yuval Filmus, and Ehud Friedgut proved the conjecture. The paper is now written. The proof uses Discrete Fourier analysis/spectral methods and is quite involved. This is a wonderful achievement.
The example showing that this is tight is the family of all graphs containing a fixed triangle: the three edges of the triangle are forced, the remaining ${{n}\choose{2}}-3$ edges are free, and so the family has exactly $2^{{{n}\choose {2}}-3}$ members.
Vera Sos’s, a great mathematical hero of mine, is ageless. Nevertheless, she had a birthday conference in early September. The paper by Ellis, Filmus, and Friedgut is dedicated to Vera on the occasion of her birthday. What a nice birthday gift!
Let me add my own wishes: Happy birthday, Vera!
### 10 Responses to The Simonovits-Sos Conjecture was Proved by Ellis, Filmus and Friedgut
1. Gil Kalai says:
Let me add that extremal problems on set systems where the ground set is regarded as the set of edges of the complete graph (so you have extra structure) form a very interesting subject initiated by Simonovits and Sos. Look, for example, at the formulations of the polynomial Hales-Jewett problem http://gowers.wordpress.com/2009/11/14/the-first-unknown-case-of-polynomial-dhj/. Such problems are also related to the counterexample by Jeff Kahn and me to Borsuk's problem.
2. Sasho Nikolov says:
Is there a result that generalizes extremal results like Erdos-Ko-Rado, the Siminovits-Sos conjecture (or EFF theorem now ) and similar results. For example, let $U$ be the ground set (universe), and let $\mathcal{X} \subseteq {U \choose k}$. We could conjecture that if $\mathcal{F} \subseteq 2^U$ and for any two $F_1, F_2 \in \mathcal{F}$ we have $X \subseteq F_1 \cap F_2$ for some $X \in \mathcal{X}$, then $|\mathcal{F}| \leq 2^{|U| - k}$. In EKR, $\mathcal{X}$ is the set of singleton subsets of $U$. In Simonovits-Sos, $U = {[n] \choose 2}$ and $\mathcal{X}$ is the set of triangles. I noticed that the paper above establishes this theorem for all odd cycles.
Are there counter examples? In case there are, it might be interesting to look for ways to characterize $\mathcal{X}$ that make the above true.
Are there bolder/more subtle generalizations?
• Gil Kalai says:
Dear Sasho, this is a good question. I am not aware of a very general conjecture in the form you propose, but several problems and results can be described in this way.
• Yuval says:
Take a look at the classic “Some intersection theorems for ordered sets and graphs”. You would like to have the stronger condition that the optimal families are kernel families (e.g. all supersets of a given triangle).
They conjectured that this is the case in the following four settings:
(1) All cyclic translates of some fixed set. They proved it for intervals; Griggs and Walker proved it for infinitely many values of n, “Anticlusters and intersecting families of subsets”.
(2) The Simonovits-Sos conjecture.
(3) Like (2), replacing “triangle” by a path of length 3; an unpublished counterexample was given by Christofides (David has it).
(4) Sets of integers whose intersection contains a 3-term arithmetic progression. As far as I know, widely open.
• Yuval says:
If you take unrestricted 2-intersecting families, then instead of getting 1/4 of the sets, you’ll get almost 1/2 of them (take all sets with at least n/2+1 elements). The more interesting question is identifying the “Ahlswede-Khachatrian spectrum”, which is the maximal mu_p measure for various p (your question corresponds to p=1/2).
3. Yuval says:
Another possibly related line of research is exemplified by “Extremal set systems with restricted k-wise intersections” by Fueredi and Sudakov. On the title page it reads “Communicated by Gil Kalai” so I’m sure Gil could tell us more about it.
• Yuval says:
(should have been a comment to Sasho)
• Gil Kalai says:
Many thanks for your comments, Yuval
• Sasho Nikolov says:
Thanks for the insightful comments from me too, Yuval! And congratulations on a great result!
I was thinking if one direction to go is to look at families of graphs such that each pair of graphs in the family share a copy of some size $k$ *connected* graph $H$. The counterexample to Simonovits-Sos with “triangle” replaced by “length-3 path” refutes that. Oh well..
4. Yuval says:
A concrete “puzzle” is whether kernel sets based on a triangle are also optimal for cycle-intersecting families. Stated differently, is there some $n$ and a family of more than $2^{n-3}$ graphs on $n$ vertices such that the intersection of any two contains a cycle?
|
http://physics.stackexchange.com/questions/54642/how-to-assign-coordinates-to-the-elements-of-a-flat-metric-space/54665
|
# How to assign coordinates to the elements of a flat metric space
Consider the metric space $(M, d \,)$ where set $M$ contains sufficiently many (at least five) distinct elements,
and consider the assignment $c_f$ of coordinates to (the elements of) set $M$,
$c_f \, : \, M \leftrightarrow {\mathbb{R}}^3; \, c_f[ P ] := \{ x_P, y_P, z_P \}$
such that distinct coordinates values are assigned to distinct elements of set $M$, and
such that for the function
$f \, : \, ({\mathbb{R}}^3 \times {\mathbb{R}}^3) \rightarrow {\mathbb{R}};$
$f[ \{ x_P, y_P, z_P \}, \{ x_Q, y_Q, z_Q \} ] :=$ ${\sqrt{ (x_Q - x_P)^2 + (y_Q - y_P)^2 + (z_Q - z_P)^2 }} \equiv {\sqrt{ \sum_{ k \in \{ x \, y \, z \} } (k_Q - k_P)^2 }}$
and for any three distinct elements $A$, $B$, and $J$ $\in M$ holds
$f[ c_f[ A ], c_f[ J ] ] \, d[ B, J ] = f[ c_f[ B ], c_f[ J ] ] \, d[ A, J ]$.
Is the metric space $(M, d \,)$ therefore flat?
(i.e. in the sense of vanishing Cayley-Menger determinants of distance ratios between any five elements of set $M$.)
This probably belongs on math.se or even on mathoverflow. – Emilio Pisanty Feb 21 at 17:35
Aren't coordinates being used in physics; thus requiring a physics-based definition? – user12262 Feb 21 at 20:47
They are. But addition is also used in physics and that doesn't make it more physics than math. Your question is purely mathematical (geometrical, at that) in nature, I think. – Emilio Pisanty Feb 21 at 21:48
I admit that the physics import of my question is subtle, namely to point out that determining and considering distance ratios is paramount and indispensible for characterizing a metric space (such as determining possible flatness, or evaluating curvature) while any assignment of coordinates is at best secondary. Surely there are various ways of putting this point (more or less) in the required form of a question. I hope to still do better myself; for instance generalizing from $\kappa = 0$ (flatness) to any value $\kappa$; eventually asking ... – user12262 Feb 23 at 14:30
... under which conditions the assignment of coordinate-tuples is sufficient for establishing that a given metric space constitutes a manifold. p.s. "addition is also used in physics and that doesn't make it more physics than math" Right, per se. However, it remains to define what to add; or whether to use any other operation instead. p.p.s. Gotta go, look up again whether/how subtraction is defined for Dirichlet cuts ... – user12262 Feb 23 at 14:30
## 1 Answer
Yes -- if the coordinates (real number triples) $c_f$ can be assigned to the elements of set $M$ as required in the statement of the question, given the distances (ratios) $d$ and function $f$ as described above, then the metric space $(M, d \,)$ is flat.
Because: for any fifteen (real) numbers, $\{ x_\alpha, y_\alpha, z_\alpha \}$, $\{ x_\beta, y_\beta, z_\beta \}$, $\{ x_\gamma, y_\gamma, z_\gamma \}$, $\{ x_\phi, y_\phi, z_\phi \}$ and $\{ x_\lambda, y_\lambda, z_\lambda \}$ the following determinant vanishes
0 = $\begin{array}{|cccccc|} 0 & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\alpha - k_\beta)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\alpha - k_\gamma)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\alpha - k_\phi)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\alpha - k_\lambda)^2}} & 1 & \\ {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\beta - k_\alpha)^2}} & 0 & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\beta - k_\gamma)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\beta - k_\phi)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\beta - k_\lambda)^2}} & 1 & \\ {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\gamma - k_\alpha)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\gamma - k_\beta)^2}} & 0 & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\gamma - k_\phi)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\gamma - k_\lambda)^2}} & 1 & \\ {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\phi - k_\alpha)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\phi - k_\beta)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\phi - k_\gamma)^2}} & 0 & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\phi - k_\lambda)^2}} & 1 & \\ {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\lambda - k_\alpha)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\lambda - k_\beta)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\lambda - k_\gamma)^2}} & {\small{\sum\limits_{ k \in \{ x \, y \, z \} } (k_\lambda - k_\phi)^2}} & 0 & 1 & \\ 1 & 1 & 1 & 1 & 1 & 0 & \end{array}$.
Consequently, for any five distinct elements $A$, $B$, $J$, $K$ and $Q$ $\in M$ holds
0 = $\begin{array}{|cccccc|} 0 & \left(\frac{f[ c_f[ A ], c_f[ B ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ A ], c_f[ J ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ A ], c_f[ K ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ A ], c_f[ Q ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 1 & \\ \left(\frac{f[ c_f[ B ], c_f[ A ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 0 & \left(\frac{f[ c_f[ B ], c_f[ J ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ B ], c_f[ K ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ B ], c_f[ Q ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 1 & \\ \left(\frac{f[ c_f[ J ], c_f[ A ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ J ], c_f[ B ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 0 & \left(\frac{f[ c_f[ J ], c_f[ K ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ J ], c_f[ Q ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 1 & \\ \left(\frac{f[ c_f[ K ], c_f[ A ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ K ], c_f[ B ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ K ], c_f[ J ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 0 & \left(\frac{f[ c_f[ K ], c_f[ Q ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 1 & \\ \left(\frac{f[ c_f[ Q ], c_f[ A ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ Q ], c_f[ B ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ Q ], c_f[ J ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & \left(\frac{f[ c_f[ Q ], c_f[ K ] ]}{f[ c_f[ A ], c_f[ B ] ]}\right)^2 & 0 & 1 & \\ 1 & 1 & 1 & 1 & 1 & 0 & \end{array}$;
and therefore also
0 = $\begin{array}{|cccccc|} 0 & \left(\frac{d[ A, B ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ A, J ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ A, K ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ A, Q ]}{d[ A, B ]}\right)^2 & 1 & \\ \left(\frac{d[ B, A ]}{d[ A, B ]}\right)^2 & 0 & \left(\frac{d[ B, J ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ B, K ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ B, Q ]}{d[ A, B ]}\right)^2 & 1 & \\ \left(\frac{d[ J, A ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ J, B ]}{d[ A, B ]}\right)^2 & 0 & \left(\frac{d[ J, K ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ J, Q ]}{d[ A, B ]}\right)^2 & 1 & \\ \left(\frac{d[ K, A ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ K, B ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ K, J ]}{d[ A, B ]}\right)^2 & 0 & \left(\frac{d[ K, Q ]}{d[ A, B ]}\right)^2 & 1 & \\ \left(\frac{d[ Q, A ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ Q, B ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ Q, J ]}{d[ A, B ]}\right)^2 & \left(\frac{d[ Q, K ]}{d[ A, B ]}\right)^2 & 0 & 1 & \\ 1 & 1 & 1 & 1 & 1 & 0 & \end{array}$.
Thus, the (normalized) Cayley-Menger determinants of distance ratios between any five elements of set $M$ vanishes; the metric space $(M, d \,)$ is flat. (However, the metric space $(M, d \,)$ is then still not necessarily plane, or even straight.)
The suitable assignment of real number triples $c_f$ to elements of any flat metric space, together with the described function $f$ therefore provides a good (scaled-isometric) representation of the given flat metric space.
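A minimal numerical check of this statement (my own sketch, not part of the original answer; it assumes numpy is available and uses raw squared distances rather than the normalized ratios, which does not affect whether the determinant vanishes):

```python
import numpy as np

def cayley_menger(points):
    """Bordered (Cayley-Menger) determinant of the squared mutual distances."""
    m = len(points)
    d2 = np.array([[np.dot(p - q, p - q) for q in points] for p in points])
    cm = np.ones((m + 1, m + 1))
    cm[:m, :m] = d2          # zero diagonal, squared distances off-diagonal
    cm[m, m] = 0.0           # bordered by ones, with a zero in the corner
    return np.linalg.det(cm)

# five arbitrary coordinate triples in R^3: the determinant is ~0 up to
# floating-point error, consistent with the flatness criterion stated above
pts = np.random.rand(5, 3)
print(cayley_menger(pts))
```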
|
http://math.stackexchange.com/questions/6947/riemannian-2-manifolds-not-realized-by-surfaces-in-mathbbr3
|
# Riemannian 2-manifolds not realized by surfaces in $\mathbb{R}^3$?
A smooth surface $S$ embedded in $\mathbb{R}^3$ whose metric is inherited from $\mathbb{R}^3$ (i.e., distance measured by shortest paths on $S$) is a Riemannian 2-manifold: differentiable because smooth and with the metric just described. Two questions:
1. Are such surfaces a subset of all Riemannian 2-manifolds? Are there Riemannian 2-manifolds that are not "realized" by any surface embedded in $\mathbb{R}^3$? I assume _Yes_.
2. If so, is there any characterization of which Riemannian 2-manifolds are realized by such surfaces? In the absence of a characterization, examples would help.
Thanks!
Edit. In light of the useful responses, a sharpening of my question occurs to me: 3. Is the only impediment embedding vs. immersion? Is every Riemannian 2-manifold realized by a surface immersed in $\mathbb{R}^3$?
I think the canonical examples of 2-manifolds that can be immersed but not embedded in $\mathbb{R}^3$ are the en.wikipedia.org/wiki/Klein_bottle, en.wikipedia.org/wiki/Boy%27s_surface and various half-way models of en.wikipedia.org/wiki/Sphere_eversion – kahen Oct 16 '10 at 14:23
It seems like the flat torus should present problems beyond embedding/immersing...? – Aaron Mazel-Gee Oct 17 '10 at 3:55
## 3 Answers
There are pointers to a wealth of information on this question in the responses to the Math Overflow question, which you mentioned in your comment to Paul VanKoughnett's response.
In particular, Deane Yang's response gives a nice summary of the situation, and Bill Thurston's response seems to give a good perspective on the problem of trying to find a characterization of Riemannian manifolds that admit such an embedding.
Regarding the third question you mention in an edit. This is essentially a local problem. All this from the same MO question:
From BS's response: there is not even a local isometric embedding in general: http://www.springerlink.com/content/m775p64w64351260/
from Will Jaggy's response (and Deane Yang's comments on it): If the metric is analytic, then you can construct a local isometric embedding. Some recent progress on characterizing the requirements when the degree of smoothness is relaxed: http://arxiv.org/abs/1009.6214 The bibliography for that last one has no shortage of other relevant sounding titles.
Thanks, yasmar, this is very useful! Somehow I was not connecting the two topics directly in my mind... – Joseph O'Rourke Oct 16 '10 at 23:08
The hyperbolic plane cannot be smoothly isometrically embedded in $\mathbb{R}^3$. It can be so in $\mathbb{R}^5$. It is open (as far as I know) whether it can be embedded in $\mathbb{R}^4$. I believe this is mentioned in Do Carmo's book on curves and surfaces.
Edit: Not a complete characterization, but Amsler has shown (see below for reference) that any Riemannian surface with constant negative curvature, if attempted to be imbedded in $\mathbb{R}^3$, must have singularities.
Amsler, M.H., Des surfaces a courbure negative constante dans l'espace a trois dimensions et de leurs singularites, Math. Ann. 130, 1955, 234-256
Thanks for the example and the reference! I have do Carmo's book in my office; will look up. Thanks! – Joseph O'Rourke Oct 16 '10 at 14:21
The Whitney embedding theorem says you can always embed a smooth $n$-manifold in $\mathbb{R}^{2n}$, and immerse it in $\mathbb{R}^{2n-1}$. Nonorientable Riemann surfaces, for example, don't embed in $\mathbb{R}^3$, but there are some pretty good immersions (the typical picture of the Klein bottle is a good example).
For a Riemannian manifold, Nash and Kuiper proved that there's a $C^1$ globally isometric embedding into $\mathbb{R}^{2n+1}$ (and, in fact, that you can arbitrarily closely approximate any metric $C^\infty$ embedding into at least $\mathbb{R}^{n+1}$ by a global isometric $C^1$ embedding). For a global isometric $C^\infty$ embedding, it looks like the current lower bound is max$(n(n+1)/2+2n,n(n+1)/2+n+5)$. For a local one, you can do it into $n(n+1)/2+n$-space.
This means that for a globally isometric and analytic embedding of a surface, you might have to go up to $\mathbb{R}^{10}$. Ew.
|
http://mathoverflow.net/questions/105321?sort=votes
|
## second fundamental form
Hi all,
Currently I'm reading a paper about the geometry of Grassmannians:
www.omup.jp/modules/papers/riemann/04Nagatomo.pdf
In there, the author regards the second fundamental form of the k-dimensional tautological bundle $\tau \rightarrow Gr:=Gr_k(\mathbb{C}^N)$ over a grassmannian given by
$H=\pi_{\tau^c} \circ \nabla^0 \in \Omega^1(Gr,Hom(\tau,\tau^c))$.
Here, $\tau^c$ denotes the complementary bundle of $\tau \subset Gr\times \mathbb{C}^N$, such that $\tau \oplus \tau^c \cong Gr \times \mathbb{C}^N$ with respect to the natural hermitian metric and $\pi_{\tau^c}$ is the projection on it. From this embedding we obtain natural connections on $\tau\rightarrow Gr$ and $\tau^c\rightarrow Gr$ by projecting the flat connection. It's easy to show that $H$ is indeed a 1-form with values in $Hom(\tau,\tau^c)$ as claimed.
In Lemma 2.1 on page 43 the author says that both the second fundamental form of $\tau\rightarrow Gr$ and $\tau^c\rightarrow Gr$ is parallel. Now at this point I have my question: As far as I know there is only one sense what parallel in this context means, namely
$(\nabla_X H)_Y\sigma:=\nabla^{\tau^c}_X(H_Y\sigma)-H_{\nabla_X Y}\sigma-H_Y\nabla^{\tau}_X\sigma=0$.
Can this be true? My first naive approach was writing the whole equation in terms of the flat connection and projections but this leads to nothing, I guess.
I wonder why none of the well known textbooks on differential geometry (of complex vector bundles), e.g. Kobayashi, mention this fact if it is true. Maybe the definition of “parallel” is not the one meant. So what could be the right one then? I would be grateful for any references.
best regards, Alex
While I haven't looked at the details, it sounds quite believable that H is parallel. In general, homogeneous things have constant curvatures... – Robert Haslhofer Aug 23 at 22:08
Yes, using theory about the holonomy group of homogeneous spaces together with the existence of invariant connections on homogeneous spaces seems to lead to an answer. I've not yet figured out the details but: I still wonder about the fact why there isn't a simple direct calculation. – Alex_K Aug 24 at 16:17
## 1 Answer
Don't know if anyone is interested in the (quite simple) answer: The bundles $Hom(\tau,\tau^c)$ and $TGr$ are not only isomorphic as bundles but also as bundles with their natural connections, i.e. the Hom-connection induced from $\nabla^{\tau}$ and $\nabla^{\tau^c}$ on one side and the Levi-Civita connection on the other. We have to show: $\nabla_X(H_Y\sigma)=H_{\nabla_XY}\sigma+H_Y\nabla_X\sigma$, where in each case the only possible connection is meant. This is equivalent to $\nabla_XH_Y\sigma-H_Y\nabla_X\sigma=H_{\nabla_XY}\sigma$, but $\nabla_XH_Y\sigma-H_Y\nabla_X\sigma=:(\nabla^{Hom}_XH_Y)(\sigma)$. Using the fact that the isomorphism between $Hom(\tau,\tau^c)$ and $TGr$ is given by $X \mapsto H_X$, and the fact that this isomorphism preserves the connections, we're done.
You could also use the fact that $Gr=G/H$ is a homogeneous space, so $G(G/H,H)$ is a principal $H$-fibre bundle with left $G$-action and there is a $G$-invariant connection on this fibre bundle. The $G$-invariance of all tensor fields then also gives that the second fundamental form is parallel.
|
http://physics.stackexchange.com/questions/41676/why-do-physicists-believe-that-particles-are-pointlike/41677
|
# Why do physicists believe that particles are pointlike?
String theory gives physicists reason to believe that particles are 1-dimensional strings because the theory has a purpose - unifying gravity with the gauge theories.
So why is it that it's popular belief that particles are 0-dimensional points? Was there ever a proposed theory of them being like this? And why?
What reason do physicists have to believe that particles are 0-dimensional points as opposed to 1-dimensional strings?
We represent them as dots. Many particles, like protons have a uud quark composition. In fact, the study of the proton structure is a big field of research. – Debangshu Oct 25 '12 at 14:13
If you like this question you may also enjoy reading this and this post. – Qmechanic♦ Oct 25 '12 at 14:40
Olly, if you'd like your accounts merged could you please link to your other account? – David Zaslavsky♦ Oct 25 '12 at 15:06
I'd swear that we did this just a week or two ago... – dmckee♦ Oct 25 '12 at 19:05
## 8 Answers
In the standard model (as in all traditional relativistic quantum field theory), particles are pointlike. All experimentally available facts about microphysics seem to be consistent with the standard model. This is the (fully sufficient) reason for believing that particles in Nature are pointlike.
Pointlike is a technical term that refers to the fact that in the standard model, the Lagrangian is a function of fields at the same point (rather than of integrals over fields in some small neighborhood of this point, described by form factors specifying the ''form'' of the particle).
The main reason why, nevertheless, many physicists speculate that (at much higher resolution) particles might not be pointlike is that there is no known way how to harmonize quantum field theory with gravitational forces, whereas string theory (where particles are stringlike) seems to offer a potential way of doing so. Nobody knows to which extent these speculations will turn out to be correct.
Kepler developed the model of Solar system where all planets were point-like. At that time this model was consistent with all observations. – Murod Abdukhakimov May 14 at 15:43
Occam's razor suggests that the simplest explanation is the most probable. Physicists will assume that elementary particles are point-like, until they have evidence to suggest otherwise.
Why are point particles simpler? This is not a reasonable answer--- Heisenberg and Dirac both suspected that particles were blobs. It just is difficult to make a theory of this kind of thing relativistically invariant outside of string theory. – Ron Maimon Oct 25 '12 at 23:42
That's what I meant --- it seems to be simpler to assume that particles are point-like. – hwlin Oct 27 '12 at 14:30
Ron - keep in mind that this answer says that its ok to consider them as point-like. All that means is you don't have to worry about size in any experiment on the fundamental particles. When you start looking to create a model of an actual electron, then the point model is not all that good, as you 'point' out... – Tom Andersen Dec 1 '12 at 22:32
Elementary particles don't really have a shape or a size, these are emergent qualities that stem from interactions between particles. In quantum physics a particle is represented by its quantum state, and if you want to describe that in space you get a wave function which tells us how much of the particle is present at any given point in space. Because there is no theoretical limit for the size of the spatial region where the wavefunction is nonzero you cannot assign a finite size to the particle. You can imagine the particle either as infinitely small (i.e. point like), or just say that the concept of size is not very meaningful.
The concept of zero size in quantum field theory comes from local polynomial interactions. If you smear out the interaction, you can define a nonlocal field theory with blob-particles. – Ron Maimon Oct 25 '12 at 23:41
If we stick in non-polynomial terms in the lagrangian (eg. $\sin[\phi]$, $\phi^{3/2}$ terms), are we smearing out the interactions? – James Oct 27 '12 at 14:30
Pointlike and point are entirely different concepts. The planet Jupiter is pointlike to likely 6 or more decimals of precision when studying the dynamical evolution of the Solar system. Does not mean that Jupiter is a point! Just because something behaves pointlike has always meant that we just don't know enough yet. String theory is one theory about a deeper level, there are others.
So I don't think that many physicists actually think that the electron is a point. Its just that you don't need to worry about any structure when working at piddling energies of a 100GeV or less...
Your question is based on the assumption that the vacuum is empty, and that matter (including particles) consists of things we placed in the empty vacuum. But the Casimir effect shows that the vacuum is not empty but a dynamical medium. This led to an emergence point of view of elementary particles: they are quantized collective motions of the vacuum medium.
In one approach, we regard the vacuum as a collection of qubits (i.e. the space is a qubit ocean). If those qubits form a string-net liquid, then the quantized collective motions of qubits can give rise to photons, electrons, etc. So elementary particles, like photons and electrons, are not elementary in the sense that there are underlying theories, such as the quantum qubit model on lattice, from which they can be derived as an effective approximation (see for example our paper arXiv:hep-th/0302201). Under such an emergence picture, if we examine elementary particles closely, we see the qubits that form the whole space. The question whether elementary particles are point-like or not does not make sense within the emergence approach.
The string-net condensation provides a unified origin for gauge interactions and Fermi statistics: Both elementary gauge bosons (such as photons, gluons) and elementary fermions (such as electrons, quarks) can emerge as quasi-particles in a quantum spin model on a lattice if the quantum spin model has a "string-net condensed state" as its ground state. A comparison between the string-net approach and the superstring approach can be found here.
There is a falsifiable prediction from the string-net theory: all fermions (elementary or composite) must carry gauge charges (see our paper cond-mat/0302460). The standard model contains composite fermions that are neutral under the $U(1)\times SU(2)\times SU(3)$ gauge theory. So according to the string-net theory, the standard model is incomplete. The correct model should contain an extra gauge theory, such as a $Z_2$ gauge theory. So the string-net theory predicts an extra discrete gauge theory and new cosmic strings associated with the new discrete gauge theory.
The emergence approach may also produce (linear) quantum gravity from quantum spin models (see our paper arXiv:0907.1203). However, the emergence approach (such as the string-net theory) so far fails to produce the chiral coupling between the $SU(2)$ weak interaction and the fermions.
What is the reason why (almost) all your links refer to articles or post written by you? I would honestly like to know your opinion. It is not just this post, it is general. Even if you only write answer about the field which you are expert, it seems weird to me. Without malice. – drake Nov 17 '12 at 21:08
The point of view that elementary particles are originated from long-range entanglements is not a common point of view. Also I mostly only answer the questions that I have my own answer which is not shared commonly. – Xiao-Gang Wen Nov 20 '12 at 4:51
I see, thank you for answering. – drake Nov 20 '12 at 5:53
actually "pointlike" is confusing. L.H. Thomas corrected a problem that Uhlenbeck and Goudsmit were having with their model of Spin, and his analysis required transporting a frame around an orbit- to get the Thomas Precession. So an electron is not strictly pointlike - it is somehow 'frame like', or at least it requires a frame to deal with spin. In any event, it is not really a point.
I agree. I don't think they are actually points. – Tom Andersen Dec 1 '12 at 22:23
Canonical particles possess an actual radius of hardness, which is determined by the Compton expression $\lambda_\text{Compton} = \frac{h}{mc}$. One can read more about it here http://inerton.wikidot.com/canonical-particle
Why do particle physicists speculate about point-like particles? It seems to me this is associated with their education; namely, their teachers told them wrong things and implanted an abstract tunnel vision approach to the reality. It is a pity but this is the truth.
The notion that particles are thought of as point-like simply because that's what has been taught is naive and completely wrong. Thinking of them as point-like is just a simplification because there isn't any evidence to the contrary. – Brandon Enright Apr 5 at 21:37
The elementary particles you are referring to here do not seem to be what we think. They are neither like points nor like strings (as far as today is concerned); these theories are valid, but only up to a certain strength. These particles can take up any shape, but for our convenience we use dots or points to represent them. As they are highly active particles, and according to Heisenberg's uncertainty principle, I think it is not possible to determine the position and momentum together, due to which we are not able to determine the shape.
And as far as your question about 1-d and 0-d is concerned, I think it is vague to ask: if we had known how it looks, we would have already made it appear like that instead of as dots and strings. For example, we know how a car looks and we make it. In the same way, if we had known how an elementary particle appears, the problem would have been solved.
doesn't answer the question – Arnold Neumaier Nov 15 '12 at 15:16
|
http://www.newworldencyclopedia.org/entry/Superconductivity
|
# Superconductivity
From New World Encyclopedia
A magnet levitates above a high-temperature superconductor (with boiling liquid nitrogen underneath). The levitation indicates that the superconductor expels an applied magnetic field, a property known as the "Meissner effect."
Superconductivity, discovered in 1911 by Heike Kamerlingh Onnes, is a phenomenon occurring in certain materials at extremely low temperatures (on the order of −200 degrees Celsius), characterized by exactly zero electrical resistance and exclusion of the interior magnetic field (the Meissner effect). Materials with such properties are called superconductors.
Superconductors are used to make some of the most powerful electromagnets known to man, including those used in MRI machines. They have also been used to make digital circuits, highly sensitive magnetometers, and microwave filters for mobile phone base stations. They can also be used for the separation of weakly magnetic particles from less magnetic or nonmagnetic particles, as in the pigment industries. Promising future applications include high-performance transformers, power storage devices, electric power transmission, electric motors (such as for maglev trains), and magnetic levitation devices.
## Overview
The electrical resistivity (the measure of how much a material resists an electric current) of a metallic conductor decreases gradually as the temperature is lowered. However, in ordinary conductors such as copper and silver, impurities and other defects impose a lower limit. Even near absolute zero, a sample of copper shows non-zero resistance. The resistance of a superconductor, on the other hand, drops abruptly to zero when the material is cooled below a temperature called its "critical temperature"—typically 20 Kelvin (K) or less. An electrical current flowing in a loop of superconducting wire will persist indefinitely with no power source (provided that no energy is drawn from it).
Superconductivity occurs in a wide variety of materials, including simple elements like tin and aluminum, various metallic alloys, and certain kinds of ceramic materials known as high-temperature superconductors (HTS). Superconductivity does not occur in noble metals like gold and silver, nor in most metals that can be spontaneously magnetized.
In 1986, the discovery of HTS, with critical temperatures in excess of 90 K, spurred renewed interest and research in superconductivity for several reasons. As a topic of pure research, these materials represented a new phenomenon not explained by the current theory. Also, because the superconducting state persists up to more manageable temperatures, more commercial applications become feasible, especially if materials with even higher critical temperatures could be discovered.
## History of superconductivity
Superconductivity was discovered in 1911 by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently discovered liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. For this discovery, he was awarded the Nobel Prize in Physics in 1913.
In subsequent decades, superconductivity was found in several other materials. In 1913, lead was found to be superconductive at 7 K, and in 1941 niobium nitride was found to be superconductive at 16 K.
The next important step in understanding superconductivity occurred in 1933, when Walter Meissner (1882-1974) and Robert Ochsenfeld (1901-1993) discovered that superconductors expelled applied magnetic fields, a phenomenon that has come to be known as the "Meissner effect." In 1935 F. and H. London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
In 1950 Lev Landau (1908-1968) and Vitalij Ginzburg (1916 - ) formulated what came to be called the phenomenological Ginzburg-Landau theory of superconductivity. This theory had great success in explaining the macroscopic properties of superconductors. In particular, Alexei Abrikosov showed that the theory predicts the division of superconductors into the two categories, now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau having died in 1968).
Also in 1950, Emanuel Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This discovery revealed that the internal mechanism responsible for superconductivity was related to the attractive force between electrons and the ion lattice beneath, known as electron-phonon interactions.[1]
The complete, microscopic theory of superconductivity was finally proposed in 1957 by John Bardeen (1908-1991), Leon Cooper, and John Schrieffer. It came to be known as the BCS theory. Superconductivity was independently explained by Nikolay Bogolyubov (1909-1992). The BCS theory explained the superconducting current as a superfluid of "Cooper pairs"—pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972. In 1959 Lev Gor'kov showed that the BCS theory becomes equivalent to the Ginzburg-Landau theory close to the critical temperature.
Generalizations of these theories form the basis for understanding the closely related phenomenon of superfluidity (because they fall into the Lambda transition universality class), but the extent to which similar generalizations can be applied to unconventional superconductors is still controversial.
In 1962 the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse Electric Corporation. In the same year, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the "Josephson effect," is exploited by superconducting devices such as SQUIDs (superconducting quantum interference devices). Josephson was awarded the Nobel Prize for this work in 1973.
Until 1986, physicists had believed that the BCS theory forbade superconductivity at temperatures above about 30 K. That year, however, Johannes Bednorz and Karl Müller discovered superconductivity in a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was soon found by Paul C. W. Chu of the University of Houston and M. K. Wu at the University of Alabama in Huntsville that replacing the lanthanum with yttrium (to make YBCO) raised the critical temperature to 92 K. This latter discovery was significant because liquid nitrogen could then be used as a refrigerant (at atmospheric pressure, the boiling point of nitrogen is 77 K). This is important commercially because liquid nitrogen can be produced cheaply on-site with no raw materials, and is not prone to some of the problems (such as solid air plugs) of liquid helium in piping. Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics.
## Elementary properties of superconductors
Superconductors possess both common and individual properties according to each kind. An instance of a common property of superconductors is that they all have exactly zero resistivity to low applied currents when there is no magnetic field present. Individual properties include the heat capacity and the critical temperature at which superconductivity is destroyed.
Most of the physical properties of superconductors vary from material to material, such as the heat capacity and the critical temperature above which superconductivity disappears. On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity to low applied currents when there is no magnetic field present. The existence of these "universal" properties implies that superconductivity is a thermodynamic phase and that these distinguishing properties are largely independent of microscopic details.
### Zero electrical "dc" resistance
Electric cables used by the European Organization for Nuclear Research (CERN). Regular cables (background) for 12,500 amps of electric current used at a particle accelerator called the Large Electron-Positron Collider (LEP); superconductive cable (foreground) for the same amount of electric current used at the Large Hadron Collider (LHC).
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source "I" and measure the resulting voltage "U" across the sample. The resistance of the sample is given by Ohm's law:
$R = \frac{U}{I}$.
If the voltage is zero, then the resistance is zero, which means that the electric current is flowing freely through the sample and the sample is in its superconducting state.
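As a toy illustration of this measurement logic (my own sketch, not part of the original article; the numbers below are invented readings, not experimental data):

```python
def resistance(voltage_volts, current_amps):
    """Ohm's law: R = U / I for a sample carrying a known current."""
    return voltage_volts / current_amps

# hypothetical readings for the same sample above and below its
# critical temperature T_c (illustrative numbers only)
print(resistance(1.5e-3, 1.0))   # normal state: about 1.5 milliohm
print(resistance(0.0, 1.0))      # superconducting state: zero resistance
```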
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years, and theoretical estimates for the lifetime of persistent current exceed the lifetime of the universe.
In a normal conductor, an electrical current may be visualized as a fluid of electrons moving across a heavy ionic lattice (the conducting material), consisting of atoms that are electrically neutral. The electrons are constantly colliding with the ions (electrically neutral atoms) in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat (which is essentially the vibrational kinetic energy, energy due to motion, of the lattice ions). As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.
In superconductors, on the other hand, the electronic fluid is not made up of individual electrons, but rather pairs of electrons called Cooper pairs, held together by an attractive force arising from the microscopic vibrations in the lattice. According to quantum mechanics, this Cooper pair fluid requires a minimum amount of energy, ∆E, for it to conduct an electrical current. Specifically, the energy supplied to the fluid needs to be greater than the thermal energy (temperature) of the lattice in order for superconductivity to appear. This is why superconductivity is achieved at extremely low temperatures.
### Superconducting phase transition
In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from less than 1 K to around 20 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2001, the highest critical temperature found for a conventional superconductor is 39 K for magnesium diboride (MgB2), although this material displays enough exotic properties that there is doubt about classifying it as a "conventional" superconductor. Cuprate superconductors can have much higher critical temperatures: YBCO (YBa2Cu3O7), one of the first cuprate (copper based) superconductors to be discovered, has a critical temperature of 92 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The explanation for these high critical temperatures remains unknown.
The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition (when a material changes state, such as from solid to liquid). One such change, as seen above with the Cooper pairing, is that the electronic fluid in a normal conductor becomes a Cooper pair fluid in the superconducting state and this fluid also becomes a superfluid.
### Meissner effect
When a superconductor is placed in a weak external magnetic field, the field penetrates the superconductor for only a short distance, called the penetration depth, after which it decays rapidly to zero. This is called the Meissner effect, and is a defining characteristic of superconductivity. For most superconductors, the penetration depth is on the order of 100 nanometers.
The Meissner effect states that a superconductor expels all magnetic fields. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field. An equation (known as the London equation) predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
The Meissner effect breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs.
In Type I superconductors, superconductivity is abruptly lost when the strength of the applied field rises above a critical value. Depending on the geometry of the sample, one may obtain an intermediate state consisting of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field.
In Type II superconductors, raising the applied field past a critical value leads to a mixed state in which an increasing amount of magnetic flux (an amount of something that flows through a unit area in a unit time) penetrates the material, but there remains no resistance to the flow of electrical current as long as the current is not too large.
At a second critical field strength, superconductivity is destroyed. Most pure elemental superconductors (except niobium, technetium, vanadium and carbon nanotubes) are Type I, while almost all impure and compound superconductors are Type II.
## Applications
Superconductors are used to make some of the most powerful electromagnets known to man, including those used in MRI machines and the beam-steering magnets used in particle accelerators. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries.
Superconductors have also been used to make digital circuits and microwave filters for mobile phone base stations.
Superconductors are used to build Josephson junctions, which are the building blocks of SQUIDs (superconducting quantum interference devices)—the most sensitive magnetometers known. Series of Josephson devices are used to define the SI volt. Depending on the particular mode of operation, a Josephson junction can be used as a photon detector or as mixer. The large resistance change at the transition from the normal- to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors.
Other early markets are arising where the relative efficiency, size, and weight advantages of devices based on high-temperature superconductors outweigh the additional costs involved.
Promising future applications include high-performance transformers, power storage devices, electric power transmission, electric motors (such as for propulsion of vactrains or maglev trains), magnetic levitation devices, and fault current limiters. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (such as transformers) will be more difficult to develop than those that rely upon direct current.
## Superconductors in popular culture
Superconductivity is a popular device in science fiction, due to the simplicity of the underlying concept—zero electrical resistance—and the rich technological possibilities. One of the first mentions of the phenomenon occurred in Robert A. Heinlein's novel Beyond This Horizon (1942). Notably, the use of a fictional room temperature superconductor was a major plot point in the Ringworld novels by Larry Niven, first published in 1970. Organic superconductors were featured in a science fiction novel by physicist Robert L. Forward. Also, superconducting magnets may be invoked to generate the powerful magnetic fields needed by Bussard ramjets, a type of spacecraft commonly encountered in science fiction.
The most troublesome property of real superconductors, the need for cryogenic cooling, is often circumvented by postulating the existence of room temperature superconductors. Many stories attribute additional properties to their fictional superconductors, ranging from infinite heat (thermal) conductivity in Niven's novels to providing power to an interstellar travel device in the Stargate movie and TV series (real superconductors conduct heat poorly, though superfluid helium has immense but finite heat conductivity).
## References
### Books
• Gianfranco, Vidali. 1993. Superconductivity: The Next Revolution? Cambridge: Cambridge University Press. ISBN 0521377579
• Tinkham, Michael. 2004. Introduction to Superconductivity, 2nd ed. Mineola, NY: Dover Publications. ISBN 0486435032
• Tipler, Paul and Ralph Llewellyn. 2002. Modern Physics, 4th ed. W. H. Freeman. ISBN 0716743450
### Journal articles
• H.K. Onnes (1911). Commun. Phys. Lab. 12 (120).
• W. Meissner and R. Ochsenfeld (1933). Naturwiss. 21 (787).
• F. London and H. London (1935). Proc. R. Soc. London A149 (71).
• V.L. Ginzburg and L.D. Landau (1950). Zh. Eksp. Teor. Fiz. 20 (1064).
• E. Maxwell (1950). Phys. Rev. 78 (477).
• C.A. Reynolds et al. (1950). Phys. Rev. 78 (487).
• J. Bardeen, L.N. Cooper, and J.R. Schrieffer (1957). Phys. Rev. 108 (1175).
• N.N. Bogoliubov (1958). Zh. Eksp. Teor. Fiz. 34 (58).
• L.P. Gor'kov (1959). Zh. Eksp. Teor. Fiz. 36 (1364).
• B.D. Josephson (1962). Phys. Lett. 1 (251).
• J.G. Bednorz and K.A. Mueller (1986). Z. Phys. B64 (189).
• M. K. Wu, J. R. Ashburn, C. J. Torng, P. H. Hor, R. L. Meng, L. Gao, Z. J. Huang, Y. Q. Wang, and C. W. Chu (1987). Superconductivity at 93 K in a New Mixed-Phase Y-Ba-Cu-O Compound System at Ambient Pressure. Physical Review Letters 58: 908–910.
• Kleinert, Hagen. 1982. “Disorder Version of the Abelian Higgs Model and the Order of the Superconductive Phase Transition.” Lett. Nuovo Cimento 35: 405.
|
http://quant.stackexchange.com/questions/tagged/modern-portfolio-theory+statistics
|
# Tagged Questions
### What do the terms in-sample and out-of-sample estimates mean in MVO?
How do the in-sample estimates and out-of-sample estimates I so often hear authors refer to in empirical analysis of MVO differ?
### How to see if a set of asset returns corresponds to a known correlation matrix?
Let's say I have an arbitrary set of $n$ period returns for $k$ assets, and a given $k \times k$ correlation matrix (of asset returns), which is known a priori. Does it make sense, or is it even ...
### Why is the first principal component a proxy for the market portfolio, and what other proxies exist?
Let's say that I have a universe of stocks from a certain sector. I want to compute the market portfolio of this sector. Beta is the covariance between each stock and the market. But how do you ...
|
http://mathhelpforum.com/discrete-math/95681-permutations-question-4k-blocks.html
|
# Thread:
1. ## Permutations question - 4k blocks
Hi Folks,
I'm not a student - but investigating the theoretical side of my work appears to have led me to mathematics.
So, I have a question and I hope I'm in the right section . I'm happy to receive a straight answer but would also like to learn how the answer was derived so I can do it myself in future.
It's to do with how data is stored on a disk:
Storage is held in the following units (ascending order)
Bits - Bytes - Kilobytes (or k)
Let's say that each block of data held on the disk is 4k in size - that means for arguments sake that there are 4000 bytes per 4k block.
Each byte is made up of 8 bits. The bits are boolean.
Now - I'm advised that there are 256 combinations of 1 and 0 in an 8 bit byte. If there are 4000 of them in a 4k block, how many permutations of 4k block are there?
Any help appreciated.
2. Originally Posted by dannyboy1121
Hi Folks,
I'm not a student - but investigating the theoretical side of my work appears to have lead me to mathematics.
So, I have a question and I hope I'm in the right section . I'm happy to receive a straight answer but would also like to learn how the answer was derived so I can do it myself in future.
It's to do with how data is stored on a disk:
Storage is held in the following units (ascending order)
Bits - Bytes - Kilobytes (or k)
Let's say that each block of data held on the disk is 4k in size - that means for arguments sake that there are 4000 bytes per 4k block.
Each byte is made up of 8 bits. The bits are boolean.
Now - I'm advised that there are 256 combinations of 1 and 0 in an 8 bit byte. If there are 4000 of them in a 4k block, how many permutations of 4k block are there?
Any help appreciated.
If I may rephrase your question, I think you are asking how many 4,000-byte blocks are possible if each byte is 8 bits. The answer is
2^(8 * 4,000) = 2^32,000
which is approximately $9.117 \times 10^{9632}$
(a pretty big number).
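A quick way to reproduce this figure (my own check, not part of the original reply) is to let Python's arbitrary-precision integers do the work:

```python
# each of the 4,000 bytes independently takes one of 2**8 = 256 values,
# so the number of distinct 4k blocks is 256**4000 = 2**32000
blocks = 2 ** (8 * 4000)

print(len(str(blocks)))   # 9633 decimal digits
print(str(blocks)[:4])    # leading digits 9117, i.e. about 9.117 x 10**9632
```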
3. ## Thanks
I just wanted to say thanks for taking the time to reply to this mail. It's much appreciated.
|
http://www.speedylook.com/Amy_Grant.html
|
# Quantum mechanics
Daughter of the old quantum theory, quantum mechanics constitutes the pillar of a set of physical theories grouped together under the general name of quantum physics. This name stands in contrast to classical physics, which fails in its description of the microscopic world (atoms and particles) as well as of certain properties of electromagnetic radiation.
The basic principles of quantum mechanics were established primarily between 1922 and 1927 by Bohr, Dirac, de Broglie, Heisenberg, Jordan, Pauli and Schrödinger. They allow a complete description of the dynamics of a nonrelativistic massive particle. Bohr proposed an interpretation of the formalism, called the Copenhagen interpretation, based on the correspondence principle.
The basic principles were supplemented by Bose and Fermi in order to allow the description of a collection of identical particles, opening the way to the development of quantum statistical physics. Lastly, in 1930, the mathematician von Neumann specified the rigorous mathematical framework of the theory.
Quantum field theory is its most widely used relativistic extension in the 21st century.
### General panorama
#### Introduction
Born of the old quantum theory, quantum mechanics provides a coherent mathematical framework which made it possible to resolve all the discrepancies between certain experimental results observed at the end of the 19th century and the corresponding theoretical predictions of classical physics.
Quantum mechanics took up and developed the idea of wave-particle duality introduced by de Broglie in 1924, which consists in regarding matter particles not only as point-like corpuscles but also as waves with a certain spatial extent (see wave mechanics). Bohr introduced the concept of "complementarity" to resolve this apparent paradox: any physical object is indeed at the same time a wave and a corpuscle, but these two aspects, mutually exclusive, cannot be observed simultaneously. If a wave property is observed, the corpuscular aspect disappears. Conversely, if a corpuscular property is observed, the wave aspect disappears.
As of 2007, no contradiction had been detected between the predictions of quantum mechanics and the associated experimental tests. This success comes at a price, alas: the theory rests on an abstract mathematical formalism, which makes it rather difficult to approach for the layman.
#### Some examples of success
Historically, the theory first made it possible to describe correctly the electronic structure of atoms and molecules, as well as their interactions with an electromagnetic field. It also makes it possible to explain the behaviour of condensed matter, in particular:
• the structure of crystals and their vibrations;
• the electrical and thermal conductivity of metals, thanks to band theory;
• the existence and properties of semiconductors;
• the tunnel effect;
• superconductivity and superfluidity.
Another great success of quantum mechanics was to resolve the Gibbs paradox: in classical statistical physics, identical particles are regarded as distinguishable, and the entropy is then not an extensive quantity. The agreement between theory and experiment was restored by taking into account the fact that identical particles are indistinguishable in quantum mechanics.
Quantum field theory, the relativistic generalization of quantum mechanics, makes it possible to describe phenomena in which the total number of particles is not conserved: radioactivity, nuclear fission (i.e. the disintegration of the atomic nucleus) and nuclear fusion.
## Canonical quantization
### Classical plane wave
In classical physics, a monochromatic progressive plane wave of angular frequency $\omega$ propagating in the positive $x$ direction is written:
$$\Psi(x,t) \ = \ \Psi_0 \, e^{-i(\omega t - kx)}$$
where the amplitude $\Psi_0$ is a constant and $i$ is the imaginary unit, defined so that $i^2 = -1$. If we introduce into this expression the de Broglie quantum relations, we can bring out the energy $E$ and the momentum $p$:
$$\Psi(x,t) \ = \ \Psi_0 \, e^{-\frac{i}{\hbar}(Et - p_x x)}$$
where $\hbar$ is the reduced Planck constant. This expression extends easily to three dimensions:
$$\Psi(\vec{r},t) \ = \ \Psi_0 \, e^{-\frac{i}{\hbar}(Et - \vec{p}\cdot\vec{r})}$$
To obtain the energy, it is enough to differentiate with respect to time:
$$i\hbar \, \frac{\partial \Psi}{\partial t} \ = \ E \, \Psi(\vec{r},t)$$
and to obtain the momentum, to take the gradient:
$$-i\hbar \, \nabla \Psi \ = \ \vec{p} \, \Psi(\vec{r},t)$$
### Rules of canonical quantization
Canonical quantization consists in replacing the classical dynamical variables of position and momentum, which are real numbers, by operators, according to the following substitution rules:
• with the position coordinate $x^i$ is associated a position operator $\hat{x}^i$ such that:
$$\hat{x}^i \, f(\vec{r}) \ = \ x^i \, f(\vec{r})$$
• with the momentum variable $p_i$ is associated a momentum operator $\hat{p}_i$ such that:
$$\hat{p}_i \, f(\vec{r}) \ = \ -i\hbar \, \frac{\partial f(\vec{r})}{\partial x^i}, \qquad \text{i.e.} \qquad \hat{p}_i \ = \ -i\hbar \, \frac{\partial}{\partial x^i}$$
• with the energy variable is associated the time-derivative operator:
$$E \, f(\vec{r},t) \ = \ i\hbar \, \frac{\partial f(\vec{r},t)}{\partial t}, \qquad \text{i.e.} \qquad E \ \to \ i\hbar \, \frac{\partial}{\partial t}$$
### Schrödinger equation
See also: Schrödinger equation
#### Heuristic derivation of the equation
The Hamiltonian giving the total mechanical energy of a non-relativistic massive particle subjected to a force deriving from a potential is given by the classical expression:
$$H(\vec{r},\vec{p}) \ = \ \frac{p^2}{2m} \ + \ V(\vec{r}) \ = \ E$$
This quantity contains all the information needed for the classical study of the dynamical evolution of the system via Hamilton's canonical equations, given an initial condition. With this classical particle is associated a wave $\Psi(\vec{r},t)$, whose equation of evolution we seek. According to the rules of canonical quantization, the classical Hamiltonian becomes an operator:
$$\hat{H} \ = \ \frac{\hat{p}^2}{2m} \ + \ V(\hat{r}) \ = \ -\frac{\hbar^2}{2m}\,\nabla^2 \ + \ V(\vec{r})$$
The operator $\nabla^2$ is the Laplacian $\Delta = \nabla^2$. The classical equation of conservation of energy:
$$H(\vec{r},\vec{p}) \ = \ E$$
gives, on applying each side to the wave function, the time-dependent Schrödinger equation:
$$-\frac{\hbar^2}{2m}\,\Delta\,\Psi(\vec{r},t) \ + \ V(\vec{r})\,\Psi(\vec{r},t) \ = \ i\hbar\,\frac{\partial \Psi(\vec{r},t)}{\partial t}$$
It is valid only for classical speeds small compared with the speed of light in vacuum.
#### Physical interpretation of the wave function
The physical interpretation of the wave function $\Psi$ was given by Born in 1926: the square of the modulus of this wave function, $|\Psi|^2 = \overline{\Psi}\,\Psi$, represents the probability density of presence of the particle considered. That is, $|\Psi(\vec{r},t)|^2\,dV$ is interpreted as the probability of finding the particle in a small volume $dV$ located in the neighbourhood of the point $\vec{r}$ of space at time $t$. In particular, since the particle must be located somewhere in space, one has the normalization condition $\int |\Psi(\vec{r},t)|^2\,dV = 1$.
This statistical interpretation poses a problem when the quantum system under study is the whole universe, as in quantum cosmology. In that case, theoretical physicists tend to prefer Everett's "many-worlds" interpretation.
#### Methods of solution
Apart from a few particular cases where it can be integrated exactly, the Schrödinger equation does not in general admit an exact analytical solution. One must then either:
• develop approximation techniques such as perturbation theory, or
• solve it numerically. Such numerical solution makes it possible, in particular, to visualize the curious shapes of the electronic orbitals (a minimal numerical sketch is given below).
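The following is only an illustrative sketch, not part of the original article: a finite-difference diagonalization of the time-independent Schrödinger equation for the one-dimensional harmonic oscillator, written in Python with NumPy. The grid size and the choice of potential are arbitrary; in units with $\hbar = m = \omega = 1$ the exact energy levels are $E_n = n + 1/2$.

```python
import numpy as np

# Grid and potential for the 1-D harmonic oscillator (hbar = m = omega = 1).
N = 1200
x = np.linspace(-8.0, 8.0, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# H = -(1/2) d^2/dx^2 + V(x), discretised with a three-point Laplacian.
diag = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies, wavefunctions = np.linalg.eigh(H)
print(energies[:4])   # ~ [0.5, 1.5, 2.5, 3.5], the exact values n + 1/2
```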
### Dirac formalism: bras, kets, and postulates
See also: Bra-ket notation
Dirac introduced in 1925 a powerful notation, derived from the mathematical theory of linear forms on a vector space. In this abstract formalism, the postulates of quantum mechanics take a concise and particularly elegant form.
## Quantum mechanics and relativity
See also: Quantum field theory
Quantum mechanics is a non-relativistic theory: it does not incorporate the principles of special relativity. By applying the rules of canonical quantization to the relativistic dispersion relation, one obtains the Klein-Gordon equation (1926). The solutions of this equation, however, present serious difficulties of interpretation within the framework of a theory supposed to describe a single particle: in particular, one cannot construct a probability density of presence that is everywhere positive, because the equation contains a second time derivative. Dirac then sought another relativistic equation of first order in time, and obtained the Dirac equation, which describes very well the spin-1/2 fermions such as the electron.
Quantum field theory makes it possible to interpret all the relativistic quantum equations without difficulty.
The Dirac equation naturally incorporates Lorentz invariance into quantum mechanics, as well as the interaction with the electromagnetic field, but the field is still treated classically (one speaks of the semi-classical approximation). This constitutes relativistic quantum mechanics. But precisely because of this interaction between the particles and the field, it is then necessary, in order to obtain a coherent description of the whole, to apply the quantization procedure to the electromagnetic field as well. The result of this procedure is quantum electrodynamics, in which the unity between field and particle is even more transparent, since from then on matter too is described by a field. Quantum electrodynamics is a particular example of a quantum field theory.
Other quantum field theories were developed subsequently, as the other fundamental interactions were discovered (the electroweak theory, then quantum chromodynamics).
## Heisenberg inequalities
See also: Uncertainty principle
Heisenberg's uncertainty relations express the impossibility of preparing a quantum state corresponding to precise values of certain pairs of conjugate quantities. This is related to the fact that the quantum operators associated with these classical quantities do not commute.
### Position-momentum inequality
Consider for example the position $x$ and the momentum $p_x$ of a particle. Using the rules of canonical quantization, it is easy to check that the position and momentum operators satisfy:
$$[\hat{x}^i, \hat{p}_j]\, f(\vec{r}) \ = \ \left(\hat{x}^i \hat{p}_j - \hat{p}_j \hat{x}^i\right) f(\vec{r}) \ = \ i\hbar \, \delta^i_j \, f(\vec{r})$$
The uncertainty relation is defined in terms of the standard deviations of conjugate quantities. In the case of the position $x$ and the momentum $p_x$ of a particle, it reads for example:
$$\Delta x \cdot \Delta p_x \ \ge \ \frac{\hbar}{2}$$
The more sharply peaked the distribution of the state in position, the broader is its distribution over the associated values of momentum. This property recalls the behaviour of waves, via a result on the Fourier transform, and expresses here the wave-particle duality. Clearly this calls into question the classical notion of a trajectory as a differentiable continuous path.
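As an illustration of this inequality (my own sketch, with $\hbar=1$ and an arbitrary width $\sigma$), one can check numerically that a Gaussian wave packet saturates the bound $\Delta x\,\Delta p = \hbar/2$:

```python
import numpy as np

hbar = 1.0
sigma = 1.7                                  # arbitrary packet width
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

# Normalised Gaussian wave packet centred at x = 0.
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

mean_x = np.sum(x * psi**2) * dx
delta_x = np.sqrt(np.sum((x - mean_x) ** 2 * psi**2) * dx)

# <p> = 0 for a real, even psi; <p^2> = hbar^2 * integral of |dpsi/dx|^2 dx.
dpsi = np.gradient(psi, dx)
delta_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(delta_x * delta_p, ">=", hbar / 2)     # ~ 0.5, the minimum allowed value
```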
### Time-energy inequality
There also exists an uncertainty relation between the energy of a particle and the time variable. Thus, the duration $\Delta t$ necessary for the detection of a particle of energy $E$ to within $\Delta E$ satisfies the relation:
$$\Delta E \cdot \Delta t \ \ge \ \frac{\hbar}{2}$$
However, the derivation of this energy-time inequality is rather different from that of the position-momentum inequalities.
Indeed, although the Hamiltonian is indeed the generator of time translations in Hamiltonian mechanics, indicating that time and energy are conjugate, there is no time operator in quantum mechanics (Pauli's "theorem"); that is, one cannot construct an operator $\hat{T}$ obeying a canonical commutation relation with the Hamiltonian operator $\hat{H}$:
$$[\hat{H}, \hat{T}] \ = \ i\hbar \, \hat{1}$$
and this for a very fundamental reason: quantum mechanics was invented precisely so that each stable physical system possesses a ground state of minimum energy. Pauli's argument is the following: if the time operator existed, it would have a continuous spectrum. However, the time operator, obeying the canonical commutation relation, would also be the generator of translations in energy. This would imply that the Hamiltonian operator would itself also have a continuous spectrum, in contradiction with the fact that the energy of any stable physical system must be bounded below.
## Entanglement
See also: Quantum entanglement
### Definition
Entanglement is a quantum state (see also wave function) describing two (or more) classical systems that is not factorizable into a product of states corresponding to each system.
Two systems or two particles can become entangled as soon as there exists an interaction between them. Consequently, entangled states are the rule rather than the exception. A measurement made on one of the particles will change its quantum state according to the quantum measurement postulate. Because of entanglement, this measurement will have a simultaneous effect on the state of the other particle. Nevertheless, it is incorrect to equate this change of state with a transmission of information faster than light (and thus a violation of relativity). The reason is that the result of the measurement on the first particle is always random in the case of entangled states. It is thus impossible to "transmit" any information this way, since the modification of the state of the other particle, however immediate, leaves the result of a measurement on it just as random. The correlations between the measurements of the two particles, although very real and demonstrated in many laboratories all over the world, remain undetectable as long as the results of the measurements are not compared, which necessarily implies a classical exchange of information, respectful of relativity (see also the EPR paradox). Quantum teleportation makes use of entanglement to transfer the quantum state of one physical system to another physical system. This process is the only known means of transferring quantum information perfectly. It cannot exceed the speed of light and it is also "disembodied", in the sense that there is no transfer of matter (contrary to the fictional teleportation of Star Trek).
This state should not be confused with a superposition state. The same quantum object can have two (or more) superposed states. For example, the same photon can be in the states "longitudinal polarization" and "transverse polarization" at the same time. Schrödinger's cat is simultaneously in the states "dead" and "alive". A photon which passes a half-silvered mirror is in the superposed state "transmitted photon" and "reflected photon". It is only at the moment of measurement that the quantum object has a definite state.
In the formalism of quantum physics, an entangled state of several quantum objects is represented by a tensor product of the state vectors of each quantum object. A superposition state concerns a single quantum object (which can itself be entangled), and is represented by a linear combination of the various possible states of that object.
### Quantum teleportation
See also: Quantum teleportation
One can determine the state of a quantum system only by observing it, which has the effect of destroying the state in question. The state can, on the other hand, once known, in principle be recreated elsewhere. In other words, duplication is not possible in the quantum world; only a reconstruction in another place is, close to the concept of teleportation in science fiction.
Worked out theoretically in 1993 by C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. Wootters in the article "Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels" (Physical Review Letters), this reconstruction was carried out experimentally in 1997, on photons, by the team of Anton Zeilinger in Innsbruck, and more recently on hydrogen atoms.
## Some paradoxes
These "paradoxes" question us about the interpretation of quantum mechanics, and reveal in certain cases to what extent our intuition can prove misleading in this field, which does not bear directly on the everyday experience of our senses.
; Schrödinger's cat
This paradox highlights the problems of interpretation of the postulate of reduction of the wave packet.
See also: Schrödinger's cat
; EPR paradox and the experiment of Alain Aspect
This paradox highlights the non-locality of quantum physics, implied by entangled states.
See also: EPR paradox, Aspect's experiment
; Experiment of Marlan Scully
This experiment can be interpreted as a demonstration that the results of an experiment recorded at a time T depend objectively on an action carried out at a later time T+t. According to this interpretation, the non-locality of entangled states would be not only spatial but also temporal.
However, causality is not strictly violated, because it is not possible - for fundamental reasons - to show, before the time T+t, that the state recorded at time T depends on a later event. This phenomenon therefore cannot give any information about the future.
See also: Experiment of Marlan Scully
; Counterfactuality
According to quantum mechanics, events which could have occurred, but which did not occur, influence the results of the experiment.
See also: Counterfactuality (physics)
## Decoherence: from the quantum world to the classical world
See also: Decoherence
Whereas the principles of quantum mechanics apply a priori to all the objects contained in the universe (ourselves included), why do we continue to perceive the essence of the macroscopic world classically? In particular, why are quantum superpositions not observable in the macroscopic world? The theory of decoherence explains their very rapid disappearance through the inevitable coupling between the quantum system considered and its environment.
This theory received experimental confirmation from studies of mesoscopic systems, for which the decoherence time is not too short to remain measurable, such as a system of a few photons in a cavity (Haroche et al., 1996).
## See also
### Related articles
#### Interpretation
There exist many interpretations of the effects of quantum mechanics, some in total contradiction with others. For lack of observable consequences of these interpretations, it is not possible to decide in favour of one or another of them. The only exception is the Copenhagen school, whose principle is precisely to refuse any interpretation of the phenomena.
• 1924: de Broglie hypothesis
• 1927: Copenhagen interpretation
• 1927: Pilot wave theory
• 1952: de Broglie-Bohm theory
• 1957: Everett's many-worlds theory
• 1970: Quantum decoherence
• 1986: Transactional interpretation
#### Problems, paradoxes and experiments
• Problem of quantum measurement
• Revolved quantum
• Contrafactualité
• Paradoxes of quantum mechanics
• Cat of Schrödinger
• Paradox EPR
• Experiment of Aspect
• Experiment of Marlan Scully
• Slits of Young
• Experiment of quantum Afshar
• Gum
#### Mathematics
• Constant of Planck
• Constant of Dirac
• Equation of Schrödinger
• Amplitude of probability
• Notation bra-ket
• Space of Hilbert
• Quantum harmonic oscillator
• Geometric phase
• Path integral
• Spin
#### Relativistic quantum mechanics
See also: Relativistic quantum mechanics
• standard Model
• Quantum physics
• Quantum theory of the fields
• Principle of exclusion of Pauli
• Dirac equation
• Particle physics
• Diagram of Feynman
#### Quantum data processing
See also: Quantum computing
• Quantum information
• Quantum computer
• Qubit
• Cryptography
#### Quantum vacuum
See also: quantum Vacuum
#### Others
• Chronology of microscopic physics
• Atom of hydrogen
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 44, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8924815654754639, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/136542-integer-modulo-4-vectors.html
|
# Thread:
1. ## integer modulo 4 vectors
Hi;
is there anything wrong in saying that if u is a vector over $Z_4$ then for all $a,b \in Z_4\; au+bv=u+u+...+u+v+v+...+v=(a+b)v$
so that the distributive property holds for vectors over Z_4.
Also if
$c_1v_1+c_2v_2+...+c_kv_k=a_1v_1+a_2v_2+...+a_kv_k$
then by the inverse property of groups and the above distributive property
$(c_1-a_1)v_1+(c_2-a_2)v_2+...+(c_k-a_k)v_k=0$
where 0 is the zero vector.
Is there anything wrong with my post at all no matter how trivial? thanks
2. Originally Posted by Krahl
Hi;
is there anything wrong in saying that if u is a vector over $Z_4$ then for all $a,b \in Z_4\; au+bv=u+u+...+u+v+v+...+v=(a+b)v$
so that the distributive property holds for vectors over Z_4.
Also if
$c_1v_1+c_2v_2+...+c_kv_k=a_1v_1+a_2v_2+...+a_kv_k$
then by the inverse property of groups and the above distributive property
$(c_1-a_1)v_1+(c_2-a_2)v_2+...+(c_k-a_k)v_k=0$
where 0 is the zero vector.
Is there anything wrong with my post at all no matter how trivial? thanks
What exactly are you asking? If it's a vector field?
Well...maybe you mean field? No, it has non-trivial zero divisors (2)
no, i know $Z_4^n$ isn't a field, i was just wondering whether any of my assertions were true. it doesn't form a vector space but it does form a structure (with different definitions of linear independence, $v_1,v_2,...,v_k$ are linearly independent if $c_1v_1+...+c_kv_k=0 \Longrightarrow c_iv_i=0 \forall i$), and i wanted to know whether i could say the distributive property holds in the above sense. Thanks.
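A quick brute-force check of the question, reading the claim as $av+bv=(a+b)v$ over $Z_4$ (a small Python sketch of my own; vectors are tuples with componentwise arithmetic mod 4):

```python
from itertools import product

def add(u, v):            # componentwise addition mod 4
    return tuple((x + y) % 4 for x, y in zip(u, v))

def smul(a, v):           # scalar multiple as a-fold repeated addition
    out = (0,) * len(v)
    for _ in range(a % 4):
        out = add(out, v)
    return out

vectors = list(product(range(4), repeat=2))     # all of Z_4^2
ok = all(add(smul(a, v), smul(b, v)) == smul((a + b) % 4, v)
         for a in range(4) for b in range(4) for v in vectors)
print(ok)   # True: the distributive property holds in this sense
```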
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519003629684448, "perplexity_flag": "head"}
|
http://nrich.maths.org/2429/solution
|
# What a Joke
##### Stage: 4 Challenge Level:
First can I express my delight at the spreadsheet supplied by the Colyton Maths Challenge Group, which enabled them to enter values of $A$ and $H$ to find a solution. I think the cell H8 needed to be "=10*F3+D3" to work (and it does with the right substitution!) - but what a good idea. Well done. Like Angele (no school given), the Colyton group noticed that the problem might be easier with some rearrangement and making $JOKE$ the subject.
At this point it is clear that the maths group, like Andrei (Tudor Vianu College) used exhaustive methods, probably made easier by the spreadsheet.
Here is an alternative approach based on the solution offered by Lee (no school given).
I can rearrange the problem $$JOKE = \frac{AHHAAH}{HA}$$ Now $AHHAAH$ is made up of $AH00AH$ and $HA00 = 100 \times HA$ added together.
But $AH00AH$ is divisible by $AH$ So $AH00AH = 10001 \times AH$
So $AHHAAH = 10001 \times AH + HA \times 100$
$$JOKE = \frac{AHHAAH}{HA} = \frac{100 \times HA +AH \times(10001)}{HA}$$ The numerator has to be divisible by the denominator ($HA$) for this to give a four digit answer ($JOKE$). As $100 \times HA$ is divisble by $HA$ it only needs us to look at $10001 \times AH$.
But $10001$ is not prime, it is $73 \times 137$.
There is no combination of digits such that $HA$ divides $AH$, so $HA$ must divide $10001$, giving $HA = 73$.
$$H=7$$ $$A=3$$ $$J=5$$ $$O=1$$ $$K=6$$ $$E=9$$
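A brute-force confirmation that this is the only solution (my own Python sketch; it simply tries every pair of digits $A \neq H$):

```python
# Find digits with AHHAAH / HA = JOKE, a four-digit number.
for A in range(1, 10):
    for H in range(1, 10):
        if A == H:
            continue
        AH, HA = 10 * A + H, 10 * H + A
        AHHAAH = AH * 10000 + HA * 100 + AH          # digits A H H A A H
        if AHHAAH % HA == 0 and 1000 <= AHHAAH // HA <= 9999:
            print(A, H, AHHAAH // HA)                # prints: 3 7 5169
```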
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9086587429046631, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/112671/showing-a-coercivity-condition-for-this-bilinear-form
|
## Showing a coercivity condition for this bilinear form
Suppose $\Omega \subset \mathbb{R}^n$ is a compact domain. Let $f$ and $J$ (and also $\frac 1J$) be $C^1$ functions on $\Omega$. Consider the bilinear form $a:H^1(\Omega) \times H^1(\Omega) \to \mathbb{R}$ $$a(u,v) = \int_\Omega uvf + \int_\Omega \nabla u MM^T\nabla v - \int_\Omega \nabla u MM^T\nabla J \frac{v}{J}$$ where $M = D\Phi$ is the matrix representation of the derivative of a diffeomorphism $\Phi$ between two compact hypersurfaces in $\mathbb{R}^n$ (so $\Phi$ and its derivatives are bounded).
How do I show that there exists a $C$ such that $$a(u,u) + C\lVert u \rVert^2_{L^2(\Omega)} \geq K\lVert u \rVert^2_{H^1(\Omega)}$$ for some $K$. (i.e. that $a$ satisfies a coercivity condition).
I don't know how to show this. How do I deal with the last term in $a$, which has a minus sign? The second term is fine since it becomes $|\nabla u M|^2 > 0$ since $M$ represents derivative of the diffeomorphism $\Phi$ and therefore has full rank. But basically I can't get a positive constant in front of $\lVert \nabla u \rVert_{L^2}$ term.
-
Cross-posted at math.stackexchange: math.stackexchange.com/questions/237955/…. – Davide Giraudo Nov 17 at 17:28
## 1 Answer
I think the first thing to iron out is that $\Omega$ should be open, since a compact set would be closed, and then even defining $H^1(\Omega)$ is not trivial. Maybe assume $\Omega \subset \mathbb{R}^N$ is open and $J,f$ are $C^1(\overline{\Omega})$.
However, this is not the issue you are interested in. You want to know the relationship between
$\int_\Omega \nabla u MM^t \nabla u\;dx$
and
$||\nabla u||_{L^2(\Omega) }$.
Given $M$, can you show that the eigenvalues of $MM^t$ are strictly positive? The ellipticity constant is just a statement on the eigenvalues of the matrix associated with the coefficients. So what you are looking to do is to prove that this matrix is strictly positive definite. I hope you can compute this successfully.
EDIT: The result you are looking for in this case is to show that $\xi MM^t\xi \geq c$ for all $\xi \in S^{N-1}$, for some constant $c>0$, which would imply the coercivity you are looking for. A priori, $MM^t$ positive definite only gives you this inequality pointwise, but you need a uniform constant if this is your approach. For example, if the eigenvalues of $MM^t$ are ${\lambda_i}_{i=1..N}$, with $\min_i \lambda_i = \tilde{\lambda}>0$, and these eigenvalues correspond to eigenvectors $x_i$ which form an orthonormal basis for $\mathbb{R}^N$, then this is true, as follows.
Any $x\in S^{N-1}$ can be represented as $x=\sum_{i=1}^N c_i x_i$, where $\sum_i c_i^2=1$, and we compute $MM^t x = \sum_i c_i MM^t x_i = \sum_i c_i \lambda_i x_i$, therefore $xMM^tx = \sum_i c_i^2 \lambda_i \geq \tilde{\lambda}$, where we have used $\sum_i c_i^2=1$.
This is only an example - you should verify that the eigenvalues correspond to orthonormal eigenvectors to do this, or adapt the proof to something which is close and true in your case.
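A hedged sketch of how the negative cross term in the question can then be absorbed (my own notation, assuming $\lambda := \inf_{x,\,|\xi|=1}\xi MM^t\xi > 0$ as above and $C_J := \sup_\Omega |MM^t\nabla J/J| < \infty$, which holds since $J,\tfrac1J\in C^1$ and $M$ is bounded): by Cauchy-Schwarz and Young's inequality, for any $\varepsilon>0$,
$$\left|\int_\Omega \nabla u\, MM^t\nabla J\,\frac{u}{J}\right| \ \le\ \frac{C_J\varepsilon}{2}\,\|\nabla u\|_{L^2}^2 + \frac{C_J}{2\varepsilon}\,\|u\|_{L^2}^2,$$
so that
$$a(u,u)\ \ge\ \Big(\lambda-\frac{C_J\varepsilon}{2}\Big)\|\nabla u\|_{L^2}^2 - \Big(\|f\|_\infty+\frac{C_J}{2\varepsilon}\Big)\|u\|_{L^2}^2 .$$
Choosing $\varepsilon=\lambda/C_J$ and $C=\|f\|_\infty+\frac{C_J^2}{2\lambda}+\frac{\lambda}{2}$ then gives $a(u,u)+C\|u\|_{L^2}^2\ \ge\ \frac{\lambda}{2}\|u\|_{H^1}^2$, i.e. the Gårding-type coercivity asked for, with $K=\lambda/2$.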
-
@Daniel Thanks for the answer. MM^T is symmetric and positive definite (by assumption), so a theorem says all the eigenvalues are positive. This is enough, right? – soup89 Nov 17 at 22:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9562963247299194, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/53708/how-can-i-understand-work-conceptually/53711
|
# How can I understand work conceptually?
I'm in a mechanical physics class, and I'm having a hard time understanding what the quantity of work represents. How can I understand it conceptually?
-
## 2 Answers
Start with force: a force does work when it acts on a body and displaces it from its original position - that is, when it results in the body moving.
Unit-wise, work is measured in Joules [J], i.e. Newton-metres [Nm] or [N-m]. This suggests $W=F \cdot d$ where $d$ is the distance.
I prefer this definition: the work done by some force on an object as it travels along some curve S is given by the line integral $$W=\int_{S} \vec{F} \cdot d\vec{x}$$ which in general depends on the path. When the force is conservative (path-independent), one obtains instead $$W=U(d_{1})-U(d_{2})$$ where $U$ is called the potential energy, measured in joules, and $d$ is a point at which the potential is evaluated; this can be time dependent.
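As a small numerical illustration of the line integral (my own example, with an arbitrarily chosen force field $F(x,y)=(-y,x)$ and a quarter-circle path of radius 2, for which the exact answer is $2\pi$):

```python
import numpy as np

# Quarter circle of radius 2, parametrised by t in [0, pi/2].
t = np.linspace(0.0, np.pi / 2, 10001)
path = np.column_stack((2.0 * np.cos(t), 2.0 * np.sin(t)))

# Force field F(x, y) = (-y, x) evaluated along the path.
F = np.column_stack((-path[:, 1], path[:, 0]))

dr = np.diff(path, axis=0)                 # small displacement vectors
F_mid = 0.5 * (F[:-1] + F[1:])             # force at the segment midpoints
W = np.sum(np.einsum('ij,ij->i', F_mid, dr))

print(W, "exact:", 2.0 * np.pi)            # F . dr = 4 dt, so W = 2*pi
```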
Hope this helps!
-
This website has an excellent definition of work: http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html
This chart from the same website explains how work relates to other mechanical concepts: http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9216979742050171, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/204750-need-help-derivatives.html
|
# Thread:
1. ## Need help with derivatives
Hello everyone, I have a few derivatives problems I could use some guidance with. I know the procedures for finding derivatives (product/quotient rule, etc...), but for some reason I feel like I am doing things the hard way.
Here is the problem I am stuck on now, but I am studying for a huge test on Monday and I'm sure I will be asking quite a bit of questions between now and then.
Find dy/dx at x=1
(x+2)(x^2+1)(x^3+2)
Long story short, I used the product rule to find the derivative of the 3 parts, and then plugged in 1 for x. My answer was 50 and it is incorrect. Can someone maybe show me the correct steps to work this problem? The more detail the better, because I need to see everything about how a problem works. I used the product rule to find the derivatives of the first 2 parts, then took that and used the power rule again. Obviously this is not correct, and now I am wondering just how you are supposed to solve this problem. Do I need to first multiply 2 of the 3 terms together and then use the power rule?
Help is appreciated.
2. ## Re: Need help with derivatives
$f'(x)=3 x^2 (2+x) \left(1+x^2\right)+2 x (2+x) \left(2+x^3\right)+\left(1+x^2\right) \left(2+x^3\right)=2+8 x+12 x^2+4 x^3+10 x^4+6 x^5$
3. ## Re: Need help with derivatives
Originally Posted by jkh1919
Hello everyone, I have a few derivatives problems I could use some guidance with. I know the procedures for finding derivatives (product/quotient rule, etc...), but for some reason I feel like I am doing things the hard way.
Here is the problem I am stuck on now, but I am studying for a huge test on Monday and I'm sure I will be asking quite a bit of questions between now and then.
Find dy/dx at x=1
(x+2)(x^2+1)(x^3+2)
Long story short, I used the product rule to find the derivative of the 3 parts, and then plugged in 1 for x. My answer was 50 and it is incorrect. Can someone maybe show me the correct steps to work this problem? The more detail the better, because I need to see everything about how a problem works. I used the product rule to find the derivatives of the first 2 parts, then took that and used the power rule again. Obviously this is not correct, and now I am wondering just how you are supposed to solve this problem. Do I need to first multiply 2 of the 3 terms together and then use the power rule?
Help is appreciated.
Are you familiar with logarithmic derivatives?
$y=(x+2)(x^2+1)(x^3+2)$
If we take the natural log of both sides we get
$\ln(y)=\ln(x+2)+\ln(x^2+1)+\ln(x^3+2)$
Now if we take the derivative of both sides we get
$\frac{y'}{y}=\frac{1}{x+2}+\frac{2x}{x^2+1}+\frac{ 3x^2}{x^3+2}$
This gives
$y'=y \left( \frac{1}{x+2}+\frac{2x}{x^2+1}+\frac{3x^2}{x^3+2} \right)$
If we evaluate this at $x=1$ we get
$y'(1)=y(1) \left( \frac{1}{1+2}+\frac{2(1)}{1^2+1}+\frac{3(1)^2}{1^3 +2} \right)$
$y'(1)=(3)(2)(3)\left( \frac{1}{3}+\frac{2}{2}+\frac{3}{3}\right)=42$
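A quick symbolic cross-check of that value (a small SymPy sketch of my own):

```python
import sympy as sp

x = sp.symbols('x')
y = (x + 2) * (x**2 + 1) * (x**3 + 2)
print(sp.diff(y, x).subs(x, 1))   # 42
```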
4. ## Re: Need help with derivatives
Originally Posted by TheEmptySet
Are you familiar with logarithmic derivatives?
$y=(x+2)(x^2+1)(x^3+2)$
If we take the natural log of both sides we get
$\ln(y)=\ln(x+2)+\ln(x^2+1)+\ln(x^3+2)$
Now if we take the derivative of both sides we get
$\frac{y'}{y}=\frac{1}{x+2}+\frac{2x}{x^2+1}+\frac{ 3x^2}{x^3+2}$
This gives
$y'=y \left( \frac{1}{x+2}+\frac{2x}{x^2+1}+\frac{3x^2}{x^3+2} \right)$
If we evaluate this at $x=1$ we get
$y'(1)=y(1) \left( \frac{1}{1+2}+\frac{2(1)}{1^2+1}+\frac{3(1)^2}{1^3 +2} \right)$
$y'(1)=(3)(2)(3)\left( \frac{1}{3}+\frac{2}{2}+\frac{3}{3}\right)=42$
Thanks. I am not familiar with logarithmic derivatives, and actually that is the first time I have heard that term. I appreciate the help using this example, and will definitely note it for future reference, but right now I am really trying to perfect the use of the chain, product, and quotient rules, and need someone to explain to me the steps that were taken to solve the problems.
Here is the response above:
Originally Posted by MaxJasper
$f'(x)=3 x^2 (2+x) \left(1+x^2\right)+2 x (2+x) \left(1+x^3\right)+\left(1+x^2\right) \left(1+x^3\right)=1+4 x+9 x^2+4 x^3+10 x^4+6 x^5$
This is the product rule, correct? I can see what was done to get the answer, but some things I am not so clear about.
In particular, this:
+2 x (2+x) (1+x^3)+(1+x^2) (1+x^3)
Can someone point out to me why the term (1+x^3) is there?
I can see that the steps used to solve this problem were to take the derivative of the 3rd term, multiply it by the first two, then ADD it to, the derivative of the 2nd term, multiplied by the other two terms, but I can't tell what happened after that. I would think that we'd still need to take the derivative of the 1st term, multiply it by the other 2 terms, and add that to the results from the other two. So pretty much I understand, up until this point:
3 x^2 (2+x)(1+x^2)+2 x (2+x) (1+x^3)
But still can not figure out where the (1+x^3) came from
5. ## Re: Need help with derivatives
I went through and multiplied and combined from the original equation. x6+2x5+x4+4x3+2x2+2x+4(no clue if this is how you do it)...
Then I took the derivative of that. 6x5+10x4+4x3+12x2+4x+2
From the derivative of that, I plugged 1 into it for x. From that, I got an answer of 38... I hope that made sense...
6. ## Re: Need help with derivatives
Your method of first expanding then taking derivative: if such a problem appears in your exam then all your time is wasted on answering just one question! Get efficient. f'(1)=42 and 38 is wrong because your expansion is wrong after all the efforts. Such errors usually occur when expanding these kinds of polynomials.
7. ## Re: Need help with derivatives
I think the simplest way to do this is to use the "extended" product rule: (fgh)'= f'gh+ fg'h+ fgh'
((x+2)(x^2+ 1)(x^3+ 2))'= (1)(x^2+ 1)(x^3+ 2)+ (x+2)(2x)(x^3+ 2)+ (x+2)(x^2+ 1)(3x^2)
Now set x= 1 in that.
8. ## Re: Need help with derivatives
I have never heard of the extended product rule... I will have to ask my professor about that! This seems like a much easier way to do this, seeing as natural logs aggravate me to no end...
9. ## Re: Need help with derivatives
The extended product rule is just the regular product rule applied twice:
$(fgh)'$
$=((fg)(h))'$
$=(fg)'h+fgh'$
$=(f'g+fg')h+fgh'$
$=f'gh+fg'h+fgh'$
- Hollywood
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9569082856178284, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/159232-symmetric-subgroup-alternating-groups-print.html
|
# symmetric subgroup of alternating groups
• October 11th 2010, 04:17 PM
Beaky
symmetric subgroup of alternating groups
Prove that $A_{n}$ contains a subgroup isomorphic to $S_{n-2}$ for all $n\geq 3$
$A_{n}$ being the alternating group of degree n and S the symmetric group.
I really have no idea how to approach this and it's had me stumped for quite a while. Any help would be much appreciated.
• October 11th 2010, 07:28 PM
tonio
Quote:
Originally Posted by Beaky
Prove that $A_{n}$ contains a subgroup isomorphic to $S_{n-2}$ for all $n\geq 3$
$A_{n}$ being the alternating group of degree n and S the symmetric group.
I really have no idea how to approach this and it's had me stumped for quite a while. Any help would be much appreciated.
Say $S_{n-2}$ acts on $\{1,2,\dots,n-2\}$, and $A_n$ on $\{1,2,\dots,n\}$, and let $\pi := (n-1,\ n)$ be the transposition
that interchanges $n-1$ and $n$.
Define a map $f: S_{n-2}\rightarrow A_n$ by $f(\sigma):=\begin{cases}\sigma & \text{if }\sigma\in A_{n-2}\\ \sigma\pi & \text{if }\sigma\in S_{n-2}\setminus A_{n-2}\end{cases}$
Now show $f$ is a group monomorphism.
Tonio
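A finite sanity check of this map (my own sketch, for $n=5$, so $S_3 \hookrightarrow A_5$; permutations of $\{0,\dots,4\}$ are stored as tuples and $\pi$ swaps the last two points):

```python
from itertools import permutations

n = 5

def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def parity(p):                          # number of inversions mod 2
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2

pi = (0, 1, 2, 4, 3)                    # the transposition (n-1, n)

def f(sigma):
    return sigma if parity(sigma) == 0 else compose(sigma, pi)

# S_{n-2}: permutations of {0, 1, 2}, fixing the last two points.
S = [p + (3, 4) for p in permutations(range(n - 2))]
image = [f(s) for s in S]

print(all(parity(g) == 0 for g in image))                      # image lies in A_5
print(len(set(image)) == len(S))                               # f is injective
print(all(f(compose(a, b)) == compose(f(a), f(b))
          for a in S for b in S))                              # f is a homomorphism
```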
• October 12th 2010, 09:37 AM
Beaky
Ok thanks a lot. It seems obvious enough now, but I don't think I would have thought of that.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9699222445487976, "perplexity_flag": "head"}
|
http://nrich.maths.org/5533/solution
|
# Sums of Pairs
##### Stage: 3 and 4 Challenge Level:
Here's some sound and efficient algebra from Conor from Queen Elizabeth's Hospital :
We have three numbers, $x, y$ and $z$.
$x + y = 11 \quad (1)$
$y + z = 17 \quad (2)$
$z + x = 22 \quad (3)$
Adding $(1)$ and $(2)$ gives :
$x + 2y + z = 28$
Putting that another way :
$(x + z) + 2y = 28 \quad (4)$
Substituting $(3)$ into $(4)$ gives :
$22 + 2y = 28$
So $y = 3$.
Substituting $y = 3$ into $(1)$ gives $x = 8$ and substituting it into $(2)$ gives $z = 14$.
But Eden has had an excellent approach too :
$11 + 17 + 22$ uses every number twice and makes a total of $50$
So the total of the three numbers that we need to find is $25$.
If two of them make $11$ together the one not used must have been $14$.
Two together made $17$, the one not used this time must have been $8$.
And finally, a pair have a sum of $22$, so the other number is $3$.
The three numbers are $3, 8,$ and $14$
And Mark used some deductive reasoning combined with an exhaustive approach:
The smallest number must be between $1$ and $5$ to make a smallest sum of $11$.
If it is $1$ then to make $11$ the second number is $10$.
If the third number is $x$
Then $x+1= 17$
and $x+10 = 22$
This does not work.
I tried the smallest number as $2$, then the second number is $9$
Then $x+2= 17$ and $x+9 = 22$
This also does not work but the two values of $x$ are closer than last time so I think $3$ will work.
Trying the first number as $3$ and the second as $8$
Then $x+3= 17$ and $x+8 = 22$
This works.
The three numbers are $3, 8$ and $14$.
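The same system can also be handed to a linear-algebra routine (a small Python sketch of my own, writing the three equations as $Av=b$):

```python
import numpy as np

# x + y = 11,  y + z = 17,  z + x = 22
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
b = np.array([11, 17, 22], dtype=float)

print(np.linalg.solve(A, b))   # [ 8.  3. 14.]  i.e. x = 8, y = 3, z = 14
```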
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443274140357971, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/190486-3-2-matrix-complex-elements.html
|
# Thread:
1. ## 3*2 Matrix with complex elements
Folks,
Given that the set of all 3*2 matrices with complex elements is a complex vector space, with the usual definitions of addition and scalr multiplication of matrices, what is its dimension?
Are the following subsets subspaces?
1) The set of 3*2 with real entries
2) The set of matrices with first row (0,0)
3) The set of matrices with first row (1,1)
I understand the dimension of a vector space is the number of vectros in any basis for the space, ie the number of coordinates necessary to specify any vector. How do I find out what its dimension is?
Thanks
2. ## Re: 3*2 Matrix with complex elements
the "naive" way is to ask yourself: how many complex numbers do i need to specify to identify my "vector"? for 3x2 matrices, that number is 6 (one for each entry).
however, mathematics professors being what they are, will usually insist you display a basis (a linearly independent spanning set). can you think of a set of six matrices,
each of which captures the idea of "a single matrix coordinate"? once you have done this, show linear independence and spanning.
again, with any vector space V, there are 3 things you need to show for any subset W:
1) W is non-empty (preferrably by showing the 0-element is a member. if the 0-element (0-vector) is not in W, you will not obtain a vector space).
2) if u,v are in W then their vector sum u+v must also be.
3) if c is any scalar in your underlying field, and u is in W, then cu must also be in W. be careful with this one. if you are working with a complex vector space, it is not sufficient to check this property for real scalars only.
attempt to show these properties (or give a counter-example) for each of the sets in your post. that's how it's done.
3. ## Re: 3*2 Matrix with complex elements
Originally Posted by Deveno
the "naive" way is to ask yourself: how many complex numbers do i need to specify to identify my "vector"? for 3x2 matrices, that number is 6 (one for each entry).
can you think of a set of six matrices, each of which captures the idea of "a single matrix coordinate"? once you have done this, show linear independence and spanning.
Sorry, I am still struggling to get a start...I dont know how to approach this? If its a 3*2 matrix then would it have 3 solutions in ${\mathbb{R}^3}$?
4. ## Re: 3*2 Matrix with complex elements
If I write a sample 3*2 matrix as
$\begin{bmatrix}2 & 5\\ 1 & 4 \\ 6 & 8 \end{bmatrix}$
This matrix contains 6 real entries, but I am unable to write the expression since I believe I need it to be a 3*3 or a 2*3 matrix. How would I proceed to show it's closed under vector addition and scalar multiplication?
Thanks
5. ## Re: 3*2 Matrix with complex elements
you're not being asked to "solve" a system of linear equations, you're being asked to display a basis for a vector space.
here is a similar problem:
find the dimension of the vector space of all 2x2 complex matrices.
i claim:
$\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin {bmatrix}0&1\\0&0\end{bmatrix}, \begin{bmatrix}0&0\\ 1&0 \end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix} \right\}$
is a basis for this vector space. clearly this is a spaning set, since:
$\begin{bmatrix}a&b\\c&d\end{bmatrix} = a\begin{bmatrix}1&0\\0&0\end{bmatrix} + b\begin{bmatrix}0&1\\0&0\end{bmatrix} + c\begin{bmatrix}0&0\\1&0\end{bmatrix} + d\begin{bmatrix}0&0\\0&1\end{bmatrix}$
now, to show linear independence, suppose that:
$\begin{bmatrix}0&0\\0&0\end{bmatrix} = c_1\begin{bmatrix}1&0\\0&0\end{bmatrix} + c_2\begin{bmatrix}0&1\\0&0\end{bmatrix} + c_3\begin{bmatrix}0&0\\1&0\end{bmatrix} + c_4\begin{bmatrix}0&0\\0&1\end{bmatrix}$
then:
$\begin{bmatrix}0&0\\0&0\end{bmatrix} = \begin{bmatrix}c_1&c_2\\c_3&c_4\end{bmatrix}$
so $c_1 = c_2 = c_3 = c_4 = 0$ so our set is linearly independent, and is thus a basis.
since our basis has 4 elements, the dimension of the vector space is 4.
6. ## Re: 3*2 Matrix with complex elements
Originally Posted by Deveno
you're not being asked to "solve" a system of linear equations, you're being asked to display a basis for a vector space.
here is a similar problem:
find the dimension of the vector space of all 2x2 complex matrices.
i claim:
$\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin {bmatrix}0&1\\0&0\end{bmatrix}, \begin{bmatrix}0&0\\ 1&0 \end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix} \right\}$
is a basis for this vector space. clearly this is a spaning set, since:
$\begin{bmatrix}a&b\\c&d\end{bmatrix} = a\begin{bmatrix}1&0\\0&0\end{bmatrix} + b\begin{bmatrix}0&1\\0&0\end{bmatrix} + c\begin{bmatrix}0&0\\1&0\end{bmatrix} + d\begin{bmatrix}0&0\\0&1\end{bmatrix}$
now, to show linear independence, suppose that:
$\begin{bmatrix}0&0\\0&0\end{bmatrix} = c_1\begin{bmatrix}1&0\\0&0\end{bmatrix} + c_2\begin{bmatrix}0&1\\0&0\end{bmatrix} + c_3\begin{bmatrix}0&0\\1&0\end{bmatrix} + c_4\begin{bmatrix}0&0\\0&1\end{bmatrix}$
then:
$\begin{bmatrix}0&0\\0&0\end{bmatrix} = \begin{bmatrix}c_1&c_2\\c_3&c_4\end{bmatrix}$
so $c_1 = c_2 = c_3 = c_4 = 0$ so our set is linearly independent, and is thus a basis.
since our basis has 4 elements, the dimension of the vector space is 4.
I can only write 4 of the required 6 matrices...
$\left[ \begin{array}{ccc} 1 & 0 \\ 0 & 1\\ 1 & 1\end{array},\begin{array}{ccc} 0 & 1\\ 1 & 0\\ 0 & 0\end{array},\begin{array}{ccc} 1 & 1 \\ 0 & 0\\ 0 & 1\end{array},\begin{array}{ccc} 0 & 0 \\ 1 &1\\ 1 & 0\end{array}, \right]$
There is a lack of symmetry for me to complete....
7. ## Re: 3*2 Matrix with complex elements
One of the things you need to clarify is whether you are thinking of this as a vector space over the real numbers or over the complex numbers. For example, with the set of pairs of complex numbers, with the usual addition and scalar multiplication, over the real numbers, we can take (a+ bi, c+ di)= a(1, 0)+ b(i, 0)+ c(0, 1)+ d(0, i) so it has dimension 4. The same set over the complex numbers has (a+ bi, c+di)= (a+ bi)(1, 0)+ (c+di)(0, 1) and so the dimension is 2.
8. ## Re: 3*2 Matrix with complex elements
Originally Posted by HallsofIvy
One of the things you need to clarify is whether you are thinking of this as a vector space over the real numbers or over the complex numbers. For example, with the set of pairs of complex numbers, with the usual addition and scalar multiplication, over the real numbers, we can take (a+ bi, c+ di)= a(1, 0)+ b(i, 0)+ c(0, 1)+ d(0, i) so it has dimension 4. The same set over the complex numbers has (a+ bi, c+di)= (a+ bi)(1, 0)+ (c+di)(0, 1) and so the dimension is 2.
1) What is the significance of saying 'vector space over the real numbers' or 'vector space over the complex numbers'? You have written two different ways (RHS) of expressing the same LHS. I don't understand the difference.
2) Is the above an example for a 2*2 matrix?
3) How does one do the above for a 3*2?
9. ## Re: 3*2 Matrix with complex elements
the "underlying field" of a vector space can make a difference when talking about dimension.
i don't understand why you came up with those particular 4 3x2 matrices. why not the 6 matrices who have one entry 1, the rest 0?
10. ## Re: 3*2 Matrix with complex elements
Originally Posted by Deveno
you're not being asked to "solve" a system of linear equations, you're being asked to display a basis for a vector space.
here is a similar problem:
find the dimension of the vector space of all 2x2 complex matrices.
i claim:
$\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin {bmatrix}0&1\\0&0\end{bmatrix}, \begin{bmatrix}0&0\\ 1&0 \end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix} \right\}$
is a basis for this vector space. clearly this is a spaning set, since:
$\begin{bmatrix}a&b\\c&d\end{bmatrix} = a\begin{bmatrix}1&0\\0&0\end{bmatrix} + b\begin{bmatrix}0&1\\0&0\end{bmatrix} + c\begin{bmatrix}0&0\\1&0\end{bmatrix} + d\begin{bmatrix}0&0\\0&1\end{bmatrix}$
now, to show linear independence, suppose that:
$\begin{bmatrix}0&0\\0&0\end{bmatrix} = c_1\begin{bmatrix}1&0\\0&0\end{bmatrix} + c_2\begin{bmatrix}0&1\\0&0\end{bmatrix} + c_3\begin{bmatrix}0&0\\1&0\end{bmatrix} + c_4\begin{bmatrix}0&0\\0&1\end{bmatrix}$
then:
$\begin{bmatrix}0&0\\0&0\end{bmatrix} = \begin{bmatrix}c_1&c_2\\c_3&c_4\end{bmatrix}$
so $c_1 = c_2 = c_3 = c_4 = 0$ so our set is linearly independent, and is thus a basis.
since our basis has 4 elements, the dimension of the vector space is 4.
$\left[\begin{array}{ccc} 0 & 0 \\ 0 & 0\\0 & 0\end{array}\right]=c_1\left[ \begin{array}{ccc} 1 & 0 \\ 0 & 0\\0 & 0\end{array}\right]+c_2\left[ \begin{array}{ccc} 0 & 1\\ 0 & 0\\ 0 & 0\end{array}\right]+c_3\left[ \begin{array}{ccc} 0 & 0 \\ 1 & 0\\ 0 & 0\end{array}\right]+c_4\left[ \begin{array}{ccc} 0 & 0 \\ 0 & 1\\ 0 & 0\end{array}\right]+c_5\left[ \begin{array}{ccc} 0 & 0 \\ 0 & 0\\ 1 & 0\end{array}\right]+c_6\left[ \begin{array}{ccc} 0 & 0 \\ 0 & 0\\ 0& 1\end{array}\right]$
implies $c_1=c_2=c_3=c_4=c_5=c_6=0$
This implies the set is linearly independent with a dimension of 6? What is next?
My attempt based on HallsofIvy post
3*2 matrix over the real numbers
$\left[\begin{array}{ccc} a+bi & c+di \\ e+fi & g+hi\\j+ki & l+mi\end{array}\right]=a(1,0)+b(i,0)+c(1,0)+d(i,0)........$ Hence 12 dimensions
3*2 over the complex numbers
$\left[\begin{array}{ccc} a+bi & c+di \\ e+fi & g+hi\\j+ki & l+mi\end{array}\right]=(a+bi)(1,0)+(c+di)(0,1)+........$ Hence 6 dimensions.........?
11. ## Re: 3*2 Matrix with complex elements
Every element of the matrix is an ordered pair. If the matrix has only one non-zero element, all matrices with only this one element are a linear combination of two matrices, one with (0,1) as the sole element, and the other with (1,0) as the sole element. So for each element in the matrix you need two matrices with (1,0) and (0,1) as the sole elements. You need a total of 12 matrices, two for each element. The dimension of the vector space is 12.
12. ## Re: 3*2 Matrix with complex elements
Originally Posted by Hartlw
Every element of the matrix is an ordered pair. If the matrix has only one non-zero element, all matrices with only this one element are a linear combination of two matrices, one with (0,1) as the sole element, and the other with (1,0) as the sole element. So for each element in the matrix you need two matrices with (1,0) and (0,1) as the sole elements. You need a total of 12 matrices, two for each element. The dimension of the vector space is 12.
For both the real case and the complex case? Is this not conflicting with post # 7
13. ## Re: 3*2 Matrix with complex elements
Originally Posted by bugatti79
For both the real case and the complex case? Is this not conflicting with post # 7
For the real case, every base matrix has a 1 in one element and every other element a 0, for a total of six, the dimension is six. For complex case, see my previous post.
EDIT: I think post seven is discussing a complex vector space consisting of complex vectors a + bi. Not the same thing as a complex vector space each element of which is a matrix. You can probably think of each matrix as the sum of a real component (a,0) at each location and an imaginary component (0,b) at each location. In any event, any complex matrix can be expressed as a linear combination of the 12 base matrices I described in my previous post.
EDIT: The matrices themselves satisfy requirements of linear vector space with the usual definition of matrix addition and multiplication by a scalar.
The base matrices are:
(0,1) (0,0)
(0,0) (0,0)
(0,0) (0,0)

(1,0) (0,0)
(0,0) (0,0)
(0,0) (0,0)

(0,0) (1,0)
(0,0) (0,0)
(0,0) (0,0)
..... for 12 base matrices (dim 12).
14. ## Re: 3*2 Matrix with complex elements
Originally Posted by Hartlw
I think post seven is discussing a complex vector space consisting of complex vectors a + bi.
So based on your post is this the answer to the following original question?
Given that the set of all 3*2 matrices with complex elements is a complex vector space, with the usual definitions of addition and scalar multiplication of matrices, what is its dimension?
3*2 matrix over the complex numbers
$\left[\begin{array}{ccc} a+bi & c+di \\ e+fi & g+hi\\j+ki & l+mi\end{array}\right]=a(1,0)+b(i,0)+c(1,0)+d(i,0)........$
15. ## Re: 3*2 Matrix with complex elements
No
If you denote the matrices in my previous post by E1, E2, E3 , E4, ...
Then an arbitrary complex matrix is expressed as a linear combination of E1, E2, E3,...,E12
(1,3) (5,3)
(3,5) (5,7) = 1xE1 + 3xE2 + 5xE3 + ..........
(3,3) (4,3)
Above is a matrix. The ordered pair (1,3) is just a standard shorthand way of expressing 1+3i.
A complex matrix has to be expressed as a sum of complex matrices. Not as a sum of complex numbers. The addition of two complex matrices is a complex matrix by term by term addition, added just like complex numbers.
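As a quick sanity check of the two counts discussed above, here is a minimal numpy sketch (numpy is not mentioned in the thread; this is just one way to verify the answer), treating each 3x2 complex matrix as a vector:

```python
import numpy as np

# Build the 12 real-basis matrices: E_rc (a single 1 in position (r, c)) and i*E_rc,
# for each of the six positions of a 3x2 matrix.
basis = []
for r in range(3):
    for c in range(2):
        E = np.zeros((3, 2), dtype=complex)
        E[r, c] = 1
        basis.extend([E, 1j * E])

# Over R: view each matrix as a vector in R^12 (real parts followed by imaginary parts).
real_vectors = np.array([np.concatenate([M.real.ravel(), M.imag.ravel()]) for M in basis])
print(np.linalg.matrix_rank(real_vectors))      # 12 -> dimension 12 over the real numbers

# Over C: the six matrices E_rc alone already span, so the complex rank is 6.
complex_vectors = np.array([M.ravel() for M in basis[::2]])
print(np.linalg.matrix_rank(complex_vectors))   # 6 -> dimension 6 over the complex numbers
```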
http://mathhelpforum.com/pre-calculus/150533-another-limit-question.html
# Thread:
1. ## Another limit question.
I got another question that was lim as x->3 (3x-8)/(4x-12). I didn't see if it was possible to factor anything so I found that the limit doesn't exist because the left hand limit doesn't equal the right hand limit. for this question I just wanted to know if you could have found another answer algebraically.
2. Originally Posted by darksoulzero
I got another question that was lim as x->3 (3x-8)/(4x-12). I didn't see if it was possible to factor anything so I found that the limit doesn't exist because the left hand limit doesn't equal the right hand limit. for this question I just wanted to know if you could have found another answer algebraically.
$\lim_{x \to 3}\frac{3x - 8}{4x - 12} = \lim_{x \to 3}\frac{3x - 8}{\frac{4}{3}(3x - 9)}$
$= \frac{3}{4}\lim_{x \to 3}\frac{3x - 8}{3x - 9}$
$= \frac{3}{4}\lim_{x \to 3}\frac{3x - 9 + 1}{3x - 9}$
$= \frac{3}{4}\lim_{x \to 3}\left(1 + \frac{1}{3x - 9}\right)$
$\to \infty$.
3. Originally Posted by Prove It
$\lim_{x \to 3}\frac{3x - 8}{4x - 12} = \lim_{x \to 3}\frac{3x - 8}{\frac{4}{3}(3x - 9)}$
$= \frac{3}{4}\lim_{x \to 3}\frac{3x - 8}{3x - 9}$
$= \frac{3}{4}\lim_{x \to 3}\frac{3x - 9 + 1}{3x - 9}$
$= \frac{3}{4}\lim_{x \to 3}\left(1 + \frac{1}{3x - 9}\right)$
$\to \infty$.
I'm not really sure but if the limit goes to infinity does it exist? I'm not sure what your answer means.
4. Originally Posted by Prove It
$\lim_{x \to 3}\frac{3x - 8}{4x - 12} = \lim_{x \to 3}\frac{3x - 8}{\frac{4}{3}(3x - 9)}$
$= \frac{3}{4}\lim_{x \to 3}\frac{3x - 8}{3x - 9}$
$= \frac{3}{4}\lim_{x \to 3}\frac{3x - 9 + 1}{3x - 9}$
$= \frac{3}{4}\lim_{x \to 3}\left(1 + \frac{1}{3x - 9}\right)$
$\to \infty$.
Since $\displaystyle {\lim_{x \to 3^+}\frac{3x - 8}{4x - 12} = + \infty}$ and $\displaystyle {\lim_{x \to 3^-}\frac{3x - 8}{4x - 12} = - \infty}$ the limit does not exist.
Note that $f(x) = 1 + \frac{1}{3x - 9}$ is a hyperbola. It has a vertical asymptote at x = 3 and the above limits are exactly what you see when you draw its graph.
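A quick numerical illustration of these one-sided limits (not a proof, just a Python sketch):

```python
# f(x) = (3x - 8)/(4x - 12) evaluated just to the left and right of x = 3.
f = lambda x: (3 * x - 8) / (4 * x - 12)

for h in (1e-2, 1e-4, 1e-6):
    print(f(3 - h), f(3 + h))
# The left-hand values head towards -infinity and the right-hand values towards
# +infinity, so the two-sided limit does not exist.
```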
5. I think I've made a mistake here...
If $x$ approaches $3$ from the left, then the denominator takes negative values, so the function approaches $-\infty$, while if $x$ approaches $3$ from the right, then the denominator takes positive values, so the function approaches $\infty$.
Therefore the limit does not exist.
6. Thanks for your help; I was able to find the answer by evaluating the left and right limit, but I wasn't sure how to do it algebraically.
7. "Trying" to set x= 3 and observing that you get " $\frac{1}{0}$" is sufficient to conclude that the limit does not exist. Of course, saying that the limit is $\infty$ is just another way of saying there is no limit.
8. Originally Posted by HallsofIvy
"Trying" to set x= 3 and observing that you get " $\frac{1}{0}$" is sufficient to conclude that the limit does not exist. Of course, saying that the limit is $\infty$ is just another way of saying there is no limit.
I would argue there's significant difference between something like $\displaystyle{ \lim_{x \to 1} \frac{1}{(x-1)^2} = + \infty}$ on the one hand and $\displaystyle{ \lim_{x \to 1} \frac{1}{x-1}}$ which does not exist (and in particular, is not equal to $\infty$) on the other hand.
9. Originally Posted by mr fantastic
I would argue there's significant difference between something like $\displaystyle{ \lim_{x \to 1} \frac{1}{(x-1)^2} = + \infty}$ on the one hand and $\displaystyle{ \lim_{x \to 1} \frac{1}{x-1}}$ which does not exist (and in particular, is not equal to $\infty$) on the other hand.
I agree, it depends on the context in which the symbol $\infty$ is being used.
10. Originally Posted by mr fantastic
I would argue there's significant difference between something like $\displaystyle{ \lim_{x \to 1} \frac{1}{(x-1)^2} = + \infty}$ on the one hand and $\displaystyle{ \lim_{x \to 1} \frac{1}{x-1}}$ which does not exist (and in particular, is not equal to $\infty$) on the other hand.
Yes, it is saying that the limit does not exist, for a specific reason and so is more precise than just saying "the limit does not exist". But it is still saying "the limit does not exist".
http://math.stackexchange.com/questions/76051/find-all-solutions-of-this-diophantine-equation-of-the-second-degree-in-three-va
# Find all solutions of this diophantine equation of the second degree in three variables
Consider the Diophantine equation $Q(x,y,z)=1$, where $Q(x,y,z)$ is the quadratic form $x^2+y^2-z^2$. Let $S \subseteq {\mathbb Z}^3$ denote the set of all solutions. It is rather easy to find several parametric solutions, but it seems harder to find a complete enumeration of all the solutions. Specifically, say that a subset $T$ of ${\mathbb Z}^3$ is a polynomial image when there is a $r>0$ and three polynomial maps $P,Q,R : {\mathbb Z}^r \to {\mathbb Z}^3$ such that $T$ is the image of $(P,Q,R)$. My question is : Can $S$ be written as a finite union of polynomial images?
The older stackexchange question Integral solutions of $x^2+y^2+1=z^2$ is about the same equation, but it does not answer the specific question above.
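For orientation, a brute-force Python sketch that lists the small elements of $S$ (purely illustrative; it says nothing about the polynomial-parametrisation question itself):

```python
# Brute-force listing of small solutions of x^2 + y^2 - z^2 = 1 (taking z >= 0,
# since (x, y, z) is a solution iff (x, y, -z) is).
N = 15
solutions = [(x, y, z)
             for x in range(-N, N + 1)
             for y in range(-N, N + 1)
             for z in range(N + 1)
             if x * x + y * y == z * z + 1]
print(len(solutions))
print([s for s in solutions if s[2] > 1][:8])   # a few "non-obvious" ones, e.g. (4, 7, 8)
```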
-
## 1 Answer
Let $\Gamma$ be the set of orthogonal unimodular matrices, that is $$\Gamma=M(n,\mathbb Z) \cap O(n).$$ This group acts by left multiplication on the set $S$ of solutions of your equation. Moreover, the set of $\Gamma$-orbits in $S$ is finite! That is, there is a finite set $F$ in $S$ such that: for any $X\in S$ there are $g\in\Gamma$ and $Y\in F$ such that $$X=gY.$$ I think this allows you to write $S$ as a finite union of polynomial images. Am I right?
-
http://math.stackexchange.com/questions/114045/prove-a-property-of-inner-product-spaces
# Prove a property of inner product spaces
Prove: if $u$ and $v$ are vectors in an inner product space and $c$ is a scalar, then $\langle u,cv\rangle =c\langle u,v\rangle$.
I am a little confused since in my textbook it shows 2 contradictory properties in two different theorems. In one it shows $c\langle u,v\rangle = \langle cu,v\rangle$ and here it wants the opposite. Please explain
-
what do you mean by " then =c" – zapkm Feb 27 '12 at 16:49
Welcome to MathSE. I see that you are relatively new here. So I wanted to let you know a few things about MathSE. We like to know where the problem is from and what you've tried on a problem; this prevents people from wasting their time telling you things you already know, and helps make sure the answers are at an appropriate level. If this is homework, please consider adding the [homework] tag; people will still help, so don't worry. – Arturo Magidin Feb 27 '12 at 16:52
sorry i don't know how to use latex and because of this it isn't appearing. Does anyone know of a good site to teach me latex? – Sarah Feb 27 '12 at 16:53
Thanks @ArturoMagidin – Sarah Feb 27 '12 at 16:56
## 1 Answer
The two properties are not "contradictory", they are complementary. Both of them are true.
(To say that they are contradictory would be like saying that "$30 = 2\times 15$" is contradictory with "$30 = 3\times 10$". They aren't contradictory, they can both hold at the same time).
For inner products over the real numbers, both equalities hold: $$\langle c\mathbf{u},\mathbf{v}\rangle = c\langle\mathbf{u},\mathbf{v}\rangle = \langle\mathbf{u},c\mathbf{v}\rangle$$ for all vectors $\mathbf{u}$ and $\mathbf{v}$ and all scalars $c$.
In order to prove it, however, one needs to know exactly what properties of the inner product you are assuming. I'm guessing that they are the following:
1. $\langle \mathbf{u},\mathbf{u}\rangle\geq 0$ for all $\mathbf{u}$; $\langle \mathbf{u},\mathbf{u}\rangle = 0$ if and only if $\mathbf{u}=\mathbf{0}$;
2. $\langle \mathbf{u}+\mathbf{w},\mathbf{v}\rangle = \langle \mathbf{u},\mathbf{v}\rangle + \langle\mathbf{w},\mathbf{v}\rangle$ for all $\mathbf{u},\mathbf{v},\mathbf{w}$.
3. $c\langle \mathbf{u},\mathbf{v}\rangle = \langle c\mathbf{u},\mathbf{v}\rangle$ for all $\mathbf{u}, \mathbf{v}$ and all $c$.
4. $\langle\mathbf{u},\mathbf{v}\rangle = \langle\mathbf{v},\mathbf{u}\rangle$ for all $\mathbf{u},\mathbf{v}$.
If this is the case, start with $\langle \mathbf{u},c\mathbf{v}\rangle$, and then use 4, 3, and 4 again to get the desired result.
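Spelled out (for a real inner product space, using only properties 3 and 4 above), that chain is
$$\langle \mathbf{u},c\mathbf{v}\rangle \overset{(4)}{=} \langle c\mathbf{v},\mathbf{u}\rangle \overset{(3)}{=} c\,\langle \mathbf{v},\mathbf{u}\rangle \overset{(4)}{=} c\,\langle \mathbf{u},\mathbf{v}\rangle .$$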
-
Thanks @Arturo Magidin – Sarah Feb 27 '12 at 17:03
http://mathoverflow.net/questions/62925/philosophical-question-related-to-largest-known-primes/63059
## Philosophical Question related to Largest Known Primes
The other day while discussing math, and primes specifically, the following question came to mind, and I figured I'd ask it here to see what people's opinions on it might be.
Main Question: Suppose that tomorrow someone proves that some function always generates (concrete) primes for any input. How should this affect lists such as the Largest Known Primes?
Let me give a little more detail to demonstrate why I feel this question is not entirely trivial or fanciful.
Firstly, the requirement that the function be able to concretely generate primes is meant to avoid 'stupid' examples such as Nextprime(n) which, given a 'largest known prime' P, yields a larger prime Nextprime(P). Note however that the definition of Nextprime does not actually explicitly state what this prime is, any implementation of it (in Maple or Mathematica for example) simply loops through the integers bigger than the input, testing each for primality in some fashion.
On the other hand, one candidate for such a function might be the Catalan sequence defined by:
$C(0) = 2$, $C(n+1) = 2^{C(n)}-1$
Although $C(5) = 2^{170141183460469231731687303715884105727}-1$ is far too large to test by current methods (with roughly $10^{30}$ times as many digits as the current largest known prime), and although the current consensus is that $C(5)$ is likely composite, it does not seem entirely out of the realm of possibility that someone might eventually find some very clever way of showing $C(5)$ is prime, or even that $C(n)$ is always prime, or perhaps some other concretely defined sequence.
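For concreteness, a small sympy sketch (not part of the original question) that generates $C(0),\dots,C(4)$, confirms each is prime, and estimates how many digits $C(5)$ would have:

```python
from math import log10
from sympy import isprime

# Catalan-Mersenne sequence: C(0) = 2, C(n+1) = 2^C(n) - 1.
terms = [2]
for _ in range(4):
    terms.append(2 ** terms[-1] - 1)          # C(1)..C(4); do NOT take one more step

for n, c in enumerate(terms):
    print(n, c, isprime(c))                   # all five print True

# C(5) = 2^C(4) - 1 cannot be written out, but its number of decimal digits can:
print(int(terms[-1] * log10(2)) + 1)          # about 5.1 * 10^37 digits
```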
The point is this: once one knows that every element of a sequence is prime, does this entirely negate things like the list of largest known primes? Or does the fact that $C(n)$ for $n\geq 5$ has too many digits to ever calculate all of them (instead only being able to calculate the first few or last few digits) mean that even if they were somehow proven prime it would not technically be 'known'?
Note also that in the realm of finite simple groups the analogous question is already tough to decide since there are infinite families of such groups known, but concrete descriptions (such as generators and relations or character tables) are not always available or even computable within reasonable time constraints. Likewise one could pose analogous questions in other branches (largest volume manifolds with certain constraints, etc.)
Anyhow, it seems like a reasonable question for serious mathematicians to consider, so I just want to hear what other's opinions are on the subject (and if anyone can think of a better title, feel free to suggest).
-
4
In the course of the proof of Hilbert's 10th, there is a multivariate polynomial such that all positive values it takes at integers must be primes, which is not quite what you ask about but pretty close. Anyway, such polynomials are known explicitly, but as far as I can tell (disclaimer: I'm not a specialist!), it has not had any practical impact. – Thierry Zell Apr 25 2011 at 14:18
2
As I recall, although such polynomials are known explicitly, it is notoriously hard to find inputs which make them positive, and even then the resulting primes so far are small, making the polynomial essentially useless for discovering large primes. On the other hand, if one discovered ways of generating sets of inputs that gave really large primes, that would give an example of such a function. – ARupinski Apr 25 2011 at 14:21
3
What does it mean to "negate things like the list of largest known primes"? – Mariano Suárez-Alvarez Apr 25 2011 at 14:31
2
This doesn't count (it's a cleverly disguised sieve), but it's pretty cool. youtube.com/… – Tony Huynh Apr 25 2011 at 15:00
11
You haven't yet given a coherent definition of "concrete". The function which maps a positive integer $n$ to the $n$th prime number $p_n$ is about as concrete as any function I know (e.g. even I can write a computer program which performs this function). In the absence of this, I don't see a math question here, or a math-philosophy question either. – Pete L. Clark Apr 25 2011 at 20:13
## 6 Answers
First of all, I don't think the idea that "knowing a prime requires knowing its decimal expansion" accords well with mathematical practice. Unless I'm mistaken, the largest known primes are all Mersenne primes, and (for good reason!) are almost always written in the form p = 2^k - 1, not by their decimal expansions. Granted, the currently-known Mersennes are small enough that one could calculate their decimal expansions in Maple or Mathematica, if for some reason one wanted to. But even if that weren't the case (say, if k had 10,000 digits), I'd still be perfectly happy to describe p = 2^k - 1 as a "known prime," provided someone knew both k and a proof that p was prime.
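For what it's worth, the standard way such a proof is produced for Mersenne numbers is the Lucas–Lehmer test (not discussed in the thread); a minimal Python version, valid for odd prime exponents k:

```python
def lucas_lehmer(k):
    """Deterministic primality test for M = 2^k - 1, valid for odd prime exponents k."""
    M = (1 << k) - 1
    s = 4
    for _ in range(k - 2):
        s = (s * s - 2) % M
    return s == 0

# A few small odd prime exponents; only those giving Mersenne primes survive.
print([k for k in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 61, 89, 107, 127)
       if lucas_lehmer(k)])
# -> [3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
#    (11, 23 and 29 drop out because 2^11 - 1, 2^23 - 1 and 2^29 - 1 are composite)
```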
On the other hand, similar to what you suggested with your "NextPrime" function, what about
p := the 10^(10^10000)th prime number?
Certainly p exists, and one can even write a program to output it. But is p therefore "known"? Saying so seems to stretch the meaning of the word "known" beyond recognition.
Trying to arrive at some principled criterion that separates the two examples above, here's the best that I came up with:
An n-digit prime number p is "known" if there's a known algorithm to output the digits of p that runs in poly(n) time (together with a proof that the algorithm does indeed output a prime number and halt in poly(n) steps).
(Strictly speaking, the above definition covers "known-ness" for infinite families of primes, rather than individual primes -- since once you fix p, you can always output it in O(1) time. But this is a standard caveat.)
As far as I can see, the above definition correctly captures the intuition that a prime p is "known" if we know a closed-form formula for p (which can be evaluated in polynomial time), but not if we merely know a non-constructive definition of p (for which it takes exponential time to determine which p we're talking about).
A very interesting test case for my definition is
p := the first prime larger than 10^(10^10000).
According to my definition, the above prime is currently "unknown", but will become "known" if someone proves the conjecture that the spacing between two consecutive n-digit primes never exceeds q(n) for some fixed polynomial q.
If you accept my definition, then a "function that always generates primes" almost certainly would trivialize largest-prime contests, since presumably it would give a deterministic way to generate n-digit primes in n^O(1) time, for n as large as you like (which is not something that we currently have).
Now, maybe there are cases where my definition fails to match up with "intuitive knowability" -- if so, I look forward to seeing counterexamples!
-
2
What about f(x) = the smallest prime greater than lg x? Sieving the numbers between lg x and 2 lg x takes polynomial time, but I wouldn't say that we 'know' f(2^(10^(10^10000))). – Charles Apr 26 2011 at 18:51
Scott, if we accept your definition then we have a "function that generates primes" both under standard ultra difficult number theory conjectures (Cramer's conjecture) but also under strong derandomization conjectures from CC. I do not see why it trivializes largest primes contests. – Gil Kalai Apr 26 2011 at 18:54
In any case, the type of formula the OP asked about seems stronger than just "computable by a polynomial algorithm". Although CC may be applied to give a definition. In the hypothetical example in the question you get an n digit prime by log*(n) arithmetic operations (exponentiation included). This looks like a rather low complexity... – Gil Kalai Apr 26 2011 at 18:58
1
@Charles: I don't know if he just fixed this or not, but in the way it is currently written the running time is a function of the number of digits of $p$, not the digits of the input to the generator so your example is incorrect. – Mikola Apr 26 2011 at 20:23
1
Certainly complexity theoretic definitions of that kind are nice and quite useful. (BTW, as far as I am concerned to know something with probability very close to 1 is "really" to know.) The opinion that once you show that a certain human activity represents an effort in P means that it is "trivialized" is a bit too far since it is quite plausible that all human activities represent efforts in P. – Gil Kalai Apr 27 2011 at 6:11
I don't think in principle a sharp line can be drawn between "known" primes such as $2^{43112609}-1$ (the current record), and on the other hand "unknown" but well-defined ones like the first prime larger than a given number.
As has already been pointed out, the current record prime is still small enough to count as known according to the naive idea that we know its decimal representation. It has a few million digits, and finding them requires little extra computer time compared to testing primality in the first place.
If this situation changes due to new methods for primality testing of huge numbers (say whose digits cannot even be stored in all the world's computer memory), I guess we will fall back on an intuitive notion of when a number is known.
It will be difficult to come up with a sharp definition of known-ness in terms of computational complexity, since one can find an $n$ digit prime in polynomial time (albeit not provably deterministically) and naturally the record primes will be in the range where the degree (or even the constant) of the polynomial is crucial.
Speculating about future development of number theory and primality testing, I think the consensus is that if $C(5)$ is proved prime tomorrow, it will go to the top of the list, whereas if someone establishes that $C(n)$ is prime for every $n$, the conclusion will be that infinitely many primes are known explicitly.
Another possibility is improved primality testing of general numbers. If arbitrary numbers could be tested for primality as efficiently as Mersenne numbers, then possibly the new world record primes would consist of fancy patterns of digits, spiced with secret encodings of geek humor.
The situation is in principle different for the record twin primes. The current record is $65516468355\cdot 2^{333333}\pm 1$, and as far as I understand, it is not rigorously known that there are any larger twin primes at all. If someone proved with some sort of sieve that an interval of larger numbers must contain a twin prime pair, then I guess the list of record twin primes would split into "explicitly" and "theoretically" known pairs.
-
I think you are mis-understanding the purpose of the "largest prime" list. There are areas of mathematics in which we genuinely do not know how to generate or catalogue the objects of certain types. Prime numbers do not fall into this category.
Rather, these lists basically exist as a benchmark for number-theory computer algorithms. For example, if you develop a new supercomputer, you can prove its prowess by testing primality of some bigger numbers not on the list.
-
Having contributed to the Largest Prime Lists in the past, I agree that much of the interest in it derives from testing faster algorithms and architectures. In light of this, it seems that if new ways of generating large primes are discovered then I am curious what others think about to what extent topics such as the Largest Known Primes lists might need to be suitably modified; i.e. would one need to distinguish whether a prime was 'discovered' in order to test the speed of a computer, or as the output of some deterministic function, or for some other reason. – ARupinski Apr 27 2011 at 0:05
I think that the discovery of a prime-generating algorithm would simply split the study up into a "list of largest known primes not generated by algorithm X" and "primes generated by algorithm X", much like the classification of finite simple groups.
Following this to its logical conclusion, even if we provably knew every algorithm generating $n$-digit primes in poly($n$)-time, we'd still be left with the question of which primes weren't generated by one of these functions, giving a concept of 'sporadic' prime, and it would then be these that were of interest.
-
I've always understood "the largest known primes" to colloquially refer to primes which (i) have decimal expansions which can be written down in a short amount of time and (ii) have primality proofs which have been double/triple checked.
I imagine that if the sequence you suggest were shown to always be prime, people would stop talking about the largest known primes. They might continue to search for ways to find lots of primes of the same large size, and improve the bound on that size.
-
Edit: The first version of this answer proposed a criterion based on Kolmogorov complexity, but as Scott Aaronson commented, this is not very restrictive since primes all have rather low complexity: primality not being very hard to design an inefficient test for, the n'th prime $p_n$ has complexity at most $\log n \approx \log\log p_n$. It appears that what I had in mind was more or less what he wrote in his answer.
However, it seems to me that the issue of complexity is part of what makes the question interesting: extremely large numbers should be expected to have extremely large complexity, and it is the disparity between the low complexity of some short description like C(5) and the high complexity of the digit string of C(5) itself which makes the description seem not to be concrete. More generally, one may doubt the knowability of extremely large numbers on the grounds that they cannot be written explicitly, as stated in the question.
So contrary to the claim that C(5) is not a concrete description of a number, I think that due to its low complexity it is much more concrete than its size would suggest, and its base-2 digits are easily listed; most numbers of its magnitude are wholly abstract and their digits in any base will never be known. In fact, I am not entirely sure that a prime-enumerating algorithm which computes C(5) in time polynomial in C(4) (its number of binary digits), as Scott suggests, is actually especially concrete, unless it takes an input significantly smaller than C(5) (note that C(5) is about the C(5)'th prime).
That is, a computation may be efficient without being concrete. In the spirit of Alastair Litterick's answer, I'd like to suggest that
An algorithm for listing (some infinite family of) primes which is efficient in the sense of Scott's answer is also "concrete" if the length of its output is superpolynomial in the length of its input.
More generally, I suppose it would make sense to quantify just how much larger the output is, for the purposes of probing very distant primes.
Since the question declares itself to be philosophical, I also have some philosophical thoughts on its meaning. These don't address the question in the same way as above.
I think that the word "known" should be taken with a grain of salt even in the case of the current record largest prime, even if it were expressed in base-10 digits. Looking for record-size primes is an activity outside of mathematics, just like astronomers' search for extrasolar planets is outside of geology, though if we ever went to one, we could study it geologically. With a proof in hand that there are infinitely many primes, finding a particular one is useful only if we require specific numbers for some task, like cryptography, whose execution is not even a matter of computer science once it is implemented. This is not to say that the construction of a prime search is not a matter of both mathematics and computer science: for example, testing Mersenne primes is a strategy drawn from mathematics, since it is not known that there are infinitely many of those, and doing the testing efficiently is computer science. However, successful execution of the search is neither.
In contrast, knowing a prime, or anything, requires being able to answer questions about it; better, the questions should not have known general answers. For example, "is the last digit 3?" is a fine question, but that just asks for the residue modulo 10, and Dirichlet's theorem already describes the answer to that question statistically. One might be curious about the Chebyshev bias (which residue class has the most primes up to a certain size) but that can't be settled one way or another by looking at individual examples. On the other hand, even 2 is not fully understood as a prime, since we can't say modulo which primes it is a primitive root (implicitly, for example "the ones which are 5 mod 7"). This, like Mersenne primes, is another list not known to be infinite.
Aside from conjectures about individual primes that can be tested on specific numbers, there are statistical conjectures, similar in nature to Dirichlet's theorem, which are not settled and also can't be settled by a sparse prime search. For example, one might want to know whether a particular prime $p_n$ begins a maximal prime gap (larger than any preceding gap), for which the only possible computation is of an exhaustive list of primes up to and including $p_{n + 1}$.
Suppose, though, that we had an algorithm to generate a list of all primes, in order, giving all n-digit primes in provably polynomial time. We could still not verify the Riemann hypothesis in the form that $$\lvert \pi(x) - \operatorname{Li}(x) \rvert = O(x^{1/2} \log x)$$ unless we had a much deeper understanding of the behavior of that algorithm. Not knowing this, it would be unreasonable to say that the prime numbers are "known" as a set. And I don't think it's too high a standard to say that if all primes are "known", then the prime numbers are known as a whole.
In short, I don't think that any mere list of primes, finite or infinite, can constitute knowledge of its elements.
-
Ryan, your definition doesn't seem to do what you want it to. In particular, the 10^10^10000th prime is certainly "known" according to your definition, since the description that I just gave AND the decimal expansion of the number BOTH have minuscule Kolmogorov complexities! One can easily construct a small Turing machine that outputs the decimal expansion of the 10^10^10000th prime. The fact that that Turing machine will take an astronomical amount of time is irrelevant for Kolmogorov-complexity purposes! I focused on polynomial time precisely in order to rule such things out. – Scott Aaronson Apr 26 2011 at 17:50
1
You're right, I confused the kinds of complexity. After some thought, it seems that what have in mind comes out much the same as what you said: a description (Turing machine) is cheating if it takes much longer than the size of the number it computes. – Ryan Reich Apr 26 2011 at 18:26
http://mathoverflow.net/questions/95729?sort=votes
## What are supersingular varieties?
For varieties over a field of characteristic $p$, I saw people talking about supersingular varieties.
I wanted to ask "why are supersingular varieties interesting". However, as I don't want to ask an MO question with no background/context I thought I'd better define what a supersingular variety is. Unfortunately I can't. (And search engine doesn't help...) Can anyone here help me with this and explain why they are interesting?
A little bit more context: I have seen the definition of supersingular elliptic curves on textbooks by Hartshorne and Silverman. When I read about abelian varieties I saw "for abelian varieties of $\text{dim}>2$ being supersingular $\neq p$ rank being 0".
Illusie mentioned (in the "Motives" volume) that for $X$ a variety over a perfect field of characteristic $p$, $X$ is said to be ordinary if "$H_\text{cris}^*(X/W(k))$ has no torsion and $\text{Newt}_m(X)=\text{Hdg}_m(X)$ for all $m$". My guess is being supersingular should correspond to the other extreme, but what precisely is it? ($\text{Newt}_m(X)$ being a straight line for all $m$?)
-
5
I am not sure if there is a general definition. Your guess that it should mean that $\mathrm{Newt}_m(X)$ should be a straight line for all $m$ is right for abelian varieties and also K3 surfaces. But note that with this definition $\mathbb{P}^n$ would be supersingular, which sounds a bit odd since it is also ordinary (according to Illusie's definition that you mention). – ulrich May 2 2012 at 9:24
I have only seen the word "supersingular" attached to abelian varieties and K3 surfaces. This suggests that a condition like triviality of the canonical bundle makes the homological condition interesting, but I don't know why. – S. Carnahan♦ May 2 2012 at 9:56
The standard definition for Calabi-Yau varieties is that the height of the Artin-Mazur formal group is infinite, i.e. is $\hat{\mathbb{G}_a}$. Ordinary Calabi-Yau varieties have height 1, so it could be seen as the other extreme. – Matt May 3 2012 at 19:08
## 3 Answers
Not an answer, but some historical context (which I think is correct). An elliptic curve over $\mathbb{C}$ used to be called "singular" if its endomorphism ring was larger than $\mathbb{Z}$, i.e., what we now call having complex multiplication. Presumably this use of the word singular was to indicate that the curve was unusual. Then, when people looked at elliptic curves over finite fields, the found that some of them had endomorphism rings that were even larger than an order in a quadratic imaginary field, so those curves were "supersingular" in the sense of being even more unusual. Of course, it turns out that an alternative way to characterize those curves is as having no $p$-torsion (over the algebraic closure of their base field).
-
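In the elliptic-curve case the "no $p$-torsion" criterion is easy to test numerically: for $p\ge 5$, $E/\mathbb{F}_p$ is supersingular exactly when $\#E(\mathbb{F}_p)=p+1$ (trace of Frobenius zero). A small Python sketch (not from the thread) for the classical example $y^2=x^3+1$, which is supersingular whenever $p\equiv 2 \pmod 3$:

```python
def count_points(p):
    # Naive point count of y^2 = x^3 + 1 over F_p (affine points plus the point at infinity).
    square_roots = [0] * p
    for y in range(p):
        square_roots[(y * y) % p] += 1
    total = 1                                   # the point at infinity
    for x in range(p):
        total += square_roots[(x ** 3 + 1) % p]
    return total

for p in (5, 11, 17, 23, 29):                   # primes congruent to 2 mod 3
    print(p, count_points(p), p + 1)            # the last two columns agree
```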
This doesn't really answer your question. But I think you might find these comments interesting (even though I might not be saying anything you don't know).
By a theorem of Mazur-Ogus (Katz' conjecture) the $m$-dimensional Newton polygon of a variety lies above or is equal to its $m$-dimensional Hodge polygon.
A variety is ordinary if these polygons are equal (for all $m$).
For abelian varieties the $m=1$ case suffices and you see that an abelian variety is ordinary iff it is ordinary in the usual sense.
By a theorem of Grothendieck-Katz most varieties are ordinary. This is stated more precisely also in Illusie's paper you mention.
You should take a look at Mazur's beautiful paper on Katz' conjecture.
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.bams/1183533965
Let me also talk about another nice subject which has to do with Newton polygons and supersingular varieties. Namely, "constructing" varieties with given Newton polygon.
If you stick to the case of curves there are many open questions (to my knowledge). For example, Mazur asks in loc. cit. (page 659) if all five different possible Newton polygons arising from a smooth projective curve of genus $3$ allowed by the constraint of Poincaré duality really arise from some curve or not. I don't know if this question has been answered by now (and let me add that it might actually be answered by now).
I can't really tell you anything else on supersingular varieties. It does seem to be a nice sport to look at "strata" in the moduli spaces of abelian varieties (=Shimura varieties). For example, every "symmetric" Newton polygon arises from an abelian variety (and the Newton polygon of an abelian variety is symmetric). See http://arxiv.org/abs/math/0007201 for even more beautiful statements.
For "strata" of Shimura varieties see
http://arxiv.org/abs/1011.3230 (Wedhorn-Viehmann) http://arxiv.org/abs/1111.6830 (Kret)
These have to do with showing existence of abelian varieties with certain Newton polygons.
So the moral of the story is that it's already pretty difficult to prove that certain polygons arise from geometric objects, e.g., the case of the genus $3$ curves and the Shimura variety business.
-
For a surface $S$, supersingular means that the étale cohomology group $H^{2}(S,\mathbb{Q}_\ell)$ ($\ell$ a prime, prime to the characteristic $p$) is generated by divisors on $S$ (thus the Picard number equals to the second Betti number). Supersingularity is useful if one wishes to compute the Zeta function of $S$.
Shioda worked on supersingular surfaces, see e.g. "An Example of Unirational Surfaces in Characteristic $p$" or "On Unirationality of Supersingular Surfaces". Supersingularity is a necessary condition for a surface to be unirational (but it is not sufficient). He gives examples of Fermat surfaces of degree $>4$ that are unirational.
-
http://nrich.maths.org/6583
# Scientific Curves
##### Stage: 5 Challenge Level:
Curve sketching is an essential art in the application of mathematics to science. A good sketch of a curve does not need to be accurately plotted to scale, but will encode all of the key information about the curve: turning points, maximum or minimum values, asymptotes, roots and a sense of the scale of the function.
Sketch $V(r)$ against $r$ for each of these tricky curves, treating $a, b$ and $c$ as unknown constants in each case. As you make your plots, ask yourself: do different shapes of curve emerge for different ranges of the constants, or will the graphs look similar (i.e. same numbers of turning points, regions etc.) for the various choices?
1. An approximation for the potential energy of a system of two atoms separated by a distance $r$
$$V(r) = a\left[\left(\frac{b}{r}\right)^{12}-\left(\frac{b}{r}\right)^6\right]$$
2. A radial probability density function for an electron orbit
$$V(r) = ar^2e^{-\frac{r}{b}}$$
3. Potential energy for the vibrational modes of ammonium
$$V(r)=ar^2+be^{-cr^2}$$
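One way to check a hand sketch of the first curve is to plot it for illustrative constant values (here a = b = 1; the problem keeps them general):

```python
import numpy as np
import matplotlib.pyplot as plt

# Lennard-Jones-type potential V(r) = a[(b/r)^12 - (b/r)^6] with a = b = 1.
r = np.linspace(0.9, 3.0, 400)
V = (1.0 / r) ** 12 - (1.0 / r) ** 6

plt.plot(r, V)
plt.axhline(0, linewidth=0.5)
plt.xlabel("r")
plt.ylabel("V(r)")
plt.show()
# Expect a steep positive wall for small r, a single minimum of depth a/4 at
# r = 2**(1/6) * b, and V(r) -> 0 from below as r -> infinity.
```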
http://mathhelpforum.com/algebra/18613-quadratic-formula-problem.html
# Thread:
1. ## quadratic formula problem
Hi, I'm actually in a college precal2 class, but I think this is an algebra issue...
I'm sure I'm just being an idiot and doing something completely stupid here, as I haven't taken a math class in over a year and have forgotten everything, but this is my problem:
When working on factoring something for a rational function problem, I can't seem to get the quadratic formula to work out properly.
ex. f(x)= -x^2 - x + 6
y= ( 1 +or- sqrt( (-1)^2 - 4(-1)(6) ) ) / 2(-1)
solving this gets me y= -3 or 2
which means the factors are (x+3)(x-2), right? But then multiplying those back together gets x^2 + x - 6, which is the same as the start, but with all the signs flipped... am I supposed to be putting a negative out front, like -(x+3)(x-2)? I'm really confused... does this also mean that if the original function was something like 2x^2 - x +6, I'd need to put a 2 somewhere? I don't get it, and can't remember what I'm supposed to do...
I'm sure this is just a stupid simple mistake, but I'd really like some help, I'm just frustrating myself at the moment.... Thanks!
2. Originally Posted by echidnajess
Hi, I'm actually in a college precal2 class, but I think this is an algebra issue...
I'm sure I'm just being an idiot and doing something completely stupid here, as I haven't taken a math class in over a year and have forgotten everything, but this is my problem:
When working on factoring something for a rational function problem, I can't seem to get the quadratic formula to work out properly.
ex. f(x)= -x^2 - x + 6
y= ( 1 +or- sqrt( (-1)^2 - 4(-1)(6) ) ) / 2(-1)
solving this gets me y= -3 or 2
which means the factors are (x+3)(x-2), right? But then multiplying those back together gets x^2 + x - 6, which is the same as the start, but with all the signs flipped... am I supposed to be putting a negative out front, like -(x+3)(x-2)? I'm really confused... does this also mean that if the original function was something like 2x^2 - x +6, I'd need to put a 2 somewhere? I don't get it, and can't remember what I'm supposed to do...
I'm sure this is just a stupid simple mistake, but I'd really like some help, I'm just frustrating myself at the moment.... Thanks!
$<br /> f(x)= a~x^2 + b~ x + c<br />$
quadratic formula:
$<br /> x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}<br />$
In your case $a=-1,\ b=-1,\ c=6$, so the roots are:
$<br /> x=\frac{1 \pm \sqrt{(-1)^2-4(-1)6}}{2(-1)}= \frac{1 \pm \sqrt{1+24}}{-2}=-3, 2<br />$
Now if we form $(x+3)(x-2)$ the coefficient of $x^2$ is $1$ so to get the original form back we always have to multiply through by the coefficient of $x^2$. In this case to get:
$<br /> f(x)= -x^2 - x + 6=-(x+3)(x-2)<br />$
RonL
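If you want to check this with a computer algebra system, a short sympy sketch (sympy isn't mentioned in the thread) does the same bookkeeping:

```python
from sympy import symbols, solve, factor

x = symbols('x')
f = -x**2 - x + 6

print(solve(f, x))   # the roots -3 and 2, as found with the quadratic formula
print(factor(f))     # -(x - 2)*(x + 3): the leading coefficient -1 appears out front
```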
3. Originally Posted by echidnajess
Hi, I'm actually in a college precal2 class, but I think this is an algebra issue...
I'm sure I'm just being an idiot and doing something completely stupid here, as I haven't taken a math class in over a year and have forgotten everything, but this is my problem:
When working on factoring something for a rational function problem, I can't seem to get the quadratic formula to work out properly.
ex. f(x)= -x^2 - x + 6
y= ( 1 +or- sqrt( (-1)^2 - 4(-1)(6) ) ) / 2(-1)
solving this gets me y= -3 or 2
which means the factors are (x+3)(x-2), right? But then multiplying those back together gets x^2 + x - 6, which is the same as the start, but with all the signs flipped... am I supposed to be putting a negative out front, like -(x+3)(x-2)? I'm really confused... does this also mean that if the original function was something like 2x^2 - x +6, I'd need to put a 2 somewhere? I don't get it, and can't remember what I'm supposed to do...
I'm sure this is just a stupid simple mistake, but I'd really like some help, I'm just frustrating myself at the moment.... Thanks!
f(x) = -x^2 -x +6
is the function of x only.
Its graph is a parabola that opens downward.
That's all.
If you want to get the factors of f(x) = -x^2 -x +6, you cannot.
But you can get its zeroes.
Meaning, the values of x when f(x) = 0.
When f(x) = 0, you can now factor that.
So,
0 = -x^2 -x +6
That's when you can use the Quadrtatic Formula to find the x's when f(x) = 0.
That's when you find the factors too.
Now,
0 = -x^2 -x +6 ------------(1)
Transpose all of those in the righthand side of the equation to the lefthand side, and transpose the zero to the righthand side,
x^2 +x -6 = 0 ------------(2)
Whoa, (1) became (2)?
So, (2) = (1) ?
Yes, it is.
Again,
0 = -x^2 -x +6
Divide both sides by -1,
0 = x^2 +x -6
Eh?
2x^2 +6x +34 = 0 -------------(3)
Divide both sides by 2,
x^2 +3x +17 = 0 -------------(4)
Are (3) and (4) equal?
Try graphing them.
------------------------------------------
Edit:
If you want to get the factors of f(x) = -x^2 -x +6, you cannot.
I'd be crucified for that blasphemy.
4. Originally Posted by ticbol
------------------------------------------
Edit:
If you want to get the factors of f(x) = -x^2 -x +6, you cannot.
I'd be crucified for that blasphemy.
f(x) = -x^2 -x +6 = (-x-3)(x-2)
RonL
http://cs.stackexchange.com/questions/tagged/pseudo-random-generators
Tagged Questions
Questions about algorithms that deterministically generate sequences of numbers that have stochastic properties of random sequences.
2 answers · 44 views
Random generator considerations in the design of randomized algorithms
It is well known that the efficiency of randomized algorithms (at least those in BPP and RP) depends on the quality of the random generator used. Perfect random sources are unavailable in practice. ...
0 answers · 69 views
Will the Mersenne Twister PRNG eventually produce all integer sequences of a certain length?
I'm attempting to use the MT19937 variant of the Mersenne Twister PRNG to accomplish something. Whether or not this something is feasible depends upon the answers to these two questions: What is the ...
0 answers · 28 views
Mersenne twister middle word
In some literature, as well as in Wikipedia, the middle term parameter m of Mersenne twister is called "number of parallel sequences". Why? What is meant here by "parallel sequences"?
2 answers · 58 views
Rigorous proof against pseudorandom generator
I am trying to teach myself the principles of cryptography, and want to solve the following question: Let G be the algorithm that takes an input x = (x1, . . . , xn) from {0, 1}^n (so each xi ∈ ...
1 answer · 65 views
LFSR sequence computation
I need to calculate the output of the sequence generated by this shift register but I cannot find anywhere how to do it. Everywhere the results are just given but there is no explanation how to do ...
1 answer · 127 views
Choosing taps for Linear Feedback Shift Register
I am confused about how taps are chosen for Linear Feedback Shift Registers. I have a diagram which shows a LFSR with connection polynomial $C(X) = X^5 + X^2 + 1$. The five stages are labelled: \$R4, ...
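The two LFSR questions above are truncated here, but the basic computation they ask about can be illustrated with a generic Fibonacci-style LFSR (conventions for stage numbering and output direction vary between textbooks; this is just one common choice), using the taps {5, 2} coming from the connection polynomial $x^5 + x^2 + 1$:

```python
def lfsr(seed, taps, steps):
    # Fibonacci LFSR: the new bit is the XOR of the bits 'tap' positions back,
    # i.e. s[j] = s[j-2] XOR s[j-5] for taps (5, 2); the oldest bit is the output.
    state = list(seed)
    n = len(state)
    out = []
    for _ in range(steps):
        new = 0
        for t in taps:
            new ^= state[n - t]
        out.append(state[0])
        state = state[1:] + [new]
    return out

seq = lfsr([1, 0, 0, 0, 0], taps=(5, 2), steps=62)
print(seq[:31])
print(seq[:31] == seq[31:])   # True: maximal period 31 = 2^5 - 1, since the polynomial is primitive
```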
1 answer · 171 views
Proving the security of Nisan-Wigderson pseudo-random number generator
Let $\cal{S}=\{S_i\}_{1\leq i\leq n}$ be a partial $(m,k)$-design and $f: \{0,1\}^m \to \{0,1\}$ be a Boolean function. The Nisan-Wigderson generator $G_f: \{0,1\}^l \to \{0,1\}^n$ is defined as ...
http://meta.math.stackexchange.com/questions/3555/stock-answer-for-questions-relating-to-formal-proof
Stock answer for questions relating to formal proof
Lately there has been a slew of questions (see below) which basically say "how do I construct a formal proof of proposition X?" where $X$ is usually a tautology from propositional logic. These questions tend to be very low quality:
1. They often fail to give any description of what formal proof system should be used. The askers may be under the misconception that there is only one formal proof system, or that there is only one "Hilbert-style" proof system. In reality there are many systems in different books, and a "formal proof" in one would not be a "formal proof" in another.
2. There is a general method for obtaining these results, which is proved as the "completeness theorem" in the book the asker is using as a reference. So the situation is somewhat analogous to getting numerous questions about how to solve different specific systems of linear equations over $\mathbb{R}$, when there is a general method that will solve all of them. In this situation the method depends on the proof system, but it still applies to all provable formulas.
3. The questions are typically unmotivated, with no explanation at all for why the person is somehow interested in the proposition stated. Sometimes the asker explains that the problem is homework, which at least explains where it came from.
These questions seem to be an exact fit for the "too localized" closing option. An answer that works for the person asking the question is unlikely to work for anyone else, unless that person happens to be assigned the same problem from the same (unmentioned) book. I have started voting to close these for that reason.
Is it feasible to develop some sort of canonical answer for these questions (for example, a question of the form "How do I find a formal proof for a given proposition?"). We could then close new questions of this form as duplicates of that one. I think this was discussed for some algebra-related questions, but I don't know exactly what the outcome was.
Examples:
-
2
– Arturo Magidin Jan 27 '12 at 20:18
1
In case we write such a canonical answer, that could be marked "faq" and the future questions closed as (abstract) duplicates. – Srivatsan Jan 27 '12 at 20:40
3
Unfortunately, the problem is that there is no canonical answer, even for two questions that are word for word the same. When a question asks for a proof, it would be good if the OP were automatically asked about the book being used. Then, at least sometimes, a proper answer can be given. – André Nicolas Jan 28 '12 at 3:12
2 Answers
Not all of these questions are created equal.
I agree that there's not much to be done for questions that don't even specify which formal system they're working with. Even if the OP is asked to clarify in comments, the clarification often comes in a comment as well, meaning that the question is of limited usefulness later because one has to trawl through a comment thread to find out exactly what the answers (if any) are answers to.
On the other hand, for example, the first of the questions you link to is not one of those. There the OP explicitly stated the axioms he had to work with, and apparently didn't have any kind of completeness theorem to refer to, not even the deduction theorem. The answer thread was long and interesting, and I think we would have been worse off if that question had been closed with a pointer to a "figure it out yourself" stock answer.
It would be good to have a faq/stock answer to which we could redirect the incomplete questions of this kind. However, I don't think it should just direct the reader to refer to the completeness theorem in his textbook:
• Completeness might not hold at all for the formal system in question, such as if an intuitionistic proof is required.
• It is possible that the OP is asking for help for an exercise that the textbook uses as a stepping stone on the way to completeness.
• In extreme cases, there may not be a textbook involved at all. For example, the OP could be reading a quasi-popular description of how formal proof systems work and a bald assertion that "it can be proved that" such and such. (Imagine someone trying to learn formal logic from Wikipedia!) Or they could be working with an advanced text that presents a formal system in its "things you should already know" chapter, but uses different axioms from the ones the OP is used to.
So among the faq answers there should also be a guide for how to ask a complete question if the standard techniques don't work in the situation the reader finds himself in. Probably that just means saying: be sure to state all the logical axioms and all the rules of deduction in your question, as well as any metaresults you have already proved that feel relevant (especially the deduction theorem, or other derived rules).
I can agree that those questions might be too localized, however
• Sometimes it's not pedagogically right to just go ahead and prove the completeness theorem, because during that proof you actually need to understand how things are proved in the formal system. As Henning mentioned, it might be a stepping stone.
• When I had trouble proving some statements I used some techniques from the questions mentioned above, but that's just an anecdote.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9701215624809265, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/30466-fraction-problem.html
|
# Thread:
1. ## Fraction Problem
Man I dislike these fractions but I am having problems finding the answer to this one. $\frac{5}{2x}-\frac{4}{2x^2+3x}$
The site helps me a lot and I haven't found a math site that has helped me as much as this one. Even if I am just reading someone else's threads or doing mine, I find help in it somewhere.
2. Originally Posted by Itsbaxagain
Man I dislike these fractions but I am having problems finding the answer to this one. $\frac{5}{2x}-\frac{4}{2x^2+3x}$
The site helps me a lot and I haven't found a math site that has helped me as much as this one. Even if I am just reading someone else's threads or doing mine, I find help in it somewhere.
Are you supposed to subtract the fractions?
You need a common denominator. Note that the denominator of the second term is $2x^2 + 3x = x(2x + 3)$. The LCM of $2x$ and $x(2x + 3)$ is $2x(2x + 3)$.
So....
$\frac{5}{2x} - \frac{4}{2x^2+3x}$
$= \frac{5}{2x} \cdot \frac{2x + 3}{2x + 3} - \frac{4}{2x^2+3x} \cdot \frac{2}{2}$
$= \frac{5(2x + 3) - 2 \cdot 4}{2x(2x + 3)}$
$= \frac{10x + 7}{2x(2x + 3)}$
-Dan
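For anyone who wants to double-check the algebra by machine, here is a minimal sketch (my own addition; it assumes Python with sympy installed) that reproduces the simplification:

```python
from sympy import symbols, cancel, factor

x = symbols('x')
expr = 5/(2*x) - 4/(2*x**2 + 3*x)

combined = cancel(expr)   # put everything over a single denominator
print(combined)           # (10*x + 7)/(4*x**2 + 6*x)
print(factor(combined))   # (10*x + 7)/(2*x*(2*x + 3))
```

The second line of output matches the hand computation above.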
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9728264808654785, "perplexity_flag": "head"}
|
http://mathhelpforum.com/number-theory/199802-common-factors-integral-domain-z-sqrt-13-a.html
|
# Thread:
1. ## Common factors in integral domain Z[sqrt(-13)]
Consider the integral domain $\mathbb{Z}[\sqrt{-13}] = \{ x + y \sqrt{-13} \mid x,y \in \mathbb{Z} \}$.
Show that $2$ and $3 + \sqrt{-13}$ have no common factors in $\mathbb{Z}[\sqrt{-13}]$ except for $1$ and $-1$.
I can show that both $2$ and $3 + \sqrt{-13}$ are irreducible. However, is there another way to prove that $1$ and $-1$ are the only common factors?
2. ## Re: Common factors in integral domain Z[sqrt(-13)]
Showing irreducibility is not enough, you must also prove that the only units in your domain are 1 and -1.
3. ## Re: Common factors in integral domain Z[sqrt(-13)]
But should showing irreducibility be the first step though? Or is there a different solution that does not involve proving irreducibility?
4. ## Re: Common factors in integral domain Z[sqrt(-13)]
Suppose $\alpha = a + \sqrt{-13} b$ is a common factor.
Then $2 = \alpha x$ for some $x = c + \sqrt{-13}d$ and $N(2) = 2^2 = N(\alpha)N(x) = (a^2 + 13 b^2)(c^2 + 13d^2)$. Hence $N(\alpha) = a^2 + 13b^2$ divides $4$, which forces $b = 0$ and $a = \pm 1$ or $a = \pm 2$.
We also have $3 + \sqrt{-13} = \alpha y$ for some $y = e + \sqrt{-13}f$ and $N(3 + \sqrt{-13}) = 22 = N(\alpha)N(y) = (a^2 + 13 b^2)(e^2 + 13f^2)$. So $N(\alpha)$ also divides $22$, hence it divides $\gcd(4, 22) = 2$; since $a^2 + 13b^2 = 2$ has no integer solutions, $a = \pm 1$ is the only possible value.
Hence $\alpha = \pm 1$ are the only common factors.
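As a sanity check on the norm argument, here is a short brute-force sketch (my own addition, plain Python with exact integer arithmetic; the search box $|a|, |b| \le 5$ is enough because any common factor has norm dividing $4$). It tests divisibility in $\mathbb{Z}[\sqrt{-13}]$ by multiplying with the conjugate:

```python
# alpha = (a, b) stands for a + b*sqrt(-13); beta likewise.
def divides(alpha, beta):
    """True if alpha divides beta in Z[sqrt(-13)]."""
    a, b = alpha
    c, d = beta
    n = a*a + 13*b*b                    # N(alpha)
    if n == 0:
        return False
    # beta/alpha = beta * conjugate(alpha) / N(alpha)
    re, im = c*a + 13*d*b, d*a - c*b
    return re % n == 0 and im % n == 0

common = [(a, b) for a in range(-5, 6) for b in range(-5, 6)
          if divides((a, b), (2, 0)) and divides((a, b), (3, 1))]
print(common)   # [(-1, 0), (1, 0)]
```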
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9497795701026917, "perplexity_flag": "head"}
|
http://stats.stackexchange.com/questions/tagged/spatial-interaction-model
|
# Tagged Questions
The spatial-interaction-model tag has no wiki summary.
0 answers · 18 views
### three-way spatial interaction using mgcv
I am fitting a model with gam from the mgcv library which contains smooth terms. I would like to know how to specify a three-way interaction of a two-dimensional smooth of space and a two level ...
0 answers · 24 views
### account for spatial autocorrelation with a binomial regression model
I am using a binomial regression model for presence/absence, with 20 independent variables to test. The data has x and y coordinates and I would like to understand how can I take into account the ...
1 answer · 30 views
### Noise correlations depending on distance
If one has noise being generated over time $t\in[a,b]$, and the correlations of the value of the noise at times $t_0$ and $t_1$ turn out to be distributed according to a density function that depends ...
2 answers · 119 views
### Are the $1-SSe/SSt$ and $cor^2$ calculations of $R^2$ always equivalent?
I am trying to calculate the $R^2$ value for a production constrained spatial interaction model, using Fotheringham and O'Kelly (1989) as my guide. I get dramatically different values for R-Square, ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8740411400794983, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/227354/the-non-units-in-mathbbrx-form-a-principal-ideal?answertab=votes
|
# The non-units in $\mathbb{R}[[x]]$ form a principal ideal.
I'm having a bit of confusion regarding the ideal in $\mathbb{R}[[x]]$ consisting of non-units and I'm probably making some silly mistake somewhere. It's clear from order considerations that the units of this ring are the non-zero constants and so my intuition has suggested that the ideal of non-units is principal and generated by $x$. But, in this case, every element of $(x)$ is divisible by $x$. However, $1+x\in \mathbb{R}[[x]]$ is not divisible by $x$ yet it is a non-unit. Can someone point out where my error is?
Thank you.
Are you sure the set of non-units is an ideal? $1+x$ and $1-x$ are non-units, but... – wj32 Nov 2 '12 at 6:01
Whoa! Typographical error. I meant to write power series instead of polynomials. Correction soon to come. – Alexander Sibelius Nov 2 '12 at 6:07
$1+x$ is a unit! – nik Nov 2 '12 at 6:12
Alright, at this point it looks like a real power series is a unit if and only if it has a nonzero constant term. After I prove this, the fact that the non-units are principal will be trivial. Thanks again. – Alexander Sibelius Nov 2 '12 at 6:19
## 3 Answers
The units are not the non-zero constants. For example, $$(1-x)^{-1}=1+x+x^2+\cdots.$$ The ideal of non-units is indeed generated by $x$.
Ah, I see what I've done now. Thanks. – Alexander Sibelius Nov 2 '12 at 6:13
In the polynomial ring $\mathbb{R}[x]$, the units are the non-zero constants; $1+x$ and $x$ are non-units, and $1$ is a unit. But
$(1+x) + (-x) = 1 + 0 = 1,$
so the set of non-units in $\mathbb{R}[x]$ is not closed under addition,
and therefore the set of non-units in $\mathbb{R}[x]$ is not an ideal.
Hint: What is
$$(1+x)\sum_{n=0}^\infty (-1)^n x^n ?$$
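To make the hint concrete, here is a tiny sympy sketch (my own check; it assumes sympy's `series`) showing the inverses of $1+x$ and $1-x$ as formal power series, i.e. that both are units in $\mathbb{R}[[x]]$:

```python
from sympy import symbols, series

x = symbols('x')
print(series(1/(1 + x), x, 0, 6))   # 1 - x + x**2 - x**3 + x**4 - x**5 + O(x**6)
print(series(1/(1 - x), x, 0, 6))   # 1 + x + x**2 + x**3 + x**4 + x**5 + O(x**6)
```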
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946351945400238, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/207466/proving-terminal-vertices-and-total-vertices-of-a-full-binary-tree?answertab=oldest
|
# Proving terminal vertices and total vertices of a full binary tree?
I am trying to make a proof by induction of the following theorem.
````If T is a full binary tree with i internal vertices, then T has i + 1
terminal vertices and 2i + 1 total vertices.
````
I have done this so far but I am just starting to understand proofs and am stuck about what to do next.
````Base Case:
P(1): 1 internal vertex => 1+1 = 2 terminal vertices
Induction:
Assume true: P(n): n internal vertices => n+1 terminal vertices
Show true: P(n+1): n+1 internal vertices => (n+1)+1 = n+2 terminal vertices
````
After this I am unsure of how to proceed.
## 1 Answer
SKETCH of proof: Let $T$ be a full binary tree with $i+1$ internal vertices. Let $v$ be a terminal vertex of maximal height, and let $u$ be the parent of $v$. $T$ is full, so $u$ has two children; let $w$ be the other child. Let $T'$ be the tree that remains when you remove $v$ and $w$ from $T$. Verify that $T'$ has $i$ internal vertices, and therefore by the induction hypothesis $i+1$ terminal vertices. One of these terminal vertices is $u$. When you restore $v$ and $w$ to $T'$ to recover $T$, you gain one internal vertex ($u$), you lose one terminal vertex ($u$), and you gain two terminal vertices ($v$ and $w$). The net change in internal vertices from $T'$ to $T$ is $+1$, as is the net change in terminal vertices, so $T$ must have $(i+1)+1=i+2$ terminal vertices.
Verify that T′ has i internal vertices ... removing v and w turns u into a terminal vertex and thus reduces internals by -1 making i internals and i + 1 terminals? – wazy Oct 4 '12 at 23:32
You are right, I thought for some reason about complete binary trees – Belgi Oct 4 '12 at 23:52
@wazy: That’s right; and that means that $T$ must have had $i+2$ terminals. – Brian M. Scott Oct 5 '12 at 0:30
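The counting in the sketch above is also easy to check empirically. Here is a small Python sketch (my own addition; it represents a full binary tree as `None` for a leaf and a pair `(left, right)` for an internal vertex):

```python
import random

def random_full_tree(max_depth):
    """None is a terminal vertex; an internal vertex has exactly two children."""
    if max_depth == 0 or random.random() < 0.4:
        return None
    return (random_full_tree(max_depth - 1), random_full_tree(max_depth - 1))

def counts(tree):
    """Return (internal, terminal) vertex counts."""
    if tree is None:
        return 0, 1
    li, lt = counts(tree[0])
    ri, rt = counts(tree[1])
    return li + ri + 1, lt + rt

for _ in range(1000):
    internal, terminal = counts(random_full_tree(6))
    assert terminal == internal + 1
    assert internal + terminal == 2*internal + 1
print("terminal = internal + 1 and total = 2*internal + 1 hold for all samples")
```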
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9100232720375061, "perplexity_flag": "head"}
|
http://gottwurfelt.wordpress.com/2012/05/30/clustering-of-college-graduates-is-it-getting-worse/
|
A mathematician blogs.
# Clustering of college graduates: is it getting worse?
An article in today’s New York Times, by Sabrina Tavernise, is entitled As College Grads Cluster, Some Cities Are Left Behind. A lot of old US cities with economies that used to be based on manufacturing are having trouble making the transition to our current, post-manufacturing economy. And one difficulty such cities face is a lack of college graduates.
Historically, most American cities had relatively similar shares of college graduates, in part because fewer people went to college. In 1970, the difference between the most-educated and least-educated cities, in terms of the portion of residents with four-year degrees, was 16 percentage points, and nearly all metro areas were within 5 points of the average. Today the spread is double that, and only half of all metro areas are within 5 points of the average, the Brookings research shows.
But what does “relatively similar” mean here? The proportion of adults in metropolitan areas that have college degrees, according to the accompanying infographic, has risen from 12% in 1970 to 32% in 2010. I would guess that a city with, say, 9% college graduates in 1970 is comparable to a city with 24% college graduates in 2010 — both have three-fourths of the average.
(Admittedly this doesn’t hold up if the percentages are quite large. For example, let’s say we’re looking at literacy rates; I’d say that a metropolitan area having 40% literacy in a time when the national rate is 50% is relatively better off than a state having 76% literacy when the national rate is 95%. But bear with me.)
Indeed, from the infographic you can also get the actual distribution of the percentage of college graduates in each of the metro areas in question. (The study includes 100 metro areas in each of 1970 and 2010.) In 1970 the average metropolitan area had 11.5 percent college graduates, with SD 2.9 percent; the standard deviation is 25 percent of the mean. In 2010 the average metropolitan area had 29.4 percent college graduates, with SD 6.2 percent; the standard deviation is 21 percent of the mean. In these terms, the disparity has gotten smaller!
So let’s normalize the share for every metropolitan area by comparing to the average. In 1970, for example, Washington, DC had 22.1% college graduates, compared to the average of 11.5%, so it had 1.92 times the average. In 2010, Washington, DC had 46.8% college graduates, compared to the average of 29.4%, so it had 1.59 times the average. In this respect it looks like Washington is getting more like the US, not less. (Washington was the most college-degreed metropolitan area, in both samples, which presumably has something to do with its dominant industry being government.)
If we make histograms of these normalized shares for 1970 and 2010 and superimpose them, we get the plot below. Black is 1970, red is 2010. The distribution gets narrower, not wider, as time passes when viewed on this scale.
I don’t mean to take away from the fact that this disparity exists between metropolitan areas. But the real problem is probably not so much that the educational disparity is growing as that the returns to a college education are larger with the departure of manufacturing jobs.
Matt Yglesias has commented on this from an economic point of view, echoing some points that Enrico Moretti makes in The New Geography of Jobs. In particular, what’s the point of states funding public education if people are just going to move away from those states?
Edited to add, 4:49 pm: Junk Charts comments on the graphic itself.
### 4 comments
1. JSE says:
Superb, Michael.
What’s the right normalization in general? It can’t be straight proportion. If, in 2030, 60% of US adults are college graduates, then it would be silly to say “Washington DC is getting more like the rest of the country” on the grounds that it didn’t have 1.92*60% college graduates.
2. Michael Lugo says:
Jordan, I don’t actually know what’s “right” here. The logit feels like it might be right (followed by taking differences instead of quotients). In this case $p$ is mapped to $f(p) = \log\frac{p}{1-p}$. For small $p$ you have $f(p) \approx \log p$, and so for small $p$ and $q$ you have $f(q) - f(p) \approx \log (q/p)$.
So, for example, the distance between 1% and 2% is $f(0.02)-f(0.01) = 0.703$ (this is just barely over log 2); this is the same as the distance between 5% and 9.6%, or 20% and 33%, or 50% and 67%, or 80% and 89%, or 98% and 99%, to name a few.
If you apply the logit function to the data, then the standard deviation has gone up, but not by much. If you take the logits of the 100 percentages for 1970 and then take the SD of that set, you get about 0.27; doing the same compuation in 2010 gives 0.31. Washington goes from being 0.78 above the average to 0.75, but we really shouldn’t just look at a single data point.
In the end your question is more psychological than mathematical, though, so I’m not sure the two of us can answer it…
3. JSE says:
Well, I too wanted to apply some standard transformation from [0,1] to [-oo,oo]; the logit seems as good as any. I would guess that any “reasonably shaped” transform is close enough to logit that you’d end up not seeing any noticeable change in the inequality of baccalaureate distribution between 1970 and the present.
4. Michael Lugo says:
The logit also has the nice property that for small p it reduces to the criterion implicit in my original post. The probit, for example, wouldn’t do that.
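For what it's worth, the logit distances quoted in the comments above are easy to reproduce; here is a short check (my own, standard-library Python only):

```python
from math import log

def logit(p):
    return log(p / (1 - p))

pairs = [(0.01, 0.02), (0.05, 0.096), (0.20, 0.33),
         (0.50, 0.67), (0.80, 0.89), (0.98, 0.99)]
for p, q in pairs:
    print(f"{p:.3f} -> {q:.3f}: logit gap = {logit(q) - logit(p):.3f}")
# every gap comes out close to 0.70 (the first is 0.703, just over log 2)
```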
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523093104362488, "perplexity_flag": "middle"}
|
http://nrich.maths.org/727/note
|
### Doodles
A 'doodle' is a closed intersecting curve drawn without taking pencil from paper. Only two lines cross at each intersection or vertex (never 3); that is, the vertex points must be 'double points', not 'triple points'. Number the vertex points in any order. Starting at any point on the doodle, trace it until you get back to where you started. Write down the numbers of the vertices as you pass through them. So you have a [not necessarily unique] list of numbers for each doodle. Prove that 1) each vertex number in a list occurs twice [easy!], and 2) between each pair of vertex numbers in a list there are an even number of other numbers [hard!].
### Russian Cubes
How many different cubes can be painted with three blue faces and three red faces? A boy (using blue) and a girl (using red) paint the faces of a cube in turn so that the six faces are painted in order 'blue then red then blue then red then blue then red'. Having finished one cube, they begin to paint the next one. Prove that the girl can choose the faces she paints so as to make the second cube the same as the first.
### N000ughty Thoughts
Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that the above is true. Perhaps your methodology will help you find the number of noughts in 10 000! and 100 000! or even 1 000 000!
# Walkabout
### Why do this problem
This problem presents an investigation which does eventually require a systematic approach. Although the generalisation is difficult for Stage 4, some of the context's structure is discernible and describable, and comparable to other similar situations. Do the problem in conjunction with Group Photo and ask learners to describe what is the same about the two situations that could explain them resulting in the same sequence of Catalan numbers. An apparent generalisation related to cubes of numbers breaks down, and so the problem offers an opportunity to discuss a danger of applying inductive reasoning.
### Possible approach
One approach is to do this in conjunction with Group Photo , either following from one to the other, or dividing the class so that groups work on different problems, or why not use two classes working on the different problems. The aim would be to bring the two sets of findings together to discuss why two apparently quite different situations result in the same mathematics.
Allow plenty of time to 'play' with the problem, making sense of what is being counted and how it might be represented.
Encourage ideas that involve systematic approaches, and share them so that all learners have access to a way into the problem.
Use results from separate groups to check working.
### Key Questions
• Can you describe what is the same about the two problems that might explain the similar mathematical structure?
• What is different about and what is similar to other examples, such as One Step Two Step and Room Doubling that result in a Fibonacci sequence?
### Possible support
Group photo can be done with real people and you can start with small numbers. Spend plenty of time trying out, and considering the efficiency of, possible recording methods.
### Possible extension
Can students make connections between the structures of the two problems that may in part explain the mathematical connections?
#### Notes
$1$, $1$, $2$, $5$, $14$, $42$, $132$, $429$, $1430$, $4862$ ,...
The Catalan numbers describe things such as:
• the number of ways a polygon with n+2 sides can be cut into n triangles
• the number of ways to use n rectangles to tile a stairstep shape (1, 2, ..., n-1, n)
• the number of ways in which parentheses can be placed in a sequence of numbers to be multiplied, two at a time
• the number of planar binary trees with n+1 leaves
• the number of paths of length 2n through an n-by-n grid that do not rise above the main diagonal
They can be described by the formula $$\frac{ ^{2n}C_{n} }{(n + 1)}$$
The Catalan numbers are also generated by the recurrence relation:
$C_0=1, \qquad C_n=\sum_{i=0}^{n-1} C_i C_{n-1-i}.$
For example, $C_3=1\cdot 2+ 1\cdot 1+2\cdot 1=5$, $C_4 = 1\cdot 5 + 1\cdot 2 + 2\cdot 1 + 5\cdot 1 = 14$, etc.
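A short sketch (assuming Python 3.8+ for `math.comb`) checks that the recurrence and the closed formula agree and reproduce the list above:

```python
from math import comb

def catalan(n, memo={0: 1}):
    """Catalan numbers via the recurrence C_n = sum_i C_i * C_(n-1-i)."""
    if n not in memo:
        memo[n] = sum(catalan(i) * catalan(n - 1 - i) for i in range(n))
    return memo[n]

values = [catalan(n) for n in range(10)]
assert values == [comb(2*n, n) // (n + 1) for n in range(10)]
print(values)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```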
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9229207634925842, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/575/projective-duality
|
# projective duality
Given a curve, how do you intuitively construct the picture of its projective dual? I know points --> lines, lines --> points, but for something like the swallowtail this is not really obvious.
I don't have an answer, but since no one else has said anything, have you tried breaking the curve into a series of lines to gain some intuition? – Jonathan Fischoff Jul 23 '10 at 21:58
## 1 Answer
Answer edited, in response to the comment and a second wind for explaining mathematics:
Let $F(x_0,x_1,x_2)=0$ be the equation for your curve, and take $(y_0,y_1,y_2)$ to be coordinates on $(\mathbb{P}^2)^*$. Also, assume that $F$ is irreducible and has no linear factors.
Then $y_0 x_0+y_1 x_1+y_2 x_2=0$ is the equation of a general line in $\mathbb{P}^2$ (recall, here $y_0,y_1,y_2$ are fixed, and the $x_i$ are the coordinates on the plane) and we look at the open set of $(\mathbb{P}^2)^*$ where $y_2\neq 0$. On this open set, we can solve the equation of the line for $x_2$, and look at $g(x_0,x_1)=y_2^n F(x_0,x_1,-\frac{1}{y_2}(y_0x_0+y_1x_1))$, a homogeneous polynomial of degree $n$ in $x_0,x_1$ with coefficients homogeneous polynomials in the $y_i$. This polynomial has zeros the intersections of our curve $C$ with the line $L$ we're looking at.
So we want to find points of multiplicity at least two. So how do we find multiple roots of a polynomial? We take the discriminant! Specifically, we do it for an affinization, and we get a homogeneous polynomial of degree $2n^2-n$ in the $y_i$.
All that's left is to factor the polynomial and kill all the linear factors (just throw them away); the reasons are explained in more computational detail on my blogpost, where I also do explicit examples, but the method for calculating the equation of the dual curve is as above.
Charles: In general, try to avoid having all useful information in an answer contained off-site. – Larry Wang Jul 28 '10 at 0:25
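As an illustration of the recipe in the answer above, here is a worked sympy sketch (my own example, not from the original answer) for the conic $F = x_0^2 + x_1^2 - x_2^2$; after discarding the linear factors the dual curve comes out as $y_0^2 + y_1^2 - y_2^2$, i.e. this conic is self-dual:

```python
from sympy import symbols, expand, discriminant, factor

x0, x1 = symbols('x0 x1')
y0, y1, y2 = symbols('y0 y1 y2')

# substitute x2 = -(y0*x0 + y1*x1)/y2 into F and clear the denominator
F_sub = x0**2 + x1**2 - ((y0*x0 + y1*x1) / y2)**2
g = expand(y2**2 * F_sub)            # homogeneous of degree 2 in x0, x1

# affinize (set x1 = 1) and ask for a double root in x0
disc = discriminant(g.subs(x1, 1), x0)
print(factor(disc))                  # 4*y2**2*(y0**2 + y1**2 - y2**2)
```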
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434512853622437, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/10915/why-photons-transfer-to-electrons-perpendicular-momentum/10918
|
# Why do photons transfer perpendicular momentum to electrons?
A linear antenna is directed along z, and photons (EM waves) propagate along x. The momentum of the photons has only an x component. Why do the electrons in the antenna have a z component of momentum?
Aha, if electrons in an antenna have a momentum along that antenna, what do they do after reaching the end? – Georg Jun 8 '11 at 15:16
...I think a vertical linear antenna will propagate waves in many directions, and the one thing I'm almost sure that it won't do is propagate preferentially in an x or y direction (not sure about z). It's still a valid question, but I maintain that we're searching for something different than an x-axis component, but instead an E&M pressure all around the wire that can then be turned into a net force with use of reflectors. – AlanSE Jun 8 '11 at 16:28
## 4 Answers
Photons are quanta of electromagnetic waves which are transverse so if the momentum of the photon goes in the $x$-direction, then the magnetic and electric fields are in the transverse $yz$-plane, for example $B$ may be in the $y$ direction and $E$ may be in the $z$ direction. Because the electric field accelerates charged particles in the same (or opposite) direction, electrons will be accelerated in the $z$ direction as well.
If the wave is unpolarized, about 50 percent of its energy will be composed of the wave above and 50 percent will be composed of the other way where the directions of $B,E$ are interchanged. This other wave won't be able to shake the electrons.
Could you elaborate more on the 50/50 breakdown of energy? When you say "the other way" I think direction and when you say "This other wave" I think the perpendicular wave. Because of that I can't pin down a clear meaning. – AlanSE Jun 8 '11 at 16:33
@Zassounotsukushi: in the picture you have an EM wave with the electric field in the up-down direction and the magnetic field in the left-right direction. But there could also be a wave with the electric field in the left-right direction and the magnetic field in the up-down direction. An unpolarized wave would be a linear combination of both those possibilities, with the energy split evenly between them. But only the component of the electric field that is parallel to the antenna would actually be able to move the electrons in the antenna. – David Zaslavsky♦ Jun 8 '11 at 18:00
In context of antennas and waves the near-field version of the wave had been better. – Georg Jun 8 '11 at 20:03
Lubos, you explained what a plane wave is, but did not answer the question. The question was: why (how) do photons having only x momentum transfer perpendicular momentum to the electron? – grigori Jun 9 '11 at 17:45
The electrons in the antenna are moving perpendicular to the direction of motion of the emitted radiation. This is natural for a transverse wave. Since light is a transverse wave, as shown by Luboš Motl's beautiful graphic, there's nothing surprising here.
Maybe interesting for some to take this a bit further:
This classical behavior goes all the way down to single cycle laser pulses which ionize atoms by accelerating them in the direction transverse to the motion of the laser pulse. Make sure to look at the very interesting (Realplayer) video...
http://www.cfa.harvard.edu/itamp/attosecondpdfs/paulus.pdf
http://cfa-www.harvard.edu/dvlwrap/itamp/0311/paulus.ram
Why is this behavior still classical?
(1) Because in this case even the single cycle of the ultrashort laser pulse has a wavelength much longer than the size of the field of the (bound) electron.
(2) The single cycle laser pulse kicks the electron in one direction.
When, how and why do we get quantum mechanical Compton scattering effects?
When: Mainly if the wavelength of the radiation becomes smaller (e.g. x-ray) than the volume in which the electron is contained.
How: The electron gets kicked in the direction parallel to the momentum of the photon.
Why: The electron can self-interfere when going from an initial state $\psi_i$ to a final state $\psi_f$. The interference current is a sinusoidal pattern of alternating charge and spin density with the same wavelength and direction as the incoming photon. The direction of the electron spin is essential in the interaction because it is the effective current from the alternating spin density which is the source of an alternating transverse electromagnetic field which compensates the electromagnetic field of the incoming photon: The photon is absorbed. Finally, the state of the electron after absorbing the photon isn't that of a free electron and another transition takes place under the emission of an outgoing photon which leaves the electron in a free (on-the-mass-shell) state. The math which describes these processes corresponds to the tree-level Feynman diagrams of the interaction
Regards, Hans
Hans, your answer is useful, however it is still unclear why and how photons having only an x component of momentum give the electron momentum perpendicular to x. Is this consistent with conservation of momentum? Best regards, Grigori – grigori Jun 9 '11 at 17:57
Hans, the question was: why EM waves having only x momentum transfer to electron z momentum? Electron begins oscillating along z, and will radiate EM waves ~ sin(teta) from direction z, however it will not radiate along z direction, so its radiation field will not compensate electron z momentum. So it seems that in terms of EM waves, the conservation of momentum does not take place (but it takes place in terms of photons). Grigori. – grigori Jun 19 '11 at 14:12
@Hans I'm sorry I didn't see this post when it first went up, or I would have asked you: why are the spin currents more important than the alternating charge layers? The charge layers are static but so are the current layers, so neither interacts with the incoming e-m wave until they are set in motion to some degree. If you are interested in explaining this, let me know and I will post it as a separate question. – Marty Green Aug 9 '11 at 21:46
There are 2 different effects of the EM waves:
1. Acceleration of charges due to EM field
2. Radiation pressure of the EM field
If the EM wave propagates along x direction and your antenna is along the z direction, then electrons will be driven along the antenna by the electric field. On the other hand, the antenna will also receive an overall impact (momentum) along the x direction because of the radiation pressure (which does not affect the antenna practically since its effect is tiny and the antenna is fastened).
It is also worth pointing out that there are 2 descriptions of radiation:
1. the classical description (EM waves) and
2. the quantum description (photons)
The quantum description is more general. Both pictures are identical only in the limit of so many photons. Electromagnetic waves can be described in terms of photons, but individual photons cannot be described in terms of electromagnetic waves. Photons description is more fundamental.
""The quantum description is more general. Both pictures are identical only in the limit of so many photons. Electromagnetic waves can be described in terms of photons, but individual photons cannot be described in terms of electromagnetic waves. Photons description is more fundamental."" This is fundamentally wrong. Ever heard of wave/particle dualism? – Georg Aug 9 '11 at 15:06
@Georg: The "wave" in the wave particle/duality has nothing to do with the electromagnetic waves. Wave/particle duality does not mean that Maxwell's equations are equivalent to quantum electrodynamics. QED is more general than the classical theory of radiation. – Revo Aug 9 '11 at 15:27
Very interesting, but not physics. – Georg Aug 9 '11 at 18:26
@Georg He is correct. When we speak of wave particle duality the "wave" part is a probability wave, not a field wave. – anna v Aug 9 '11 at 18:34
– Georg Aug 9 '11 at 18:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174312949180603, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/15156/how-to-take-into-account-the-reference-frames-with-the-revolution-and-rotation-o/16902
|
How to take into account the reference frames with the revolution and rotation of the Earth in OPERA's superluminal neutrinos?
Since the Earth is moving around the Sun, which is moving around the Milky Way, etc., what reference frame is used for the complete motion of the begin/end points (which are non-inertial, right)?
Considering that this is a fishy question, as Ben Crowell noted, I voted to close it, but I think that this would be unfair to those that answered it in good faith. – Ron Maimon Nov 12 '11 at 22:10
3 Answers
They trust the method used by GPS for geodesy, where the claim is they can go down to picosecond and cm accuracy if necessary (for military use).
GPS error analysis takes even general relativity into account.
In GPS signal propagation (PDF) the systematic errors of a simple GPS setup are given, but the OPERA experiment has more sophisticated use of four satellites.
There is no further analysis of the GPS systematics in the paper published on the arXiv, which is why I say they trust it.
Among other corrections, GPS corrects back to the velocity of light in vacuum. The meter is defined as a fraction of the velocity of light in vacuum (the second is defined by the caesium clock at normal temperature and pressure).
In my opinion, it is possible that some of the very sophisticated corrections of GPS values might be systematically off, with an end result of effectively redefining the meter. This would not show up in navigation or geodesy because the lengths probed by the OPERA experiment are very large (732 km) and the errors very small, 20 cm. A tiny systematic offset of what a meter is for GPS would not show up in the normal world use of it, but it would show up in measuring the neutrino speed with this method.
It seems odd to me that the anonymous poster accepted this answer. Everything said here is correct, but none of it answers the question. – Ben Crowell Nov 12 '11 at 21:49
@BenCrowell The answer is in the link in the first words. GPS takes into account all the relative motions and they used the GPS definitions. – anna v Nov 13 '11 at 12:34
"GPS takes into account all the relative motions[...]" I think it would be more accurate to say that GPS takes into account the relative motions that are relevant, and doesn't need to take into account the relevant motions that are not relevant (such as the earth's orbital motion around the sun). The distinction between the two is important. This has to do with the notion of an inertial frame in GR, which is different from that in Newtonian mechanics. – Ben Crowell Nov 13 '11 at 19:49
@BenCrowell if you read a bit about GPS, they also have general relativity corrections so they would not ignore the obvious. – anna v Nov 13 '11 at 20:56
GPS uses a nonrotating coordinate system in which the time coordinate is essentially a time that would be defined by radio signals broadcast from a hypothetical clock at the center of the earth: http://relativity.livingreviews.org/Articles/lrr-2003-1/ GPS uses general relativity heavily. In GR, an inertial frame is defined as a free-falling one, and since the earth is free-falling, a nonrotating frame fixed to the earth's center is inertial. The earth's orbital motion around the sun, etc., are therefore irrelevant.
There is a special-relativistic correction that needs to be applied in order to convert from a nonrotating frame to one moving with the rotation of the earth. This correction comes out to be a few nanoseconds. I assume the OPERA team took it into account, but even if they hadn't, it would have been far too small to explain the claimed 60 ns shift.
The motion of the GPS satellites does not need to be corrected for, contrary to claims by van Elburg, who is an idiot. Presumably the reason the popular press has paid so much attention to van Elburg is that his arguments use only freshman physics, so science journalists can understand them. As described above, and as discussed in the Living Reviews article, GPS results are stated in a nonrotating frame tied to the earth, not in a frame moving with the satellites, as van Elburg seems to have imagined.
I think this is unfair to van Elberg. He gave an order of magnitude estimate of rotational corrections to the GPS distance/time estimates, and showed these explain OPERA. It is obvious that any effect is an artifact, and these corrections are correct order of magnitude. The details are probably impossible to get right given that OPERA is not releasing the details of their frame corrections. – Ron Maimon Nov 12 '11 at 23:10
It may be the case that this problem has to do with the «one-way» light speed and the reference frame that is used. AFAIK the only known measurements of $c$ are done in a «two-way» version (mean value over a closed path). When a photon is released in space it starts its journey at speed $c$ independently of the source and of the receiver. The CMB frame is clearly the only reference frame in which the light is «observed» as isotropic. As the Earth moves we observe a dipole, and in different directions we measure different wavelengths for the same physical object (photon).
This paper (Cosmological Principle and Relativity - Part I) analyses the anisotropy of light speed for a moving observer.
Fig. 3 and eq. 18, page 14.
The one-way light speed is: $c_{A}^{r}=\frac{c_{0}}{1+V/c_{0}\cdot\cos\phi_{A}}$
Here $c_0=299792.458$ km/s is the two-way light speed and $V$ is the speed of the lab in relation to the CMB: $V=V_{SS}+V_E=369\pm30$ km/s (data from here).
This gives a maximum value of $\frac{\left|c_{V\pm\delta V}-c_{V}\right|}{c_{V}}\cdot10^{5}=10.2$.
All experimental measures of |v-c|/c are within this limit.
Anyway Einstein is correct.
I suspect that the synchronization used in GPS is the same as in the above paper, and not the one Einstein used.
In the pic Sat A must be synchronized with C at the same time thru the shortest red path and thru the longest blue path. At the same time B is in sync with C thru other paths with different lengths. IMO this is only possible if they are synchronised as in the above paper (instant observer) and not in the Einstein way that only considers one path between the observer and any other point ("Synchronisation around the circumference of a rotating disk gives a non vanishing time difference that depends on the direction used").
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406725764274597, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/262293/calculating-a-large-factorial-division-on-pen-and-paper
|
# Calculating a large factorial division on pen and paper
Context: In January I will be taking a concurrent programming examination, part of which will involve calculating the number of interleavings based on a formula which divides two sets of factorials.
The problem I have is that calculators are prohibited, so it must be calculated on pen and paper, so I wonder what the best strategy is here?
The professor's suggestion was to simplify the factorial division, but the example he used didn't show how he simplified it, but also the numbers he was using were smaller.
One example I worked on is as follows:
12! / (6! x 6!)
The smallest I can simplify this to is as follows (although let me know if it can be simplified further as I may be wrong):
(2 * 11 * 10 * 3 * 4 * 7) / (5 * 4)
This still seems like an excessive calculation for pen and paper (the question will be worth a single mark, also).
What would you suggest as a strategy for calculating the division of two factorials on pen and paper? (the goal being to come up with a strategy which takes the least amount of time to work out using pen and paper).
The result must be expressed as a single number, so using this example the result would be 924.
Thanks
## 1 Answer
You can immediately cancel the $2\times10$ on the top with the $5\times4$ on the bottom, leaving $11\times3\times4\times7$.
$$11\times3\times4\times7=11\times12\times7=84\times11=924$$
The calculation shouldn't be too hard: $3\times4$ is easy, $12\times7$ is also pretty easy, and multiplication by $11$ is simple: If the two digits are $xy$, then $11\times xy=x(x+y)y$ (potentially carrying digits), which is easy to do mentally. For example, $11\times 23=253$. How big are these factorials going to be exactly?
I think what the professor said was that the result would be no larger than 1,000 (or something to that effect), so there shouldn't be any excessive calculations. – CiaranG Dec 19 '12 at 22:18
@CiaranG Then this'll probably be as complicated as it'll get. You have the right idea for simplifying it. Just note that since the result will always be an integer, you can always cancel enough to get rid of the denominator. Do that first and then just carry through the remaining multiplications. It might be a little tedious, but it shouldn't be too bad that way. – Robert Mastragostino Dec 20 '12 at 0:27
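Away from the exam hall, the arithmetic is a one-liner to verify (standard-library Python, assuming 3.8+ for `math.comb`):

```python
from math import comb, factorial

print(factorial(12) // (factorial(6) * factorial(6)))   # 924
print(comb(12, 6))                                      # 924
print(11 * 3 * 4 * 7)                                   # the simplified product, also 924
```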
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9745420813560486, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/24396?sort=oldest
|
## Would Euler’s proofs get published in a modern math Journal, especially considering his treatment of the Infinite?
I was wondering how mathematicians of today would treat, for example, Euler's proof of zeta(2).
In William Dunham's book 'Journey through Genius' ( http://www.amazon.com/Journey-through-Genius-Theorems-Mathematics/dp/014014739X ), the writer states (more or less) that most mathematicians of today wouldn't approve of Euler's methods, as his treatment of the 'infinite' doesn't hold up to today's standards of rigour.
His evaluation of Zeta(2) and all other even zeta-arguments up to Zeta(26) was correct, however. Would Euler's proofs get published in a well-respected math journal?
(Of course, his papers would now be written in English and the would-be-published results aren't yet known to the mathematical community.)
Thanks in advance,
Max Muller
PS: O.K. everyone, I think many of you have stressed some important points regarding this question. I can't choose one, which is why I have upvoted some of your answers and left the question as it is. Thank you for your thoughts.
PPS: I'm sorry for the confusing title of the question in its previous form. I hope you all think it is stated better now.
Sufficient unto the day is the rigor thereof. -- E. H. Moore . – Gerald Edgar May 12 2010 at 14:31
I haven't read your question yet, but I'm pretty sure the answer to your title question is yes. – Cam McLeman May 12 2010 at 14:46
Ha Cam McLeman, your comment made me laugh and made me a bit proud as well (I've got a Swiss and Dutch passport). Please reconsider reading the question, though, I think it's well worth it to think about the answer! – Max Muller May 12 2010 at 15:08
I don't think the title and the body are asking the same question. – Qiaochu Yuan May 12 2010 at 15:21
Two different questions: (1) Would a paper written in 18th-century style using 18th-century rigor be published in a 21st-century well-respected math journal? (2) If Euler were alive today, would he use 18th-century style and rigor in his manuscripts, or 21st-century style and rigor? I think the answers are obvious... – Gerald Edgar May 12 2010 at 15:49
## 9 Answers
I think the answer to your question is pretty well-addressed by Gerald in the comments. Let me just throw in a couple of points that wouldn't fit in a comment.
1) Euler knew what he was doing. He had a tremendous ability for mental calculation, and verified to his satisfaction that his sums converged. He was well aware of the divergence of the harmonic series, so it's not like he was unaware of the fundamental issues surrounding "treating the infinite." If a sequence converged in accuracy by one digit every couple of terms for 30 or 40 consecutive terms, this was good enough for him to be convinced.
2) Even if Euler's proofs were not the most rigorous, that is not to say that modern mathematicians don't appreciate the methods behind them. I'd say more often than not, the hard part of mathematics is figuring out what should be true. Any student armed with Fourier analysis (and probably less) could re-derive many of Euler's formulas ($\zeta(2)$ in particular) -- few, however, would be able to play with the series even heuristically to figure out what the nontrivial contribution to the sum was, and fewer still would arrive at $\frac{\pi^2}{6}$ without prior exposure. If Euler were to rediscover his results in today's academic atmosphere, I suspect he would be hailed for his great insight into what "should be happening," and have a very successful career providing graduate students amazing problems which needed details filled in.
3) Even today, heuristics form an important part of mathematical research, so the legacy (if you will) of Euler's approach is still alive and well. The Cohen-Lenstra heuristics, as a more modern example (or maybe even analytic conjectures in general...maybe even something like BSD) might be considered as fundamental pieces of insight gleaned from heuristic reasoning and experimental data.
Cam McLeman, thank you for this answer. I have no doubt Euler knew what he was doing, nor do I think anybody isn't amazed at the results he obtained and how. Everyone would 'hail' him for his insights nowadays, if he'd arrive at $\zeta(2)= \pi^2/6$ now. I'm very impressed with his methods as well. It would be a pity, however, if his result wouldn't make it to a paper because his methods are simply 'not rigorous enough'. Another question comes to mind now: could mathematicians provide a firm(er) logical standing for Euler's methods, so his hypothetical 18th century paper would get accepted? – Max Muller May 12 2010 at 17:00
Well, just to adopt the opposite point of view, we can all marvel at Euler's greatness now in part because modern mathematics came along and was able to fill in the details. For every Euler, there's a thousand cranks whose "brilliant but non-rigorous ideas" actually do collapse after an application of modern rigor. I'm not sure how to respond to your last question -- there are many many ways of filling in the holes, to various extents modelling Euler's approaches. Isn't that partly what Dunham's book is all about? – Cam McLeman May 12 2010 at 17:11
I strongly agree with you, Cam. As many others have stated, modern mathematicians can look at his methods in awe *because* he helped to establish the standards of rigour we're so proud of today. Euler's greatness lay in 'feeling' a bit what was right in mathematics; I think he developed an intuition for how to solve a problem. Dunham's book is mainly about the fact that there are a lot of ways to solve different problems. Of course, we can model Euler's approaches to fill in the holes, but the original idea he exhibited is the most important. – Max Muller May 15 2010 at 23:40
Not to be overly negative, but I think even with the qualifiers you write on the bottom of your question the answer is no, for a completely trivial reason. This has nothing to do with rigor. It's just that Euler's tools are not advanced enough for a modern publication.
To give you an example close to my interests, Euler himself pioneered the theory of partitions. He realized that one should take generating functions for the number of partitions from various classes and prove identities for the resulting $q$-series to establish connections between them. I read his work (in translation, I am afraid) and it is lucid, beautiful and mostly rigorous. But (to answer your question) you can't take his obscure result and submit it to a serious journal. This might have worked pre-1930 or so, but in modern times the journals are no longer satisfied with "new and correct" papers; the papers are also expected to have interesting technical innovations which might prove helpful. Euler's techniques by now are too well known and "standard" to be of interest...
Of course, this is not to say that Euler does not continue to publish books or even articles (see e.g. MR0818419).
Perhaps a fairer interpretation of the question is about how a "modern-day Euler" would be received - someone whose education was up to date but whose sense of rigor was analogous to Euler's. One modern-day analogue I can think of is Feynman and the path integral. – Qiaochu Yuan May 12 2010 at 19:41
There also some famous mathematicians in modern times, whose standards of rigour and complete proofs do not please everyone. One example, which comes to my mind, is Thurston, where it took many years to fill his proofs in 'Geometry and Topology of 3-manifolds' with details and rigour (I don't know if it is done for everything). I've read that on these grounds, Serre was against to arward Thurston the Fields medal. Two other examples of mathematicians, who are more well-known for new ideas than complete and rigorous proofs are probably Gromov and Sullivan. – Lennart Meier May 13 2010 at 12:30
[Thurston writes](arxiv.org/abs/math/9404236v1) (in section 6) that his approach to rigor in the Geometrization program was intentional. He also argues that it was beneficial, for it mimics the way that people learn and live mathematics, which is actually quite different from what a stark formal proof or written paper might suggest. – Greg Graviton Sep 28 2010 at 19:45
I'm very late to this party, but I think it is important to point out that Thurston had complete and rigorous proofs of all the results he claimed. He discussed them with many people, and whenever pressed was able to produce as many details as people needed. He just chose not to write papers containing all the details of his proofs. The paper that Greg Graviton refers to contains his justification for this decision. – Andy Putman Apr 10 at 14:32
@Igor Pak: I don't think it is accurate to assert that "Euler's techniques by now are too well known and "standard" to be of interest..." Most of Euler's results are of course well understood, but his techniques, such as proof of the infinite product decomposition for the sine function, were properly interpreted only recently, through the work of Luxemburg, Kanovei, and others. The results are old, but the techniques are only beginning to be understood properly. – katz Apr 10 at 15:01
Even when he was wrong, Euler was frequently right.
I guess we can say the same about Ramanujan too. :) – Koundinya Vajjha May 13 2011 at 13:06
My own belief is that contemporary discounting of the validity of Euler's arguments is misplaced. Euler wrote arguments to the standards of his day. If he were writing now, he would write arguments to the standards of our day. As far as I know, and given his brilliance and insight, there's no reason to think that he wouldn't have been able to fill in any analytic details required to justify his arguments (which in the case under discussion are fairly straightforward).
Just to stress a few points already addressed in comments and answers:
Euler in his time discovered many important facts and solutions to classical questions, advanced rigor and gave examples of the power of the recently created methods (infinitesimal calculus), popularized the science of his day (notably in books dedicated to a German princess), wrote some of the first textbooks in analysis (still pleasant reading today), gave strength to the Prussian and Russian academies of science, was courted by two of the most powerful powers of the day (the King of Prussia and the Czar of Russia), filled international academic journals, some of which he edited himself, with quality articles (in fact up to several decades after his death, because of the sheer size of his output), fostered international cooperation, wrote in the most important languages of his day (Latin, French, German; I think he also learned Russian), published in applied science, served on state scientific advisory commissions, etc.
In fact Euler's work has been instrumental in progressively establishing the "rigor" some of us are so proud of.
So a better equivalent of his investigation of what we now call $\zeta(2n)$ and the Gamma function would be the solution of outstanding problems by one of the most recognized mathematicians of his day, building on recent work by an even more famous and established mathematician, Bernoulli, who was his PhD advisor and several of whose family members held established positions in the scientific community.
I think he would have no difficulty publishing it. And his work would be quickly read and commented upon by many other mathematicians.
Even if we imagine a Leonhard Euler finding himself straitjacketed by the mathematical discourse and style of the XXIst century, he would pair up with another good mathematician to write scholarly articles, as Ramanujan and Hardy used to do at the beginning of the XXth in a mutually beneficial partnership.
The important thing about Euler is that he saw an approximately correct path to the correct solution. These kinds of people have always been the most celebrated in mathematics. Whether or not he could write proofs to the degree of rigor we expect today is probably immaterial: he improved the human understanding of mathematics more than anyone in his generation and probably more than all but at most a handful of mathematicians in all of time. If he could continue to generate almost-correct proofs/heuristics that produce correct answers to problems, he would no doubt find himself with employment as well as plenty of coauthors eager to check his pencil marks.
Mathematical rigor is important because intuition is too frequently wrong. But I think seeing the big ideas is still considered more valuable than getting the proof completely right.
Riemann's proof of the Riemann mapping theorem was flawed because it made use of the Dirichlet principle to a greater extent than is actually possible (as shown later by Weierstrass). Even if Riemann turned out to be somewhat wrong and the theorem did not hold to the generality he believed because the intuition of the day was that the Dirichlet principle was universally sound, would we not call it the Riemann mapping theorem (and instead name it after whoever gave the first complete proof)? In a similar-but-somewhat-different light, should Thurston's geometrization conjecture-now-theorem have a different name?
I learned in my history-of-science lessons that the standards for mathematics varied over time.
In ancient Greek times (let's say, the period of Archimedes), it was preferred that a theorem have more than one proof. At least, it was not uncommon to give more than one proof.
So one can argue that current math articles would not be accepted by the well-respected Archimedes.
I don't know the details, but don't assume that the standards of rigour were monotonically increasing over time.
Lucas
That is an interesting perspective! Of course, you are aware that Archimedes himself has not "published" anything - his extant work was compiled over 700 years later using manuscripts and letters to Alexandrian scholars. – Victor Protsak May 13 2010 at 21:11
Let me answer by rephrasing your question. How will Andrew Wiles' proof of Fermat's Last Theorem be seen in the year 2260? I think it will be acknowledged then as much as it is acknowledged today that the proof was a major step in the history of mathematics. However, I seriously doubt that his proof will be considered as 'rigorous' by the standards of the year 2260. A well-respected math journal will then require a proof formally verified by a symbolic engine. (See the Notices of AMS 2008, vol. 55, issue 11).
I'm not so sure -- despite a multiple-millennium-long trend, it's not clear to me that the level of rigor required by the mathematical community will continue to be monotonically increasing. – Cam McLeman Sep 28 2010 at 20:26
In answering this question, it is helpful to make a distinction between, on the one hand, what Reeder calls the "inferential moves" that Euler makes (see related thread http://mathoverflow.net/questions/126986/eulers-mathematics-in-terms-of-modern-theories), and on the other, the mathematical objects he manipulates (infinitesimals, infinite integers, etc). This allows Reeder to observe that modern infinitesimal theories are far more successful in formalizing Euler's procedures ("inferential moves") than are $\epsilon,\delta$ techniques.
Traditional scholars like Ferraro (see thread linked above) were trained on the basis of conceptual frameworks that are inadequate to the task of making such an evaluation, and tend to receive the work of scholars like Laugwitz with hostility.
Laugwitz argued for an essential coherence of infinitesimal reasoning in both Cauchy and Euler, modulo certain "hidden lemmas" that need to be made explicit to meet a modern standard. I would adopt an optimistic position that many of Euler's greatest contributions are immediately publishable in contemporary journals, provided minimal changes are made so as to clarify the nature of the objects as well as the "hidden lemmas".
The jury is still out on whether the *mathematical* community (as opposed to that of the *historians* of mathematics) will in the end side with Reeder's analysis or with Ferraro's.
http://mathhelpforum.com/pre-calculus/4608-parabolas-print.html
parabolas
• August 1st 2006, 09:17 AM
Brooke
1. Find the equation of a parabola with vertex at (1,5) and focus (-5.5).
2. Find the equation of an ellipse with endpoints of the minor axis (10,2) and (10,8) and major axis of length 24.
• August 1st 2006, 09:39 AM
earboth
Quote:
Originally Posted by Brooke
1. Find the equation of a parabola with vertex at (1,5) and focus (-5.5).
2. Find the equation of an ellipse with endpoints of the minor axis (10,2) and (10,8) and major axis of length 24.
Hello, Brooke,
1. I assume that there is a typo and you actually mean that the focus has the coordinates (-5,5).
2. The general equation of a parabola whose axis is parallel to the x-axis is:
$(y-y_V)^2=2\cdot p \cdot (x-x_V)$ where $|p|$ is the distance between the focus and the directrix. Here the focus (-5,5) lies 6 units to the left of the vertex (1,5), so the directrix is the line x = 7, the focus-directrix distance is 12, and the parabola opens to the left, which gives p = -12. So plug in the values you know and you'll get:
$(y-5)^2=2\cdot (-12) \cdot (x-1)$
3. The axes of the ellipse are parallel to the coordinate axes. The centre of the ellipse is thus M(10,5); the semi-minor axis is b = 3 (vertical) and the semi-major axis is a = 12 (horizontal). So the endpoints of the major axis are at L(-2,5) and R(22,5), and the equation of the ellipse is
$\frac{(x-10)^2}{144}+\frac{(y-5)^2}{9}=1$
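If you want to double-check both results numerically, here is a small sketch in plain Python (just arithmetic, no libraries; the numbers are the ones derived above, assuming the intended focus is (-5,5)):

```python
# Sketch: sanity-check both conics numerically.

# Parabola: (y - 5)^2 = -24(x - 1), so 4a = -24 and a = -6.
vertex = (1, 5)
a = -24 / 4                      # signed distance from vertex to focus
focus = (vertex[0] + a, vertex[1])
directrix_x = vertex[0] - a      # directrix is the line x = 7
print("focus:", focus)           # expect (-5.0, 5)

# Check the focus-directrix property at a sample point on the parabola:
y = 11.0
x = vertex[0] + (y - 5) ** 2 / (-24)
dist_focus = ((x - focus[0]) ** 2 + (y - focus[1]) ** 2) ** 0.5
dist_directrix = abs(x - directrix_x)
print(abs(dist_focus - dist_directrix) < 1e-12)   # expect True (both are 7.5)

# Ellipse: centre (10, 5), semi-minor b = 3, semi-major a = 12:
# (x - 10)^2 / 144 + (y - 5)^2 / 9 = 1
for px, py in [(10, 2), (10, 8), (-2, 5), (22, 5)]:
    print((px - 10) ** 2 / 144 + (py - 5) ** 2 / 9)   # expect 1.0 each time
```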
Greetings
EB
http://quant.stackexchange.com/questions/6999/choice-of-epsilon-for-numerical-calculation-of-vega-in-binomial-option-pricing-m?answertab=votes
|
# Choice of epsilon for numerical calculation of vega in binomial option pricing model
I have a binomial option-pricing model (I don't think the details of how it's implemented are relevant). However, when I go to calculate vega, I am essentially running the model a second time with a new volatility input we call $\sigma + \epsilon$, where $\sigma$ is the volatility used to calculate the price. I then have two prices $p_1$ and $p_2$, the first of which is a function of $\sigma$, and the second a function of $\sigma + \epsilon$. I then calculate vega to be $\frac{p_2 - p_1}{\epsilon}$.
I'm having trouble coming up with a good value of $\epsilon$ to avoid floating-point underflows and overflows in the resulting calculation. Any suggestions on how to choose it?
Some things that I have available to me at the time of the choice of $\epsilon$:
• expiration
• strike
• volatility
• underlying price
• current time
• option price (as calculated by my model)
• delta (as calculated by my model)
• gamma (as calculated by my model)
I am not sure I follow, but shouldn't epsilon be the value that you shift your volatility by in order to calculate the impact on price? Why not shift sigma up by 1%? – Freddy Jan 16 at 2:43
While vega is usually quoted as "change per 1% change in vol", vega is the partial derivative with respect to $\sigma$. Hence, $\epsilon$ should be chosen as small as possible while avoiding numerical cancellation errors. – Christian Fries Jan 16 at 19:56
## 1 Answer
Let $\beta$ denote the relative machine precision, usually $\beta = 1E-16$. Assume that you can evaluate the value $V$ up to precision $\alpha$. The best you can get is $\alpha = \beta \cdot V$, provided $V$ neither underflows nor overflows. Then you can calculate the finite difference up to precision $4 \alpha / \epsilon$ (the 4 is a rough estimate; it comes from the fact that there may be an additional cancellation in the finite difference, of relative order $\beta$, and we use $\alpha > \beta \cdot V$).
On the other hand, from the Taylor expansion, the approximation error of the finite difference is $C \epsilon$, where $C$ is a bound on the second derivative. The best choice of $\epsilon$ is the minimizer of $\epsilon \mapsto 4 \alpha / \epsilon + C \epsilon$, which is attained at $-4\alpha/\epsilon^2 + C = 0$, i.e. $\epsilon = 2 \sqrt{\alpha/C}$. If your tree achieves machine accuracy, i.e. $\alpha = \beta V$, then you should choose $\epsilon = 2 \sqrt{\frac{V}{V''}} \cdot \sqrt{\beta} = 2 \sqrt{\frac{V}{V''}} \cdot 1E-8$. In the last equation $V$ denotes the order of magnitude of the value and $V''$ the order of magnitude of the second derivative. Clearly, if the second derivative is small, you can use larger shifts.
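To make the recipe concrete, here is a minimal Python sketch (the names are hypothetical: `price` stands in for whatever callable evaluates your binomial tree at a given volatility, and the second derivative is itself estimated by a coarse central difference, as discussed in the comments below):

```python
import math

MACHINE_EPS = 1e-16   # beta: relative machine precision for doubles

def vega_adaptive_eps(price, sigma, coarse_eps=1e-2):
    """Forward-difference vega with step size eps = 2*sqrt(alpha/C).

    `price` is any callable sigma -> option value (e.g. a binomial tree).
    alpha ~ beta*V is the absolute pricing noise; C ~ |V''| bounds the second
    derivative and is estimated here with a coarse central difference.
    """
    v0 = price(sigma)
    v_up, v_dn = price(sigma + coarse_eps), price(sigma - coarse_eps)
    second = abs(v_up - 2.0 * v0 + v_dn) / coarse_eps ** 2   # rough |V''|
    alpha = MACHINE_EPS * max(abs(v0), 1.0)
    # Fall back to a sqrt(beta)-sized step if the second derivative is ~0.
    eps = 2.0 * math.sqrt(alpha / second) if second > 0 else math.sqrt(MACHINE_EPS)
    return (price(sigma + eps) - price(sigma)) / eps

# Quick exercise with Black-Scholes standing in for the tree:
def bs_call(sigma, s=100.0, k=100.0, t=1.0, r=0.0):
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * n(d1) - k * math.exp(-r * t) * n(d2)

print(vega_adaptive_eps(bs_call, 0.2))   # roughly 39.7 for these inputs
```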
In V'' you're referring to the second derivative of price with respect to sigma, correct? I don't have that available. – laslowh Jan 16 at 20:41
You just need a rough estimate for $V''$, for example via a finite difference too: $(V(x+\epsilon) - 2 V(x) + V(x-\epsilon)) / \epsilon^2$. – Christian Fries Jan 16 at 22:01
http://math.stackexchange.com/questions/298331/why-does-zeta-have-infinitely-many-zeros-in-the-critical-strip
# Why does zeta have infinitely many zeros in the critical strip?
I want a simple proof that $\zeta$ has infinitely many zeros in the critical strip.
The function $$\xi(s) = \frac{1}{2} s (s-1) \pi^{-\tfrac{s}{2}} \Gamma(\tfrac{s}{2})\zeta(s)$$ has exactly the non-trivial zeros of $\zeta$ as its zeros ($\Gamma$ cancels all the trivial ones out). It also satisfies the functional equation $\xi(s) = \xi(1-s)$.
If we assume it has finitely many zeros, what analysis could get a contradiction?
I found an outline for a way to do it here but I can't do the details myself: http://mathoverflow.net/questions/13647/why-does-the-riemann-zeta-function-have-non-trivial-zeros/13762#13762
The first answer of that MO post is much shorter compared to the answer you are quoting. Do you somehow find the first answer unsatisfactory? – Sanchez Feb 8 at 21:34
Questions on the RH should be barred from SE imo. What real purpose does this question serve? – Jp McCarthy Feb 9 at 3:24
@JpMcCarthy, there are (two) crucial logical differences between infinitely many zeroes in the critical strip (the subject of OPs question) and every nontrivial zero is on the critical line (the Riemann hypothesis). Are you and those who upvoted your comment aware of this? While this question is on the topic of the zeta zeroes, which are also the subject of RH, this question (in contrast) is by design amenable to objective, definitive, enlightening and accessible answers by relevant experts in the here-and-now, and therefore constitutes a legitimate query for this site. – anon Feb 9 at 16:40
@JpMcCarthy: I don't understand your objection at all. First, the OP is not asking about RH, but about a very much simpler and more basic property of the Riemann zeta function (one which was well known to Riemann in the 19th century). Second, you ask: "What real purpose does this question serve?" Well, the OP wants a simple proof that the zeta function has infinitely many nontrivial zeros. Answering the question will fulfill this desire of the OP and may be useful to others. In other words, it serves the same purpose as most other questions asked on this site. – Pete L. Clark Feb 9 at 20:34
While I do understand the difference you allude to anon, I must admit that I have misjudged the situation. I withdraw my comment. – Jp McCarthy Feb 12 at 9:59
## 2 Answers
Hardy proved in 1914 that infinitely many zeros lie on the critical line ('Sur les zéros de la fonction $\zeta(s)$ de Riemann', Comptes rendus hebdomadaires des séances de l'Académie des sciences, 1914).
Of course other zeros could exist elsewhere in the critical strip.
Let's exhibit the main idea starting with the Xi function defined by : $$\Xi(t):=\xi\left(\frac 12+it\right)=-\frac 12\left(t^2+\frac 14\right)\,\pi^{-\frac 14-\frac{it}2}\,\Gamma\left(\frac 14+\frac{it}2\right)\,\zeta\left(\frac 12+it\right)$$ $\Xi(t)$ is an even integral function of $t$, real for real $t$ because of the functional equation (applied to $s=\frac 12+it$) : $$\xi(s)=\frac 12s(s-1)\pi^{-\frac s2}\,\Gamma\left(\frac s2\right)\,\zeta(s)=\frac 12s(s-1)\pi^{\frac {s-1}2}\,\Gamma\left(\frac {1-s}2\right)\,\zeta(1-s)=\xi(1-s)$$ We observe that a zero of $\zeta$ on the critical line will give a real zero of $\,\Xi(t)$.
Now it can be proved (Ramanujan's $(2.16.2)$) that : $$\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}\cos(x t)\,dt=\frac{\pi}2\left(e^{\frac x2}-2e^{-\frac x2}\psi\left(e^{-2x}\right)\right)$$ where $\,\displaystyle \psi(s):=\sum_{n=1}^\infty e^{-n^2\pi s}\$ is the theta function used by Riemann
Setting $x:=-i\alpha$ and differentiating $2n$ times with respect to $\alpha$, we get (see Titchmarsh $10.2$): $$\lim_{\alpha\to\frac{\pi}4}\,\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}t^{2n}\cosh(\alpha t)\,dt=\frac{(-1)^n\,\pi\,\cos\bigl(\frac{\pi}8\bigr)}{4^n}$$ Let's suppose that $\Xi(t)$ doesn't change sign for $t\ge T$; then the integral is uniformly convergent with respect to $\alpha$ for $0\le\alpha\le\frac{\pi}4$, so that, for every $n$, we have (in the limit): $$\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}t^{2n}\cosh\left(\frac {\pi t}4\right)\,dt=\frac{(-1)^n\,\pi\,\cos\bigl(\frac{\pi}8\bigr)}{4^n}$$
But this is not possible since, under our hypothesis, the left-hand side has the same sign for all sufficiently large values of $n$ (cf. Titchmarsh), while the right-hand side has alternating signs.
This proves that $\Xi(t)$ must change sign infinitely often, and hence that $\zeta\left(\frac 12+it\right)$ has infinitely many real zeros $t$.
Probably not as simple as you hoped but a stronger result!
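If you want to see these sign changes numerically, here is a small sketch using mpmath's Riemann-Siegel $Z$ function, which is real for real $t$ and has the same real zeros as $\Xi(t)$ (it is much better scaled for computation, since $\Xi(t)$ itself decays roughly like $e^{-\pi t/4}$):

```python
# Sketch: count sign changes of the real-valued Riemann-Siegel Z function,
# Z(t) = exp(i*theta(t)) * zeta(1/2 + i*t), which shares its real zeros with Xi(t).
from mpmath import mp, siegelz

mp.dps = 25                                      # working precision (decimal digits)

t_grid = [0.5 * k for k in range(201)]           # grid on 0 <= t <= 100
signs = [siegelz(t) > 0 for t in t_grid]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print("sign changes of Z(t) on [0, 100]:", changes)   # 29, matching the 29 zeros there
```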
This is nice, but let's be clear that it's a much stronger result than what the OP asked for. – Pete L. Clark Feb 9 at 20:35
– Raymond Manzoni Feb 10 at 10:44
It is known that $$\xi(s)=\frac12\prod_{\rho\,:\,\xi(\rho)=0}\left(1-\frac s\rho\right),$$ so if $\xi$ had only finitely many zeros, it would be a polynomial of degree $n$, say. Then we conclude that $\ln \xi(s)\sim n\ln s$ as $\mathbb R \ni s\to \infty$, but it is known that $\ln \xi(s)\sim \tfrac12\, s\ln s$ (from Stirling's formula applied to the $\Gamma(s/2)$ factor).
I don't know how to prove any of those 3 claims. – user58512 Feb 8 at 21:36
You missed the $e^{s/\rho}$ factor in the product. – Sanchez Feb 8 at 21:37
@user58512, you may want to look up Weierstrass factorization theorem. – Sanchez Feb 8 at 21:38
– Hagen von Eitzen Feb 9 at 13:13
http://mathhelpforum.com/advanced-algebra/201689-find-one-two-glide-reflections-plane-lattice-l.html
# Thread:
1. ## Find one of the two glide reflections of the plane lattice L.
Hi,
This is probably more of an algebra/geometry question so I hope I've posted it in the right place. I hope I have the answer to this right, but as I'm not 100% sure I thought I'd ask for some feedback. The question is: I have the set of vectors {a=(1,-1), b=(1,2)} as a reduced basis for the plane lattice L. I need to find one of the two glide reflections of L that map the point (1,2) to (-1,1), first in the standard form t[d]q[θ] and then in the form q[g,c,θ]. I have selected, in standard form, t[(-3,0)]q[π/4], which is reflection in the line at angle π/4 followed by translation by (-3,0); in the other form this is t[(-3/2, -3/2)]q[(3/4,9/4)] π/4, which is reflection in the line at angle π/4 through the point (3/4, 9/4) followed by a translation of (-3/2, -3/2), which is in the direction of the reflection line.
Could anyone offer any advice with this question, whether my thinking is correct or am I way off (which wouldn't surprise me!).
Thanks.
Pat
2. ## Re: Find one of the two glide reflections of the plane lattice L.
ok, this is what my calculations show:
(1,2) → (2,1) → (2,1) + (-3,0) = (-1,1)
(1,2) = (3/4,9/4) + (1/4,-1/4) → (3/4,9/4) + (-1/4,1/4) = (1/2,5/2) → (1/2,5/2) + (-3/2,-3/2) = (-1,1)
so that all looks fine. of course, i would be tempted to verify they are actually the same map, like so:
$\begin{bmatrix}x\\y \end{bmatrix} \to \begin{bmatrix}y\\x \end{bmatrix}\to \begin{bmatrix}y-3\\x \end{bmatrix}$
versus:
$\begin{bmatrix}x\\y \end{bmatrix} = \begin{bmatrix}\frac{3}{4}\\ \frac{9}{4} \end{bmatrix} + \begin{bmatrix}x-\frac{3}{4}\\y - \frac{9}{4}\end{bmatrix} \to\begin{bmatrix}\frac{3}{4}\\ \frac{9}{4} \end{bmatrix} + \begin{bmatrix}y-\frac{9}{4}\\x - \frac{3}{4}\end{bmatrix}$
$=\begin{bmatrix}y-\frac{3}{2}\\x +\frac{3}{2}\end{bmatrix} \to \begin{bmatrix}y-3\\x \end{bmatrix}$
i do not know how your "standard forms" are defined (in particular which θ and c you are allowed to use), but as near as i can tell, your second form is "kosher" in that the final translation is indeed parallel to the axis of reflection.
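for a quick numerical cross-check that the two forms really are the same map, here is a small Python sketch (the two functions just transcribe the matrix computations above):

```python
# Sketch: verify the two descriptions of the glide reflection agree,
# and that both send (1, 2) to (-1, 1).

def standard_form(p):
    # reflect in the line y = x (angle pi/4 through the origin), then translate by (-3, 0)
    x, y = p
    return (y - 3, x)

def point_form(p):
    # reflect in the line at angle pi/4 through c = (3/4, 9/4), then translate by (-3/2, -3/2)
    x, y = p
    cx, cy = 0.75, 2.25
    rx, ry = cx + (y - cy), cy + (x - cx)     # swap the displacement from c
    return (rx - 1.5, ry - 1.5)

print(standard_form((1, 2)), point_form((1, 2)))      # both give (-1, 1)
assert all(standard_form(p) == point_form(p) for p in [(0, 0), (1, -1), (1, 2), (5, 7)])
```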
3. ## Re: Find one of the two glide reflections of the plane lattice L.
Hey Deveno, thanks for the reply. I think I get what you are saying. Just to clarify: when I say standard form, that to me is reflection in the line at the specified angle through the origin, followed by the translation. The other form is reflection in the line at the specified angle through a point c, and then translation parallel to the line of reflection. There are only two reflection symmetries for this lattice, in π/4 and 3π/4, so only two possible glide reflections. I think the one above is fine then, but any advice on getting the other one, just for completeness?
4. ## Re: Find one of the two glide reflections of the plane lattice L.
ok, well reflection about the line at angle 3π/4 is this mapping:
$\begin{bmatrix}x\\y \end{bmatrix} \to \begin{bmatrix}-y\\-x \end{bmatrix}$
(this is actually a linear map).
this takes (1,2) → (-2,-1), so we need to follow with a translation by (1,2) (note that (1,2) is indeed in the lattice L).
if you'll tell me how you determine "c", i'll tell you the second form.