http://cms.math.ca/10.4153/CJM-2009-007-4
# On the Littlewood Problem Modulo a Prime
http://dx.doi.org/10.4153/CJM-2009-007-4
Canad. J. Math. 61 (2009), 141–164
Published: 2009-02-01
Printed: Feb 2009
• Ben Green
• Sergei Konyagin
## Abstract
Let $p$ be a prime, and let $f \colon \mathbb{Z}/p\mathbb{Z} \rightarrow \mathbb{R}$ be a function with $\mathbb{E} f = 0$ and $\Vert \widehat{f} \Vert_1 \leq 1$. Then $\min_{x \in \mathbb{Z}/p\mathbb{Z}} |f(x)| = O((\log p)^{-1/3 + \epsilon})$. One should think of $f$ as being "approximately continuous"; our result is then an "approximate intermediate value theorem". As an immediate consequence we show that if $A \subseteq \mathbb{Z}/p\mathbb{Z}$ is a set of cardinality $\lfloor p/2\rfloor$, then $\sum_r |\widehat{1_A}(r)| \gg (\log p)^{1/3 - \epsilon}$. This gives a result on a "mod $p$" analogue of Littlewood's well-known problem concerning the smallest possible $L^1$-norm of the Fourier transform of a set of $n$ integers. Another application is to answer a question of Gowers. If $A \subseteq \mathbb{Z}/p\mathbb{Z}$ is a set of size $\lfloor p/2 \rfloor$, then there is some $x \in \mathbb{Z}/p\mathbb{Z}$ such that \[ \big| |A \cap (A + x)| - p/4 \big| = o(p).\]
MSC Classifications: 42A99 – None of the above, but in this section; 11B99 – None of the above, but in this section
http://www.physicsforums.com/showthread.php?t=415676
## Is the graviton a different particle from the Higgs boson?
I was reading this book that separated the graviton from the Higgs boson. Can I get some help? Anything works for me.
Well, to start, the graviton is spin 2, the Higgs is spin 0. Then, the graviton is massless, the Higgs (if it exists) is very massive, much more than a proton. Third, very roughly speaking, the graviton transmits the gravitational force, while the Higgs field (related to the Higgs particle) creates or generates the masses in all other massive particles. All of the above is a little to a lot oversimplified. Best, Jim Graber
To simplify the above further: yes, they are two completely different particles, playing two entirely different roles in particle physics.
So would the Higgs boson be a force carrying particle even if it were massive? What is the function of the Higgs boson? Is the graviton the force carrying particle for gravity?
Quote by filegraphy So would the Higgs boson be a force carrying particle even if it were massive? What is the function of the Higgs boson? Is the graviton the force carrying particle for gravity?
The Higgs boson would mediate a force of sorts, but no one gives it a name. It's just part of the "Electroweak Force". Its force is similar to that of the "Z boson" (with some technical differences).
And yes, the "graviton" is the particle that mediates the gravitational force.
The Higgs field is "decomposed" into two parts: (a) a constant, non-fluctuating classical vacuum expectation value, responsible for the masses of the W, the Z, and the fermions; and (b) the fluctuations around this vacuum expectation value, which appear as Higgs bosons carrying a kind of "force" (as blechman said).
They are certainly two separate and different theoretical particles; neither has been confirmed experimentally. There is hope that the Large Hadron Collider may confirm the existence of the Higgs boson. The Higgs boson is, I think, the only standard model particle that has not been detected experimentally; gravity is not part of the standard model.
I would add that the Higgs Boson is not required, just the Higgs Mechanism, and as Naty1 pointed out, there is no evidence of it, and certainly no evidence of gravitons. The Higgs would be nice to find however...
Some further remarks: As nismaratwork said, the particle itself is not so relevant; the mechanism of spontaneous breaking of the electroweak gauge symmetry is what matters. Unfortunately there are no fully viable Higgs-less alternatives, but the community is working on them, just to have a fallback strategy if the LHC disproves the existence of the Higgs. One must distinguish gravitons from gravitational waves. The latter are required by GR, and there are indirect results indicating their existence. Gravitons are theoretical artefacts based on analogies between quantizing the electromagnetic field (which gives us the photon) and quantizing the gravitational field. It could work that way, but it is also possible that quantizing gravity is totally different from quantizing other fields; therefore the graviton is not necessarily required by nature.
So if the Higgs boson were massive, following relativity, it would be a force-carrying particle traveling at the speed of light, causing it to have infinite mass. I thought a massive object cannot travel at the speed of light or else it would have infinite mass. Something is wrong with this picture.
Quote by filegraphy So if the Higgs boson were massive, following relativity, it would be a force-carrying particle traveling at the speed of light, causing it to have infinite mass. I thought a massive object cannot travel at the speed of light or else it would have infinite mass. Something is wrong with this picture.
Force carrying particles do not necessarily travel at the speed of light. Just like any other type of particle, that's only if they're massless.
Quote by filegraphy So if the Higgs boson were massive, following relativity, it would be a force-carrying particle traveling at the speed of light, causing it to have infinite mass. I thought a massive object cannot travel at the speed of light or else it would have infinite mass. Something is wrong with this picture.
Particles traveling at the speed of light do not have infinite mass!! You're thinking of the old and antiquated idea of "relativistic mass", which we no longer use. Whenever we refer to "mass" we always mean "rest mass", that is, the energy of the particle when it's at rest:
$$E^2=p^2c^2 + m^2c^4$$
So mass is the amount of energy the particle has when p=0.
You can prove that when the mass vanishes (in the sense I am saying here), then the particle is traveling at the speed of light (think photon!). When the mass does NOT vanish, you can prove that it would take an infinite amount of energy to get the particle's velocity up to the speed of light. That's where the "infinite energy" comes in.
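To spell that out (a sketch, not from the thread itself, using the standard relations $E = \gamma mc^2$ and $p = \gamma mv$ for massive particles, and $E = pc$ for massless ones):

$$\frac{v}{c}=\frac{pc}{E}, \qquad \begin{cases} m = 0: & E = pc \;\Rightarrow\; v = c \text{ always (the photon)}, \\ m > 0: & E = \dfrac{mc^2}{\sqrt{1-v^2/c^2}} \to \infty \text{ as } v \to c. \end{cases}$$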
Quote by the_house Force carrying particles do not necessarily travel at the speed of light. Just like any other type of particle, that's only if they're massless.
Also, what he said!!
So the Higgs boson is a massive force-carrying particle. It travels at less than the speed of light. What force or energy does it carry?
As I said above, it carries a force similar to the force carried by the Z boson. It doesn't have a name; it's just lumped into the "Electroweak force".
The Higgs can turn fermion species into each other; or it can mediate fermion-antifermion annihilation into gauge bosons.
So how are we able to combine the photon (electromagnetic force) with the W and Z bosons (weak nuclear force)? Would this create the Higgs boson?
http://physics.aps.org/articles/print/v4/67
# Viewpoint: Majorana fermions inch closer to reality
University of Illinois at Urbana-Champaign, 1110 W. Green St., Urbana, IL 61801, USA
Published August 22, 2011 | Physics 4, 67 (2011) | DOI: 10.1103/Physics.4.67
Researchers predict that Majorana fermions can be found in magnetic field vortices in the bulk of the superconductor $\mathrm{Cu}_x\mathrm{Bi}_2\mathrm{Se}_3$.
The search for Majorana fermions is quickly becoming an obsession in the condensed-matter community. To understand the intense interest, I will begin with a practical definition: a Majorana fermion is a fermion that is its own antiparticle. While sophisticated particle physics experiments are testing for Majorana character in neutrinos propagating in three dimensions [1], solid state physicists are more interested in lower dimensional counterparts. The most interesting Majorana fermions that are predicted to appear in materials are zero-dimensional bound states confined to live on various types of topological defects [2]. In a paper published in Physical Review Letters, Pavan Hosur and collaborators from the University of California, Berkeley, predict that these bound states are found in the vortices of the superconductor $\mathrm{Cu}_x\mathrm{Bi}_2\mathrm{Se}_3$ [3] (Fig. 1). Once discovered, a set of zero-dimensional Majorana bound states (MBS) are predicted to exhibit exotic non-Abelian statistics when exchanged among each other. While of great fundamental interest, perhaps the biggest driving factor in the search is a well-regarded proposal for (topological) quantum computation, which uses this unique statistical property of the MBS to robustly process quantum information free from local sources of decoherence [4, 5].
With the high-energy and condensed-matter communities exerting so much effort to find Majorana fermions, it is a bit surprising they have not yet been discovered. Why are Majorana fermions naturally elusive, at least in a condensed-matter setting? To determine if an electron, for example, is its own antiparticle we can perform a simple test: shoot two identical electrons at each other and look at the outcome. If there is a finite probability that the electrons annihilate into the fermionic vacuum, then they could be Majorana fermions. However, we know, for example, that electric charge is conserved and thus the two electrons can never annihilate, and are thus not Majorana. Electrons, in fact, have an independent antiparticle, the positron, which has different quantum numbers (e.g., opposite electric charge).
Naively, this eliminates all fermions at play in conventional electronic systems from being Majorana. The key to getting around this obstacle is noting that one finds many different emergent fermionic vacua/ground states in electronic systems that are qualitatively different from the fundamental vacuum of spacetime. To illustrate this, consider a BCS superconductor ground state filled with a condensate of paired electrons. If we again scatter two electrons off each other, they can indeed bind into a Cooper pair and “annihilate” into the fermionic vacuum! However, if the vacuum is of $s$-wave character, the most common superconducting ground state, then the two electrons bound into the Cooper pair must have opposite spin and are thus not Majorana (the antiparticle of an electron with spin up, in this case, is one with spin down). The solution to this problem is manifest: we must find a way to get around the spin-quantum number. Currently, there are two primary mechanisms to do this: (i) the superconducting vacuum can have spin-triplet pairing, which pairs electrons with the same spin or (ii) the superconductivity can exist in the presence of spin-orbit coupling or some other mechanism which will remove the spin conservation. Solution (i) is the paradigm for the first proposals of the existence of MBS as quasiparticles of a fractional quantum Hall state which models a two-dimensional electron gas at filling $\nu=5/2$ [6], and as vortex excitations in some theories of the unconventional superconducting state of $\mathrm{Sr}_2\mathrm{RuO}_4$ [7]. These proposals offer real material candidates for finding MBS, but experiments in both of these systems require utmost care in sample production and measurement precision. To date, MBS excitations have not been clearly distinguished in either of these systems. Recently, solution (ii), which was first implemented by Fu and Kane in topological insulator/superconductor heterostructures [8], has been garnering attention due to more inherent practicality. This has been followed up nicely with further predictions of MBS in low-dimensional spin-orbit-coupled heterostructures in proximity to $s$-wave superconductors [9].
The seminal proposal of Fu and Kane predicts that if the surface of a three-dimensional topological insulator is proximity-coupled to an $s$-wave superconductor, then vortex lines in the superconductor will trap MBS where the lines intersect the topological insulator surface [8]. This proposal requires two main ingredients: (i) a topological insulator and (ii) an $s$-wave superconductor that can effectively proximity-couple to the surface of the topological insulator. Despite all of the recent publicity about the discovery of three-dimensional topological insulators [10], finding a suitable topological insulator for these experiments is still a difficult task. The reason is that, as of yet, there are no topological insulator materials that are completely insulating in the bulk, despite intense experimental programs dedicated to this task. The most commonly studied topological insulators are variations of either $\mathrm{Bi}_2\mathrm{Se}_3$ or $\mathrm{Bi}_2\mathrm{Te}_3$, in which it has been difficult to tune the bulk to a completely insulating state [11]. Thus, while many experiments have confirmed the robust nature and structure of the surface states, these materials, having bulk carriers, are not true topological insulators.
It is then natural to ask, What is a doped topological insulator good for? While one hopes that many of the topological phenomena of the true insulating state might be manifested in some form in a doped system, many questions still remain unanswered. However, Hosur et al. have made a striking prediction that MBS can still be realized in doped topological insulators under certain mild conditions [3]. A true insulating state is important in the Fu-Kane proposal because if the bulk contains low-energy states then the MBS can tunnel away from the surface and delocalize into the bulk, which effectively destroys the MBS. Hosur et al. circumvent this delocalization by requiring that the entire doped topological insulator become superconducting. They show that as long as the doping is not too large, vortices in superconducting topological insulators will bind MBS at the places where the vortex lines intersect the material surfaces. While this might seem like a big leap in complexity, experimental evidence already shows that, indeed, copper-doped $\mathrm{Bi}_2\mathrm{Se}_3$ is a superconductor below $3.8\,\mathrm{K}$ [12]. In this context, Hosur et al. make a strong prediction that vortex lines in superconducting $\mathrm{Cu}_x\mathrm{Bi}_2\mathrm{Se}_3$ can harbor MBS.
To understand the prediction, we begin with the Fu-Kane proximity effect scenario, as mentioned above, with a vortex line stretched between two surfaces. MBS are trapped where each end of the vortex line meets the topological insulator surface (see Fig. 1). If we tune the bulk chemical potential to lie in the conduction band, as opposed to the nominal insulating gap, then the MBS on each end of the vortex line could tunnel through the bulk and hybridize with the state on the opposite end. This is prevented in Hosur et al.’s work by inducing a superconducting gap in the entire bulk so that the MBS remain trapped. If the superconducting state were homogeneous, then the MBS would be trapped on the ends of the vortex line for any doping level. However, the superconducting order parameter varies rapidly near the vortex core, which is essentially a thin tube of normal metal (doped topological insulator) containing bound states with energies that lie below the nominal superconducting gap. It is easiest for the MBS to tunnel through the “mini-gap” region in the vortex core, and in fact, Hosur et al. go on to show that there is a critical chemical potential level where a vortex-core bound state becomes gapless and the MBS can easily tunnel through the vortex line to annihilate. Beyond this critical doping, the vortex line re-enters a gapped phase, but the MBS are absent. See Fig. 1 for an illustration of this process. The critical chemical potential can be calculated solely from low-energy information about the Fermi surface, and depends on the orientation of the vortex line with respect to the crystal structure. It is estimated that vortex lines oriented along the $c$ axis of $\mathrm{Cu}_x\mathrm{Bi}_2\mathrm{Se}_3$ are just on the trivial side of the transition, while vortices perpendicular to the $c$ axis should be well within the nontrivial regime and should trap MBS.
If $\mathrm{Cu}_x\mathrm{Bi}_2\mathrm{Se}_3$ is indeed in the predicted experimental regime, then this development is an exciting breakthrough, since it offers a simple way to generate MBS in a system that already exists. These effects show that, up to a certain point, a doped topological insulator remembers something about its topological nature, and furthermore that we are tantalizingly close to the first observation of MBS.
### References
1. F. T. Avignone, III, S. R. Elliott, and J. Engel, Rev. Mod. Phys. 80, 481 (2008).
2. D. A. Ivanov, Phys. Rev. Lett. 86, 268 (2001).
3. P. Hosur, P. Ghaemi, R. S. K. Mong, and A. Vishwanath, Phys. Rev. Lett. 107, 097001 (2011).
4. A. Yu. Kitaev, Ann. Phys. (N.Y.) 303, 2 (2003).
5. C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).
6. G. Moore and N. Read, Nucl. Phys. B360, 362 (1991).
7. N. Read and D. Green, Phys. Rev. B 61, 10267 (2000).
8. L. Fu and C. L. Kane, Phys. Rev. Lett. 100, 096407 (2008).
9. J. D. Sau, R. M. Lutchyn, S. Tewari, and S. Das Sarma, Phys. Rev. Lett. 104, 040502 (2010).
10. X.-L. Qi and S.-C. Zhang, Phys. Today 63, No. 1, 33 (2010).
11. M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
12. Y. S. Hor, A. J. Williams, J. G. Checkelsky, P. Roushan, J. Seo, Q. Xu, H. W. Zandbergen, A. Yazdani, N. P. Ong, and R. J. Cava, Phys. Rev. Lett. 104, 057001 (2010).
### Highlighted article
#### Majorana Modes at the Ends of Superconductor Vortices in Doped Topological Insulators
Pavan Hosur, Pouyan Ghaemi, Roger S. K. Mong, and Ashvin Vishwanath
Published August 22, 2011 | PDF (free)
http://math.stackexchange.com/questions/94567/percentage-ratio-of-areas?answertab=oldest
# Percentage ratio of areas
I don't understand a question from a Math exercise. I did many things asked before, but I found this part very difficult to understand:
What is the percentage ratio between the combined area of six cylinder bottoms and the area of box bottom in case A?
I just calculated the area of the box (which is a rectangle): 66*396 = 26136. And about the cylinders, we know that h=187 mm and r=33 mm...
Maybe it is a problem of my bad English, but I can't answer the above question. Could you bring me some light?
Thank you very much in advance!
## 1 Answer
The cylinders seem to have a circular cross section of radius 33 mm, so the area of the bottom of one is $\pi (33\ \mathrm{mm})^2$. Six will just fit in a row in a rectangle $66\ \mathrm{mm} \times 396\ \mathrm{mm}$, whose area you have calculated. So it is $\frac{6 \pi 33^2}{66\times396}$ which should be the same as the ratio between a circle and the circumscribed square (why?).
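A quick numeric check of this ratio (a Python sketch; the dimensions are the ones given in the question):

```python
import math

r, width, length = 33.0, 66.0, 396.0   # mm, from the question
circles = 6 * math.pi * r ** 2         # combined area of the six cylinder bottoms
box = width * length                   # area of the box bottom
print(circles / box)                   # 0.785398... i.e. about 78.5 %
print(math.pi / 4)                     # circle : circumscribed square, the same number
```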
@Rahul Narain: Thanks. – Ross Millikan Dec 28 '11 at 4:32
http://mathoverflow.net/questions/97035?sort=oldest
## Derived functors of symmetric powers
What do the derived functors of the symmetric powers look like? I understand that this is related to the homology of the symmetric groups, but I don't know a reference for that.
Namely, I'm interested in the homotopy groups of the free simplicial commutative ring on a simplicial set. Let $X_\bullet$ be a simplicial set; I'd like to know the homotopy groups of $\mathbb{Z}[X_\bullet]$. This is the symmetric algebra on the free simplicial abelian group $\mathbb{Z}X_\bullet$, which is weakly equivalent to a product of Eilenberg-MacLane simplicial abelian groups corresponding to the homology of $X_\bullet$ (and is cofibrant). In particular, we have a weak equivalence of simplicial commutative rings $$\mathbb{Z}[X_\bullet] \simeq \bigotimes \mathbb{L} \mathrm{Sym}^\bullet K( H_n(X_\bullet, n)),$$ which brings up the question of what $\mathbb{L} \mathrm{Sym}^\bullet$ looks like. Tyler Lawson points out in answering this question that the answer is somewhat complicated and describes it in low degrees.
Is a complete answer known?
I think some computations of higher homotopy groups of spheres can be rephrased as the derived functors of symmetric powers: arxiv.org/abs/1103.4580v1 – John Wiltshire-Gordon May 15 2012 at 19:15
I believe that "I'm interested in the homotopy groups of the free simplicial commutative ring on Let $X_\bullet$ be a simplicial set" should probably be "I'm interested in the homotopy groups of the free simplicial commutative ring on A SIMPLICIAL SET. Let $X_\bullet$ be a simplicial set". Is this correct? – Theo Johnson-Freyd May 15 2012 at 23:03
Thanks for the correction and the reference to the paper. – Akhil Mathew May 16 2012 at 0:12
The symmetric groups have complicated cohomology taken individually, but taken all together the homology possesses extra structure (Dyer-Lashof operations) that makes it simple to describe. A nice reference, if I remember correctly, is Bisson and Joyal's "Q-rings and the homology of the symmetric groups." (preprint here: hopf.math.purdue.edu/Bisson-Joyal/Luminy.pdf) – Tyler Lawson May 16 2012 at 5:23
arxiv.org/abs/0911.0638 may be of interest – mt May 16 2012 at 7:23
## 1 Answer
The homology of all of the symmetric groups together is well understood, as Tyler says. Taking mod $p$ coefficients, that is the special case when $X = S^0$ of the calculation of $H_*(CX)$ as a functor of $H_*(X)$, where $C$ is the monad on based spaces associated to any $E_{\infty}$ operad of spaces. The calculation in this form is given as Theorem 4.1, page 40, of [Cohen, Lada, May. The homology of iterated loop spaces, SLN Vol 533. 1976] which is available on my web page. The functor is not all that complicated, but you do have to understand the Dyer-Lashof operations, which are very much like Steenrod operations and can be seen with those as special cases of a general construction of Steenrod operations [A general algebraic approach to Steenrod operations. In SLN Vol. 168. 1970] also on my web page. The paper of Bisson and Joyal cited by Tyler gives a reformulation of this functor in the case $p=2$. If you want the integral homology, that is a mess to write down in closed form, but the mod $p$ Bockstein spectral sequence of $CX$ is entirely determined by that of $X$, as explained in Theorem 4.13 op cit above, so that integral information is also available. It is worth emphasizing that viewing the homology of symmetric groups as a special case of $H_*(CX)$ substantially simplifies both the calculation and understanding the answer.
Very interesting; thanks. – Akhil Mathew May 20 2012 at 3:24
http://en.wikipedia.org/wiki/Nonlinear_resonance
Nonlinear resonance
In physics, nonlinear resonance is the occurrence of resonance in a nonlinear system. In nonlinear resonance the system behaviour – resonance frequencies and modes – depends on the amplitude of the oscillations, while for linear systems this is independent of amplitude.
Description
Generically, two types of resonances have to be distinguished – linear and nonlinear. From the physical point of view, they are defined by whether or not the frequency of the external force coincides with an eigenfrequency of the system (linear and nonlinear resonance, respectively). The frequency condition of nonlinear resonance reads
$\omega_n=\omega_{1}+ \omega_{2}+ \cdots + \omega_{n-1},$
with possibly different $\omega_i=\omega(\mathbf{k}_i)$ being eigenfrequencies of the linear part of some nonlinear partial differential equation. Here $\mathbf{k}_i$ is a vector with integer coordinates, the subscript $i$ indexing the Fourier harmonics – or eigenmodes – see Fourier series. Accordingly, the frequency resonance condition is equivalent to a Diophantine equation in many unknowns. The problem of finding its solutions is equivalent to Hilbert's tenth problem, which is proven to be algorithmically unsolvable.
Main notions and results of the theory of nonlinear resonances are:[1]
1. The use of the special form of the dispersion functions $\omega=\omega(\mathbf{k})$ appearing in various physical applications makes it possible to find the solutions of the frequency resonance condition.
2. The set of resonances for a given dispersion function and form of resonance conditions is partitioned into non-intersecting resonance clusters; the dynamics of each cluster can be studied independently (at the appropriate time scale).
3. Each resonance cluster can be represented by its NR-diagram, which is a planar graph of a special structure. This representation allows one to reconstruct uniquely (a) the dynamical system describing the time-dependent behavior of the cluster, and (b) the set of its polynomial conservation laws, which are generalizations of the Manley–Rowe constants of motion for the simplest clusters (triads and quartets).
4. Dynamical systems describing some types of the clusters can be solved analytically.
5. These theoretical results can be used directly for describing real-life physical phenomena (e.g. intraseasonal oscillations in the Earth's atmosphere) or various wave turbulent regimes in the theory of wave turbulence.
Nonlinear resonance shift
Foldover effect
Nonlinear effects may significantly modify the shape of the resonance curves of harmonic oscillators. First of all, the resonance frequency $\omega$ is shifted from its "natural" value $\omega_0$ according to the formula
$\omega=\omega_0+\kappa A^2,$
where $A$ is the oscillation amplitude and $\kappa$ is a constant defined by the anharmonic coefficients. Second, the shape of the resonance curve is distorted (foldover effect). When the amplitude of the (sinusoidal) external force $F$ reaches a critical value $F_\mathrm{crit}$, instabilities appear. The critical value is given by the formula
$F_\mathrm{crit}=\frac{4 m^2\omega_0^2\gamma^3}{3\sqrt{3}\kappa},$
where $m$ is the oscillator mass and $\gamma$ is the damping coefficient. Furthermore, new resonances appear in which oscillations of frequency close to $\omega_0$ are excited by an external force with frequency quite different from $\omega_0.$
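As an illustration, here is a minimal numeric sketch of the two formulas above; the parameter values for $m$, $\omega_0$, $\gamma$ and $\kappa$ are invented for the example and do not correspond to any particular oscillator:

```python
import numpy as np

# Made-up oscillator parameters (illustrative only)
m, omega0, gamma, kappa = 1.0, 2.0 * np.pi, 0.05, 0.8

# Amplitude-dependent resonance frequency: omega = omega0 + kappa * A**2
for A in (0.0, 0.5, 1.0):
    print(f"A = {A:.1f}  ->  omega = {omega0 + kappa * A**2:.4f}")

# Critical forcing amplitude at which the foldover instability appears
F_crit = 4 * m**2 * omega0**2 * gamma**3 / (3 * np.sqrt(3) * kappa)
print(f"F_crit = {F_crit:.4e}")
```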
Notes and references
Notes
1. Kartashova, E. (2010), Nonlinear Resonance Analysis, Cambridge University Press, ISBN 978-0-521-76360-8
http://stats.stackexchange.com/questions/30035/is-it-acceptable-to-run-two-linear-models-on-the-same-data-set/30124
Is it acceptable to run two linear models on the same data set?
For a linear regression with multiple groups (natural groups defined a priori), is it acceptable to run two different models on the same data set to answer the following two questions?
1. Does each group have a non-zero slope and non-zero intercept and what are the parameters for each within group regression?
2. Is there, regardless of group membership, a non-zero trend and non-zero intercept and what are the parameters for this across groups regression?
In R, the first model would be `lm(y ~ group + x:group - 1)`, so that the estimated coefficients could be directly interpreted as the intercept and slope for each group. The second model would be `lm(y ~ x + 1)`.
The alternative would be `lm(y ~ x + group + x:group + 1)`, which results in a complicated summary table of coefficients, with within group slopes and intercepts having to be calculated from the differences in slopes and intercepts from some reference. Also you have to reorder the groups and run the model a second time anyway in order to get a p-value for the last group difference (sometimes).
Does using two separate models negatively affect inference in any way, or is this standard practice?
To put this into context, consider x to be a drug dosage and the groups to be different races. It may be interesting to know the dose-response relationship for a particular race for a doctor, or which races the drug works for at all, but it may also be interesting sometimes to know the dose-response relationship for the entire (human) population regardless of race for a public health official. This is just an example of how one might be interested in both within group and across group regressions separately. Whether a dose-response relationship should be linear isn't important.
Are you sure that you want to use linear regressions? Dose-response relationships are almost never linear over a substantial dose range. – Michael Lew Jun 8 '12 at 5:24
@Michael, sorry, that was a bad choice of example, I guess. I am wondering about this in general. The particulars of dose-response relationships shouldn't get in the way. I edited the question to note this. – Jdub Jun 8 '12 at 12:04
Have you considered a random intercept, random slope model? – Max Jun 8 '12 at 15:37
1 Answer
Let me start by saying that I think your first question and first R model are incompatible with each other. In R, when we write a formula with either `-1` or `+0`, we are suppressing the intercept. Thus, `lm(y ~ group + x:group - 1)` prevents you from being able to tell if the intercepts significantly differ from 0. In the same vein, in your following two models, the `+1` is superfluous; the intercept is automatically estimated in R.

I would advise you to use reference cell coding (also called 'dummy coding') to represent your groups. That is, with $g$ groups, create $g-1$ new variables, pick one group as the default and assign 0's to the units of that group in each of the new variables. Then each new variable is used to represent membership in one of the other groups; units that fall within a given group are indicated with a 1 in the corresponding variable and 0's elsewhere. When your coefficients are returned, if the intercept is 'significant', then your default group has a non-zero intercept. Unfortunately, the standard significance tests for the other groups will not tell you if they differ from 0, but rather if they differ from the default group. To determine if they differ from 0, add their coefficients to the intercept and divide the sum by their standard errors to get their t-values. The situation with the slopes will be similar: that is, the test of $X$ will tell you if the default group's slope differs significantly from 0, and the interaction terms tell you if those groups' slopes differ from the default group's. Tests for the slopes of the other groups against 0 can be constructed just as for the intercepts.

Even better would be to just fit a 'restricted' model without any of the group indicator variables or the interaction terms, and test this model against the full model with `anova()`, which will tell you if your groups differ meaningfully at all.
These things having been said, your main question is whether doing all of this is acceptable. The underlying issue here is the problem of multiple comparisons. This is a long-standing and thorny issue, with many opinions. (You can find more information on this topic on CV by perusing the questions tagged with this keyword.) While opinions have certainly varied on this topic, I think no one would fault you for running many analyses over the same dataset provided the analyses were orthogonal. Generally, orthogonal contrasts are thought about in the context of figuring out how to compare a set of $g$ groups to each other, however, that is not the case here; your question is unusual (and, I think, interesting). So far as I can see, if you simply wanted to partition your dataset into $g$ separate subsets and run a simple regression model on each that should be OK. The more interesting question is whether the 'collapsed' analysis can be considered orthogonal to the set of individual analyses; I don't think so, because you should be able to recreate the collapsed analysis with a linear combination of the group analyses.
A slightly different question is whether doing this is really meaningful. Imagine that you run an initial analysis and discover that the groups differ from each other in a substantively meaningful way; what sense does it make to put these divergent groups together into a discombobulated whole? For example, imagine that the groups differ (somehow) on their intercepts; then at least some group does not have a 0 intercept. If there is only one such group, then the intercept for the whole will only be 0 if that group has $n_g=0$ in the relevant population. Alternatively, let's say that there are exactly 2 groups with non-zero intercepts, one positive and one negative; then the whole will have a 0 intercept only if the $n$'s of these groups are in inverse proportion to the magnitudes of the intercepts' divergences. I could go on here (there are many more possibilities), but the point is you are asking questions about how the group sizes relate to the differences in parameter values. Frankly, these are weird questions to me.
I would suggest you follow the protocol I outline above. Namely, dummy code your groups. Then fit a full model with all the dummies and interaction terms included. Fit a reduced model without these terms, and perform a nested model test. If the groups do differ somehow, follow up with (hopefully) a-priori (theoretically driven) orthogonal contrasts to better understand how the groups differ. (And plot--always, always plot.)
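For concreteness, here is what that protocol could look like in Python with statsmodels (a sketch on simulated stand-in data; in R the same comparison would use `lm()` and `anova()` as described above):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data: three a-priori groups and one predictor x
rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], n // 3),
    "x": rng.normal(size=n),
})
df["y"] = 1.0 + 0.5 * df["x"] + rng.normal(scale=0.5, size=n)

# Full model: reference-cell (dummy) coding plus interaction terms,
# i.e. a separate intercept and slope for each group
full = smf.ols("y ~ x * C(group)", data=df).fit()

# Restricted model: the 'collapsed' regression with no group terms at all
restricted = smf.ols("y ~ x", data=df).fit()

# Nested model test: do the groups differ meaningfully at all?
print(sm.stats.anova_lm(restricted, full))
```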
http://crypto.stackexchange.com/questions/3160/multi-layer-encryption-with-ecb-mode?answertab=active
# Multi layer encryption with ECB mode [closed]
If I use the same key twice with the same algorithm when encrypting in ECB, then, like when I have two blocks of the same color and I encrypt the two blocks of the same color, the ciphertext should not change,

just like when I encrypt two blocks of the same information with the same key: the ciphertext would appear as if I had only encrypted it once.

But when I use two different keys, encrypting in ECB with the same algorithm, the ciphertext should not be identical to the plaintext, because it was double-encrypted with two different keys.
So are you proposing using a new key for every block? It is hard to tell what your actual question is. In your last paragraph, you seem to be saying that the plaintext will be identical to the ciphertext. Is that correct? – mikeazo♦ Jul 6 '12 at 23:32
I encrypt with one key and use another key to encrypt the already-encrypted information under the same algorithm, like when you encrypt with the CipherUSB (addonics.com/products/cipherusb.php) and then encrypt with the lockdown hard drive enclosure (satechi.net/index.php/satechi-lockdown). – Andrew Campbell Jul 7 '12 at 1:17
Why are you using ECB? – mikeazo♦ Jul 7 '12 at 2:12
Welcome to Cryptography Stack Exchange. Sorry, your question is not clear at all. What are you trying to do? What do you want to know? You can edit it to make it clearer, and then we can reopen it. – Paŭlo Ebermann♦ Jul 8 '12 at 12:33
## closed as not a real question by mikeazo♦, Paŭlo Ebermann♦ Jul 8 '12 at 12:12
## 1 Answer
Correct me if I'm wrong, but it sounds like the question is really, "If I encrypt a document multiple times with different keys in ECB mode, does the common vulnerability of patterns in plaintext showing in ciphertext go away?"
The answer to this is no. Consider a simple cipher which has the following mappings: $E_{k_1}(0)=1234, E_{k_1}(1)=8532, E_{k_2}(1234)=3901, E_{k_2}(8532)=6279$.
Given the message $0,1,0,0$, the problem with ECB mode is that the ciphertext reveals the patterns. In this case the ciphertext under $k_1$ would be $1234, 8532, 1234, 1234$.
So then, what happens if I encrypt this ciphertext with $k_2$? I'd get $3901, 6279, 3901, 3901$. The pattern is still present.
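The same behavior can be checked with a real block cipher. Below is a small sketch using AES in ECB mode via Python's `cryptography` package (the keys and plaintext are arbitrary demo values): identical plaintext blocks produce identical ciphertext blocks after one layer, and they are still identical to each other after a second layer under a different key.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ecb_encrypt(key: bytes, data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(data) + enc.finalize()

key1 = bytes(range(16))                   # arbitrary 128-bit demo keys
key2 = bytes(range(16, 32))
pt = b"A" * 16 + b"B" * 16 + b"A" * 16    # blocks 1 and 3 are identical

once = ecb_encrypt(key1, pt)              # first layer
twice = ecb_encrypt(key2, once)           # second layer, different key

blocks = [twice[i:i + 16] for i in range(0, len(twice), 16)]
print(blocks[0] == blocks[2])             # True: the repetition still shows
```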
If I encrypt with one key and encrypt the same information with another key, should there be a small difference in the ciphertext? – Andrew Campbell Jul 8 '12 at 13:31
It would appear like I only encrypted once, so using multiple keys does not raise security. – Andrew Campbell Jul 8 '12 at 14:03
http://mathoverflow.net/questions/98218/is-it-written-anywhere-that-open-subvarieties-of-affine-spaces-have-completely-i/98220
## Is it written anywhere that open subvarieties of affine spaces have “completely impure” cohomology?
Consider complex affine space $\mathbb{C}^n$ and let $U$ be a Zariski open subset of $\mathbb{C}^n$. By a celebrated result of Deligne, the cohomology $H^i(U)$ has a canonical Hodge structure. In particular, $H^i(U)$ has a weight filtration and a subalgebra of pure classes (since a cohomology class can't have lower weight than expected, only higher). I believe it's true that
The pure subalgebra of $H^i(U)$ is exactly the identity.
This is as far from being pure as possible.
What I hope to get from the collective intelligence of the internet is somewhere where this fact is written. I want to emphasize that what I am really hoping to get is a reference, since (as you can see below) I basically know how the proof should go.
In hopes of getting either confirmation or a mistake pointed out, let me write a proof:
By Alexander duality $\tilde H^i(U)\cong H_{n-i-1}^{BM}(X)$ where $X=\mathbb{C}^n\setminus U$. This is an isomorphism of Hodge structures after Tate twist by $n$. The weights of $H_{n-i-1}^{BM}(X)$ lie in $[-n+i+1,0]$, so those of $\tilde H^i(U)$ lie in $[i+1,n]$.
As a second-best request, does anyone know of a reference for the version of Alexander duality written above? It's dual to the way things are usually written.
## 1 Answer
Maybe I'm missing something, but I think this should be a simpler proof. Let $j \colon U \hookrightarrow \mathbf P^n$ be the natural compactification, and let $k > 0$. Then $W_k H^k(U,\mathbf Q)$ is the image of $j^\ast \colon H^k(\mathbf P^n,\mathbf Q) \to H^k(U,\mathbf Q)$ (Hodge II, Corollaire 3.2.17). But $j^\ast$ factors through $H^k(\mathbf A^n, \mathbf Q) = 0$.
Sweet. That's easy enough that I don't need a reference. – Ben Webster♦ May 28 at 22:06
http://math.stackexchange.com/questions/138652/finding-a-constant-to-make-a-valid-pdf
# Finding a constant to make a valid pdf
Let $f(x) = c\cdot 2^{-x^2}$. How do I find a constant $c$ such that the integral evaluates to $1$?
Since you already have two answers showing that $f(x) = c\, e^{-x^2\ln(2)}$, I will suggest that rather than the error function, you simply use what I hope you already know: $$\frac{1}{\sigma \sqrt{2\pi}}e^{-x^2/(2\sigma^2)}~~\text{is the density function of a}~N(0,\sigma^2)~ \text{random variable}.$$ Now compare constants and deduce the value of $c$. As a side benefit, you also get the mean and variance of the random variable for free. – Dilip Sarwate Apr 30 '12 at 1:55
## 2 Answers
You can write $2^{-x^2}$ as $$2^{-x^2}=e^{(\ln 2)({-x^2})}=e^{-x^2\ln 2}$$ Using the error function you can calculate the integral $$\int_{-\infty}^{+\infty}e^{-x^2\ln 2}\,dx=\sqrt{\frac{\pi}{\ln 2}}$$ The rest is trivial.
Hint: Rewrite $$f(x) = c \,[e^{\ln(2)}]^{-x^2} = c\, e^{-x^2\ln(2)}$$ and try to exploit the following integral together with some change of variable: $$\int^{\infty}_0 e^{-x^2} \,dx = \frac{\sqrt{\pi}}{2}$$
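Either hint leads to $c=\sqrt{\ln 2/\pi}$. A quick numerical confirmation (a sketch using SciPy):

```python
import numpy as np
from scipy.integrate import quad

c = np.sqrt(np.log(2) / np.pi)                        # candidate constant
total, _ = quad(lambda x: c * 2.0 ** (-x * x), -np.inf, np.inf)
print(total)                                          # 1.0 (up to quadrature error)
```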
http://mathoverflow.net/questions/20526?sort=votes
## Algebraic cycles of dimension 2 on the square of a generic abelian surface
I would like to know, what is known on algebraic cycles of dimension 2 modulo algebraic or rational equivalence on the square of a generic abelian surface.
First, let $A$ be a generic abelian surface (a generic abelian variety of dimension 2) over $\mathbb{C}$. Then the group of cycles of dimension 1 (divisors) up to rational equivalence is known: it is the Picard group of $A$; see e.g. the answers to this question and Fulton, Intersection Theory, Chapter 19. Still I have a stupid question: Can one "write down" all the (positive?) divisors on $A$?
The real question concerns the square $A^2=A\times_{\mathbb{C}} A$ of a generic abelian surface $A$. This is an abelian variety of dimension 4. I am interested in algebraic cycles of dimension 2 on $A^2$. I think I know the group of algebraic cycles of dimension 2 on $A^2$ modulo homological equivalence and modulo torsion, it is $\mathbb{Z}^6$ (because the space of invariants of $\mathrm{Sp}_{4,\mathbb{Q}}$ in $\wedge^4(\mathbb{Q}^4\oplus\mathbb{Q}^4)$ is of dimension 6). What is known about the group of algebraic cycles of dimension 2 on $A^2$ modulo rational or algebraic equivalence? In particular, what is known about the Griffiths group? Again a stupid question: Can one "write down" all the cycles of dimension 2 on $A^2$ (in some sense)?
Regarding the first question (divisors on A): why isn't the theta function concrete enough for generating all of them? – David Lehavi Apr 7 2010 at 7:03
@David Lehavi: Please give a reference! – Mikhail Borovoi Apr 7 2010 at 9:05
## 4 Answers
Here is an easy $5$-dimensional space of cycles: Inside $A \times A$, consider the subvarieties $\{ (a,b) : a=mb \}$, for $m=0$, $1$, $2$, $3$, $4$. I will show that these are linearly independent over $\mathbb{Q}$.
By Künneth and Poincaré, $$H^4(A \times A, \mathbb{Q}) \cong \bigoplus_{i=0}^4 H^{i}(A, \mathbb{Q}) \otimes H^{4-i}(A, \mathbb{Q}) \cong \bigoplus_{i=0}^4 \mathrm{End}(H^{i}(A, \mathbb{Q})).$$
The graph of multiplication by $m$, in this presentation, has class $$(\mathrm{Id}, m \mathrm{Id}, m^2 \mathrm{Id}, m^3 \mathrm{Id}, m^4 \mathrm{Id})$$
Since the Vandermonde matrix $$\begin{pmatrix} 0^0 & 0^1 & 0^2 & 0^3 & 0^4 \\ 1^0 & 1^1 & 1^2 & 1^3 & 1^4 \\ 2^0 & 2^1 & 2^2 & 2^3 & 2^4 \\ 3^0 & 3^1 & 3^2 & 3^3 & 3^4 \\ 4^0 & 4^1 & 4^2 & 4^3 & 4^4 \end{pmatrix}$$ has nonzero determinant, the $5$ classes I listed are linearly independent over $\mathbb{Q}$.
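(If you want to double-check the nonvanishing numerically, a two-line Python sketch suffices:)

```python
import numpy as np

V = np.vander(np.arange(5), increasing=True)  # row m is (m^0, m^1, ..., m^4)
print(round(np.linalg.det(V)))                # 288, nonzero as claimed
```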
Thanks! But what I really want is to see somehow all the cycles! – Mikhail Borovoi Apr 6 2010 at 18:27
After careful reading the answer I noticed that the dimension of the space of algebraic cycles in $H^4(A\times A)$ is 6, and not 5, as I erroneously wrote previously. Indeed, the space $H^2(A,\mathbb{Q})$ is a reducible representation of Sp$_4$, say, $V\oplus W$, where $V$ and $W$ are non-equivalent irreducible representations. It follows that End$(H^2(A,\mathbb{Q}))$ has a 2-dimensional space of invariants generated by Id$_V$ and Id$_W$. Thus we obtain a 6-dimensional space of Hodge cycles. They are linear combinations of intersections of divisors, hence they are algebraic. – Mikhail Borovoi Apr 7 2010 at 19:43
Oh, good point. Another way to see this is that, if $\Theta$ is a $\Theta$ divisor in $A$, then $\Theta \times \Theta$ is in $\mathrm{End}(H^2(A))$ and is not a multiple of the identity. – David Speyer Apr 8 2010 at 1:42
How do you see that $\Theta\times\Theta$ in End$(H^2(A))$ is not a multiple of the identity? – Mikhail Borovoi Apr 8 2010 at 6:31
As far as I know, there is no smooth projective variety over $\mathbb{C}$ of dimension $n>2$ with all possible Hodge numbers nonzero (i.e. $h^{p,q} \neq 0$ for all $p+q = n$) for which the Griffiths group of codimension $r$ cycles is known to be zero for any $1<r<n$.
For codimension $2$ cycles the Abel-Jacobi map is expected to detect the Griffiths group, however the computations in Nori: Algebraic cycles and Hodge theoretic connectivity, p. 372, suggest that for the self product of the generic abelian surface the Abel-Jacobi map on the Griffiths group might well be nonzero.
Not an answer per se, but you might be interested in http://arxiv.org/abs/1003.3183, where similar questions are investigated.
Probably you know this, but I might point out that Nori [Proc. Indian Acad, 1989] proved that the Griffiths group of a generic abelian 3-fold is infinitely generated. It may be worth looking at, even though I have some doubts about whether his method would give anything useful in your case.
Thanks! Yes, I know the paper of Nori of 1989 and the paper of Fakhruddin of 1996. – Mikhail Borovoi Apr 8 2010 at 6:24
http://www.blkmage.net/
The 3rd annual π day anime and mathematics post: A symmetric group of friends of degree 5
Posted on March 14, 2013
「ふいにコネクト」/「ものくろあくたー。」
It’s that day of the year again.
Kokoro Connect’s premise made a lot of people raise their eyebrows, because really, what good can come from body-switching shenanigans? Well, let’s think about this for a second. We have a group of five kids and every once in a while, at random, they switch into the others’ bodies at random. What does that sound like? That’s right, a permutation!
Interestingly enough, the idea of connecting body-switching with permutations isn’t new. The Futurama writers did it and apparently got a new theorem out of it. What differs in the case of Kokoro Connect and Futurama is that in Futurama, the body-switching could only happen in twos. These are called transpositions. Obviously, this isn’t the case for Kokoro Connect. This doesn’t make too much of a difference since it turns out we can write out any permutation we want as a series of transpositions, but that wouldn’t be very fun for Heartseed.
We write permutations in the following way. If we let Taichi = 1, Iori = 2, Inaban = 3, Aoki = 4, and Yui = 5, we’ll have $(1 2 3 4 5)$ representing the identity permutation, when everyone’s in their own body. If Heartseed wanted to make Aoki and Yui switch places, he’d apply the following permutation
$$\left( \begin{array}{ccccc} 1&2&3&4&5 \\ 1&2&3&5&4 \end{array} \right)$$
While it’s helpful for seeing exactly what goes where, especially when we start dealing with multiple permutations, this notation is a bit cumbersome, so we’ll only write the second line ($(12354)$) to specify a permutation.
For the purposes of this little exercise, we’ll consider applying a permutation as taking whoever’s currently in a given body. That is, say we permute Aoki and Taichi to get $(4 2 3 1 5)$. In order to get everyone back into their own bodies, we have to apply $(4 2 3 1 5)$ again, which takes Aoki, who’s in Taichi’s body, back into Aoki’s body.
So let's begin with something simple. How many different ways are there for the characters to body switch? Both who is switched and who they switch with are entirely random. Again, since the switches aren't necessarily transpositions, this means that we can end up with cycles like in episode 2, when Yui, Inaban, and Aoki all get switched at the same time. This can be written as $(1 2 4 5 3)$.
But this is just the number of permutations that can happen on a set of five elements, which is just 5! = 120. Of course, that includes the identity permutation, which just takes all elements to themselves, so the actual number of different ways the characters can be swapped is actually 119.
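A quick sanity check of that count, sketched in Python (my own, nothing beyond the standard library; the encoding of the characters as 1–5 follows the post):

```python
from itertools import permutations

# Every way of assigning the five minds to the five bodies is a permutation.
configs = list(permutations([1, 2, 3, 4, 5]))
print(len(configs))       # 120 = 5!
print(len(configs) - 1)   # 119 once the identity (nobody swapped) is excluded
```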
Anyhow, we can gather up all of these different permutations into a set and give it the function composition operation and it becomes a group. A group $(G,\cdot)$ is an algebraic structure that consists of a set $G$ and an operation $\cdot$ which satisfy the group axioms:
• Closure: for every $a$ and $b$ in $G$, $a\cdot b$ is also in $G$
• Associativity: for every $a$, $b$, and $c$ in $G$, $(a\cdot b)\cdot c = a\cdot (b\cdot c)$
• Identity: there exists $e$ in $G$ such that for every $a$ in $G$, $e\cdot a = a \cdot e = a$
• Inverse: for every $a$ in $G$, there exists $b$ in $G$ such that $a\cdot b = b\cdot a = e$
In this case, we can think of the permutations themselves as elements of a group and we take permutation composition as the group operation. Let’s go through these axioms.
Closure says that if we have two different configurations of body swaps, say Taichi and Iori ($(2 1 3 4 5)$) and Iori and Yui ($(1 5 3 4 2)$), then we can apply them one after the other and we'd still have a body swap configuration: $(2 5 3 4 1)$. That is, we won't end up with something that's not a body swap. This seems like a weird distinction to make, but it's possible to define a set that doesn't qualify as a group. Say I want to take the integers under division as a group ($(\mathbb Z, \div)$). Well, it breaks closure because 1 is an integer and 2 is an integer but $1 \div 2$ is not an integer.
Associativity says that it doesn't matter how we group our operations when applying them in sequence. If we have three swaps, say Taichi and Inaban ($(3 2 1 4 5)$), Aoki and Yui ($(1 2 3 5 4)$), and Iori and Yui ($(1 5 3 4 2)$), and we want to apply them in that order, then it doesn't matter which composition we evaluate first. We'd have
$$((32145)(12354))(15342) = (32154)(15342) = (34152)$$
and
$$(32145)((12354)(15342)) = (32145)(14352) = (34152)$$
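Those two computations can be checked mechanically. A small Python sketch (my own, using the post's one-line notation and its convention of applying the left permutation first):

```python
def compose(p, q):
    """Apply permutation p first, then q, in the post's one-line notation."""
    return tuple(p[i - 1] for i in q)

A = (3, 2, 1, 4, 5)   # Taichi <-> Inaban
B = (1, 2, 3, 5, 4)   # Aoki <-> Yui
C = (1, 5, 3, 4, 2)   # Iori <-> Yui

print(compose(compose(A, B), C))   # (3, 4, 1, 5, 2)
print(compose(A, compose(B, C)))   # (3, 4, 1, 5, 2) -- same, as associativity demands
```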
The identity means that there’s a configuration that we can apply and nothing will change. That’d be $(12345)$. And inverse means that there’s always a single body swap that we can make to get everyone back in their own bodies.
As it turns out, the group of all permutations on $n$ objects is a pretty fundamental group. These groups are called the symmetric groups and are denoted by $S_n$. So the particular group we’re working with is $S_5$.
So what’s so special about $S_5$? Well, as it turns out it’s the first symmetric group that’s not solvable, a result that’s from Galois theory and has a surprising consequence.
Évariste Galois was a cool dude, proving a bunch of neat stuff up until he was 20, when he got killed in a duel because of some drama which is speculated to be of the relationship kind, maybe not unlike Kokoro Connect (it probably wasn't anything like Kokoro Connect at all). Among the things he developed is the field now known, fittingly, as Galois theory. What's cool about Galois theory is that it connects two previously unrelated concepts in algebra: groups and fields.
One of the most interesting things that came out of Galois theory is related to the idea of solving polynomials. I’m sure we’re all familiar with the quadratic formula. Well, in case you aren’t, here it is:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
This neat little formula gives us an easy way to find the complex roots of any second degree polynomial. It’s not too difficult to derive. And we can do that for cubic polynomials too, which takes a bit more work to derive. And if we want to really get our hands dirty, we could try deriving the general form of roots for polynomials of degree four. And wait until you try to do it for degree five polynomials.
That's because, eventually, you'll give up. Why? Well, it's not just hard, but impossible. There is no general formula using radicals and standard arithmetic operations for the roots of a general fifth degree (or higher!) polynomial. The reason is that $S_5$ is the Galois group of the general polynomial of degree 5. Unfortunately, proving that fact is a bit of a challenge to do here, since it took about 11 weeks of Galois theory and group theory to get all the machinery in place, so we'll have to leave it at that.
Posted in Anime
Low energy 2012 reflection
Posted on January 1, 2013 by
「「私、気になります!」」/「Mirunai」
You can see my 12 Days posts as sort of the most interesting things I’ve seen or read over the year. And so you can probably infer the following.
The best anime of 2012 was Hyouka.
Why? There’s a lot of reasons, but basically, it was the show I was most sad to see end. Oh and I guess there’s this too:
@blkmage Totally not an excuse just to post a cute pic of Oreki and Chitanda.
— kViN (@Yuyucow) December 25, 2012
More generally, I think the highlights of my 2012 have been meeting up with people, something that I’ve begun to look forward to after being exiled in London. Of course, there’s the good old meetups with old university friends, some of which involved riichi mahjong. But this was the year that I got to meet some of the Toronto-area cartoon heads that I’ve been talking to on twitter for a while and it was great. And even the non-cartoon head Toronto council watchers were cool too, which is unsurprising, since Toronto City Council is the secret best anime.
Hopefully 2013 gives me some more chances to ruin your impressions of me IRL.
Posted in Anime
12 Days XII: He’s lazy. She’s curious. They solve mysteries.
Posted on December 25, 2012 by
「氷菓」/「ぱち」
Hyouka is just lovely. I was pretty skeptical when the whole thing was announced and it started. Really, a slow mystery light novel with pretty animation, is that going to hold up? As it turns out, it’s not really a mystery, as things involving a bunch of bored high school students rarely are. Instead, it’s about a guy who, despite his best efforts, has the misfortune of being captivated by a starry-eyed girl and is dragged out of his shell. Like most of other shows in this vein, the enjoyment comes from watching how our protagonist slowly changes and see, by the end of it all, how far they’ve come.
Posted in Anime
12 Days XI: Two wolf kid moon shirt
Posted on December 24, 2012 by
「おおかみこどもの雨と雪」/「ksw」
In Summer Wars, we’re introduced to a family that’s large and traditional from the viewpoint of an outsider. We watch as they go about their hustle and bustle to honour the matriarch that’s guided and anchored their family. It’s a very broad, macro sort of viewpoint of the family. In Ookami Kodomo, we go about things from the other side. We see the love between two people blossom and they start building their family. After tragedy strikes, we see the day-to-day struggles of the young family and follow them through their highs and lows until the children are grown up.
Posted in Anime
12 Days X: Humanity has declined
Posted on December 23, 2012 by
「いつか」/「八子」
Shingeki no Kyojin is a story about the human race getting screwed. In shounen manga, we usually get characters who pull off amazing feats and come back against all odds. Sure, they might get themselves into dangerous situations, but none of them are actually going to die, right? Well, no one is safe in Shingeki no Kyojin. Nothing good happens when someone dies. And death never comes in a blaze of glory. This cloud of danger, horror, and despair hangs over the manga in a Muv-Luv Alternative-esque fashion.
Posted in Anime | Tagged 12 days, chomp, shingeki no kyojin
http://mathhelpforum.com/algebra/100486-algebraic-expressions-print.html
# algebraic expressions
• September 3rd 2009, 05:30 PM
pklanni
algebraic expressions
I am a dad helping with my daughter's homework and have run into a symbol I have not seen before; the symbol is a solid diamond. The problem reads
Quote:
A new operation is defined as a ( solid diamond ) b = a+a-b. Is the set of whole numbers closed under the operation ( solid diamond )? If not, give a set that is closed under this operation.
Can anyone tell me what I am supposed to do with the solid diamond? Thank you in advance.
• September 3rd 2009, 05:49 PM
artvandalay11
a (diamond) b=a+a-b
5 (diamond) 2=5+5-2=8
The set of whole numbers is not going to be closed under this new operation (which is simply a combination of the familiar operations of addition and subtraction), because the set of whole numbers is not closed under subtraction
Here I am assuming the set of whole numbers is {0,1,2,3,.....}
However, the set of integers and the set of real numbers will be closed. The set of whole numbers is not closed because if b is large enough, a+a-b could be negative, and thus the result would not be an element of the whole numbers
• September 3rd 2009, 05:52 PM
pickslides
You are given $a\diamond b = a+a-b$
So now think: if a = 2 and b = 3, then
$2\diamond 3 = 2+2-3 = 1$
The diamond symbol is not a standard operation; it is just defined for this particular problem.
Even easier
$a\diamond b = a+a-b = 2a-b$
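To make the closure question concrete, here is a tiny Python sketch (mine, not from the thread; the sample values are arbitrary):

```python
def diamond(a, b):
    """The operation defined in the problem: a (diamond) b = a + a - b = 2a - b."""
    return 2 * a - b

print(diamond(5, 2))   # 8, as in the worked example above
print(diamond(2, 5))   # -1: two whole numbers produce a non-whole result,
                       # so the whole numbers are not closed under (diamond).
                       # The integers are closed, since 2a - b is always an integer.
```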
http://mathoverflow.net/questions/41587?sort=votes
## union of matroid intersection
The following is a classical theorem of Ore and Ryser, generalising the famous Hall marriage theorem.
Assume that $n$ guys and $m$ girls live in a town, some guys like some girls. Three statements are equivalent:
(i) each guy may get one wife and one mistress (they do differ, but he should like both) so that all wives are different and all mistresses are also different.
(ii) after removing any $k$ edges ($k=1,2,\dots$) in the corresponding bipartite graph, at least $n-k/2$ guys may get wives.
(iii) for each $k$ guys, (number of girls liked by at least one of them) plus (number of girls liked by at least two of them) is not less then $2k$.
It is easy to see that (ii) and (iii) are equivalent, and (i) implies both, so the interesting part is that (ii) implies (i).
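For small instances, condition (iii) can be checked by brute force. A Python sketch (mine, not part of the question; `likes` maps each guy to the set of girls he likes):

```python
from itertools import combinations

def condition_iii(likes):
    """For every nonempty set S of guys: (# girls liked by >= 1 guy in S)
    + (# girls liked by >= 2 guys in S) must be at least 2*|S|."""
    guys = list(likes)
    for k in range(1, len(guys) + 1):
        for S in combinations(guys, k):
            girls = set().union(*(likes[g] for g in S))
            liked_twice = sum(1 for girl in girls
                              if sum(girl in likes[g] for g in S) >= 2)
            if len(girls) + liked_twice < 2 * k:
                return False
    return True

print(condition_iii({'g1': {'a', 'b'}, 'g2': {'a', 'b'}}))  # True: wives a,b; mistresses b,a
print(condition_iii({'g1': {'a'}, 'g2': {'b'}}))            # False: no guy likes two girls
```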
One may look at this problem as follows. Consider the set $E$ of edges of our graph, and call a subset independent if no two edges of the subset have a common endpoint. This is an independence system; call it $M$, with rank function $\rho$. Then consider the new independence system $M\cup M$, in which an independent set is a union of two sets independent in $M$. Then (i) means that the rank of $M\cup M$ equals $2n$ (it obviously cannot be more), and (ii) means that
$$2 \rho(E\setminus A)+|A|\geq 2n$$
for any $A\subset E$. If $M$ were a matroid, the matroid union rank formula would finish the proof; but alas, $M$ is (in general) not a matroid. It is, however, the intersection of two matroids: one in which "an independent set of edges is a set of edges with distinct guy-endpoints", and another in which "an independent set of edges is a set of edges with distinct girl-endpoints". Also, the projections of $M$ onto guys and onto girls are ("transversal") matroids.
Now, finally, the question. Is there any general condition on an independence system, weaker than being a matroid, which is enjoyed by our $M$ and which shares the matroid union rank formula?
## 1 Answer
The concept of "strongly base orderable" matroids seems to fit the bill, see for example
http://lemon.cs.elte.hu/egres/open/Base_orderable_matroid
In particular, partition matroids are strongly base orderable and a "union of intersections" theorem was proven by Davies and McDiarmid.
http://math.stackexchange.com/questions/tagged/functional-inequalities?sort=faq&pagesize=15
Tagged Questions
Prove that $\int_0^1|f''(x)|dx\ge4.$
Let $f$ be a $C^2$ function on $[0,1]$. $f(0)=f(1)=f'(0)=0,f'(1)=1.$ Prove that $\int_0^1|f''(x)|dx\ge4.$ Also determine all possible $f$ when equality occurs.
An application of Poincaré inequality [solved]
I am working on Evans PDE problem 5.10, #15: Fix $\alpha>0$ and let $U=B^0(0,1)\subset \mathbb{R}^n$. Show there exists a constant $C$ depending only on $n$ and $\alpha$ such that $\int_U u^2\, dx \dots$
http://mathhelpforum.com/differential-geometry/193179-matrix-norms-homotopies-print.html
# Matrix norms and homotopies.
• December 1st 2011, 10:28 AM
Deveno
Matrix norms and homotopies.
I came across a question which I think I know the answer to, but I'd like some confirmation, or a counter-example to show me where my reason fails me (as it often does... darn unreliable brain).
the question is this:
is the subgroup of $\mathrm{GL}_n(\mathbb{R})$, given by:
$H = \{A \in \mathrm{GL}_n(\mathbb{R})\ |\text{ } \exists \text{ continuous } f:[0,1] \to \mathrm{GL}_n(\mathbb{R}) \text{ with } f(0) = A, f(1) = I \}$
normal in $\mathrm{GL}_n(\mathbb{R})$?
the topology used on $\mathrm{GL}_n(\mathbb{R})$ is the standard (metric) topology on $\mathbb{R}^{n^2}$, which I believe induces the Frobenius norm on the matrices. this norm is sub-multiplicative, so it seems to me that if $P \in \mathrm{GL}_n(\mathbb{R})$, then:
$g:[0,1] \to \mathrm{GL}_n(\mathbb{R})$ given by:
$g(t) = Pf(t)P^{-1}$ is the desired homotopy of $PAP^{-1}$ with $I$, which would then prove the normality of $H$.
the fly in the ointment being that $g$ has to be continuous, which is where the Frobenius norm comes in.
suppose that for $t_0 \in [0,1], f(t_0) = B$, and denote $f(t) = B_t$.
if $\epsilon > 0$, then if I choose $\delta > 0$ such that:
$|t - t_0| < \delta \implies |B_t - B| < \frac{\epsilon}{|P||P^{-1}|}$, then
$|t - t_0| < \delta \implies |g(t) - g(t_0)| = |PB_tP^{-1} - PBP^{-1}| = |P(B_t - B)P^{-1}|$
$\leq |B_t - B||P||P^{-1}| < \epsilon$, right?
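A quick numerical sanity check of that bound, sketched with NumPy (the sizes and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(4, 4))
Pinv = np.linalg.inv(P)
B, B_t = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

lhs = np.linalg.norm(P @ B_t @ Pinv - P @ B @ Pinv, 'fro')
rhs = (np.linalg.norm(B_t - B, 'fro') *
       np.linalg.norm(P, 'fro') * np.linalg.norm(Pinv, 'fro'))
print(lhs <= rhs)   # True: the Frobenius norm is sub-multiplicative
```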
• December 1st 2011, 12:40 PM
Drexel28
Re: Matrix norms and homotopies.
Quote:
Originally Posted by Deveno
[...]
Right, this looks fine to me. I think the more general fact, which you would have recognized had someone pointed it out, is that conjugation maps are continuous in topological groups. Indeed, suppose that $G$ is a topological group with multiplication map $m:G\times G\to G$ and inversion map $i:G\to G$. Then, the map $c_g:G\to G$ given by $h\mapsto ghg^{-1}$ is the composition
$h\mapsto (h,g) \mapsto (h,g^{-1})\mapsto hg^{-1}\mapsto (g,hg^{-1})\mapsto ghg^{-1}$
The first map is just the continuous map $G\to G\times\{g\}$, the second is continuous because each coordinate map is continuous, the third is just the continuous multiplication map, the fourth is just the inclusion $G\to \{g\}\times G$, and the last is just multiplication again.
So, if you know that $\text{GL}_n(\mathbb{R})$ is a topological group then you're golden because your function $g$ is just $c_P\circ f$, which is the composition of two continuous functions. But, $\text{GL}_n(\mathbb{R})$ is clearly a topological group since the multiplication maps and inversion maps are just rational functions in each coordinate (in fact, this clearly implies that $\text{GL}_n(\mathbb{R})$ is a Lie group).
Remark: It's not fruitful to fret over whether a given norm is the one that induces your topology. Indeed, $\text{GL}_n(\mathbb{R})$ sits inside a finite-dimensional vector space, and all norms on a finite-dimensional space induce the same topology. But, yes, if you fix the usual Euclidean norm for $\mathbb{R}^{n^2}$ then this norm is carried naturally by the obvious identification $\mathbb{R}^{n^2}\approx M_n(\mathbb{R})$ to the Frobenius norm.
• December 1st 2011, 12:41 PM
xxp9
Re: Matrix norms and homotopies.
So the question is, is the connected component containing I, as a subgroup of $G=GL_n(R)$, normal or not?
Note that $GL_n(R)$ has only two components, H={det>0} and K={det<0}, where det is the determinant function.
A subgroup H of G is said to be normal if Hg=gH for any g in G. If $g \in H$, we have gH=Hg=H certainly.
If g is not in H, det(g)<0, we have det(gh)=det(g)det(h)<0, that is, $gh \in K$, so gH is contained in K.
And for any $a \in K, a=g*(g^{-1}a)$, and $det(g^{-1}a)=det(a)/det(g)>0$, so $g^{-1}a \in H$,
So $a \in gH$, that is K is contained in gH. So we have gH=K. Similarly we have Hg=K. So gH=Hg. H is normal.
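The determinant-sign argument is easy to confirm numerically (a NumPy sketch of mine; g is a reflection, h an arbitrary positive-determinant matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([-1.0, 1.0, 1.0])        # det(g) = -1 < 0, so g lies in K
h = rng.normal(size=(3, 3))
h = h @ h.T + 3.0 * np.eye(3)        # positive definite, so det(h) > 0 and h lies in H

print(np.linalg.det(g @ h) < 0)      # True: gH is contained in K
print(np.linalg.det(h @ g) < 0)      # True: Hg is contained in K, so gH = Hg = K
```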
• December 1st 2011, 02:01 PM
Deveno
Re: Matrix norms and homotopies.
Quote:
Originally Posted by Drexel28
[...]
d'oh! conjugation is continuous, right. should i start dyeing my hair blonde?
Quote:
[...]
well, when i first came across the problem, the usual topology (for $\text{GL}_n(\mathbb{R})$) was indicated, so it seemed appropriate to use the usual norm for the vectorization of a matrix. since i was trying to prove the continuity of g directly, i needed the sub-multiplicative property, and not all norms are sub-multiplicative (although that can be fixed by re-scaling).
Quote:
Originally Posted by xxp9
[...]
great answer. i wasn't thinking in terms of homotopies being paths in $\text{GL}_n(\mathbb{R})$, which makes it rather simple, because det is continuous.
http://mathoverflow.net/questions/40618?sort=oldest
## Uncomputability of the identity relation on computable real numbers
Let $f_{=}$ be the function from $\mathbb{R}^{2}$ to $\mathbb{R}$ defined as follows: (1) if $x = y$ then $f_{=}(x,y) = 1$; (2) $f_{=}(x,y) = 0$ otherwise.
I would like to have a proof for / a reference to a textbook proof of the following theorem (if it indeed is a theorem):
$f_{=}$ is uncomputable even if one restricts the domain of $f_{=}$ to a proper subset of $\mathbb{R}^{2}$, viz. the set of pairs of computable real numbers.
Thanks!
You should specify which model of computation over the reals you have in mind. – wood Sep 30 2010 at 14:18
Rice's theorem: en.wikipedia.org/wiki/Rice%27s_theorem – Mark Sapir Sep 30 2010 at 14:20
@Wood: The question is specific enough. There is no ambiguity in the term "computable real number". – Mark Sapir Sep 30 2010 at 14:21
@Mark: I agree for the term "computable real number". But he is referring to a computable function over the reals. Does he mean the Blum-Shub-Smale model or something else? I think there may be different models. When I first read the question, I thought trivially yes under the BSS model. – wood Sep 30 2010 at 14:27
@Wood: He restricts the function to computable reals. The function takes Turing machines computing $x,y$ and produces 1 or 0. By Rice theorem the function is not computable: you cannot check if two Turing machines recognize the same language. – Mark Sapir Sep 30 2010 at 15:26
## 4 Answers
Suppose that $f_=$ is computable when restricted to computable real numbers, which means that there exists a Turing machine that, given as input the encoding of two Turing machines $M_1$ and $M_2$ that compute the fractional digits of two computable real numbers $r_1$ and $r_2$ in $[0,1]$, produces $1$ if $r_1 = r_2$ and $0$ otherwise. I will use this assumption to show that the Halting problem is also computable, which is impossible.
Given a Turing machine $M$ and an input $x$ for which we want to know if $M$ on input $x$ halts or not, let $M_x$ be the Turing machine that acts as follows: given an integer $i$ as input, $M_x$ starts a simulation of $M$ on input $x$ for up to $i$ steps, and if the simulation does not halt within that number of steps, it outputs $0$ and otherwise it outputs $1$. By definition, $M_x$ computes the digits of a computable real number (more precisely, it computes the $i$-th digit for every given $i$). Moreover, that real number is $0$ if $M$ on input $x$ does not halt, and the real number $0.0\cdots 011 \cdots = 2^{-k}$ otherwise for some $k \geq 1$. In other words, $M_x$ computes the real number $0$ if and only if $M$ on input $x$ does not halt. To complete the argument, note that $0$ is a computable real number, so if you could tell whether two computable real numbers are equal you would also be able to tell if $M$ on input $x$ halts or not.
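The machine $M_x$ in this reduction is easy to sketch in Python. The helper `simulate(M, x, i)` below is hypothetical (assumed to report whether $M$ halts on $x$ within $i$ steps, not a real library call); the point is the digit function built on top of it:

```python
def make_M_x(simulate, M, x):
    """Digit machine from the reduction: the i-th fractional digit of a real.
    `simulate(M, x, i)` is a hypothetical helper, assumed to return True
    iff M halts on input x within i steps."""
    def ith_digit(i):
        return 1 if simulate(M, x, i) else 0
    return ith_digit

# If M never halts on x, every digit is 0, so M_x computes the real number 0.
# If M first halts after s steps, the digits are 0,...,0,1,1,1,..., a nonzero real.
# A decider for "are these two computable reals equal?" applied to M_x and 0
# would therefore decide the halting problem.
```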
Aren't computable functions of reals automatically continuous? And isn't your function discontinuous? Of course you need a definition of computable in this setting to make sense of this...
If you take away some neighbourhood of the diagonal the continuity obstruction vanishes (the OP allowed proper subsets of R^2). – Peter Arndt Sep 30 2010 at 15:37
I thought the question only allowed one proper subset, namely the one consisting only of pairs of computable reals. – Carl Mummert Sep 30 2010 at 19:03
I guess you are right, Carl. – Peter Arndt Sep 30 2010 at 20:40
Dan Richardson in Bath has extensively studied the problem of recognizing zero under various hypotheses. I would be hard-pressed to give you an account of the details, because there are a lot of subtle and surprising results, but his page has all his papers and I'm sure you can find something of interest there.
The main difficulty in finding a reference for this is that it's so well known :). The fact that equality of reals is only (negatively) semidecidable is a basic and important result in both computable analysis and constructive analysis.
The underlying phenomenon here is about continuity. As Gerald Edgar says, the equality function is not continuous (in particular, it's not sequentially continuous). The proof that slimton presents shows not only that it's discontinuous, but that it's effectively discontinuous: we can make an effective sequence of effective reals that witnesses the discontinuity.
This is closely related to the type-2 functional $E\colon \{0,1\}^\omega \to \{0,1\}$ defined such that $E(f) = 1 \leftrightarrow (\exists k)(f(k) = 1)$. This functional is not computable.
If you look more deeply at slimton's proof, you see that he actually proves that if you had a uniform way to test equality of reals, then you would have a uniform way to compute $E$. In particular the problem of computing equality of computable reals is no easier than that of computing $E$ on computable reals. It can be shown with only a little more work that these are equivalent problems.
This phenomenon is a particular instance of a general phenomenon first studied by Grilliot [1] and now called Grilliot's trick: a functional $\Phi$ is effectively discontinuous if and only if $E$ is computable from $\Phi$. In particular, no effectively discontinuous functional is computable.
1: Thomas J. Grilliot, "On Effectively Discontinuous Type-2 Objects", Journal of Symbolic Logic v. 36, n. 2 (Jun., 1971), pp. 245-248. http://www.jstor.org/stable/2270259
http://www.physicsforums.com/showthread.php?t=335211
## Solution to system of linear equations in range of system matrix
1. The problem statement, all variables and given/known data
See image. a) and b) have been solved. The problem is c)
2. Relevant equations
3. The attempt at a solution
I really have no idea where to begin. For the three systems given there are solutions x in range(A) for system 1 and 2 but not for 3. Therefore I have been trying to spot some obvious difference between system matrix A of system 2 and 3 but I cannot think of any clearly different property that could be linked to b.
Any hint just to get me started would be much appreciated.
Hint: Ask yourself whether $A^T y = 0$ has solutions such that $y^T b \neq 0$. Try it yourself first. Then if you think you know what is going on, look at the Wikipedia entry under Fredholm alternative.
Much appreciated!
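The hint amounts to an orthogonality test that is easy to run numerically. A NumPy sketch (mine, with an arbitrary toy matrix): $Ax=b$ is solvable iff $b$ is orthogonal to every solution of $A^T y = 0$.

```python
import numpy as np

def solvable(A, b, tol=1e-10):
    """Fredholm alternative: A x = b has a solution iff b has no component
    in null(A^T), i.e. b is orthogonal to all y with A^T y = 0."""
    U, s, _ = np.linalg.svd(A)
    s = np.concatenate([s, np.zeros(A.shape[0] - len(s))])  # pad to m values
    N = U[:, s < tol]                 # orthonormal basis of null(A^T)
    return np.linalg.norm(N.T @ b) < tol

A = np.array([[1.0, 0.0], [0.0, 0.0]])
print(solvable(A, np.array([2.0, 0.0])))   # True:  b lies in range(A)
print(solvable(A, np.array([0.0, 1.0])))   # False: b meets null(A^T)
```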
http://en.m.wiktionary.org/wiki/Euler%E2%80%93Lagrange_equation
Euler–Lagrange equation
English
Etymology
Named after Leonhard Euler (1707–1783), Swiss mathematician and physicist, and Joseph Louis Lagrange (1736–1813), French mathematician and astronomer — originally from Italy.
Noun
Euler–Lagrange equation (plural Euler–Lagrange equations)
1. (mechanics, analytical mechanics) A differential equation satisfied by a function $\mathbf{q}(t)$ that is a stationary point of the functional $S(\mathbf{q}) = \int L(t, \mathbf{q}(t), \mathbf{\dot q}(t))\,dt$, which represents the action of $\mathbf{q}(t)$, with $L$ representing the Lagrangian. The equation (found through the calculus of variations) is ${\partial L \over \partial \mathbf{q}} = {d \over dt} {\partial L \over \partial \mathbf{\dot q}}$, and its solution for $\mathbf{q}(t)$ represents the trajectory of a particle or object; such a trajectory satisfies the principle of least action.
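As an illustration (mine, not part of the entry), the classic one-dimensional example in LaTeX: a particle of mass $m$ in a potential $V(q)$.

```latex
% Lagrangian of a particle of mass m in a potential V(q):
%   L(t, q, \dot q) = \tfrac12 m \dot q^2 - V(q).
\[
\frac{\partial L}{\partial q} = -V'(q),
\qquad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q}
  = \frac{d}{dt}\bigl(m \dot q\bigr) = m \ddot q,
\]
\[
\text{so the Euler--Lagrange equation reads}\quad m \ddot q = -V'(q),
\]
% i.e. Newton's second law: stationary points of the action are exactly
% the Newtonian trajectories.
```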
http://mathoverflow.net/questions/6870/why-is-an-elliptic-curve-a-group/6878
## Why is an elliptic curve a group?
Consider an elliptic curve $y^2=x^3+ax+b$. It is well known that we can (in the generic case) create an addition on this curve turning it into an abelian group: The group law is characterized by the neutral element being the point at infinity and the fact that $w_1+w_2+w_3=0$ if and only if the three points $w_j$ are the intersections (with multiplicity) of a line and the elliptic curve.
Groups can be hard to work with, but in most cases proving that the group is in fact a group is easy. The elliptic curve is an obvious exception. Commutativity is easy, but associativity is hard, at least to this non-algebraist: The proof looks like a big calculation, and the associativity seems like an algebraic accident rather than something that ought to be true.
So this is my question, then: Why is the group law on an elliptic curve associative? Is there some good reason for it? Is the group perhaps a subgroup or a quotient of some other group that is easier to understand? Or can it be constructed from other groups in some fashion?
I gather that historically, the group law was discovered via the addition law for the Weierstrass $\wp$-function. The addition law is itself not totally obvious, plus this approach seems limited to the case where the base field is $\mathbb{C}$. In any case, I'll elaborate a bit on this shortly, in a (community wiki) answer.
To be fair, the main reason "proving a group is a group is easy" is that the vast majority of groups people work with are defined by functions/morphisms/whatever, so you get associativity for free. Get away from that (or, I guess, from generators and relations, where you also get associativity for free) and demonstrating associativity is usually tedious at best. – Harrison Brown Nov 26 2009 at 7:33
"plus this approach seems limited to the case where the base field is $\mathbb{C}$" Actually, you should be able to prove the general case from the complex case, using the Lefschetz principle. – David Corwin Sep 1 2010 at 0:45
Interesting thought. But the Lefschetz principle only applies to fields of characteristic zero, right? – Harald Hanche-Olsen Sep 1 2010 at 7:20
## 8 Answers
Everything I am writing below is carried out explicitly in Chapter III of Silverman's book on elliptic curves. In the earlier chapters, he defines the Picard group.
For any curve over any field, algebraic geometers are interested in an associated group called the Picard group. It is a certain quotient of the free abelian group on points of the curve. It consists of formal sums of points on the curve modulo those formal sums that come from looking at the zeroes and poles of rational functions. It is a very important tool in the study of algebraic curves.
The very special thing about elliptic curves, as opposed to other curves, is that they turn out to be in natural set-theoretic bijection with their own Picard groups (or actually, the subgroup $Pic^0(E)$). The bijection is as follows: let O be the point at infinity. Then send a point P on the elliptic curve to the formal sum of points [P] - [O]. (It is not obvious that this is a bijection, but the work to prove it is all "pure geometric reasoning" with no computations.) So there is automatically a group law on the points of E. Then it requires no messy formulas to show that under this group law, the sum of three collinear points is O. So for free, you also get that this group law is the same as the one you defined in the question and that the one you defined is associative!
Hmm. I still have some work to do in order to get to the bottom of this, but now I know where to look. Thanks! – Harald Hanche-Olsen Nov 26 2009 at 18:26
Quick comment: the Picard group is defined as the quotient of the free abelian group on points by the subgroup of principal divisors, i.e. those coming from writing out the poles and zeroes of an element of the function field. The reason the Picard group has anything to do with drawing lines is that the equation of a line automatically determines an element of the function field whose associated divisor is principal (by definition), so every line determines a relation in the Picard group. – Qiaochu Yuan Jan 8 2010 at 22:13
This was too long for a comment.
To elaborate on Hunter's response, there is something you can associate to any curve called its Jacobian variety, whose points can be identified with degree 0 line bundles (these can be identified with formal linear combinations of closed points on the curve whose sum of coefficients is 0, modulo a certain equivalence relation). There is a group operation on the Jacobian variety given by tensoring line bundles (or adding linear combinations), and its dimension is the genus of the curve. Since elliptic curves are genus 1 curves, one might hope that the Jacobian variety is isomorphic to the elliptic curve, and indeed this is the case. So this sort of gives a reason why a group law exists on the elliptic curve.
The catch is that there isn't a canonical isomorphism from an elliptic curve to its Jacobian variety. There is a way to specify an isomorphism once you single out a closed point on the elliptic curve (this is like picking an identity element). So elliptic curves don't really have a group law, it's elliptic curves with a choice of a rational point which do.
(I learned this stuff from Chapter IV of Hartshorne's Algebraic Geometry)
An elliptic curve is generally defined to be a genus one curve equipped with a distinguished point, rather than a plain genus one curve. – S. Carnahan♦ Nov 28 2009 at 1:28
To elaborate: an elliptic curve over a scheme S is a scheme E, a morphism f: E -> S, and a morphism g: S -> E, such that fg is the identity on S, and the geometric fibers of f are genus one curves. The canonical embedding of an elliptic curve into its Jacobian is an isomorphism. This is one of several places where Hartshorne disagrees with the rest of the universe. – S. Carnahan♦ Nov 28 2009 at 1:44
A proof I like is that the group of points on the curve is the class group of the ring $R=k[x,\sqrt{x^3+Ax+B}]$, where $k$ is the field you're working over. Set $y=\sqrt{x^3+Ax+B}\in R$ and let $K$ denote the field of fractions of $R$. Define ideals $I_P$ of $R$ for each point $P$ on the curve as follows. For $P=O$, the point at infinity, set $I_O=R$. For $P=(u,v)$ let $I_{P}=\langle x-u,y-v \rangle$. There's an equivalence relation $\sim$ on the nonzero ideals of $R$ defined by $I\sim J$ if $J= \alpha I$ where $\alpha\in K^*$. The equivalence classes of ideals form a monoid via $[I][J]=[IJ]$, which I'll call the class monoid of $R$.
Then you can prove by explicit calculation that
1. For points $P$ and $Q$ on the curve, if $T=P+Q$ then $[I_P][I_Q]=[I_T]$.
2. For points $P$ and $Q$ on the curve, $[I_P]=[I_Q]$ if and only if $P=Q$.
From this it's apparent that the group operation on the curve is associative, and that the set of classes of the form $[I_P]$ form a subgroup of the class monoid of $R$, isomorphic to the group of the elliptic curve. With a bit more effort you can show that every element of the class monoid has the form $[I_P]$ and that $R$ is a Dedekind domain.
(I should say that this argument is a simple-minded variant of the Picard group proof as expounded earlier by Hunter.)
Short answer: because it's a complex torus. Explanation below would take as through many topics.
### Topological covers
The curve should be considered over the complex numbers, where it can be seen as a Riemann surface, therefore a two-dimensional oriented closed variety. How to find out whether this particular one is a sphere, torus or something else? Just consider the two-fold covering of the $x$-axis and compute the Euler characteristic as $2 \cdot 2 - 4 = 0$, by Riemann-Hurwitz with four branch points (don't forget the point at infinity).
### Complex tori
So this is a torus; now a torus with complex structure can be always defined as a quotient $\mathbb C/\Lambda$, where $\Lambda$ is the lattice of periods. It can be written as integrals $\int_\gamma \omega$ of any differential form $\omega$ over all elements $\gamma \in \pi_1$. The choice of differential form is unique up to $\lambda \in \mathbb C$.
### Algebraic addition
A complex map of a torus into itself that leaves lattice $\Lambda$ fixed can be only given by a shift. Once you select a base point, these shifts are in one-to-one correspondence with points of $E$. We have unique distinguished point — infinity — so let's choose it as the base point. It follows that we now have an addition map $(u, v) \to u\oplus v$, though defined purely algebraically so far.
### Geometric meaning
Now let's stop and ask ourselves: how can we see this addition geometrically? For a start, consider the map that sends $u$ to the third point of intersection with the line containing both $u$ and 0 (the infinity point). It's not hard to see that this fixes 0 but changes every class $\gamma$ in the fundamental group into $-\gamma$, so we must have the map $u\mapsto -u$ here.
### Group theory laws
What would happen if you took a line through $u$ and $v$? By temporarily changing coordinates so that $u$ becomes the infinity point, one writes down that map as $(u, v) \mapsto -(u+v)$. Now if you took three points, there would be two different ways to add them; those would lead to $(u+v)+w$ and $u+(v+w)$ as complex numbers, which we know to be associative.
### Logically proven
In the above, we worked over the complex numbers, but we proved associativity, which is a formal theorem about substitution of some rational expressions into others. Since it works over the complex numbers, it works over all fields of characteristic zero, and with more care the statement can be transported to arbitrary fields.
(In any case, the big discovery of mid-20th century was that you actually can take all of the intuition described above and apply it to the case of elliptic curves over arbitrary field)
### Analytic computations (bonus)
Consider a line that passes through points $u$, $0$ and $-u$. This line is actually vertical, and $y$ is a well-defined function there which has two zeroes and one double pole at infinity. After shifts and products of several such functions we get a meromorphic function on the complex torus with poles $p_i$ and zeroes $z_i$ having the property $\sum p_i = \sum z_i$ (modulo $\Lambda$). This method gives all such functions and only them; it's not hard to see that only meromorphic functions with this property are allowed on an elliptic curve.
For example, $\wp'$-functions are the ones that have triple pole at 0 and single zeroes at points $\frac12w_1, \frac12w_2, \frac12(w_1+ w_2)$ where $w_1, w_2$ are generators of $\Lambda$.
### Jacobian of a curve (bonus 2)
The formula above describes what types of functions are allowed on our curve. It is a good idea to organize this information into a curve: in this case, the information is that a single expression $p_1 + p_2 + \cdots + p_n - z_1 - \cdots - z_n$, considered a point of the curve, must vanish. For curves of higher genus, more relations are necessary; for $\mathbb C\mathbb P^1$, no relations beyond number of poles = number of zeroes are necessary. Those are relations in the group of classes of divisors (= Jacobian of a curve) mentioned in other answers.
In particular, elliptic curves coincide with their Jacobian and that's another explanation for the additive law.
You missed the whole central idea, which is the deepest: the Riemann-Roch theorem gives the group law. – Anweshi Jan 8 2010 at 22:30
Well, if you are sure that the poster is familiar with elliptic curves, line bundles and sheaf cohomology (which I doubt), then, yes, Riemann-Roch theorem would give the group law. But why exactly this is more profound than curve being a complex torus? – Ilya Nikokoshev Jan 8 2010 at 22:46
Riemann-Roch would tell you why there is a group law precisely on curves of genus 1. Also, Riemann-Roch is obvious more profound than just looking at a torus or Weierstrass elliptic functions on the complex plane. – Anweshi Jan 8 2010 at 23:20
Not to mention that Riemann-Roch would give the group law over any field/ring/scheme. You have done some handwaving with the Lefschetz principle for transporting to other fields. This is not quite enough. Riemann-Roch is much neater. – Anweshi Jan 8 2010 at 23:22
Also I have referred to the simpler books of Miranda and Narasimhan, with the questioner in mind. These books use only complex analysis of one variable, and the definition of genus from topology. – Anweshi Jan 8 2010 at 23:52
Consider an additive subgroup $\Lambda$ of $\mathbb{C}$ so that $\mathbb{C}/\Lambda$ is compact (indeed, a torus). The corresponding Weierstrass $\wp$-function satisfies the ODE $\wp'(z)^2=4\wp(z)^3-g_2\wp(z)-g_3$, and so if we write $x=\wp(z)$ and $y=\wp'(z)$ then $(x,y)$ lies on the elliptic curve $y^2=4x^3-g_2x-g_3$ (indeed, this parametrizes the entire curve). Addition in $\mathbb{C}/\Lambda$ is well defined, of course, and the addition theorem states that if $z_1+z_2+z_3=0$ then the corresponding $(x_j,y_j)$ satisfy $$\begin{vmatrix}x_1&y_1&1\\x_2&y_2&1\\x_3&y_3&1\end{vmatrix}=0,$$ i.e., the three points $(x_j,y_j)$ lie on a line. Thus the map $z\mapsto(x,y)$ carries the usual addition on $\mathbb{C}/\Lambda$ to the elliptic curve addition on the curve.
this is all very nice and cute in the complex number field. But what is amazing is why an elliptic curve over some arbitrary field is a group. Which makes me wonder, how did mathematicians realized elliptic curve is a group, I believe they suspected that way before the weierstrass p-function was used to consider e.c. over the complex. Like the last poster suggested, I think it all started with the picard groups. Though, I think its easier to defined e.c. over the complex fields to a beginner on the subject. – Jose Capco Nov 26 2009 at 9:04
Elliptic functions were originally introduced as the inverse functions to certain integrals, by the same procedure with which you can construct the exponential and trigonometric functions. Everybody knew that these functions have addition laws, so it is quite natural to expect elliptic functions also have one. – Mariano Suárez-Alvarez Nov 26 2009 at 12:12
The point is that the group law you get from the Weierstrass function can be written in terms of rational functions. This is an obvious conjecture to make if you think of elliptic functions as analogous to trigonometric functions, and it's also natural to think of the function field of C/Lambda as one-dimensional, so p(nz) should lie in C(p(z), p'(z)) for all n. Once you get rational functions of course the extension to all fields is clear. – Qiaochu Yuan Jan 8 2010 at 22:05
I meant to say this, but I forgot to: it certainly did not start with the Picard groups. Euler and Gauss both wrote down group laws for specific curves before anyone had written down elliptic functions. – Qiaochu Yuan Jan 8 2010 at 22:44
A few days ago in my algebraic geometry class we saw a pretty nice "geometric" argument for why the group law is associative. Unfortunately I'm at home for the Thanksgiving break and don't have access to my notes, but as best as I can remember it went like this:
Start with three points P, Q, R, plus the distinguished point 0 that's the identity; we want to show (P+Q)+R = P+(Q+R). Denote by (P#Q) the third point of your curve collinear with P, Q. Basically we start drawing lines everywhere; we end up with I think a total of 10 points, which are:
0, P, Q, R, (P#Q), (Q#R), P+Q, Q+R, (P+Q)#R, P#(Q+R).
Of course secretly those are just 9 points, but that's what we're trying to prove...
Now it turns out that I think 9 of these points -- all but P#(Q+R) -- lie on the union of 3 lines. Same construction, all but (P+Q)#R lie on the union of 3 different lines. Using either Bezout's theorem or a slightly stronger generalization thereof, we can show that the 9th point of intersection of these two plane cubics is in fact P#(Q+R) or (P+Q)#R, so they're the same.
There's probably at least one place where I horribly misremembered the argument, though -- if you happen to know what it is, let me know and I'll happily fix it.
This is exactly the proof in Silverman & Tate's book chapter 1. I guess this is not what OP's asking for though. – Ho Chung Siu Nov 26 2009 at 9:15
I'm not entirely clear on what exactly is being asked for. IIRC, there's a purely algebraic version of the argument that works for I think any algebraically closed field, although (hopefully!) this shouldn't be all that obvious. – Harrison Brown Nov 26 2009 at 18:02
Well my guess is that the OP looks for a conceptual reason for the group law to be associative (since when you try to define a group law like that you probably have some intuition why it really defines a group; in particular, why conceptually the associative law should hold) – Ho Chung Siu Nov 27 2009 at 6:59
The drawing of lines, as you have explained, gives a group law only in the case of genus $1$ curves. This does not work for any other genus.
The reason is that the Riemann-Roch theorem gives the third point, under the composition, and it works out only in the case of genus $1$. Riemann-Roch is the most important theorem in the study of Riemann surfaces, or algebraic curves. When you set up the situation in terms of divisors and apply Riemann-Roch, you kind of get associativity "for free". This seems the most natural explanation to me. This is also much the same as the Jacobian explanation given earlier.
This is given in Silverman's AEC. But it is a bit algebraic.
See the proof of the group law in J. W. S. Cassels, Lectures on Elliptic Curves. First the proof given by Harrison Brown is explained, and then this "conceptual" explanation using Riemann-Roch is given.
However since your approach is complex analytic, it will be very instructive to look into Rick Miranda's book on Riemann surfaces. Also Raghavan Narasimhan's ETH lecture notes give the complex analytic construction of the Jacobian variety, referred to by other people in earlier answers.
The more advanced(and definitive) volume on complex algebraic geometry is of Griffiths and Harris.
If you replace lines by parabolas or cubics, the geometric interpretation of the group law also works for certain curves of higher genus. – Franz Lemmermeyer Apr 7 2010 at 12:38
An elementary proof using Lamé's theorem, a classic result: if 3 lines cut a non-singular cubic in points A1 A2 A3, B1 B2 B3 and C1 C2 C3, and if A1 B1 C1 and A2 B2 C2 are each collinear, then A3 B3 C3 are also collinear (i.e., belong to a same right line).
1) A, B and -(A+B) are collinear.
2) 0, C and -C are collinear.
3) -A, -(B+C) and S = A+(B+C) are collinear.
4) As A, 0 and -A as well as B, C and -(B+C) are collinear, this is so for -(A+B), -C and S because of Lamé's theorem.
Consequently S = (A+B)+C; then A+B+C makes sense (this is associativity!).
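None of the answers spells out a computation, so here is a brute-force sanity check of associativity (my own sketch, not any poster's method): implement the chord-and-tangent law over a small prime field and test every triple of points.

```python
# Chord-and-tangent addition on y^2 = x^3 + a*x + b over F_p (p an odd prime;
# these values give a nonsingular curve).
p, a, b = 13, 2, 3
O = None                                  # the point at infinity (identity)

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                          # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

points = [O] + [(x, y) for x in range(p) for y in range(p)
                if (y * y - x ** 3 - a * x - b) % p == 0]

assert all(add(add(P, Q), R) == add(P, add(Q, R))
           for P in points for Q in points for R in points)
print(f"associativity verified over {len(points)}^3 triples")
```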
http://physics.stackexchange.com/questions/17445/number-of-conditions-for-a-two-particle-state-to-be-decomposable/17446
# Number of conditions for a two-particle state to be decomposable
Suppose we have a general two-particle state $\Phi (x_1, x_2 ) = \sum_{n_1,n_2} \phi_{n_1,n_2}(x_1,x_2)|n_1,n_2\rangle$, where $n_1$ can be any of $n$ possible states, and $n_2$ can be any of $m$ states. If the state is decomposable, then the coefficients $\phi_{n_1,n_2}$ factor as $\phi_{n_1} \phi_{n_2}$.
It seems to me that for the decomposable state there are $n + m$ independent coefficients ($n$ coefficients describing the state of partice 1, and $m$ coefficients describing the state of particle 2), and if the system is not decomposable, i.e., if there is entanglement between the two particles, then the number of independent coefficients is $n * m$ ($n$ possibilities for $n_1$ times $m$ possibilities for $n_2$ in $\phi_{n_1,n_2}$). If this logic is correct, then the number of conditions to be fulfilled by a decomposable state is $nm - n - m$. However, according to the book I am studying the number of conditions is $nm - n - m +1$. I wonder why there is an extra condition.
I am not considering normalization, because the states are seen as rays in projective space, and furthermore both the decomposable and entangled states would have to fulfill the same normalization requirements so I guess there would be no difference.
-
Is there a reason for the x-dependence in the state? – Norbert Schuch Nov 26 '11 at 2:57
$x$ is supposed to represent a three dimensional vector. Sorry if that was unclear, maybe I should have written it in bold. – Raphael R. Nov 26 '11 at 4:14
No, it's just that x being or not being there seems to have nothing to do with your question, as you are only taking about the "spin" degree of freedom. – Norbert Schuch Nov 26 '11 at 5:52
## 1 Answer
If you divide out normalization & overall phase, the two states have $n-1$ and $m-1$ independent (complex) degrees of freedom, respectively. On the other hand, the joint two-particle state has $nm-1$ independent degrees of freedom. The difference is $nm-n-m+1$.
Differently speaking, if you leave in phase and normalization, the two states (in a tensor product) have only $n+m-1$ degrees of freedom, since phase + normalization is a joint property.
EDIT: Alternative derivation:
If we write $|\phi\rangle = \sum_{ij} M_{ij} |i\rangle |j\rangle$, we are asking for the number of conditions such that $M_{ij} = a_i b_j$. Clearly, this means that $M$ is determined by its first row and column (which can be chosen freely). The remaining $(n-1)(m-1)=nm-n-m+1$ elements cannot be chosen and enumerate the constraints. Again, the first row and column together only have $n+m-1$ independent variables.
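(Added illustration, not part of the original answer.) Numerically, decomposability is exactly the statement that the coefficient matrix $M$ has Schmidt rank 1, i.e., a single nonzero singular value, which is easy to test:

```python
# A pure two-particle state is decomposable exactly when its n-by-m
# coefficient matrix M has rank 1, i.e. one nonzero singular value
# (Schmidt rank 1).
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4

# Product state: M = outer(a, b) has rank 1.
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=m) + 1j * rng.normal(size=m)
M_product = np.outer(a, b)

# Generic (entangled) state: a random M has full rank almost surely.
M_generic = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))

for name, M in [("product", M_product), ("generic", M_generic)]:
    s = np.linalg.svd(M, compute_uv=False)       # singular values, descending
    schmidt_rank = int(np.sum(s > 1e-12 * s[0]))
    print(name, "Schmidt rank =", schmidt_rank)
# product  Schmidt rank = 1
# generic  Schmidt rank = 3   (i.e. min(n, m))
```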
-
|
http://www.scholarpedia.org/article/Small-world_networks
|
# Small-World Network
Mason A. Porter (2012), Scholarpedia, 7(2):1739.
The term "small-world network" refers to an ensemble of networks in which the mean geodesic (i.e., shortest-path) distance between nodes increases sufficiently slowly as a function of the number of nodes in the network. The term is often applied to a single network in such a family, and the term "small-world network" is also used frequently to refer specifically to a Watts-Strogatz toy network.
## Definition
A network (or graph) consists of nodes (i.e., vertices) connected by edges. A path (or walk) in a network is a sequence of alternating nodes and edges that starts with a node and ends with a node such that adjacent nodes and edges in the sequence are incident to each other (Bollobás, 2001; Newman, 2010). Nodes or edges can appear multiple times in the same path, and the number of edges in a path is the length of the path. If a graph is connected, then any node can be reached via a finite-length path starting from any other node. The shortest path between a pair of nodes is called a geodesic path and there can be more than one such path.
Between any pair of nodes in an unweighted network, one can calculate the geodesic distance, which is given by the minimum number of edges that must be traversed to travel from the starting node to the destination node. (One can consider directed and weighted generalizations as well.) The distance from me to one of my friends is 1 (as I can reach them via one "hop" along the network), the distance from me to a friend of my friend who is not also my friend is 2, and so on.
A graph's diameter is the maximum of the geodesic distances between node pairs, and the world encapsulated by a graph is "small" if the expected number of hops between two randomly chosen people is small in some sense. In particular, a network is said to be a small-world network (or to satisfy the small-world property) if the mean geodesic distance between pairs of nodes is small relative to the total number of nodes in the network — usually, one wants this length $$\ell$$ to grow no faster than logarithmically as the number of nodes tends to infinity. That is, $$\ell \lesssim O(\log N)$$ as $$N \rightarrow \infty$$. The base of the logarithm doesn't matter.
Importantly, one needs an ensemble of graphs in order to define the small-world property rigorously. Many people who focus on empirical data find this unsatisfactory, as intuitively it can be desirable to designate whether or not an individual network is a "small world", and the definition above does not allow one to do this. (On the bright side, one can still compute all of the usual diagnostics for an empirical network and thereby examine its properties directly.)
## Background
In the 1960s, experimental psychologist Stanley Milgram conducted landmark studies on the small world phenomenon in human social networks (Milgram, 1967; Travers and Milgram, 1969). Milgram sought to quantify the typical distance between actors (i.e., nodes) in a social network and to show that one should expect it to be small. His series of experiments attempted to test the idea that the world had become increasingly interconnected amidst increasing globalization.
In one of his experiments, Milgram sent 96 packages to people living in Omaha, NE, USA whom he chose "randomly" from a telephone directory. Each package contained an official-looking booklet that included the crest of Harvard University (where Milgram was on the faculty). Each package also included instructions that recipients should attempt to get this booklet to a specific target individual (a friend of Milgram's who lived in Boston, MA, USA). The only information supplied about Milgram's friend was his name (and thus, indirectly, his gender), his address, and the fact that he was a stockbroker. Each recipient was instructed to send the package to somebody that they knew on a first-name basis who they felt would be socially "closer" to the target individual (e.g., by having a more similar occupation, living closer to the target, etc.). Each person who subsequently received one of the packages was then supposed to follow the same instructions to try to get the package to the target.
The target received 18 of the 96 packages. This success rate was higher than expected, and a modern version of Milgram's experiment that used e-mail communication had a significantly smaller success rate (Dodds et al., 2003). Milgram asked the participants to record in the package each step of the path, and the mean number of hops of completed paths was about 5.9. This led to the popularization of the idea that there are no more than about 6 steps between each pair of people in the world, which is encapsulated by the phrase "6 degrees of separation." Even more fascinating than the small number of hops that Milgram found between people was the ease with which the participants in his experiment navigated the social network using very little (and mostly local) information.
The actor Kevin Bacon became an unwitting part of this story a couple of decades ago when some college students, scheming to get on Jon Stewart's show on MTV (Barabási, 2003; Durrett, 2007), seemingly decided that "6 degrees of Kevin Bacon" sounded enough like "6 degrees of separation" that it must imply that Kevin Bacon was the "center" of the acting universe. (There are, in fact, numerous notions of centrality in networks.) An actor has a "Bacon number" of 1 if he appeared in a movie with Kevin Bacon, a Bacon number of 2 if he appeared in a movie with someone with a Bacon number of 1 and doesn't himself have a Bacon number of 1, etc.
Mathematicians were already doing something like this many years earlier, as it is quite popular to try to find the shortest path from oneself to Paul Erdős, who is one of the pioneers of graph theory (Hoffman, 1998). One considers a network whose nodes are defined by people and whose unweighted edges indicate that two nodes coauthored at least one paper together. (There are much more sophisticated, and appropriate, ways to study coauthorship networks if one wants to investigate them scientifically (Newman, 2001a; Newman, 2001b; Redner, 2005; Leydesdorff, 2001).) "Erdős numbers" are then defined analogously to Bacon numbers. For example, I know that my Erdős number is at most 4 because I have used MathSciNet to find a path of length 4 between Paul Erdős and me. One such path is the following: I have coauthored a paper with Shui-Nee Chow (who has an Erdős number of 3), who has coauthored a paper with David Green Jr. (Erdős number 2), who has coauthored a paper with Jiuqiang Liu, who has coauthored a paper with Erdős. The reason I state explicitly that my Erdős number is bounded above by 4 is that the value of 4 reported by MathSciNet is based only on publication venues that are indexed by MathSciNet, so there might be a shorter path via a publication that it doesn't include. This, along with Milgram's experiment, leads to the idea of network navigation. In particular, it is one thing to state that geodesic paths are short on average, but it is typically much harder to actually find a short path — and especially difficult to guarantee having found the shortest one — without full knowledge of the network structure.
One can also calculate an Erdős-Bacon number, which is equal to the sum of one's Erdős number and one's Bacon number. Some people, such as Natalie Portman, have low Erdős-Bacon numbers. (As described on the Wikipedia entry for Erdős-Bacon numbers, Portman has a collaboration path that includes Joseph Gillis, whose Erdős number is 1.)
It is also worth noting that there are some issues with traditional Erdős numbers, and some generalizations have been developed (Morrison, 2010).
## Watts-Strogatz networks
The best known family of small-world networks was formulated by Duncan Watts and Steve Strogatz in a seminal 1998 paper (Watts and Strogatz, 1998) that has helped network science become a medium of expression for numerous physicists, mathematicians, computer scientists, and many others. In fact, the term "small-world networks" (or the "small-world model") is often used to mean Watts-Strogatz (WS) networks or variants thereof, though many consider it preferable to define small-worldness in a more general fashion (as in the present article) (Newman, 2000). However, there is disagreement in the literature, as others (including the original authors) prefer to reserve the term to describe networks with both small mean geodesic path lengths and significant clustering (Watts and Strogatz, 1998). Note additionally that examination of graphs with small-world scaling predates the Watts-Strogatz model. For example, (Bollobás and Chung, 1988) contains a proof that the diameter of a network consisting of an $$N$$-cycle plus a random matching scales logarithmically with $$N$$ with probability $$1$$ as $$N \rightarrow \infty$$. (A closed path, which starts and ends at the same node, is called a cycle.)
Figure 1: (a) A ring network (i.e., a one-dimensional lattice with periodic boundary conditions) in which each node is connected to the same number $$l = 3$$ nearest neighbors. (b) A Watts-Strogatz network is created by selecting uniformly at random a fraction $$p$$ of the stubs (i.e., ends of edges) in the network and rewiring the associated edges so that each of them is connected to some node that is chosen uniformly at random. (c) The Newman-Watts variant of a Watts-Strogatz network, in which one adds shortcuts between nodes chosen uniformly at random without rewiring edges in the underlying lattice. This figure, which appeared in (Newman, 2003), is used with permission from Mark Newman and SIAM. Copyright © 2003 Society for Industrial and Applied Mathematics. Reused with permission. All rights reserved.
To discuss the Watts-Strogatz family of small-world networks, we need a notion of clustering. A global clustering coefficient is (Newman, 2003; Newman, 2010; Barrat and Weigt, 2000)$\begin{equation}\tag{1} C = \frac{3 \times \mbox{number of triangles}}{\mbox{number of connected triples}}\,, \end{equation}$
Figure 2: Clustering coefficient $$C$$ and mean geodesic distance $$\ell$$ between nodes in the Newman-Watts variant of the Watts-Strogatz small-world model as a function of rewiring probability $$p$$. Observe that there is a regime with high clustering but low mean geodesic distance. The clustering coefficient $$C \in [0,1]$$, as one obtains $$C = 1$$ for a complete graph with $$N \geq 3$$ nodes. This figure, which appeared in (Newman, 2003), is used with permission from Mark Newman and SIAM. Copyright © 2003 Society for Industrial and Applied Mathematics. Reused with permission. All rights reserved.
where a triangle consists of 3 nodes that are completely connected to each other (i.e., a 3-clique) and a connected triple consists of three nodes $$\{i,j,k\}$$ such that node $$i$$ is connected to node $$j$$ and node $$j$$ is connected to node $$k$$. The factor of 3 arises because each triangle gets counted 3 times in a connected triple. The clustering coefficient $$C$$ indicates how many triples are in fact triangles. A complete graph, in which every pair of nodes is connected by an edge, with $$N \geq 3$$ nodes yields the maximum possible value of $$C = 1$$, as all triples are also triangles. It can also be useful to define a local clustering coefficient (Watts and Strogatz, 1998; Newman, 2010), but this won't be necessary here.
The Watts-Strogatz model of small-world networks demonstrates how to construct a tractable family of toy networks that can simultaneously have significant clustering (i.e., what sociologists might call high transitivity) and small geodesic distances (Watts and Strogatz, 1998; Newman, 2003; Newman, 2010). This model generates a family of unweighted, undirected networks (with no self-edges or multi-edges) that interpolates between two limiting situations. One extreme consists of an unweighted, undirected network with $$N$$ nodes, which you should imagine are arranged in a circle. As depicted in panel (a) of Figure 1, each node is connected to the $$l$$ nearest neighbors on each side. This ring graph, which I will call the "substrate" of the WS model, is a large world, as one can only take slow "local" steps to travel between a pair of nodes. The ring graph has a nonzero clustering coefficient $$C$$ provided $$l \geq 2$$. The other extreme of the WS model is an Erdős-Rényi (ER) random graph, in which each pair of nodes has a uniform probability of being connected to each other. This yields a small world, as one can typically travel between a pair of nodes using a short path. However, the clustering coefficient $$C \rightarrow 0$$ as the number of nodes $$N \rightarrow \infty.$$
The WS model, parametrized by $$p \in [0,1]$$, includes a regime of networks that simultaneously exhibit both significant clustering and the small-world property. When $$p = 0$$, we obtain a ring graph in which each node is coupled to its $$c = 2l$$ nearest neighbors; when $$p = 1$$, we obtain an ER random graph. As illustrated in the middle panel of Figure 1, $$p$$ gives the probability of rewiring an edge: one considers each edge in the graph, and with probability $$p$$ removes that edge and replaces it with a "shortcut" edge between two nodes that are chosen uniformly at random from the $$N$$ nodes. (As discussed in Newman, 2010, this is technically a slight variant of the original WS model.)
The clustering coefficient when $$p = 0$$ is (Newman, 2010) $$\begin{equation} \tag{2} C = \frac{3(c-2)}{4(c-1)}\,, \end{equation}$$ which is independent of the number of nodes and ranges from $$C = 0$$ for $$c = 2$$ (i.e., nearest-neighbor coupling) to $$C \rightarrow 3/4$$ for $$c \rightarrow \infty$$. For $$c > 2$$, we get a family of graphs that have both small mean geodesic distances $$\ell$$ (i.e., they are small worlds) and significant clustering for a large range of rewiring probabilities $$p$$. Barrat and Weigt (Barrat and Weigt, 2000) showed that $$\begin{equation}\tag{3} C \sim \frac{3(c - 2)}{4(c - 1)}(1-p)^3 \end{equation}$$ as $$N \rightarrow \infty$$. The value of $$C$$ is large even for moderately large values of $$p$$, and the value of $$\ell$$ is small even for small values of $$p$$.
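As a concrete illustration, here is a short Python sketch (an addition to this article, not from the original; it uses the networkx library, whose watts_strogatz_graph routine implements the rewiring construction above) that reproduces the qualitative behavior of Figure 2:

```python
# Global clustering C (equation (1), computed by nx.transitivity) and
# mean geodesic distance ell for Watts-Strogatz networks as the
# rewiring probability p varies.
import networkx as nx

N, c = 1000, 6  # N nodes; each node starts with c = 2l ring neighbors

for p in [0.0, 0.001, 0.01, 0.1, 1.0]:
    G = nx.watts_strogatz_graph(N, c, p, seed=0)
    # Restrict to the largest component in case rewiring disconnects G.
    H = G.subgraph(max(nx.connected_components(G), key=len))
    C = nx.transitivity(H)
    ell = nx.average_shortest_path_length(H)
    print(f"p = {p:<6} C = {C:.3f}  ell = {ell:.2f}")
# As in Figure 2: ell drops quickly with p while C stays high, so
# intermediate p gives small worlds with significant clustering.
```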
The WS model's famous feature — namely, its possession of a large range of $$p$$ that produces small-world graphs with significant clustering — has inspired a great deal of subsequent research. For example, in the late 1990s and early 2000s, paper after paper appeared that investigated variants of the Watts-Strogatz model (Newman, 2000; Newman, 2010). In particular, Newman and Watts (Newman and Watts, 1999a) introduced a variant family of WS networks (see the right panel of Figure 1), in which $$p$$ represents the probability that a shortcut edge is added between a pair of nodes. Although one can see using numerical computations that WS networks have high $$C$$ and low $$\ell$$ for many values of $$p$$, it is not easy to verify this with a rigorous calculation. The Newman-Watts (NW) variant of WS networks dispensed with the rewiring and instead defined $$p$$ as the probability of adding a shortcut between each pair of nodes. This leaves the substrate ring intact and simplifies calculations enormously. The downside is that $$C \not\rightarrow 0$$ as $$p \rightarrow 1$$ (for $$c > 2$$), as one no longer obtains an ER random graph in that limit. Hence, this variant of the model has similar behavior for intermediate values of $$p$$ but not for values of $$p$$ too close to $$1$$ (see Figure 2). As $$N \rightarrow \infty$$, the clustering coefficient of the NW model is (Newman, 2010; Barrat and Weigt, 2000) $$\begin{equation}\tag{4} C \sim \frac{3(c-2)}{4(c-1) + 8cp + 4cp^2}\,. \end{equation}$$
To examine the small-world property, it is desirable to have an expression for the mean geodesic distance $$\ell$$. Exact expressions have proven elusive, but it is possible to find approximate ones (Newman, 2010; Newman and Watts, 1999b; Newman et al., 2000). For example, in NW networks, one can show using scaling considerations and a mean-field approximation that $$\begin{equation}\tag{5} \ell = \frac{N}{c}f(Ncp)\,, \end{equation}$$ where $$\begin{equation}\tag{6} f(x) \approx \frac{2}{\sqrt{x^2 + 4x}}\mathrm{tanh}^{-1}\left(\sqrt{\frac{x}{x+4}}\right) \end{equation}$$ and $$Ncp$$ is the expected number of shortcut edges. Because of the nature of mean-field calculations, equations (5) and (6) work best either when $$Ncp$$ is very large or when it is very small. When $$Ncp \gg 1$$, for example, one finds that $$\begin{equation}\tag{7} \ell \sim \frac{\ln(Ncp)}{c^2p}\,. \end{equation}$$ In other words, as long as the number of shortcuts is much larger than $$1$$, the mean geodesic distance $$\ell$$ increases logarithmically with the number of nodes $$N$$. That is, these networks exhibit the small-world property.
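To get a feel for these approximations, one can simply evaluate them numerically. The following sketch (an added illustration, not part of the original article) compares equations (5)-(6) with the asymptotic form (7):

```python
# Numeric check of the mean-field approximation (5)-(6) for ell and
# its large-Ncp asymptotics (7).
import numpy as np

def f(x):
    # Equation (6); valid for x > 0.
    return 2.0 / np.sqrt(x**2 + 4*x) * np.arctanh(np.sqrt(x / (x + 4)))

c, p = 6, 0.01
for N in [10**3, 10**4, 10**5, 10**6]:
    ell_mf = (N / c) * f(N * c * p)                # equation (5)
    ell_asym = np.log(N * c * p) / (c**2 * p)      # equation (7)
    print(f"N = {N:<8} ell_mf = {ell_mf:8.2f}  ell_asym = {ell_asym:8.2f}")
# Once Ncp >> 1, the two agree closely and ell grows only
# logarithmically in N: the small-world property.
```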
Intuitively, one expects many real-world social networks to exhibit the small-world property (Newman, 2010). WS networks and their variants provide tractable models in which to observe this phenomenon cleanly, and this is what makes them interesting to study. As one can observe in Figure 2, this can occur for the same values of $$p$$ at which clustering (and hence transitivity) in the network is significant. Sometimes, it really is a small world after all.
## Other examples
As I mentioned above, Ref. (Watts and Strogatz, 1998) has led to a huge amount of subsequent work (Newman, 2000; Strogatz, 2001; Newman, 2010; Newman, 2003), and the notion of small-world networks has been extremely influential in both theory and applications.
Some classes of networks can yield especially small worlds. For example, consider the construction of an unweighted, undirected, random network with a specified degree distribution (Aiello et al., 2001; Durrett, 2007) $$\begin{equation} \tag{8} p(k) = b k^{-\lambda}\,, \quad k = m\,, m+1 \,, \ldots \,, k_{\mathrm{max}}\,, \end{equation}$$ where $$b \approx (\lambda - 1)m^{\lambda - 1}$$ is a normalization factor and $$m$$ and $$k_{\mathrm{max}} = mN^{1/(\lambda - 1)}$$ provide lower and upper cutoffs for what is otherwise a power-law form. (The degree of a node is the number of edges connected to it, and the degree distribution of a network can either be determined based on a set of given node degrees or be given directly by some probability distribution from which node degrees are drawn.) If $$\lambda \in (2,3)$$, then the graph's mean geodesic distance $$\ell \sim O(\ln \ln N)$$ (Cohen and Havlin, 2003; Dorogovtsev et al., 2002; Chung and Lu, 2002; Durrett, 2007).
It is important to remark, especially in the context of the previous example, that the degree distribution does not tell us much by itself about geodesic distances in a network. Crucially, one must consider how nodes are connected. For example, Marek Biskup (Biskup, 2011) considered graph diameters by adding shortcut edges to the $$d$$-dimensional hypercube lattice $$\mathbb{Z}^d$$. Additionally, the idea of the "smallest" small-world network was explored in (Nishikawa et al., 2002), and the importance of the smallness of a small-world network for dynamical systems on networks was discussed in (Melnik et al., 2011).
The most amazing aspect of Milgram's experiments was not that the world is small but that people seemed to be very good at navigating a small world with almost entirely local information. (See my comments above concerning my attempt to find a short path of paper coauthorships between Paul Erdős and me.) To investigate network navigation, Jon Kleinberg has studied message passing on networks (Kleinberg, 2000; Newman, 2010; Easley and Kleinberg, 2010). Kleinberg introduced a model whose substrate is a ring of nodes with nearest-neighbor coupling. He incorporated geographic effects by assuming that the nodes around the ring are aware of how close they are to each other. Shortcuts are still placed between pairs of nodes at random, but now one chooses the particular pair of nodes so that the probability of a particular shortcut covering a distance $$r$$ (i.e., the two nodes in question are $$r$$ hops apart around the ring) is $$p_r = Kr^{-\alpha}$$, where $$\alpha \geq 0$$ is a bias parameter and the normalizing constant $$K$$ ensures that one has a well-defined probability. When placing a shortcut, one first chooses a distance $$r$$ from the probability distribution, and one then places a shortcut between a pair of nodes (chosen uniformly at random) that are exactly $$r$$ hops apart on the ring. The limiting case $$\alpha = 0$$ yields the NW model. When $$\alpha > 0$$, connections between nearby nodes are chosen preferentially, and the effect becomes progressively stronger as $$\alpha$$ becomes larger.
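As a sketch of this construction (an added illustration; the function name, the minimum shortcut length of 2, and the parameter values are my choices, not Kleinberg's), one can place biased shortcuts on a ring as follows:

```python
# Kleinberg-style shortcut placement on a ring: shortcut lengths r are
# drawn with probability p_r proportional to r^(-alpha); alpha = 0
# recovers uniform (Newman-Watts-like) shortcuts.
import numpy as np
import networkx as nx

def kleinberg_ring(N, num_shortcuts, alpha, seed=0):
    rng = np.random.default_rng(seed)
    G = nx.cycle_graph(N)
    r_values = np.arange(2, N // 2 + 1)        # candidate shortcut lengths
    weights = r_values.astype(float) ** (-alpha)
    probs = weights / weights.sum()            # the normalizing constant K
    for _ in range(num_shortcuts):
        r = int(rng.choice(r_values, p=probs)) # biased hop distance
        i = int(rng.integers(N))               # uniform choice of endpoint
        G.add_edge(i, (i + r) % N)
    return G

G = kleinberg_ring(N=1000, num_shortcuts=200, alpha=1.0)
print(nx.average_shortest_path_length(G))
```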
There is, of course, much more. For example, one interesting question to ask is which networks are easy to navigate using only local information, and one can also ask how small-world scaling might arise in the first place (Clauset and Moore, 2003; Mogren et al., 2008).
## References
• W. Aiello, F. Chung, L. Lu. A random graph model for power law graphs. Experimental Mathematics 10, 53—66 (2001).
• A.-L. Barabási. Linked: The New Science of Networks, Perseus Books, New York (2003).
• A. Barrat, M. Weigt. On the properties of small-world networks. The European Physical Journal B 13, 547—560 (2000).
• M. Biskup. Graph diameter in long-range percolation. Random Structures & Algorithms 39(2), 210—227 (2011).
• B. Bollobás. Modern Graph Theory, Academic Press, New York (2001)
• B. Bollobás, F. R. K. Chung. The Diameter of a cycle plus a random matching. SIAM Journal of Discrete Mathematics 1, 328—333 (1988).
• F. Chung, L. Lu. The average distances in random graphs with given expected degrees. Proceedings of the National Academy of Sciences 99, 15879—15882 (2002).
• A. Clauset, C. Moore. How do networks become navigable?. ArXiv:cond-mat/0309415 (2003).
• R. Cohen, S. Havlin. Scale-free networks are ultrasmall. Physical Review Letters 90, 058701 (2003).
• R. Cohen, S. Havlin. Complex Networks: Structure, Robustness and Function, Cambridge University Press, Cambridge (2010).
• P. S. Dodds, R. Muhamad, D. J. Watts. An experimental study of search in global social networks. Science 301, 827—829 (2003).
• S. N. Dorogovtsev, J. F. F. Mendes, A. N. Samukhin. Metric structure of random networks. Nuclear Physics B 653, 307—338 (2003).
• R. Durrett. Random Graph Dynamics, Cambridge University Press, Cambridge (2007).
• D. Easley, J. Kleinberg. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press, Cambridge (2010).
• P. Hoffman. The Man Who Loved Only Numbers: The Story of Paul Erdős and the Search for Mathematical Truth, Hyperion, New York (1998).
• J. Kleinberg. Navigation in a small world. Nature 406, 845 (2000).
• L. Leydesdorff. The Challenge of Scientometrics, Universal Publishers, Boca Raton (2001).
• S. Melnik, A. Hackett, M. A. Porter, P. J. Mucha, J. P. Gleeson. The unreasonable effectiveness of tree-based theory for networks with clustering. Physical Review E 83, 036112 (2011).
• S. Milgram. The small world problem. Psychology Today 1(1), 60—67 (1967).
• O. Mogren, O. Sandberg, V. Verendel, D. Dubhashi. Adaptive Dynamics of Realistic Small-World Networks. ArXiv:0804.1115 (2008).
• G. Morrison, L. Mahadevan. Generalized Erdős numbers. ArXiv:1010.4293 (2010).
• M. E. J. Newman, Models of the small world. Journal of Statistical Physics 101, 819—841 (2000).
• M. E. J. Newman. Scientific collaboration networks: I. Network construction and fundamental results. Physical Review E 64, 016131 (2001).
• M. E. J. Newman. Scientific collaboration networks: II. Shortest paths, weighted networks, and centrality. Physical Review E 64, 016132 (2001).
• M. E. J. Newman. The structure and function of complex networks. SIAM Review 45(2), 167—256 (2003).
• M. E. J. Newman. Networks: An Introduction, Oxford University Press, Oxford (2010).
• M. E. J. Newman, C. Moore, D. J. Watts. Mean-field solution of the small-world network model. Physical Review Letters 84, 3201—3204 (2000).
• M. E. J. Newman, D. J. Watts. Renormalization group analysis of the small-world network model. Physics Letters A 263, 341—346 (1999).
• M. E. J. Newman, D. J. Watts. Scaling and percolation in the small-world network model. Physical Review E 60, 7332—7342 (1999).
• T. Nishikawa, A. E. Motter, Y.-C. Lai, F. C. Hoppensteadt. Smallest small-world network. Physical Review E 66, 046139 (2002).
• S. Redner. Citation statistics from 110 years of Physical Review. Physics Today 58, 49—54 (2005).
• S. H. Strogatz. Exploring complex networks. Nature 410, 268—276 (2001).
• J. Travers, S. Milgram. An experimental study of the small world problem. Sociometry 32, 425—443 (1969).
• D. J. Watts, S. H. Strogatz. Collective dynamics of small-world networks. Nature 393(1), 440—442 (1998).
## Internal references
The following Scholarpedia articles are germane to the present discussion:
• B. Bollobás, Graph theory. Scholarpedia. In preparation.
• C. Hidalgo and A.-L. Barabási (2008), Scale-free networks. Scholarpedia, 3(1):1716.
• G. Nicolis and C. Rouvas-Nicolis (2007), Complex systems. Scholarpedia, 2(11):1473.
• T. Nishikawa, Network science. Scholarpedia. In preparation.
• O. Sporns (2007), Complexity. Scholarpedia, 2(10):1623.
## Recommended reading
You can find relevant information in the following books and articles (and in many other resources):
• M. E. J. Newman. The structure and function of complex networks. SIAM Review 45(2), 167—256 (2003).
• M. E. J. Newman. Networks: An Introduction, Oxford University Press, Oxford (2010).
• S. H. Strogatz. Exploring complex networks. Nature 410, 268—276 (2001).
• D. J. Watts, S. H. Strogatz. Collective dynamics of "small-world" networks. Nature 393(1), 440—442 (1998).
## See also
Additional online sources of interest include:
• S. Forman, Oracle of Baseball, available (online).
• P. Lamere, 6 degrees of Black Sabbath, available (online).
• MathSciNet, Collaboration Distance, available (online).
• J. J. O'Connor, E. F. Robertson, Biography of Paul Erdős, available (online).
• P. Reynolds, The Oracle of Bacon, available (online).
• U. Wilensky, NetLogo Java Applet for Watts-Strogatz networks, available (online).
• Wikipedia entry for Complex Network, available (online).
• Wikipedia entry for Erdős-Bacon Number, available (online).
• Wikipedia entry for Network Science, available (online).
• Wikipedia entry for Network Theory, available (online).
• Wikipedia entry for Six Degrees of Kevin Bacon, available (online).
• Wikipedia entry for Small-World Experiment, available (online).
• Wikipedia entry for Small-World Network, available (online).
• Wikipedia entry for Watts-Strogatz Model, available (online).
## Acknowledgements
I thank Mark Newman for permission to use Figure 1 and Figure 2, and I acknowledge Rick Durrett, James Gleeson, Kreso Josić, David Krackhardt, Peter Mucha, Oliver Riordan, and Steve Strogatz for useful comments on this article.
|
http://math.stackexchange.com/questions/12179/how-to-diagonalize-a-large-sparse-symmetric-matrix-to-get-the-eigenvalues-and-e/12201
|
# how to diagonalize a large sparse symmetric matrix, to get the eigenvalues and eigenvectors
How does one diagonalize a large sparse symmetric matrix to get the eigenvalues and the eigenvectors?
The problem is that the matrix could be very large (though it is sparse), at most $2500 \times 2500$. Is there a good algorithm for this that, most importantly, I can implement in my own code? Thanks a lot!
-
Why would you want to implement it on your own? Matlab can do it just fine. – Yuval Filmus Nov 28 '10 at 5:15
I am somewhat interested in this question because I know nothing about algorithmic efficiency. My naive thoughts here are that the usual diagonalization algorithm (i.e., performing simultaneous row and column operations) should go faster the sparser the matrix is. From a practical standpoint, it would be useful to have a "sparse matrix" datatype, so that the computer knows from the beginning that most of the row-column operations do not need to be performed. But is there more to it than this? I.e., does one actually use a different algorithm rather than just different implementation? – Pete L. Clark Nov 28 '10 at 8:05
1. Is there any structure in your sparse matrix? The efficiency of a sparse eigensolver ultimately rests on how well you wrote your matrix-vector multiplication routine, since Lanczos/Arnoldi requires at its core the multiplication of your sparse matrix with a number of vectors per iteration. – J. M. Nov 28 '10 at 12:33
2. Do you really need all the eigenvalues and eigenvectors? For most applications of sparse eigenproblems, one only needs the largest few and/or the smallest few. In addition to that, your 2500-by-2500 matrix of eigenvectors is guaranteed not to be sparse; so unless you have provisions for storing all 2500 vectors, as well as the time needed to wait for them (Lanczos/Arnoldi is fast for one vector, but for 2500 vectors...), you might want to reconsider your wants/needs. – J. M. Nov 28 '10 at 12:37
@Pete: It might interest you to know that sparse matrix storage techniques borrow a lot from graph theory, but not being a graph theory expert, that's all I can say about it. For tridiagonalizing a symmetric matrix with a neat pattern, I suppose judicious use of rotations might work, but if one tries Jacobi's scheme for diagonalization on a sparse matrix, you get a lot of fill-in in the first few iterations (even though theoretically the off-diagonal elements should converge to zero in the limit). – J. M. Nov 28 '10 at 15:22
## 1 Answer
$2500 \times 2500$ is a small matrix by current standards. The standard eig command of MATLAB should be able to handle this size with ease. Iterative sparse matrix eigensolvers like those implemented in ARPACK or SLEPc become preferable when the matrix is much larger.
Also, if you want to implement an eigensolver in your own code, just use the LAPACK library, which comes with very well developed routines for this purpose. MATLAB also ultimately invokes LAPACK routines for most of its numerical linear algebra.
Semi-related note: the matrix need not be explicitly available for the large sparse solvers, because they usually just depend on being able to compute $A*x$ and $A'*x$.
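(Added illustration, not part of the original answer.) In Python, for example, the analogous calls are scipy.sparse.linalg.eigsh, which wraps ARPACK's Lanczos iteration for a few eigenpairs, and numpy.linalg.eigh, which invokes LAPACK for the full dense spectrum:

```python
# Computing a few extremal eigenpairs of a large sparse symmetric
# matrix with ARPACK (via SciPy's eigsh), and the full spectrum
# with LAPACK (via numpy.linalg.eigh) if that is truly needed.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2500
# Build a sparse symmetric test matrix with a random sparsity pattern.
A = sp.random(n, n, density=0.001, format="csr", random_state=0)
A = (A + A.T) * 0.5  # symmetrize

# Six eigenvalues of largest magnitude, plus their eigenvectors.
vals, vecs = eigsh(A, k=6, which="LM")

# If the full spectrum is required, go dense and let LAPACK do it.
vals_all, vecs_all = np.linalg.eigh(A.toarray())
```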
-
Sure, it's relatively small, but sometimes the need to exploit sparsity is due to time (QR or MRRR or divide-and-conquer take longer than Lanczos/Arnoldi if you need only a few eigenvalues/eigenvectors.) – J. M. Nov 28 '10 at 12:39
yes, of course: but the OP seems to want all eigenvectors and eigenvalues, then probably viewing it as a sparse matrix might not be that great! Also, for this size of a matrix, having multicore or GPU implementations of dense linear algebra are significantly easier (or at least have better scaling) than the sparse case. – user1709 Nov 28 '10 at 13:28
Hence my questions to him/her; I am rather doubtful that s/he needs all of them! Still, I would have to agree that LAPACK/BLAS is pretty well-tuned as it stands! – J. M. Nov 28 '10 at 14:35
yes, I think your question to him/her regarding whether he/she really needs all eigenvals+vecs is a very important one. one setting where one may require (in the worst case) all might be when having to repeatedly ensure positive-definiteness of a matrix during an iterative process. – user1709 Nov 28 '10 at 15:38
|
http://physics.stackexchange.com/questions/27061/bell-polytopes-with-nontrivial-symmetries/27063
|
# Bell polytopes with nontrivial symmetries
Take $N$ parties, each of which receives an input $s_i \in \{1, \dots, m_i\}$ and produces an output $r_i \in \{1, \dots, v_i\}$, possibly in a nondeterministic manner. We are interested in joint conditional probabilities of the form $p(r_1r_2\dots r_N|s_1s_2\dots s_N)$. The Bell polytope is the polytope spanned by the probability distributions of the form $p(r_1r_2\dots r_N|s_1s_2\dots s_N) = \delta_{r_1, r_{1, s_1}}\dots\delta_{r_N, r_{N, s_N}}$ for all possible choices of numbers $r_{i,s_i}$ (in other words, each input $s_i$ produces a result $r_{i,s_i}$ either with probability 0 or 1, regardless of the other players' inputs).
Every Bell polytope has a certain number of trivial symmetries, like permutation of parties or relabelling of inputs or outputs. Is it possible to give an explicit Bell polytope with nontrivial symmetries? (e.g., transformations of the polytope into itself that take faces to faces and are not trivial in the above sense) In other words, I'm interested in whether a specific Bell scenario can possess any "hidden" symmetries.
Bell polytopes in literature are usually characterized by their faces, given by sets of inequalities (Bell inequalities), which, however, usually do not have any manifest symmetry group.
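(Added illustration, not part of the original question.) To make the vertex description above concrete, here is a small Python sketch that enumerates the deterministic vertices in the simplest scenario, $N = 2$ parties with $m_i = 2$ inputs and $v_i = 2$ outputs:

```python
# Deterministic vertices of the Bell polytope for N = 2 parties,
# 2 inputs, 2 outputs. A vertex corresponds to a pair of local
# functions f_i mapping inputs to outputs, so there are 4 * 4 = 16.
from itertools import product

inputs, outputs = [0, 1], [0, 1]
strategies = list(product(outputs, repeat=len(inputs)))  # all f: s -> r

vertices = []
for f1, f2 in product(strategies, repeat=2):
    # p(r1 r2 | s1 s2) = 1 iff r1 = f1(s1) and r2 = f2(s2).
    v = {(r1, r2, s1, s2): int(r1 == f1[s1] and r2 == f2[s2])
         for r1, r2, s1, s2 in product(outputs, outputs, inputs, inputs)}
    vertices.append(v)

print(len(vertices))  # 16 deterministic vertices
```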
-
At first glance, it seems any further symmetry would necessarily take you out of the regime of product states, and that such a state would necessarily produce correlations outside of the polytope. That said, this is just speculation, but perhaps it is one way to prove that there aren't any. – Joe Fitzsimons Sep 20 '11 at 8:11
## 2 Answers
Any symmetry of the local hidden variable polytope must map a vertex of the polytope to another vertex (or trivially to itself). This is true in general by convexity. By the duality between vertex representation and facet representation we only need consider vertices. I have modified the way you write vertices to obtain $p(r_{1} r_{2} ... r_{N}|s_{1} s_{2} ... s_{N})=\delta^{r_{1}}_{f_{1}(s_{1})}\delta^{r_{2}}_{f_{2}(s_{2})}...\delta^{r_{N}}_{f_{N}(s_{N})}$ where $f_{j}(s_{j})$ is the image of $s_{j}$ under a single-site function $f_{j}:\mathbb{Z}_{m_{j}}\rightarrow\mathbb{Z}_{v_{j}}$.
Therefore, a symmetry will map from the product of single-site maps $\delta^{r_{1}}_{f_{1}(s_{1})}\delta^{r_{2}}_{f_{2}(s_{2})}...\delta^{r_{N}}_{f_{N}(s_{N})}$ to another product of single-site maps $\delta^{r_{1}}_{f'_{1}(s_{1})}\delta^{r_{2}}_{f'_{2}(s_{2})}...\delta^{r_{N}}_{f'_{N}(s_{N})}$ with $f_{j}$ not necessarily equal to $f'_{j}$. Of course, one can reorder the products by permuting the parties and still produce a product of delta functions. Locality prevents us from allowing delta functions of the form $\delta^{r_{j}}_{f_{j'}(s_{j'})}$ with $j\neq j'$. Therefore, other than permutations, the only symmetry transformations that are allowed are transformations on the maps $f_{j}(s_{j})\rightarrow f'_{j}(s_{j})$.
We only need to consider each site's marginal probability distribution $p(r_{j}|s_{j})$, which can be written as a real vector of length $m_{j}v_{j}$. A vertex has $m_{j}$ non-zero elements, one of unit value for each value of $s_{j}$. In order to preserve these two properties of the vertex probability distributions, the only transformations on the real vectors that are allowed are a restricted class of permutations of row elements. This restricted class of permutations is naturally generated by relabelling a measurement outcome for each value of $s_{j}$ and relabelling the values of $s_{j}$.
This applies for the full probability distribution polytope. However, for other forms of correlations such as joint outcome statistics, e.g. $p(\sum_{j}^{n}r_{j}|s_{1} s_{2} ... s_{N})$ there are other subtle forms of symmetry outside of the 'trivial' classes. If you want me to elaborate, I can.
This is my first post to the TP.SE. I'm sorry if it is not detailed enough.
-
Welcome to TP.SE. It's good to see you here. Sorry to have stolen your reference! You deserve the rep from my answer too, so I'll use a bounty to transfer the rep. – Joe Fitzsimons Oct 3 '11 at 10:29
Apparently I need to wait 24 hours to award the bounty, but I'll do so then. – Joe Fitzsimons Oct 3 '11 at 10:30
No worries, Joe, I'm glad you put it up anyway. Sorry, I have been slow to reply to this question. Have been a bit busy. – Matty Hoban Oct 3 '11 at 10:41
Matty Hoban pointed me to a paper (PDF here) by Itamar Pitowsky from 1991 which looks at the geometry of correlation polytopes and their symmetries. I haven't read the paper in full, but glancing through it, on page 400 (page 6 of the actual paper) under the statement of results the author seems to say that the cardinality of the symmetry group is $n! 2^n$, which would be consistent with just the bit flips and permutations, and with the existence of only the trivial symmetries you mention.
-
Thanks for the reference. Though, the author doesn't actually seem to prove that there are no symmetries but the $n!2^n$ trivial ones - he only identifies these symmetries and claims that they generate the full group (maybe it follows from some facts about polytopes, I don't know). – Marcin Kotowski Sep 23 '11 at 16:59
|
http://mathhelpforum.com/discrete-math/55348-proving-roots-irrational.html
|
# Thread:
1. ## proving roots irrational
My class was given the assignment of proving that the square roots of 3, 6, and 12 are irrational. I have solved 3 and 6, and tried using what I did there for 12, but it is not working out. Can someone help me out, please?
Here is the prob:
If $n^2 = 12$, then $n$ is not rational.
2. Hi,
If you know that $\sqrt{3}$ is an irrational number, then you can use this: $\sqrt{12}=\sqrt{4\times3}=2 \sqrt{3}\ldots$
wow... I have no idea why I kept overlooking that... I would always get to $p^2=3q^2$ (sorry, I am still trying to learn how to use the computer math stuff) and I was so focused on trying to use the common factor that I didn't even notice what I was arriving at. Thanks so much!
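(Added note, spelling out the hint in post 2: if $n^2 = 12$ and $n = p/q$ were rational, then $\sqrt{3} = n/2 = p/(2q)$ would be rational as well, contradicting the irrationality of $\sqrt{3}$. So $n$ must be irrational.)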
|
http://mathoverflow.net/revisions/74816/list
|
# Is there a standard name for the intersection of all maximal linearly independent subsets of a given set in a vector space?
The title more or less says it all.... Let $V$ be a vector space (over your favorite field; $V$ not necessarily finite dimensional), and let $S$ be a subset of $V$. A maximal linearly independent subset of $S$ is exactly that: a subset of $S$ that is linearly independent yet not properly contained in any other linearly independent subset of $S$. (Equivalently, it is a basis for the subspace of $V$ that is spanned by $S$.)
Let $T$ be the intersection of all maximal linearly independent subsets of $S$. This $T$ might be as large as $S$, when $S$ itself is linearly independent. Alternatively, $T$ might be empty: if $\{v_1,v_2,v_3\}$ is a basis for $V$, then both examples $S = \{v_1,2v_1\}$ and $S = \{v_1,v_2,v_3,v_1+v_2+v_3\}$ have corresponding $T=\emptyset$. There are plenty of intermediate cases as well: in the same notation, if $S = \{v_1,v_2,v_3,v_2+v_3\}$ then $T=\{v_1\}$.
Does this object $T$ have a standard name?
|
http://mathoverflow.net/questions/21881/how-should-one-present-curl-and-divergence-in-an-undergraduate-multivariable-calc/21889
|
## How should one present curl and divergence in an undergraduate multivariable calculus class?
I am a TA for a multivariable calculus class this semester. I have also TA'd this course a few times in the past. Every time I teach this course, I am never quite sure how I should present curl and divergence. This course follows Stewart's book and does not use differential forms; we only deal with vector fields (in $\mathbb{R}^3$ or $\mathbb{R}^2$). I know that div and curl and gradient are just the de Rham differential (of 2-forms, 1-forms, and 0-forms respectively) in disguise. I know that things like curl(gradient f) = 0 and div(curl F) = 0 are just rephrasings of $d^2 = 0$. However, these things are, understandably, quite mysterious to the students, especially the formula for curl, given by $\nabla \times \textbf{F}$, where $\nabla$ is the "vector field" $\langle \partial_x , \partial_y , \partial_z \rangle$. They always find the appearance of the determinant / cross product to be quite weird. And the determinant that you do is itself a bit weird, since its second row consists of differential operators. The students usually think of cross products as giving normal vectors, so they are led to questions like: What does it mean for a vector field to be perpendicular to a "vector field" with differential operator components?! Incidentally, is the appearance of the "vector field" $\nabla = \langle \partial_x , \partial_y , \partial_z \rangle$ just some sort of coincidence, or is there some high-brow explanation for what it really is?
Is there a clear (it doesn't have to necessarily be 100% rigorous) way to "explain" the formula for curl to undergrad students, within the context of a multivariable calculus class that doesn't use differential forms?
I actually never quite worked out the curl formula myself in terms of fancier differential geometry language. I imagine it's: take a vector field (in $\mathbb{R}^3$), turn it into a 1-form using the standard Riemannian metric, take de Rham d of that to get a 2-form, take Hodge star of that using the standard orientation to get a 1-form, turn that into a vector field using the standard Riemannian metric. I imagine that the appearance of the determinant / cross product comes from the Hodge star. I imagine that one can work out divergence in the same way, and the reason why the formula for divergence is "simple" is because the Hodge star from 3-forms to 0-forms is simple. Is my thinking correct?
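(For reference, the chain sketched above can be written compactly, matching the formulation in a comment below:
$$\operatorname{grad} f = (df)^\sharp, \qquad \operatorname{curl} F = \big(\star\, d\, (F^\flat)\big)^\sharp, \qquad \operatorname{div} F = \star\, d \star (F^\flat),$$
where $\flat$ and $\sharp$ are the musical isomorphisms for the Euclidean metric and $\star$ is the Hodge star for the standard orientation. The determinant-like formula for curl does indeed come from the Hodge star on 2-forms, and the divergence formula is simple because $\star$ takes 3-forms to 0-forms simply.)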
Stewart's book provides some comments about how to give curl and divergence a "physical" or "geometric" or "intuitive" interpretation; the former gives the axis about which the vector field is "rotating" at each point, the latter tells you how much the vector field is "flowing" in or out of each point. Is there some way to use these kinds of "physical" or "geometric" pictures to "prove" or explain curl(gradient f) = 0 and div(curl F) = 0? Is there some way to explain to undergrad students how the formulas for curl and div do in fact agree with the "physical" or "geometric" picture? Though such an explanation is perhaps less "mathematical", I would find an explanation of this sort satisfactory for my class.
Thanks in advance!
-
My advice ... at this level, strictly stick to the textbook. If a student comes to you outside of class and asks for more insight, go ahead. But any deviation from the text will likely cause far more confusion that it prevents! – Gerald Edgar Apr 19 2010 at 20:54
@Gerald: Thank you for your advice. I should say that I am certainly not going to try to teach my students about differential forms and the Hodge star. My main issue is just that every time curl comes up, there are inevitably some students who ask about where the "unnatural-looking" formula comes from. On the other hand, they usually don't ask such questions about gradient and divergence, because their formulas "look" natural to them. I don't want to tell them that the curl formula is just some magic formula that has these magical properties. – Kevin Lin Apr 19 2010 at 21:08
What I am really hoping for is some way to convey to them that curl is in fact as natural as gradient and divergence, despite initial appearances. – Kevin Lin Apr 19 2010 at 21:09
This has always been my intuitive picture of curl: en.wikipedia.org/wiki/… – Dan Piponi Apr 20 2010 at 1:52
I would like to add a comment which doesn't constitute an answer. I have always explained the definition of divergence and curl just as Qiaochu suggests, by starting a proof of Gauss's and Stoke's theorems, computing the flux or divergence integrals on small boxes and deriving the formulas for divergence and curl as a limit. This has the advantage that these two theorems, which are rarely explained or motivated in a calculus class, are essentially self-evident, if one is comfortable with some heuristics about the integral as summing up small contributions. – David Jordan Jul 14 2010 at 0:19
## 18 Answers
To me, the explanation for the appearance of div, grad and curl in physical equations is in their invariance properties.
Physics undergrads are taught (aren't they?) Galileo's principle that physical laws should be invariant under inertial coordinate changes. So take a first-order differential operator $D$, mapping 3-vector fields to 3-vector fields. If it's to appear in any general physical equation, it must commute with translations (and therefore have constant coefficients) and also with rotations. Just by considering rotations about the 3 coordinate axes, you can then check that $D$ is a multiple of curl.
If I want to devise a "physical" operator which has the same invariance property - and therefore equals curl, up to a factor - I'd try something like "the mean angular velocity of particles uniformly distributed on a very small sphere centred at $\mathbf{x}$, as they are carried along by the vector field." (This is manifestly invariant, but not manifestly a differential operator!)
[Here I should admit that, having occasionally tried, I've never convinced more than a fraction of a calculus class that it's possible to understand something in terms of the properties it satisfies rather than in terms of a formula. That's unsurprising, perhaps: it's not an obvious idea, and it's entirely absent from the standard textbooks.]
-
Dear Tim: I'll have to check that argument for myself, but argh, that's beautiful!!! I feel frustrated that they don't seem teach this in undergrad multivariable calculus courses. This is certainly not in Stewart's book, and certainly this is the first time I've seen this myself. I hope that at least this is commonly taught in undergrad classical mechanics courses, but I never took a proper classical mechanics course myself. – Kevin Lin Apr 19 2010 at 23:39
Kevin, I'm glad if this is new to you. I admit that the argument itself may a bit fiddly for a mass-market calculus text (it reduces to showing that the cross product is characterized by bilinearity, rotation-invariance and scale), but the statement is clear enough. In its own terms, Stewart's text is solid enough, but he sometimes seems to me to present mathematics as a subject that fossilized some time in the Late Triassic...;) – Tim Perutz Apr 20 2010 at 1:56
Hmm, well, it's possible that someone showed this to me at some point, but I must have not been paying attention ;). Anyway, I feel that this explanation is so nice that it should be at least mentioned in these courses and books; it doesn't have to necessarily be worked out in explicit detail. And yeah, I agree with your feelings about Stewart, though to be fair I feel similarly about a lot of other calculus books as well. Some mention of more modern work would be helpful to counteract such perceptions. For instance, an introduction to the Galilean principle could be nicely followed ... – Kevin Lin Apr 20 2010 at 8:42
... up by, for instance, an introduction to some ideas from special relativity. (Ok, ok, ok, that's probably way too much to ask. But a mention would still be nice.) – Kevin Lin Apr 20 2010 at 8:45
I am sure you can work this out yourself, but just to note: div,grad, and curl are not diffeomorphism invariant, just invariant under isometries. This is of course due to the fact that $grad = \sharp\circ d$, $div = \delta \circ\flat$ and $curl = \sharp \circ d \circ * \circ \flat$, so the metric is involved heavily. This is also why it is somewhat more natural to work with d and forms instead of those operations on vectors in the context of differential topology. – Willie Wong Apr 20 2010 at 18:33
As far as explaining the formulas for div and curl, you should be able to do this starting with the definitions given in the Wikpedia articles by taking the corresponding integrals on rectangles and boxes. These definitions have fairly clear physical meanings, at least if your students are comfortable with line and surface integrals.
As far as explaining d^2 = 0: curl(grad f) = 0 because the line integral of a gradient about small circles is zero, so gradients can't curl. (In other words, if a vector field has nonzero curl at some p, you wouldn't be able to define a consistent potential about some small closed contour around p.) And div(curl F) = 0 because the surface integral of a curl about small spheres is zero, so curls can't diverge (that is, flow). These interpretations get used all the time in applications of Stokes' theorem to physics.
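(Added aside, not part of the original answer.) Both identities can also be checked symbolically; here is a minimal SymPy sketch, which relies on SymPy treating mixed partial derivatives of smooth functions as equal:

```python
# Symbolic check that curl(grad f) = 0 and div(curl F) = 0,
# using SymPy's vector calculus module.
from sympy import Function
from sympy.vector import CoordSys3D, gradient, curl, divergence

R = CoordSys3D('R')
f = Function('f')(R.x, R.y, R.z)               # arbitrary scalar field
F = (Function('P')(R.x, R.y, R.z) * R.i
     + Function('Q')(R.x, R.y, R.z) * R.j
     + Function('W')(R.x, R.y, R.z) * R.k)     # arbitrary vector field

print(curl(gradient(f)))     # prints 0 (the zero vector)
print(divergence(curl(F)))   # prints 0
```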
-
Ah, awesome, thanks. I'll keep this in mind when we get to Stokes' theorem in the class. – Kevin Lin Apr 19 2010 at 20:29
When I was younger, I envisioned curl(grad f) = 0 as the mathematical version of the statement "you can't go uphill both ways to school". – Ryan Thorngren Apr 20 2010 at 0:38
spikedmath.com/501.html – Asaf Karagila Oct 15 at 11:53
I have taught multivariable calculus exactly once, to engineering students at Concordia University in Montreal. I found the course to be replete with expository challenges like the one you mention: namely, to explain what is going on with the various concepts of vector analysis in something like geometric terms, but of course without introducing anything like differential forms. [Conversely, it is possible to know Stokes' Theorem in the form $\int_{\partial M} \omega = \int_M d \omega$ and still not have any insight into flux, divergence and other such geometric and physical notions. I myself spent about 10 years in this position.]
I thought hard and often found explanations that were much more satisfactory than the textbook, which was amazingly laconic. Or rather, I found explanations which were much more satisfactory to me. The students had a lot of trouble conceptualizing the material, to the extent that my lectures almost certainly would have been more successful if I hadn't tried to give geometric explanations and intuition but simply concentrated on the problems. Thus Gerald Edgar's comment rings true to me. But let me proceed on the happier premise that you want to give more motivation to the bright student who approaches you outside of class.
One thing which was useful for me was to read the "physical explanations" that the book sometimes gave and try to make some kind of mathematical sense of them. For context, I should say that I have never taken any physics classes at the university level and that I have rarely if ever met a mathematician who has less physics background than I. Moreover, when I took introductory multivariable calculus myself (around the age of 17), I found the physical explanations to be so vague and so far away from the mathematics as to be laughable. For instance, the geometric intuition for a curl involved some story about a paddlewheel.
So when I taught the class, I tried to make some mathematical sense out of the names "incompressible" (zero divergence) and "irrotational" (zero curl), and to my surprise and delight I found that this was actually rather straightforward once I stopped to think about it.
Let me also tell you about my one "innovation" in the course (I am sure it will be commonplace to many of the mathematicians here). It seems strange that there are two versions of Stokes' theorem in three-dimensional space (one of them is called Stokes' theorem and one of them is called Gauss' Theorem or -- better! -- the Divergence Theorem) whereas in the plane there is only Green's Theorem. Stokes' Theorem is about curl, whereas Gauss' theorem is about divergence. What about Green's Theorem?
The answer is that Green's theorem has a version for divergence -- i.e., a flux version involving normal line integrals -- and a version for curl -- a circulation version involving tangent line integrals -- but these two versions are formally equivalent. Indeed, one gets from one to the other by applying the "turning" operators L and R: L applied to a planar vector field rotates each vector 90 degrees counterclockwise, and R is the inverse operator. Then (with the convention that the curl of a planar vector field should always be pointing in the vertical direction, so we can make a scalar function out of it)
curl(L(F)) = div(F)
and by making this formal substitution one gets from one version of Green's Theorem to the other.
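Explicitly, with $F = (M, N)$, the two versions are the circulation (tangential) form $$\oint_{\partial R} M \, dx + N \, dy = \iint_R \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) dA$$ and the flux (normal) form $$\oint_{\partial R} \mathbf{F} \cdot \hat{\mathbf{n}} \, ds = \iint_R \left( \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y} \right) dA,$$ and substituting $L(F) = (-N, M)$ into the first indeed yields the second.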
See
http://math.uga.edu/~pete/handoutfive.pdf
http://math.uga.edu/~pete/handouteight.pdf
http://math.uga.edu/~pete/reviewnotes.pdf
A related problem (for math dept administrators)... How desirable/avoidable is it for physics & engineering students to be taught div,grad,curl and all that by an instructor who, himself, has never taken a physics course ?? – Gerald Edgar Apr 20 2010 at 14:20
Gerald, your remark seems rather harsh. I take my teaching responsibilities seriously, and I think that a PhD in mathematics is sufficient qualification for anyone to pick up concepts from freshman/sophomore physics as needed. I would certainly not be pleased to receive such an inquiry from my department head. – Pete L. Clark Apr 20 2010 at 15:26
Yes, in all of your examples it could only improve the course if the instructor knew those things. My point is that asking someone with a PhD in mathematics what courses they took as an undergraduate is both somewhat insulting and against the point of getting a PhD. I have never taken a course on Shimura varieties, but I taught one. If you wanted to teach an undergraduate course on algebraic number theory (or whatever is outside of your student training), I would trust you to prepare yourself properly to do so. – Pete L. Clark Apr 20 2010 at 18:01
When did taking a course in a subject become necessary OR sufficient for understanding a subject? – B. Bischof Apr 21 2010 at 13:27
@Gerald: Somewhere between your first and second comments you switched from formal qualifications (taking a course) to competence. IMO, it's reasonable to demand some degree of competence in physics of an instructor in "calculus for physicists and engineers" but it is unreasonable to demand that the instructor have taken a physics course. The latter kind of bean-counting is precisely the kind of thing that gives administrators a bad name. – Timothy Chow Oct 22 2010 at 23:12
Depending `(*)` on the underlying degree of analyticity `(**)` in your calculus course, it might be just as well to start with the Stokes theorem, stating it as an existence and uniqueness theorem:
Theorem (Stokes) Given a differentiable vector field $X$ on a region $U$ of $\mathbb{R}^3$ there is a unique continuous vector field $\operatorname{curl} X$ such that for any regularly parametrized surface $(u,v):D^2\rightarrow U$ with normal field $\hat{\mathbf{n}}$ and boundary tangent field $\mathbf{s}$, the integrals $$\iint (\operatorname{curl} X)\cdot \hat{\mathbf{n}}\, dA$$ and $$\oint X \cdot \mathbf{s}\ dt$$ are equal.
From there you can proceed by deducing properties of $\operatorname{curl} X$ sufficient to give its formula in coordinates and at the same time prove the theorem.
Note for instance that even stated only as an existence theorem, it already says there's a sufficient local criterion for local integrability of a vector field; the actual formula for curl then tells you what the criterion is.
This style of approach also gives you a quick proof that $\operatorname{div}\operatorname{curl}(X)=0$, because a sphere can be regularly parametrized by a disk such that $\mathbf{s} \equiv 0$.
(*) Here I mean roughly that if you show them sufficient variations of the mean value theorem to prove that iterated derivatives commute when they're continuous, then it should be feasible to give this construction with comparable rigor.
(**) none of these words used here with any standard mathematical sense.
Thanks. Yeah, I think the idea of your proposal is essentially contained in the wikipedia links in Qiaochu's answer. (Sigh -- another example of me not first looking things up on wikipedia before asking on MO -- sorry.) – Kevin Lin Apr 19 2010 at 21:14
oh! me too, for not thoroughly investigating what other people already said... – some guy on the street Apr 19 2010 at 21:17
This is exactly how I learned about div and curl in my physics course on electromagnetic theory, where they talk about the integral form of Maxwell's equations (which we call Stokes' theorem) and take a limit to derive the differential form of Maxwell's equations (containing the div and curl). Although I generally do not like the way physicists present mathematics, this is one case, where I think the physicists do it better. – Deane Yang Apr 20 2010 at 0:05
An old officemate of mine tried to teach his multivariate calc course this way (integrals first, then the derivative operators). The students who were taking simultaneously a physics course which "counted on div,grad,curl being taught first" (of course the physics department never bothered to communicate that fact to the maths department) were understandably confused and upset. So just as a protocol I suggest checking with departmental administrators before changing the order of the course too much! – Willie Wong Apr 20 2010 at 18:37
This is perhaps a crude (and certainly non-rigorous) explanation, but it's always how I've thought of motivating it.
Let $F = (F_1, F_2, F_3)$ denote a vector field in $\mathbb{R}^3$, and write $\text{curl}\ F = (G_1, G_2, G_3)$. We would like a situation where $G_1$ describes the "instantaneous" rotation of $F$ about the $x$-axis, $G_2$ the rotation about the $y$-axis, and $G_3$ the rotation about the $z$-axis.
So let's think of vector fields which do just that. Three simple (linear!) ones which come to mind are $$H_1(x,y,z) = (0, -z, y)$$ $$H_2(x,y,z) = (z, 0, -x)$$ $$H_3(x,y,z) = (-y, x, 0)$$ So in order to measure how much $F$ rotates about, say, the $z$-axis, it makes sense to look at something that compares how similar $F$ is to $H_3$. The dot product $F(x,y,z) \cdot H_3(x,y,z)$ seems reasonable, which is precisely $-yF_1(x,y,z) + xF_2(x,y,z).$
This suggests that defining $$G_1(x,y,z) \approx -zF_2(x,y,z) + yF_3(x,y,z)$$ $$G_2(x,y,z) \approx zF_1(x,y,z) - xF_3(x,y,z)$$ $$G_3(x,y,z) \approx -yF_1(x,y,z) + xF_2(x,y,z)$$ might give something close to what we want. But this is a very crude way to measure "instantaneous" rotation -- in fact, one might say it's a sort of linear approximation. Thus, we are led to replacing the linear terms with their corresponding derivations: $$G_1(x,y,z) = -\frac{\partial}{\partial z}F_2 + \frac{\partial}{\partial y}F_3$$ $$G_2(x,y,z) = \frac{\partial}{\partial z}F_1 - \frac{\partial}{\partial x}F_3$$ $$G_3(x,y,z) = -\frac{\partial}{\partial y}F_1 + \frac{\partial}{\partial x}F_2,$$ which is precisely the curl.
This heuristic also works with divergence, but instead consider $(H_1, H_2, H_3) = (x,y,z)$.
Oh, this is really nice! – Kevin Lin Jul 14 2010 at 0:36
Thanks! I like it also because it reiterates the idea that derivatives, differential operators, etc. are good linear approximations (which wasn't emphasized when I took Calc 3). – Jesse Madnick Jul 14 2010 at 0:45
This could be a comment on Qiaochu Yuan's answer but I don't have enough reputation.
A book that takes the approach mentioned on those wikipedia pages (defining divergence as flux density and curl as the vector in the direction of greatest circulation density) is Multivariable Calculus by McCallum, Hughes-Hallett et al.
Another resource worth looking at is Bridging the Vector Calculus Gap.
When I did this course myself I was deeply confused and distressed by it all. I know that everyone learns differently, but when we are shown all these mnemonics for remembering formulas that treat partials as if they were really numbers - it's not so bad, until they start getting used in proofs... then it drags up all these memories of people saying "well, it's not really a fraction, but let's treat it that way anyway". You can pass this class (with a very high mark) by memorizing these formulas, and I suppose that is all that really matters for beginner classes like that (being able to solve lots of problems, without necessarily knowing why or what), but it can be a bit stomach-churning and unpleasant to sit through a semester of not having a clue what any of this means while still getting all the right answers. There is a terrible sense of being lied to when people try to dumb things down in the hope that it makes them simpler and easier to learn.

It's only when I found a book on differential forms (which unified all these different concepts in one of the "applications" chapters) that I started to get the impression this was real mathematics and not just a strange act of going through the motions of writing lots of integral signs and so on. I do appreciate the remark in the preface of one of the many textbooks I read cover to cover in a worried haste to try and make sense of all this notational juggling, which said roughly that illustrative notations like this fuel the intuition (and gave as an example the curl formula coming from the visual that the determinant has two equal rows in it), but I didn't personally find this reassuring.

Maybe there is value in teaching it this way to scare people into studying hard for it, but I don't think I personally got anything out of those months of work except for the vague geometric intuitions about div, grad and curl (which you could explain in a day by showing a video). If people learned about differential forms earlier on (rather than being told "it's what Leibniz did so we do too"), maybe this course could be taught by developing the abstract setting a bit more and then specializing it down to the 3-dimensional case - which you could then get a good grip on by applying it to physics problems. I've not taught any class myself, so I just wanted to give an account of what it can feel like sitting through (or being dragged backwards through) this type of class.
I almost feel like the exterior derivative definition of curl is kind of like a mnemonic - all you remember is that differential forms are anti-commutative, and then you basically just work out the curl formula from that! – David Corwin Jul 14 2010 at 0:23
I agree wholeheartedly -- despite having the highest score in my vector calculus course, I had absolutely no idea what any of these things meant. I even ended up failing a physics class after finding myself completely unable to make sense of the divs, grads, and curls. Several years later, while trying to get a feeling for homological algebra, I picked up a book talking about de Rham cohomology since I'd heard that it was a good source of practical examples. By the time I was six or seven pages in, I suddenly understood everything I'd spent years struggling with in complete futility. – Daniel McLaury Apr 20 2012 at 4:25
Soon, I was able to learn what had been a full year's worth of material in a few weeks. Whenever I bring this up in mathematical company, I'll always hear the same thing -- "Yeah, me too. I didn't understand any of that stuff until I learned differential geometry." – Daniel McLaury Apr 20 2012 at 4:28
This was originally a comment that got too big. Since I won't address the real heart of your question, it is CW. I hope this is ok :/
I am also teaching Multivar out of Stewart this semester. As it has been suggested, I stick fairly close to the book, even working through some of the same examples he does. I focus very hard on motivating the ideas from the ground up.
For instance, when talking about curvature, I made the students try to define curvature for themselves, then proceeded to find little issues with their definitions until we arrived at something that was pretty close to the standard definition (with lots of urging).
Just yesterday Terry Tao posted a link to a video which talks about this exact style of teaching. I think that it works very well for Calc 3, where the students are generally pretty decent at math, having gotten that far. As far as specific examples and motivation go, I really enjoy the notes by Oliver Knill;
http://abel.math.harvard.edu/~knill//teaching/math21a/index.html
There are some nice diagrams, examples, and explanations of pretty much everything in Calc 3. In particular, he gives real physical applications of the ideas, and at the end he gives a "calc beyond calc" intro.
I don't know that I have anything new to add. I've taught vector calculus a few times, and I have to admit I really enjoy it. The last few times I actually used a combination of traditional vector notation and differential forms, and it seemed to work OK (i.e. it wasn't a complete disaster). I'm linking my recently revised notes. Please keep in mind they were written for undergraduate science majors, so the standards probably fall short of what most people here might like.
Your notes look very nice to me. I would say they are towards the high end of the spectrum of what could be done in an American sophomore level multivariable calculus course. – Pete L. Clark Apr 21 2010 at 19:15
Thanks. I threw in a lot of things that one could not or should not teach in such a class. But I was hoping that some students would keep reading. – Donu Arapura Apr 21 2010 at 19:47
There's one simple answer, which I'm very surprised hasn't been mentioned. Use the book Div, Grad, Curl, and All That.
Note: I wrote this when the title was still "What is the divergence? What is the curl?"
A nice geometric interpretation of the divergence is that it measures the rate of expansion of a fixed volume in the flow defined by the vector field. There is a very concrete way to see this by comparing the volume of a small cube to the volume of a parallelepiped given by considering where the corners of the cube are dragged by the flow in an infinitesimal length of time (it is easiest to work out the analogous case of a square with corners (x,y), (x+dx,y), (x,y+dy), (x+dx,y+dy) in the plane). I imagine the fact that the determinant measures volume can be used to explain the presence of the cross product. A more sophisticated (and conceptual) way to prove this fact is to note that the divergence can be defined on any manifold with a volume form as the Lie derivative of the volume form (contract the volume form with the vector field and then take the exterior derivative).
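In symbols: if $\omega$ is the volume form, then Cartan's formula gives $\mathcal{L}_X \omega = d(\iota_X \omega) + \iota_X (d\omega) = d(\iota_X \omega)$, since $d\omega = 0$ for a top-degree form, and one defines $\operatorname{div} X$ by $\mathcal{L}_X \omega = (\operatorname{div} X) \, \omega$.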
I have also seen a result generalizing the curl to arbitrary dimension by noting that we can identify 2-forms with skew-symmetric matrices (elements of the Lie algebra of SO(n)) if we have an inner product. I'd be curious to hear how exponentiating this infinitesimal rotation relates to integrating the vector field.
Here is another way to define curl:
1) Curl is a vector describing "instantaneous rotation". The line integral over a gradient vector field is zero on any closed curve, so whatever instantaneous rotation is, it should have the property $curl(\nabla f) = 0$.
2) Every symmetric $3\times 3$ matrix is $D^2f$ for some homogeneous polynomial $f$ in three variables. Thus a vector field $F:\mathbb{R}^3 \to \mathbb{R}^3$ is locally well-approximated by an irrotational gradient vector field when $DF$ is symmetric.
3) The function $g(\vec v) = \vec v \times \vec p$, where $\vec p$ is fixed but arbitrary, is both the general rotational velocity field about an axis $\vec p$ through the origin, with angular velocity $||\vec p ||$, as well as the linear function described by an arbitrary antisymmetric matrix. When $DF$ is antisymmetric, $F$ is locally well-approximated by a rotational velocity field.
4) $DF = \frac{(DF + DF^T) + (DF - DF^T)}{2}$. The first summand is irrotational, the second, a rotational velocity field. Define the instantaneous rotation field for F to be the linear function given by $(DF - DF^T)$ (ignoring the factor of $2$). The vector $\vec p$ which gives us the corresponding function $g$ is easily determined from the coefficients of $(DF - DF^T)$, and is precisely $curl(F)$.
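For the record, the identity at work here (up to the factor of $2$ and the choice of sign convention in $g$) is $(DF - DF^T)\,\vec v = (\operatorname{curl} F) \times \vec v$, which can be verified entry by entry.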
"derivative of the gradient vector field for [a function]": why not just call it the Hessian? – Willie Wong Oct 22 2010 at 19:06
Edited to $D^2f$. – Tobias Hagge Oct 26 2010 at 5:07
This is a good question and there are already a lot of good answers. Why add one more answer? Because there is a pedagogic option that nicely synthesizes these good questions and answers.
(1) The question recognizes that undergraduate students exposed to div, grad, curl, etc. commonly ask questions whose answers are associated to higher-level concepts like one-forms, Hodge duals, etc.
(2) Numerous comments and answers recognize that wading into these advanced topics will slow, confuse, discourage, and distress many undergraduate students. How then to proceed?
(3) A reasonable response is to refer that subset of students (generally a minority) who are willing to work (for no academic credit) toward a broader understanding, to a (free) on-line book by William L. Burke titled Div, Grad, Curl are Dead.
Note: do not confuse Div, Grad, Curl are Dead with the above-recommended book Div, Grad, Curl and All That: the former is modern and polemic, while the latter is traditional and tutorial. Students who like either one generally will not like the other one; it is useful therefore to point students toward both books.
The on-line text of Div, Grad, Curl are Dead is an uncorrected publisher's proof because sadly, Prof. Burke was killed in an accident before the final corrections were done. So Burke's proof pages have to be read carefully and critically, imperfections and all; this in itself is good practice for thoughtful students.
Some of Burke's lively prose:
I am going to include some basic facts on linear algebra, multilinear algebra, affine algebra, and multi-affine algebra. Actually I would rather call these linear geometra, etc., but I follow the historical use here. You may have taken a course on linear algebra. This to repair the omissions of such a course, which now is typically only a course on matrix manipulation.
Another example is:
Mathematician: When do you guys [scientists and engineers] treat dual spaces in linear algebra?
Scientist / Engineer: We don't.
Mathematician: What! How can that be?
Burke's lively exposition supplies in abundance what many undergraduates crave: a dramatic narrative about why the geometric aspects of differential calculus are useful and important ... and a dawning realization that undergraduate mathematics is just the preliminary chapter of a wonderful story.
Bill Thurston's Foreword to Mircea Pitici's recent book The Best Writing on Mathematics: 2010 makes this same point, and is recommended to undergraduates who ask "How can two mathematics texts on the same topic be so very different?"
For many undergraduate students, Div, Grad, Curl are Dead will be the wrong textbook. But for those students who ask tough questions and refuse to accept glib answers, it's an excellent textbook that gives undergraduate students explicit permission—indeed, seduces them—into reading more deeply.
That book looks nice. Thanks for your vivid advertisement :-) – Kevin Lin Feb 16 2011 at 16:48
Thanks ... I added another juicy quote (Burke's text has many such), also a link to a recent essay by William Thurston. That mathematical writing can be lively is a welcome revelation to many undergraduates. – John Sidles Feb 16 2011 at 17:45
Well, I for one have just discovered that I like both books! – Toby Bartels Apr 4 2011 at 1:04
One way of motivating the derivative definition of curl (or rather scalar curl in 2D) is to prove a "Mean Value Theorem" for Rectangles. Suppose that $\mathbf{F} = (M, 0)$ is a vector field defined on the rectangle $R = [a,b] \times [c,d]$. Write the integral $\frac{1}{\operatorname{area}(R)}\int_{\partial R} \mathbf{F} \cdot d\mathbf{s}$ as $\frac{-1}{b-a}\int_a^b \frac{M(t,d) - M(t,c)}{d-c}\,dt$. Apply the MVT for integrals and then the MVT for derivatives to conclude that there is a point $\mathbf{x} \in R$ such that $$\frac{1}{\operatorname{area}(R)}\int_{\partial R} \mathbf{F} \cdot d\mathbf{s}= -\frac{\partial M}{\partial y}(\mathbf{x}).$$ Do the same for vector fields of the form $(0, N)$. Then if $\mathbf{F}$ is C$^1$ and if $R_n$ is a sequence of rectangles with horizontal and vertical sides converging to a point $\mathbf{a}$, then we obtain: $$\lim_{n \to \infty} \frac{1}{\operatorname{area}(R_n)}\int_{\partial R_n} \mathbf{F} \cdot d\mathbf{s} = \frac{\partial N}{\partial x}(\mathbf{a}) - \frac{\partial M}{\partial y}(\mathbf{a}).$$ This is enough to explain what curl is measuring, without using Green's theorem.
Some students and I in a recent paper use this result to give a rather more intuitive proof of Green's theorem than those usually found in vector calc texts. See http://arxiv.org/abs/1301.1937
Well, $\nabla\times\nabla f=0$ if $f \in C^2$ can be proved by contradiction. Suppose the curl weren't 0: then the line integral around a small closed contour in the region of non-zero curl would be different from the one you get by just staying at that point. This doesn't make sense, because, given that the gradient describes change in the function, line integrals on it should be path invariant. You can make this visual with some arrows on a chalk board.
(You could explain that $\int_a^b \nabla F\cdot d\mathbf{x} = F(b)-F(a)$. Also, if you've explained differential forms, you can explain this with exact forms...)
And I'm rather pleased with the line Divergence Theorem is a grandiose way of saying that the amount of water that comes out of the tap is the amount that overflows the bath tub''...
As for $\nabla$, isn't it just a vector of unbound operators? ($(\mathbb{R}\to\mathbb{R})^3$?) And the multiplication/differentiation analogy seems not only to go back quite a ways, but to be useful, given that it forms some sort of nice algebraic structure relative to addition... In fact, given continuity, it very nearly forms a field.
In any case, I'm a high school student and have explained this to one of my friends in this manner. It seems to work fairly well in my one-point data set...
Many great answers have already been provided to your earlier questions; this one is to speculate on the final one about the history of the operators. I am almost certain the operators (in various guises) were first used in fluid/continuum/elastic mechanics, rather than in electromagnetism, based purely on knowledge that the (mathematical) development of the former predates the latter by almost a century. Consider that Euler wrote down his eponymous equation in fluid dynamics in 1757, some half century before Oersted's discovery that electricity and magnetism are connected. So at the very least the Div and Grad operators predate electromagnetism.
The Div operator probably first appeared in consideration of the continuity equations, which roughly says $$\partial_t \mbox{density} + \nabla\cdot \mbox{flux} = \mbox{source}$$ but I am not certain about the history here. (I've made this answer a CW so people can add references.)
As to the Curl, I am much less certain. The name (also its alternate name, the Rotor) suggests to me that, like the divergence, it arises from considering fluid flow, and in fact the vorticity ($\omega = \nabla\times v$) is immensely useful for the study of fluids. Since it is used in the (modern re-)derivation of Stokes' law, and given the existence of the Kelvin-Stokes theorem, it is likely that George Stokes, at least, made use of the expression which we now call the Curl.
Of course, the name itself is unreliable as an arbiter, since it is most likely a modern invention. See, for example, this article in the Monthly. You should also take all of the above with a grain of salt: the operators were almost certainly not so named, nor their modern symbols used, in the 18th and early 19th centuries. So the most you can say is that some equivalent mathematical expression was found useful way back when.
And... great. Just as I typed up an answer to your final question, you deleted it! – Willie Wong Apr 21 2010 at 18:44
Well, I felt that the history part was kind of distracting to the rest of the question, so I removed it... – Kevin Lin Apr 21 2010 at 19:11
Nah, if this weren't community wiki, I would've just deleted it. Now I am just not sure what to do about the response. – Willie Wong Apr 21 2010 at 22:35
I don't know how it was originally, but in Maxwell's day, the divergence was called the ‘convergence’ and measured with the opposite sign. When Gibbs and Heaviside pushed using 3-vectors with the dot and cross products in place of Hamilton's quaternions, they also pushed for using divergence in place of convergence. (Divergence comes directly from a dot product, as we all know from their notation which is still used today, while the quaternionic product has this term with the opposite sign.) – Toby Bartels Apr 4 2011 at 0:48
I find that a reference to something physical really helps. In physics classes, references to calculus are inherent to the process. Similarly, calculus teachers can accelerate understanding by incorporating references to physics during lectures. To explain divergence, make reference to light from a bulb. Flux and divergence are simple to picture here, and it really helps the student. Similarly, for curl, hook up two soda bottles and make a tornado. Emphasize the concept of source and sink. The calculus becomes intuitive after that.
I agree that it's very confusing that curl is again a vector field, and that there's this wacky determinant. I recommend doing the 2-d version, and "discovering" that curl is more naturally thought of as scalar-valued there, but of course you can think of it as that scalar field times the constant vector orthogonal to the plane.
Basically, I think it's reasonable to tell people that curl is measuring curl, and that it should really be fed two vectors. In the plane, you don't have to tell it the two vectors, and in 3-space you can cheat differently by feeding it the third, orthogonal vector.
# An Introduction to Computer Programming and Mathematics
##### Stage: 5
Article by David Saxton
A computer program is a series of instructions (also called code) given to the computer to perform some task, which could be anything from summing the numbers from 1 to 10 to modelling the climate. When the computer follows the instructions given in the program, we say that the computer is running the program. There are many different ways of writing these instructions for the computer (we speak of programming in different languages) - in this article, we will use a language called C++. By the end of it you will be able to write your own programs to perform basic mathematical and scientific tasks.
## Our first C++ program
Our first C++ program will tell the computer to print out the text "Hello world!". Here it is (don't worry - it will be explained line by line).
```#include <iostream>
using namespace std;
int main()
{
// the next line prints out the text "Hello world!"
cout << "Hello world!";
}
```
For a computer to run this program, it must first be compiled by a compiler (this means translating it from the language of C++ to the computer's native machine code language). There's a useful online resource, codepad.org, which does the steps of compiling and running the program for us. Navigate to this website, select the option "C++" for the language, and copy and paste (or better yet, type out) the program above in the text box, before clicking the submit button. If you entered the text correctly, then you should see the following displayed for the output:
```Hello world!
```
If this doesn't appear, then you may have entered the program incorrectly - try again.
To get a feel for what is going on, let us examine the structure of the program.
```#include <iostream>
using namespace std;
```
These first two lines tell the compiler about a range of functions that are available - a function is a block of code, which in this case, already exists in the computer memory ready for us to use. For now you don't need to understand exactly what these lines mean; only that they should be placed at the top of most C++ programs that you will write. In this program, we want to use a function called "cout", which prints out text.
```int main()
{
...
}
```
This type of structure denotes a function in the program, called "main". This is a special function; we can (and later will) define functions with other names, but the computer will look for this function for the initial instructions to start following, which we place inside the brackets {, } (these show where the function starts and stops). We will describe more on the function syntax later.
```// the next line prints out the text "Hello world!"
```
This is a comment line. When the compiler sees "//", it will ignore anything that comes after this until the end of the line. Adding this text has no effect on the behaviour of the program, but it can be useful for when a person wishes to read and understand the code at a later date.
```cout << "Hello world!" << endl;
```
This tells the computer to print out text. You don't need to worry about how cout works for now - only how to use it. A series of text or numbers can be printed out by combining them with <<. For example we could have written the above line instead as:
```cout << "Hello" << " world" << "!" << endl;
```
The endl means "end line" and prints out a newline character.
The semicolons ; tell the compiler where one instruction stops and another begins - their role is analogous to the role played by full-stops in sentences.
Exercise: We must be precise in programming. If we type out a name of a keyword incorrectly, miss out a bracket, or forget a semicolon, for example, then the compiler will not be able to understand the program. Identify the 5 mistakes in the following program.
```#invlude <iostream>
us1ng namespace std
int main()
{
/ / the next line prints out the text "Hello world!"
cout << "Hello world!;
}
```
Solution: include and using were spelt incorrectly. There is a semicolon missing at the end of the 2nd line. A space has been inserted into the "//" comment part. There is a quote " missing for the string in the 7th line.
## Variables and arithmetic
A variable in a computer program is a piece of information such as a number in the computer's memory. Once we have a variable, we can make use of and modify its value as we please. This is analogous to how a human brain stores, for example, a friend's address - it is there ready to be recalled, and may change from time to time.
Variables are essential for computer programs to work. If we sum the numbers from 1 to 10, we need a variable to store the sum at each stage of the calculation. If a program receives two numbers from the user and calculates their sum, it must use variables to store the input values before the sum can be calculated.
### Integer variables
In C++, there are many different types of variables, representing the different types of information that they refer to. To make use of a variable, we must first tell the computer what type we want. The first type we look at in this section is the integer variable, for storing a whole number.
To "declare" an integer variable (which means setting aside computer memory for an integer), we include a line of code such as the following. The variable is then ready for use.
```int x = 0;
```
The "int" is the special codeword for integer. The "x" is the name of the variable, and can be replaced with anything made up of letters, numbers, and underscores (the first character cannot be a number however). The case of the letters are also important (so "hello" and "heLLo" would be seen as different by the computer). The " = 0" sets the variable's initial value to zero. We can also write "int x;" if we do not care about setting a specific value.
Exercise: Which of these are valid variable names?
1. seven
2. 2nd
3. TOP_speed
4. fast1
5. quick#
Solution: 1, 3, 4 are valid variable names. Name 2 starts with a number. Name 5 contains an illegal character #.
We can later change the value of a variable by writing a piece of code such as:
```x = 7;
```
This assigns the value 7 to x. To print out the value of a variable, we can again use the "cout" function - to display the value of x for example, we include the line of code:
```cout << x << endl;
```
Here is a complete program that declares a variable called x, with initial value 0, prints out its value, assigns the value of 7 to x and prints out its value again. This shows how variables can be read from and changed. Try it.
```#include <iostream>
using namespace std;
int main()
{
int x = 0;
cout << x << endl;
x = 7;
cout << x << endl;
}
```
For integers, we can perform all the basic arithmetic operations such as addition, subtraction, multiplication and division. We use the symbols +, -, *, and / for these respectively. For example, this code creates two integer variables x and y with initial values, creates a third integer variable z, and sets the value of z to the value of x + y + 3.
```int x = 2;
int y = 4;
int z;
z = x + y + 3;
```
In the above code, z would now have the value of 9. We could also have written the last two lines more quickly as "int z = x + y + 3;". We can also write code such as:
```int x = 2;
x = x * x;
```
The line "x = x * x" may look a bit strange - and if it was a mathematical equation then it would mean "x has a value satisfying x = x * x". In a computer program however, it means "take x, multipy it by itself, and assign the resulting value back to x". Now x would have the value 4.
There are another two operators, ++ and --, which are very useful (there are different types of operators - things like + and - will take two integer values and produce a third. ++ and -- take an integer variable and change its value). The expression "x = x + 1;" (increasing the value of x by 1) tends to occur a lot in programming, so C++ has a shorter way of writing this: the piece of code "x++;" has exactly the same effect. Similarly "x = x - 1;" and "x--;" are equivalent. For example, in the following lines of code, x has initial value 3, but has final value 5.
```int x = 3;
x++;
x++;
```
What about division? If x is an integer variable with value 4, and we write "int y = x / 2;", then y will have the value 2. However, what is the value of y in the following program?
```int x = 4;
int y = x / 3;
```
The "int" means that y is a whole number. In this expression, the computer will discard the non-integer part of the value of x/3. So since 4/3 = 1 + 1/3, y will get the value 1; y is "rounded down". If the expression is negative, then the resulting value will get "rounded up" (this is a convention, and suits the way the computer calculates the numbers). So for example, the value of (-4)/3 = -1 - 1/3, so its integer part is -1 (and not -2).
The last integer operator that we have not yet mentioned is %. The value of the expression "x % y" is the remainder of x when it is divided by y. For example, if x has the value 7 and we write "int z = x % 4;", then z will have the value 3.
Exercise: What value of x will be printed in the following code? Try creating a program with this code and running it to see if you are correct.
```int x = 11;
int y = x % 4;
x = x / 2;
x = x + y*2;
cout << "x = " << x << endl;
```
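Solution: y gets the value 11 % 4 = 3; then x becomes 11 / 2 = 5 (integer division discards the remainder); finally x becomes 5 + 3*2 = 11. So the program prints "x = 11".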
### Real variables
In the previous section, we saw how "int x;" defined an integer valued variable called x. We can similarly define variables for storing a real valued number, such as -2.5 or 3.141592. In programming these are referred to as floating point numbers. In C++ we can write
```float x;
```
to define a floating point variable called x. All the arithmetic operations +, -, *, / can be used for floating point numbers and variables. When we divide two floating point numbers, then we get the full number (not just the integer part).
The following piece of code creates two floating point variables called pi and r to store an approximation to $\pi$, and a radius, and defines a third floating point variable called area which stores the result of a calculation - namely the area of the circle of radius 4. Try it!
```#include <iostream>
using namespace std;
int main()
{
float pi = 3.141592;
float r = 4;
float area = pi * r * r;
cout << area << endl;
}
```
Exercise: Write a program that calculates the volume of a sphere of radius 4 (the formula for the volume is $(4/3) \pi r^3$).
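One possible solution, modelled on the circle-area program above. Note that we must write 4.0/3.0 rather than 4/3: since 4 and 3 are integers, 4/3 would be integer division and give 1.

```
#include <iostream>
using namespace std;
int main()
{
    float pi = 3.141592;
    float r = 4;
    // 4.0/3.0 avoids integer division, which would give 1
    float volume = (4.0/3.0) * pi * r * r * r;
    cout << volume << endl;
}
```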
### Boolean variables
A boolean variable stores a truth value (a bit of information representing either "true" or "false"). We define it using the "bool" keyword. For example, to define a boolean variable called x with initial value true, we write:
```bool x = true;
```
Similarly to how we can combine two numbers via addition or subtraction to get a third number, there are two operations, known as AND and OR, that combine two boolean variables to produce a third. In C++, we write && for AND (similarly to how we write + for addition), and || for OR. If x and y are boolean variables, then "x && y" has value true only when x and y are both true, and "x || y" has value true when at least one of x or y has value true. There is also a third operator, NOT, which takes a single boolean value and gives its logical inverse. C++ uses the symbol ! for this. If x is false, then "!x" has the value true, and vice versa.
Let's see an example of this. Try running the following program.
```#include <iostream>
using namespace std;
int main()
{
bool is_saturday = true;
bool is_sunday = false;
bool is_weekend = is_saturday || is_sunday;
cout << is_weekend << endl;
}
```
In this program, the value of is_weekend is the logical OR of the two boolean variables is_saturday and is_sunday (and so it will have value true). Try changing the initial values of the boolean variables is_saturday and is_sunday and run the program again.
The following is a snippet of code showing the NOT operator in use. The boolean variable is_weekday will be initialized with the value false.
```bool is_weekend = true;
bool is_weekday = !is_weekend;
```
Similarly to how we can write expressions for integer variables which use multiple operators and mix variables with numbers, we can do the same with boolean variables. For example, in the following code, c will get the value true.
```bool a = false;
bool b = true;
bool c = (a || b) && (a || true);
```
Exercise: The AND and OR operators are like AND and OR logic gates in electronic circuits. We can combine &&, || and ! to produce different logic gates. For example, the C++ expression "!(!a || !b)" represents AND - it is equivalent to "a && b", where a and b are boolean variables. What boolean expressions represent the following? (One possible set of answers is given after the list.)
• OR (using only the operators ! and &&).
• NAND
• NOR
• XOR
• XNOR
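One possible set of answers (there are many equivalent expressions for each gate):

• OR: !(!a && !b)
• NAND: !(a && b)
• NOR: !(a || b)
• XOR: (a || b) && !(a && b)
• XNOR: !((a || b) && !(a && b)), or equivalently (a && b) || (!a && !b)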
## Testing variables
C++ allows us to test the value of a variable (for example, to see if an integer variable has value 27 or has value at least 18). The result of a comparison is a boolean value (true or false). There are six "comparison operators" for numbers (we will see these in use shortly).
• x <= y - This expression is true if x $\le$ y.
• x >= y - This expression is true if x $\ge$ y.
• x < y - This expression is true if x < y.
• x > y - This expression is true if x > y.
• x == y - This expression is true if x = y.
• x != y - This expression is true if x $\ne$ y.
For example, suppose we had declared an integer variable called "age". The expression age >= 18 has a boolean value, and is true if age $\ge$ 18, and otherwise is false. Equivalently, we could have written age > 17. We can store the result of this expression in a boolean variable. Here's an example program:
```#include <iostream>
using namespace std;
int main()
{
int age = 15;
bool isAdult = age >= 18;
cout << "isAdult is " << isAdult << endl;
}
```
Try running the program. Try modifying the value of age, and also try other comparison operators (so try replacing >= with one of <=, >, <, ==).
Of course, we are free to compare variables with variables, variables with values, and values with values (for example, 15 > 17 is a valid piece of C++ code, and always has the value false). We can also combine comparison expressions together in the same way that we combined boolean values together with && and ||. What does the following piece of code do?
```int age = 15;
bool isTeenager = (age >= 13) && (age <= 19);
```
The expression age >= 13 has value true, and the expression age <= 19 also has value true. Therefore isTeenager has the value of the expression (age >= 13) && (age <= 19), which has value true && true, which is true.
There are in general many ways of writing a boolean expression with the same value. For example, all of these are equivalent:
• age >= 18
• age - 3 > 14
• -age < -17
• 21 - age <= 3
## Branching and if statements
So far, the programs we have written can be thought of as a sequence of instructions executed in turn, for example performing an arithmetic operation and then printing out its value. This doesn't allow us to do that much. C++ allows us to conditionally execute a segment of code depending on the value of a boolean expression. We do this using an if statement. Let us demonstrate by an example - try running this code.
```#include <iostream>
using namespace std;
int main()
{
int x = 6;
int y = 5;
if ( x > y )
{
cout << "x is bigger than y" << endl;
}
}
```
When you run this program, it will output "x is bigger than y". This is an example of conditional execution - the value of the (boolean) comparison expression x > y is true, and so the code inside the brackets { ... } is executed.
This code also shows where else brackets { } are used. They are like parentheses ( ) in a mathematical expression such as (((2+3)*5)-4)*(3-4). They must be nested and matched properly - for example, )2+(3*7)+3( is not a valid mathematical expression. Every time { appears in C++ code, it must be followed somewhere later by a corresponding }. We can nest if statements together in this way, for example:
```int x = 6;
int y = 5;
int z = 2;
if ( x > y )
{
if ( x > z )
{
cout << "x is biggest of them all" << endl;
}
}
```
We can also extend if statements to an if-else block of code. This has the general form if ( expr1 ) { code1 } else if ( expr2 ) { code2 } ... else if ( exprN ) { codeN } else { lastCode } (but many variations are possible). The computer will check each expression expr1, expr2, ... in turn to see if any of them are true. If it is true then the code inside the brackets will be executed and then no further expressions will be checked. If none of the expressions are true then the code in the last brackets is executed (lastCode in the above). Any of the else if or the else parts can be eliminated to also produce valid code. The following example program shows them in use.
```#include <iostream>
using namespace std;
int main()
{
int x = 4;
int y = 5;
if ( x > y )
{
cout << "x is bigger than y" << endl;
}
else if ( x == y )
{
cout << "x is equal to y" << endl;
}
else
{
cout << "x is smaller than y" << endl;
}
}
```
When you run the program, the text "x is smaller than y" is displayed. We could delete the last "else" part to produce a valid C++ program, which would not display any output when run (since neither of the expressions x > y or x == y is true), such as:
```int x = 4;
int y = 5;
if ( x > y )
{
cout << "x is bigger than y" << endl;
}
else if ( x == y )
{
cout << "x is equal to y" << endl;
}
```
Or we could delete the middle "else if" part, so that either the first code was executed, or the second code was executed.
```int x = 4;
int y = 5;
if ( x > y )
{
cout << "x is bigger than y" << endl;
}
else
{
cout << "x is at most y" << endl;
}
```
Exercise: Write a program that defines a variable x with some initial value, and an if-else statement that prints out whether x is odd or even. (One possible solution is sketched below - but try it yourself first!)
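One possible solution, using the remainder operator % from earlier:

```
#include <iostream>
using namespace std;
int main()
{
    int x = 7;
    // x is even exactly when the remainder on division by 2 is 0
    if ( x % 2 == 0 )
    {
        cout << "x is even" << endl;
    }
    else
    {
        cout << "x is odd" << endl;
    }
}
```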
## Loops
Looping is the next important concept in programming. C++ allows us to repeatedly execute a block of code, until some condition is reached (for example, until the value of an integer counter reaches 10).
The first type of looping is the while loop. To use this in a C++ program, we insert code of the form
```while ( expression )
{
// code
}
```
where expression is replaced with a boolean expression (such as x < 10), and code is replaced with the instructions that we wish to repeatedly execute. When the while loop is encountered by the computer, the behaviour is as follows:
1. If the expression is false, then skip the code inside the brackets { } and continue with the program.
2. Otherwise, execute the code in the brackets { } and go to step 1 again.
So the code inside the brackets { } will be repeatedly executed until the expression evaluates to false.
Let us see an example - try running the following program:
```#include <iostream>
using namespace std;
int main()
{
int i = 0;
while ( i < 5 )
{
cout << i << endl;
i++;
}
}
```
This program creates an integer valued variable called i with initial value 0. The code in the while loop is executed if i < 5, which is clearly true initially - and so "0" is printed out. The value of i is then increased to 1 (this is the code i++;). The condition i < 5 is still true, so the value of i - "1" - is printed out again, and i is incremented again. This continues until i reaches value 5 - at which point the condition i < 5 is false and the while loop is skipped.
Exercise: Write a program to print the numbers from 0 to 9 in order, and then back down to 0. You will need two while loops. (A sample solution is sketched below.)
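Here is one possible solution - but try it yourself first! The first loop counts up, and the second counts back down.

```
#include <iostream>
using namespace std;
int main()
{
    int i = 0;
    while ( i < 9 )   // prints 0, 1, ..., 8
    {
        cout << i << endl;
        i++;
    }
    while ( i >= 0 )  // i is now 9; prints 9, 8, ..., 0
    {
        cout << i << endl;
        i--;
    }
}
```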
In fact, this use of a while loop (where we increment a counter variable each time until it reaches a certain value) occurs so often that C++ has a special way of writing it - a for loop. The general form is:
```for ( int i = 0; i < N; i++ )
{
// code
}
```
Here, N is replaced with any value (such as 10, 2*5, or the name of an integer variable). The code inside the brackets { } is repeatedly executed while i < N is true - i.e., until the value of i reaches N, at which point the for loop code is skipped. Often, one will write i <= N instead, so the code will be executed until i has value greater than N. And of course, we can change 0 to any other value that we want i to have initially.
Let us see an example of a for loop summing the numbers from 0 to 10, and printing out the value:
```#include <iostream>
using namespace std;
int main()
{
int sum = 0;
for ( int i = 0; i <= 10; i++ )
{
sum = sum + i;
}
cout << sum << endl;
}
```
Exercise: Modify the above program:
1. So that it sums the squares from 0 to 10.
2. Add an integer variable called N with some initial value (such as 100), and change the code so that it sums the squares from 0 to N. (A sample solution is sketched below.)
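A sample solution to part 2 (part 1 is the special case N = 10):

```
#include <iostream>
using namespace std;
int main()
{
    int N = 100;
    int sum = 0;
    for ( int i = 0; i <= N; i++ )
    {
        sum = sum + i*i;   // add the square of i
    }
    cout << sum << endl;
}
```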
The Collatz conjecture states that the following process always stops for all initial values of n:
1. Take a whole number n greater than 0.
2. If n is even, then halve it. Otherwise, set its value to 3n+1.
3. If n now has value 1, then stop. Otherwise go to step 2.
Exercise: Write a C++ program to test the Collatz conjecture, that prints out n at every iteration (we give a solution program below - but try writing one first!).
Solution:
```#include <iostream>
using namespace std;
int main()
{
int n = 25;
while ( n != 1 )
{
cout << n << endl;
bool isEven = n%2 == 0;
if ( isEven )
{
n = n/2;
}
else
{
n = 3*n + 1;
}
}
}
```
Exercise: Write a C++ program that implements Euclid's algorithm.
## Functions
At its simplest, a function is a way of grouping together a collection of instructions so that they can be repeatedly executed. Each function has a name (these have the same rules as variable names, so they are case sensitive; allow letters, numbers and _; and cannot start with a number). At the top of the C++ program we "declare" a function, which tells the compiler that it exists (otherwise when the function name occurred, the compiler would not know what it meant). Then somewhere in the program, we "define" the function, i.e., write the instructions for the function. When we want to execute the instructions in the function, we call it by writing its name (as will be seen).
Let's see an example. We write a function that prints out the numbers from 1 to 9, called countToNine. First, we need to include the following line of code somewhere at the top of the program to declare the function:
```void countToNine();
```
The void keyword is there to say that the function does not return a value - more on this soon. The () will be explained shortly as well, and of course the semicolon ; is needed at the end of the instruction.
And to define the function, we include the following block of code (note that this time, we do not include a semicolon at the end of the first line):
```void countToNine()
{
// code
}
```
Here, // code is replaced with the code to be executed when the function is called. As you can see, the form is very similar to our int main() { ... } - this is because "main" is in fact the special function that the computer calls. When we want to call the function, we write countToNine();. Let's see a complete program which calls this function twice. Try running it.
```#include <iostream>
using namespace std;
void countToNine();
int main()
{
countToNine();
countToNine();
}
void countToNine()
{
for ( int i = 1; i <= 9; i++ )
{
cout << i;
}
cout << endl;
}
```
These are called functions because they can behave like mathematical functions - they can have input values and / or an output value.
To declare a function with a set of input variables (which can be any of the variable types we have seen so far), we list these types inside the brackets ( ) in the declaration, separated by commas (we can also include names for these variables, which are helpful for descriptive purposes).
Let's modify the countToNine function above so that it has an input value (which we call N), and prints the numbers from 1 to N. We declare the function as follows (changing its name):
```void countToN( int N );
```
We similarly define the function, with the 9 in the for loop replaced by N. To call the function with value, say, 5, we write "countToN(5);". If we have an integer variable called x, we can use the value of x as an input by writing "countToN(x);". Let's put this together to form a complete program - try guessing what the following program does, before running it!
```#include <iostream>
using namespace std;
void countToN( int N );
int main()
{
for ( int i = 1; i <= 9; ++i )
{
countToN( i );
}
}
void countToN( int N )
{
for ( int i = 1; i <= N; i++ )
{
cout << i;
}
cout << endl;
}
```
To declare a function with an output value (called returning a value), we replace the void keyword with the value type. For example, if we wanted to return a decimal number, we would replace void with float. Inside the function, when we want to return from the function to where the function was called from, we write "return value;", where value is replaced by the value to be returned (such as 5, or x).
Let's construct a function called gcd that has two input values (which we will call a and b), and (crudely) calculates their greatest common divisor, and returns the value. As described above, we declare the function as
```int gcd( int a, int b );
```
And for the definition of the function, we use a similar format (again this time, without the semicolon at the end of the first line):
```int gcd( int a, int b )
{
while ( a != b )
{
if ( a > b )
{
a = a - b;
}
else
{
b = b - a;
}
}
return a;
}
```
Exercise: How and why does the algorithm given in the above function work?
To call the function, we write, for example, gcd( 168, 120 ). This is then treated as an integer, whose value is the value returned - so we can treat it like any other integer. For example, we can write "int x = gcd( 168, 120 );" to create an integer with this value. Or we could write "cout << gcd( 168, 120 ) << endl;". Let's put this together in a program:
```#include <iostream>
using namespace std;
int gcd( int a, int b );
int main()
{
int x = gcd( 168, 120 );
cout << "The greatest common divisor of 168 and 120 is " << x << endl;
x = gcd( 78, 114 );
cout << "The greatest common divisor of 78 and 114 is " << x << endl;
}
int gcd( int a, int b )
{
while ( a != b )
{
if ( a > b )
{
a = a - b;
}
else
{
b = b - a;
}
}
return a;
}
```
Exercise: Change the code in the gcd function so that it uses Euclid's algorithm, which is computationally more efficient. (One possible version is sketched below.)
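For reference, one possible version of gcd after this change - the key fact is that gcd(a, b) = gcd(b, a % b):

```
int gcd( int a, int b )
{
    while ( b != 0 )
    {
        // replace (a, b) by (b, a % b); the gcd is unchanged
        int r = a % b;
        a = b;
        b = r;
    }
    return a;
}
```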
Exercise: Write a function that takes two integers a and b, and returns the value of $a^b$.
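One possible solution for the case b >= 0 (the function name power is our own choice):

```
int power( int a, int b )
{
    int result = 1;
    for ( int i = 0; i < b; i++ )
    {
        result = result * a;   // multiply by a, b times in total
    }
    return result;
}
```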
## Other maths functionality
We have covered the basic functionality of C++. C++ also comes with a whole range of predefined functions - for example, the common mathematical functions such as sin, cos, and square root, and functions for generating random numbers.
### Common maths functions
To make use of the mathematical functions, we write "#include <cmath>" somewhere at the top of the program - this tells the compiler about these functions (what their names are, what inputs they take, what outputs they give). Most of the mathematical functions have float as their input and output type. The trigonometric functions use radians as their input instead of degrees, and the log function is base e. Here's a short program demonstrating the use of some of these functions.
```#include <iostream>
#include <cmath>
using namespace std;
int main()
{
// x is initialized with the value of sin(1).
float x = sin( 1 );
cout << "sin(1)=" << x << endl;
// x now gets the value 3^5.
x = pow( 3.0, 5 );
cout << "3^5=" << x << endl;
// x is now log base e of 20, or ln(20).
x = log( 20 );
cout << "ln(20)=" << x << endl;
// x is now square root of 25
x = sqrt( 25 );
cout << "sqrt(25)=" << x << endl;
}
```
The most commonly used maths functions are:
• cos, sin, tan - the trigonometric functions.
• acos, asin, atan - the inverse trigonometric functions.
• cosh, sinh, tanh - the hyperbolic functions.
• exp - the exponential function (for example, exp(10) has value $e^{10}$).
• log, log10 - the logarithmic functions to base e and base 10 respectively.
• pow - raise to a power (so pow(a,b) is $a^b$).
• sqrt - square root.
• ceil, floor - round a number up or down (for example, ceil(2.4) = 3 and floor(9.8) = 9).
• fabs - compute the absolute value (for example, fabs(-3.4) = 3.4).
### Random numbers
To make use of random numbers, we need to include the library "cstdlib", so write #include <cstdlib> at the top of the program. Then the rand() function returns a random non-negative integer between 0 and RAND_MAX (RAND_MAX is a predefined constant that is available in your program when the library cstdlib is included).
To turn this into a floating point number between 0 and 1, called x, we would write:
```float x = float(rand()) / RAND_MAX;
```
This contains some new code - the float(rand()) turns the integer returned by rand() into a floating point number with the same value. This is necessary since if we divide two integers, then the part after the decimal point is discarded. Here's an example of a program generating 10 random numbers between 0 and 1.
```#include <iostream>
#include <cstdlib>
using namespace std;
int main()
{
for ( int i = 0; i < 10; i++ )
{
float x = float(rand()) / RAND_MAX;
cout << "x has value " << x << endl;
}
}
```
Exercise: How would you generate a random integer between 1 and 10?
Exercise: Use random numbers and Buffon's needle to generate an estimate for pi.
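For this last exercise, here is a minimal sketch of one possible simulation (treat it as a starting point rather than a finished solution; note the slightly circular quirk that we use a stored value of pi to generate the random angle):
```#include <iostream>
#include <cstdlib>
#include <cmath>
using namespace std;
int main()
{
    // Buffon's needle: drop a needle of length 1 onto a floor ruled with
    // parallel lines a distance 1 apart. The probability that the needle
    // crosses a line is 2/pi, so pi is roughly 2 * throws / crossings.
    const float PI = 3.14159265;
    const int throws = 1000000;
    int crossings = 0;
    for ( int i = 0; i < throws; i++ )
    {
        // Distance from the needle's centre to the nearest line (0 to 0.5)
        // and the angle between the needle and the lines (0 to pi/2).
        float y = 0.5 * float(rand()) / RAND_MAX;
        float theta = ( PI / 2 ) * float(rand()) / RAND_MAX;
        if ( 0.5 * sin( theta ) >= y )
        {
            crossings++;
        }
    }
    cout << "Estimate of pi: " << 2.0 * throws / crossings << endl;
}
```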
http://mathhelpforum.com/advanced-algebra/187377-group-homomorphisms.html
# Thread:
1. ## Group Homomorphisms
I am trying to understand homomorphisms from $Z_n$ to $Z_k$ and despite some help from earlier posts am still struggling.
I am thus reading Beachy and Blair: Abstract Algebra on group homomorphisms and am trying to follow Example 3.7.7 on page 156 which seeks to determine all homomorphisms from $Z_n$ to $Z_k$ (see relevant page attached). I can follow this example but am having trouble with a particular step.
I will present their argument down to the problematic step - and I will make the example more concrete by seeking to determine all homomorphisms $\phi$: $Z_6$ $\rightarrow$ $Z_{10}$
In terms of my particular example the argument of Beachy and Blair is as follows:
====================================================
Consider $\phi$: $Z_6$ $\rightarrow$ $Z_{10}$
Any such homomorphism is completely determined by $\phi$ ( $[1]_6$), and this must be an element $[m]_{10}$ of $Z_{10}$ whose order is a divisor of 6.
In an abelian group with the operation denoted additively we have that o(a) | n if and only if n.a = 0.
Applying this result to $[m]_{10}$ in $Z_{10}$ we have o( $[m]_{10}$) | 6 if and only if 6. $[m]_{10}$ = $[0]_{10}$ which happens if and only if 10 | 6m (Beachy and Blair ask us to compare with Exercise 11 of Section 2.1 - see other attached sheet)
etc etc - see attached sheet
====================================================
My question is: how does the step that concludes "which happens if and only if 10 | 6m" follow, i.e., how does this last step of the argument follow?
I would be grateful for any help!
Peter
Attached Files
• Beachy & Blair - Group Homomorphisms - Page 156.pdf
• Beachy & Blair - Exercise 11 - Section 2.1.pdf
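2. The step is immediate from the definition of congruence classes: $6 \cdot [m]_{10} = [6m]_{10}$, and $[6m]_{10} = [0]_{10}$ holds precisely when $6m \equiv 0 \pmod{10}$, that is, when 10 | 6m. (In general, $n \cdot [m]_k = [0]_k$ if and only if k | nm.)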
http://math.stackexchange.com/questions/tagged/conic-sections+integral
# Tagged Questions
### How can I find $\int{\sqrt{\left(b^2-1\right)x^2+1\over-x^2+1}}dx$?
I got this from the perimeter of an ellipse. I came up with the formula: arclength of $f(x)$ for $x$ from $a$ to $b$ = $\int_a^b\sqrt{f'(x)^2+1}\,dx$. Since an ellipse has the equation: \left({x-h\over ...
### how to calculate the double integral over the intersection of an ellipse and a circle
How to calculate the double integral of $f(x,y)$ within the intersected area? $$f(x,y)=a_0+a_1y+a_2x+a_3xy$$ $a_0$, $a_1$, $a_2$, and $a_3$ are constants. The area is the intersection of an ellipse ...
### Set up double integral of ellipse in polar coordinates?
How do you set up a double integral for an ellipse in polar coordinates without using the Jacobian or Green's theorem? I can't seem to figure out what (or if) the limits of r can possibly be. $x = ...
### Moment of inertia of an ellipse in 2D
I'm trying to compute the moment of inertia of a 2D ellipse about the z axis, centered on the origin, with major/minor axes aligned to the x and y axes. My best guess was to try to compute it as: ...
### The probability of $Ax^2+Bxy+Cy^2 = 1$ defining an ellipse.
In Keith Kendig's paper, Stalking the Wild Ellipse (published in the American Mathematical Monthly, November 1995), he says that if $A, B, C$ are chosen at random, the probability that the Cartesian ...
http://stats.stackexchange.com/questions/18168/hypothesis-testing-for-mean
# Hypothesis testing for mean
I have a system with two components, component A and component B. Let $X$ be the random variable representing time between failures of component A and $X$ is exponentially distributed. Let $E[X]=m$. Now we want to test the following hypothesis:
Null hypothesis: $m\le m_0$, where $m_0$ is some given constant.
Alternative hypothesis: $m>m_0$.
To test the hypothesis we need samples of $X$, which are not available. But we have the information that A has a failure rate lower than B (the time between failures for B is also exponentially distributed). The MTTF of component B is $n_0$. Can we use samples generated from B as samples of A to test the above hypothesis?
You know that the failure rate $h_A$ of A is smaller than the failure rate $h_B$ of B. Since the failures of B are exponentially distributed with mean $n_0$, we have that $h_B=1/n_0$. Since X is also exponentially distributed,$$h_A=\frac{1}{E[X]}=\frac{1}{m}<h_B=\frac{1}{n_0},$$ that is, $m>n_0$. So if $n_0$ is a known quantity, then the null hypothesis $m\leq m_0$ is trivially false if $m_0\leq n_0$ and untestable if $n_0<m_0$ since samples of B have no information about A. If $n_0$ is not known, test null hypothesis $n_0\leq m_0$ and if you reject it, then reject $m\leq m_0$ also. – Dilip Sarwate Nov 10 '11 at 20:17
http://mathoverflow.net/questions/95097/explicit-formula-for-cholesky-factorization-in-a-special-case
## Explicit formula for Cholesky factorization in a special case
I have a positive definite matrix of the form $Q+sI-\alpha J$ ($s>2, 0 < \alpha <1$ and $J$ is the all-ones matrix), where $Q$ is "nice", nonnegative and known. I'd like to know if there is a way to obtain an explicit expression for the Cholesky factorization of my matrix in this special case. Thanks!
## 1 Answer
The matrix $\alpha J$ is a rank one matrix, so there are simple update/downdate formulas for computing the Cholesky factorization of $Q+sI-\alpha J$ if you start with the factorization of $Q+sI$.
I'm not aware of any update formulas that get you from the Cholesky factorization of $Q$ to a Cholesky factorization of $Q+sI$.
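For concreteness, here is a minimal sketch of the classical rank-one downdate recurrence (of the kind described in the Gill-Golub-Murray-Saunders paper cited in the comments below; treat it as an illustration rather than a verified implementation). Writing $\alpha J = vv^T$ with $v = \sqrt{\alpha}\,\mathbf{1}$, it overwrites a lower-triangular factor `L` of $Q+sI$ with a factor of $Q+sI-\alpha J$:
```#include <cmath>
#include <vector>

// Overwrite the lower-triangular Cholesky factor L of M with the factor
// of M - v v^T. Assumes M - v v^T is positive definite (true here, since
// Q + sI - alpha*J is positive definite by hypothesis). v is destroyed.
void cholesky_downdate( std::vector< std::vector<double> >& L,
                        std::vector<double>& v )
{
    const int n = (int) L.size();
    for ( int k = 0; k < n; k++ )
    {
        double r = std::sqrt( L[k][k] * L[k][k] - v[k] * v[k] );
        double c = r / L[k][k];
        double s = v[k] / L[k][k];
        L[k][k] = r;
        for ( int i = k + 1; i < n; i++ )
        {
            L[i][k] = ( L[i][k] - s * v[i] ) / c;
            v[i] = c * v[i] - s * L[i][k];
        }
    }
}
```
So starting from the factor of $Q+sI$ and calling this with $v_i = \sqrt{\alpha}$ for all $i$ gives the factor of $Q+sI-\alpha J$ in $O(n^2)$ operations; it is not a closed-form expression, but it is as explicit as the general case seems to allow.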
Thanks! Can you give me a particular reference? My focus is on theory, not computation. – Felix Goldberg Apr 25 2012 at 7:53
A classic reference (and the paper is available online as a free .pdf) is: P. Gill, G. Golub, W. Murray, and M. Saunders. Methods for modifying matrix factorizations. Mathematics of Computation, 126(28):505-535, 1974. stanford.edu/group/SOL/papers/ggms74.pdf – Brian Borchers Apr 26 2012 at 1:02
Let me also mention that if $\alpha$ is fixed and you want to vary $s$, then you might find that a better way to go is to compute the eigenvalue decomposition of $Q-\alpha J$, and then adjust for $sI$ by adding $s$ to the eigenvalues. – Brian Borchers Apr 26 2012 at 4:52
Well, actually it's $\alpha$ that's varying and $s$ is fixed. Thanks for the reference, I'll be sure to read. – Felix Goldberg Apr 29 2012 at 14:08
http://motls.blogspot.com.au/2012/08/two-state-systems-masers.html
# The Reference Frame
## Thursday, August 16, 2012
### Two-state systems: MASERs
The Feynman lectures on physics are special for many reasons.
Feynman delineates the big picture rather clearly; is able to be very concise about various points that make many other authors excessively talkative; isn't afraid to address questions that some people incorrectly consider a domain of philosophers even though they have become a domain of physics many years ago; isn't afraid to clarify many widespread misconceptions and explain why e.g. Einstein was wrong about quantum mechanics; offers some original cute research which may have had pedagogical motivations but is more important than that, e.g. the derivation of general relativity from a consistent completion of the coupling of the stress-energy tensor to a new spin-2 field (it was a really beautiful, pragmatic, modern, and string-theory-like approach to general relativity); and for many other reasons.
Two-state systems are among the topics in quantum mechanics to which Feynman dedicated much more attention than most other textbooks of quantum mechanics do, which is one reason why Feynman's students were much more likely to understand the foundations of quantum mechanics properly.
On this blog, two-state systems have played a central role in many older articles such as the introduction to quantum computation (a qubit is a very important two-state system); two-fermion double-well system (a ramification of the Brian Cox telekinetic diamond insanity), and several others.
There are hundreds of important examples of two-state systems. The underlying maths governing their evolution is isomorphic in all these cases but we still tend to think about the individual examples "differently" because we have different intuitions about the different systems. The electron's spin is one example; the ammonia molecule is another. The mathematical isomorphism between the two systems is a fact; nevertheless, people tend to incorrectly assume that the mathematics of the ammonia molecule we will focus on is much more "classical" than the example of the electron's spin. Well, it's not. All states in the Universe obey the same laws of quantum mechanics.
Looking at Feynman's chapter on MASERs
The third, final volume of the Feynman lectures on physics is mostly dedicated to quantum mechanics. There are many chapters on two-state systems. Chapter 9 of Volume III is about MASERs – cousins of LASERs that use a particular transition between two states of the ammonia molecule.
Ammonia, $NH_3$, is a stinky gas excreted in urine. It escapes from the liquid which is why you could smell it on most toilets, especially in the third world and the Soviet Bloc during socialism. ;-) I apologize to the French and others but Germans have the cleanest toilets. But we want to discuss a closely related issue, namely the states of the ammonia molecule. :-)
The model above shows us that the molecule looks like a pyramid with a triangle of hydrogen atoms at the base and a nitrogen atom at the top of the pyramid (above the center of the triangle). All the nuclei may move relatively to each other, in principle. However, the relative motion that would change the distances between the four atoms looks like a collection of harmonic oscillators with pretty high frequencies i.e. energies. That's why if we only look at low-lying energy states (at most a tiny fraction of an electronvolt above the minimum possible energy), we are restricted to the ground states of the harmonic oscillators, to the lowest-lying state in which the distances between the four nuclei are fixed at values that minimize the energy with this accuracy. The vibrations changing the distances are thus forbidden. The wave functions for the electrons are completely determined by the low-energy constraints.
How many states of the ammonia molecule are there?
Well, even if the lengths of the six edges of the pyramid are determined, we may do something with the pyramid: we may rotate it. If you define the vector $\vec d$ pointing from the nitrogen atom to the center of the hydrogen triangle, $\vec d$ may be any direction on the two-sphere $S^2$. I've chosen the direction of the arrow in the opposite way than what you may find natural because $\vec d$ is really proportional to the dipole moment. Note that the hydrogens are "somewhat more positive" (much like they tend to lose the electrons elsewhere: compare the $H^+$ or $H_3 O^+$ and $(OH)^-$ ions in water) while the nitrogen is more "negative", like in $N^- H_3^+$, so the dipole moment has to start from the negative nitrogen.
For each direction on the sphere, there is a distinct quantum state orthogonal to others. Also, the orientation of the hydrogen triangle within its plane may change (rotate) and is given by an arbitrary angle defined modulo 120 degrees (because the rotation by 120 degrees maps the triangle to itself). The low-lying states will pick a superposition of these states (triangles rotated by $\gamma$) that minimize the energy, pretty much the "symmetric combination" of all directions.
So the vector $\vec d$ is the only "light degree of freedom" we are allowed to vary without raising the energy too much.
In classical physics, you could take $\vec d$ to point along any axis, e.g. the negative $z$ semi-axis ("nitrogen is above the triangle"). If the nitrogen had the same properties as the classical plastic model, the value of $\vec d$ would be conserved. We could just forget about all the states with different directions of $\vec d$; we could consistently demand that $\vec d$ has a preferred direction.
Let's try to impose as similar a condition as we can in quantum mechanics, too. Quantum mechanics allows $\vec d$ to change arbitrarily (the dipole moment isn't conserved) so the true energy eigenstates would be some spherical harmonics that depend on $\vec d$. However, the transitions to different values of $\vec d$ are unusually slow and we may forget about them.
However, there is one transition that is actually fast enough: the transition from $+\vec d$ to $-\vec d$. We can't neglect it at all. Why does it exist? You may explain it in many ways but one of them is the "quantum tunneling". It's just possible for the nitrogen atom to be pushed through the triangle and appear on the opposite side of the pyramid. The ammonia molecule is exactly the kind of a microscopic object where things like quantum tunneling are inevitable.
So there's a significant probability that an ammonia molecule starting with a direction of $+\vec d$ ends up with the opposite direction, \[
\vec d\to -\vec d.
\] Note that the position of the center of mass is conserved. You can't "ban" this process which is fast enough (the frequency is high enough for the purposes we consider but low enough to be compatible with our low-energy constraints). That's why you can't study the state of the ammonia molecule with a given value of $\vec d$ only. You must study the states with $\vec d$ and $-\vec d$ at the same moment.
The states "nitrogen above the triangle" and "nitrogen below the triangle" will be called $\ket 1$ and $\ket 2$, respectively. The picture above should really have the same orientation of the hydrogen triangle in both states, if you wanted it to be really natural, but it's a detail because I said that the relevant states are averaged over the orientation of the triangle, anyway.
At any rate, it is legitimate to require that the dipole moment $\vec d$ lies in a particular line – the transitions to non-parallel values of the dipole moments are so slow that they can be neglected. However, the flipping of the sign of $\vec d$ is something that simply cannot be neglected. It's possible, allowed, inevitable, and it's a vital reason why the MASERs work at all.
You see that this is already a big deviation from classical physics in which there's no tunneling effect. In classical physics, $\vec d$ seems to be conserved. Some people could argue that the ammonia molecule is already pretty large so classical physics should more or less apply. They would be wrong. Classical physics never applies exactly. Any physical system in this Universe, regardless of the size, obeys the laws of quantum mechanics. Classical physics may sometimes be a good enough approximation but it's never precise and it's always wrong at least for some questions when we talk about small molecules.
We have restricted our attention to a two-state system. The different shapes of the molecule (different lengths of edges) were banned because the change of the geometry (or excited states of the electrons) would raise the energy too much; different directions of the dipole moment than $\vec d$ and $-\vec d$ where $\vec d$ is the (measured) initial value may be ignored because the transition to non-parallel values of $\vec d$ is too slow.
We know that the state vector will belong to a two-dimensional Hilbert space. The relevant Hamiltonian is\[
\hat H = \pmatrix{ E_0 & A\\ A& E_0 }.
\] The off-diagonal matrix elements $A$ are due to the quantum tunneling. In classical physics, we would have to have $A=0$ but that's definitely not the case in quantum physics. By redefining the phase of $\ket 1$ and $\ket 2$ (only the relative phase matters), we could change the phase of $A$ and choose a basis in which $A$ is real and positive. Note that in general, $\hat H$ would be Hermitian so the upper-right matrix element would be the complex conjugate of the lower-left matrix element.
The two diagonal entries are the "best approximations for the energy" of the pyramids in which we neglect the tunneling. These matrix elements are equal to each other due to the rotational symmetry of the laws of physics. After all, $\ket 2$ may be obtained from $\ket 1$ by a rotation and because rotations are symmetries, they don't change the expectation value of the energy. It has to be the same for both states, $E_0$.
However, $E_0$ isn't an eigenvalue of the energy. Instead, to find the eigenvalues of the energy, we have to diagonalize $\hat H$. We find out that the eigenstates are\[
\frac{\ket 1- \ket 2}{\sqrt{2}}, \quad
\frac{\ket 1+ \ket 2}{\sqrt{2}}.
\] I divided the sum and difference by $\sqrt{2}$ to make the states normalized but I didn't really have to do that. Again, note that the phases (and, if you choose this convention, general normalization factors) of the eigenstates are undetermined. In the column notation, the eigenstates are\[
\pmatrix{ +\frac{1}{\sqrt 2} \\ -\frac{1}{\sqrt{2}} },\quad
\pmatrix{ +\frac{1}{\sqrt 2} \\ +\frac{1}{\sqrt{2}} }.
\] If you multiply the matrix $\hat H$ by these vectors, you will get multiples of themselves. The coefficients in the multiples are the energy eigenvalues:\[
E_I = E_0 - A, \quad E_{II} = E_0 + A.
\] Now you see why I chose the first sign to be minus – I wanted the first eigenvalue to be the lower one. In other words, I wanted the first eigenstate to be the genuine ground state. The Roman numerals were picked to label the eigenvalues from the lowest one to the highest one.
So if you really cool down the molecule, it won't sit in the shape of a particular pyramid. It just can't sit there because there exists quantum tunneling, even at vanishing temperature $T=0\,{\rm K}$. It is an inevitable process of quantum mechanics. Instead, the molecule will ultimately emit photons and drop to the lowest-energy state which means the ground state, the energy eigenstate with the lowest energy, and it has the same probability amplitude to be in the "pyramid up" and "pyramid down" states.
Because many people make a mistake, let me emphasize one more thing. The energy eigenstates \[
\ket{I,II} = \frac{\ket 1 \mp \ket 2}{\sqrt 2}
\] are not just some dull "statistical mixtures" that have a 50% probability to be in the state $\ket 1$ and 50% probability to be in the state $\ket 2$ (pyramid up or down). Instead, the relative phase between the states $\ket 1$ and $\ket 2$ is absolutely crucial for the physical properties of the linear superposition state.
In particular, if the relative phase is anything else than $+1$ or $-1$, the superposition fails to be an energy eigenstate. It is only an energy eigenstate if the relative phase is real and when it's real, it's damn important whether the relative sign is $+1$ or $-1$. The negative relative sign gives us the lower-energy state – the quantum tunneling has the effect of maximally lowering the energy from the "intermediate" level $E_0$ down to $E_0-A$, while the positive relative sign does exactly the opposite: it raises the energy to $E_0+A$.
Almost all the anti-quantum zealots would try to talk about a preferred basis and because they're obsessed with position eigenstates, they would probably allow $\ket 1$ and $\ket 2$ to be the only states that may be "truly" realized. But this is of course completely wrong because even if you start with $\ket 1$, you can't have $\ket 1$ forever. Because of the quantum tunneling, the initial state $\ket 1$ inevitably evolves into general linear superpositions of $\ket 1$ and $\ket 2$: it oscillates between $\ket 1$ and $\ket 2$.
As emphasized repeatedly on this blog, there's no way to ban general linear superpositions. The states in a given basis inevitably evolve into general complex combinations of such basis vectors. And all the complex coefficients, including the relative phases – and especially the relative phases – are absolutely critical. Only a basis of energy eigenstates is completely "sustainable": each energy eigenstate only evolves into a multiple of itself (well, it evolves into itself with a different phase).
And it is not true that the relevant energy eigenstates are always "equal mixtures" of $\ket 1$ and $\ket 2$. For example, in the case of our molecule, things change e.g. when we add the electric field.
Adding electric fields: MASER
The surrounding electric field $\vec E$ doesn't change anything about the existence of the tunneling. However, it will add the interaction energies between the electric dipole and the electric field so that the Hamiltonian is\[
\hat H = \pmatrix{ E_0+d E & A\\ A& E_0 - dE }.
The expectation value of energy of $\ket 1$ i.e. the upper-left matrix element (associated with the pyramid pointing up, the dipole is down) was increased by the product of $|\vec d|$ and $|\vec E|$ because the two vectors are pointing in opposite directions; the expectation value of energy of $\ket 2$ was lowered for the same reason.
Again, this matrix (the Hamiltonian) may be diagonalized. In this case of a real matrix (it was made real by our having redefined the phases of the basis vectors), the coordinates of the eigenvectors are still real. But it is no longer the case that the absolute values of both coordinates are equal. They're rather general. The energy eigenvalues are\[
E_I = E_0 - \sqrt{A^2+ d^2 E^2},\quad
E_{II} = E_0 + \sqrt{A^2+ d^2 E^2}.
\] Note that those formulae reduce to $E_0\mp A$ for $E=0$. Also, they reduce to $E_0\mp dE$ for $dE\gg A$. Note that if you drew graphs of $E_0\mp dE$ as a function of the electric field $E$, you would get two straight lines that intersect each other. However, when you draw the graphs of the exact results $E_I,E_{II}$ written in the displayed equation above, the two curves never cross. One of them never drops below $E_0+A$ and the other one never jumps above $E_0-A$ so they obviously cannot cross. In fact, the right upper arm of the curve (which is clearly a hyperbola) "bends" and continuously connects to the left upper arm of the curve; the same thing holds for the lower curve. See Avoided crossing (= "repulsion of eigenvalues" or "level repulsion") at Wikipedia.
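For the record, these two eigenvalues are nothing else than the roots $\lambda$ of the characteristic equation of the $2\times 2$ Hamiltonian above,\[
\det(\hat H - \lambda) = (E_0+dE-\lambda)(E_0-dE-\lambda) - A^2 = (E_0-\lambda)^2 - d^2E^2 - A^2 = 0,
\] which immediately gives $\lambda = E_0 \mp \sqrt{A^2+d^2E^2}$.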
A few final steps are needed to explain why such molecules are able to emit and absorb some radiation whose frequency is $f$ where\[
hf = E_{II} - E_{I} \sim 2A + \frac{d^2 E^2}{A}
\] where the final form of the expression is a Taylor expansion of the square root that is OK for all realistic (small enough) values of the electric field $E$. No, you won't be really able to produce $dE\gg A$ in the lab: these would be too strong electric fields.
For example, start with the absorption. Place the ammonia molecule to a variable field $\vec E$ which has the right frequency $f\sim 2A/h$. In the $\vec E=0$ energy eigenstate basis which is composed of the vectors $\ket 1\mp \ket 2$, the Hamiltonian was diagonal (that's what the energy eigenstates mean). When you add the electric field $\vec E$ going like $\cos (2\pi f t)$, it will contribute some off-diagonal elements in this basis and when the frequency is right, the resonance will be able to "accumulate" the probability amplitude of $\ket{II}$ even if the initial state is $\ket{I}$. If you choose a wrong frequency, the amplitude of the state to be in $\ket{II}$ will receive contributions with different phases each cycle and they will cancel out after a while.
So the ammonia molecule is able to increase its energy by capturing some energy from electromagnetic waves at the right frequencies. That's absorption. There's also stimulated emission, the time-reversed process to the absorption. If the molecule is already mostly at the higher level $\ket{II}$, the electric field oscillating at the right frequency will encourage the molecule to drop to $\ket{I}$ and deposit the energy difference to the electromagnetic waves.
In the text above, the electric field $\vec E$ was assumed to be large enough and treated as a classical background that only affects the Hamiltonian of the molecule via "classical parameters". However, if the electric field is weak enough, it becomes important that $\vec E$ is a quantum observable as well. The energy carried by the electromagnetic waves of frequency $f$ is no longer continuous – it is quantized i.e. forced to be a multiple of the photon's energy $E_\gamma = hf$.
When you calculate the transitions properly, you will find out that there's actually a nonzero probability amplitude for the ammonia molecule in the excited state $\ket{II}$ to emit a photon even if there's no oscillating electric field around the molecule to start with. This is the spontaneous emission. A proper look at the "time reversal argument" is enough to see why the spontaneous emission has to exist. Even one last photon may be absorbed (absorption is always "stimulated") and we end up with zero photons; the time-reversed process therefore has to start with zero photons and end with one photon and its "invariant" probability amplitude has to be the same i.e. nonzero. The ability of systems to emit even if the initial number of photons is zero is called "spontaneous emission"; the total "stimulated plus spontaneous emission" has the probability proportional to $N_\gamma+1$ – it's the squared matrix element of a harmonic oscillator's position matrix element.
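In formulas, the total emission rate is proportional to\[
|\langle N_\gamma+1 | \hat a^\dagger | N_\gamma \rangle |^2 = N_\gamma+1,
\] where the term $N_\gamma$ is the stimulated part and the extra $1$ is the spontaneous part.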
However, for MASERs, the stimulated emission is more important because the intensity of the electromagnetic waves is high – "stimulated" is what the letter "S" in "MASER" (or "LASER") stands for. I recommend you e.g. Feynman's treatment in the Volume III of his lectures for sketched calculations of the formulae for the transition rates, why there is a resonance, what is the width of the curve, and so on.
My main goal was more specific here: to convince you that the superpositions of "classically intuitive" states are absolutely inevitable and natural. This wisdom holds for all systems in quantum mechanics, including many-level and many-body systems and including infinite-dimensional Hilbert spaces whose bases are labeled by positions or any other continuous or discrete observables. Physical systems may be found in any superpositions and if you need to identify a basis that is a bit more "sustainable" than other bases, it's a/the basis of the energy eigenstates, not e.g. position eigenstates. What these slightly preferred energy eigenstates are depends on the Hamiltonian. For example, in our case, the form of the eigenstates depended on the surrounding electric field. So there can never be any "a priori preferred basis". There is never a preferred basis but if some basis is more well-behaved than others, it's because it's closer to a basis of energy eigenstates, and such a basis always depends on the Hamiltonian as well as the environment: it can only be determined "a posteriori". In particular, the wave functions for the ground states are more important than all others and that's the wave functions into which cool enough systems want to sit down. These ground state wave functions describe what the degrees of freedom are doing – and it doesn't matter that you may find these wave functions "complicated" or "different from those you would like to prescribe".
The ammonia molecule is just another system that invalidates any non-quantum or "realist" (anti-quantum zealots prefer to use the term "realist" for themselves over the much more accurate but less flattering term "classical and optimized for cranks eternally and dogmatically stuck in the concepts of the 17th century physics") replacement for the proper rules of quantum mechanics.
For example, coherence of the ammonia molecule is totally essential and testable so if the Universe decided to "split" during any of the processes discussed above, as Everett liked to imagine, it would immediately have tangible consequences that disagree with the experiment.
Also, if there were any pilot waves envisioned by de Broglie or Bohm, one couldn't explain "where" the photon is created during the spontaneous emission. Note that during absorption or emission of electromagnetic waves, the number of photons isn't conserved, in contradiction with an elementary property of the flawed pilot wave paradigm. The photon emitted by an excited ammonia molecule is "everywhere". You can't fix this problem of the pilot wave theory by forcing the system to remember the "truly real classical field configuration" instead of the position of particles because that would be similarly incompatible with the particle-like properties of the electromagnetic field. The electromagnetic field – and all other fields in the world – exhibits both particle-like and wave-like behavior and which of them is more relevant depends on the situation, on the relevant terms in the Hamiltonian, on the frequency of the waves and the occupation numbers etc. If you want to declare one of the behaviors (particle-like or wave-like) to be "more real" than the other, you're guaranteed to end up with a fundamentally wrong theory. If you keep on doing such things for years, then you become an irreversible crackpot.
Theories with GRW collapses would also predict unacceptable effects whose existence may be safely ruled out experimentally. And I am not even talking about theories that would love to completely "ban" the complex superpositions because the authors of these theories must be misunderstanding everything, including the content of Schrödinger's equation that makes the evolution into general complex combinations inevitable.
Of course, the main point of all such texts is always the same: quantum mechanics has been demonstrated to be the right framework to describe the world for more than 85 years and everyone who is hoping that a completely, qualitatively different description will replace quantum mechanics – e.g. the incredible idiots that clearly gained a majority in similar threads on the Physics Stack Exchange (holy crap, the users such as "Ron Maimon" and "QuestionsAnswers" are just so unbelievably stupid!) – is a crackpot.
And that's the memo.
Posted by Luboš Motl
Other texts on similar topics: philosophy of science
#### snail feedback (12)
reader Gordon Wilson said...
Thanks for the ongoing QM lessons---despite one graduate course in it, QM was a hole in my math/physics trek which I am slowly remedying after many years.
I went to the Physics Stack link to read the 't Hooft (if it is indeed him) and comments. Holy crap, indeed.
Questions and Answers tells you basically to f$%k off, and calls what you say "pathetic lies and mischaracterisations"... Maybe it is a good thing that I haven't been following it for ages.
I see nothing wrong with 't Hooft (again, him or a troll?) framing models and testing them and tinkering with them. 't Hooft deserves a lot of slack because of his accomplishments. Maimon and QuestionsandAnswers are a different matter. What all three are doing is simply looking for a deterministic substructure to QM, and denying locality rather than reality. For QandA to say you are denying objective reality is a distortion. Obviously there is an objective reality. That is not the same as saying that the reality we experience is not crystallized until a "measurement", i.e. any interaction, occurs. We live in the reality of events that have interacted and are interacting, resulting in the decoherence of probability wave amplitudes.
IMO a problem that many people (me included) have is that describing QM using words rather than math leads to confusion about determinism and other things, particularly foundational issues.
QandA's responses to you are amazingly condescending.
reader Luboš Motl said...
Thanks, Gordon, for the comment which is still entirely off-topic and I will refrain from responding to any physics in your comment because it's only a path to trouble.
reader Raisonator said...
Off topic, sorry, but there is an interesting new paper by Connes:
http://arxiv.org/abs/1208.1030v1
Any comments? I am having a hard time understanding it. Is it a threat to string theory?
reader Luboš Motl said...
Dear Raisonator,
the paper makes no sense. Connes (and collaborator) has been trying to claim that the Standard Model may be reformulated as a compactification on a non-commutative manifold which is able to predict relationships between some coupling constants in the Standard Model. It wasn't really ever true - the values of the coupling constants that may look "more natural" in his variables are still not physically privileged over other values so it isn't possible for a quantum field theory to become predictive in this way.
At any rate, Connes cared about this "new simplicity in the non-commutative variables" - a formalism that wasn't really worked out beyond the classical level (he uses ordinary tools of QFT for the loops only) - so he had predicted the Higgs mass of 170 GeV. It just happened that 170 GeV was the very first value of the Higgs mass that got excluded by the Tevatron:
http://motls.blogspot.com/2008/08/tevatron-falsifies-connes-model-of.html?m=1
So he was shooting into the wrongest place of the real axis. One must be pretty lucky for that. ;-) Now, when we know that the Higgs is near 126 GeV, he's trying to retroactively correct the wrong prediction by adding random components that would make it bad science even if they were used in an otherwise healthy context - but when they're used within the framework that makes invalid claims about the possibility to predict things that cannot be predicted, it's even worse.
Best wishes
Lubos
reader Ludek said...
sorry, OT
greetings, Lubosi,
I have an old problem of understanding, perhaps trivial.
Probably you know the solution immediately.
Stimulated emission: one photon comes in, interacts somehow with the atomic field (how is not important), and if everything fits, two identical photons come out. OK!
On the other hand, quantum cryptography (one-photon communication) protects itself with statements like: due to the no-cloning theorem, one photon cannot be... => quantum crypto is secure.
But what about lasers/masers, radio? All these devices produce identical photons from, in principle, one photon. Or is it that those photons are in reality not perfectly identical, but identical enough for the laser etc.?
Thanks and ciao
Ludek
reader Luboš Motl said...
Dear Luďku, an extremely good question.
reader Mikael said...
Dear Lubos,
stupid question maybe: would anything change if we used the good old H2O molecule as the basis for our discussion? After all it has a dipole moment as well.
reader Luboš Motl said...
Dear Mikael, the H2O molecule isn't a 2-state system in any sense, so you can't construct a MASER out of it and you can't use it in discussions on 2-state systems.
I may be puzzled by your question but this whole article, every single topic in this blog entry, is about 2-state systems. They're systems for which you can actually count the relevant microstates satisfying certain conditions that may evolve into each other - and you get two. I got 2 because the states essentially come from the pyramid-up and pyramid-down states of the ammonia.
But H2O isn't a pyramid. Its shape can't ever be organized "just in two ways". There isn't any group of atoms (the triangle) that would define a plane and allow another atom to be above or below it. For H2O, the counting just doesn't give 2. It gives either 1 or infinity. Your suggestion that this discussion boils down to a nonzero dipole moment indicates that you haven't even started to understand this topic yet. Pretty much every system in the world has a nonzero dipole moment. That doesn't mean that every system is a 2-state system.
reader Mikael said...
Dear Lubos,
I can see that H2O has an infinite number of states. But if you allow for rotations NH3 has them as well. Only if you keep the H atoms in place it becomes a two state system. So my question is why can I neglect the rotations in one case but not in the other?
reader Luboš Motl said...
There's still a maser transition in water, around 22 GHz, similar to the 24 GHz ammonia maser frequency, but the rotational modes for H2O have almost the same frequency so the decoupling of the problem isn't legitimate.
For ammonia, the triangle has a larger number of H-atoms and may be approximated by a circle. Due to the identical character of the atoms, which forces the 120-degree periodicity, the rotational modes of NH3 correspond to higher frequencies that may be considered "parametrically higher". For H2O, this decoupling of the scales is not possible as both energies are of the same order.
reader Mikael said...
Thanks, Lubos. I think it is the kind of answer I should have guessed from the geometrical intuition. But it is always reassuring to get a true expert answer.
reader Mikael said...
Dear Lubos,
after spending more time on this topic I think your answer is just not right, at least partially.
For the rotational modes the only thing that should matter is the quantized angular momentum and therefore the moment of inertia of the molecule for the rotation axis. The bigger the moment of inertia, the lower the transition frequencies, of course.
Instead, the key point for the two states of the ammonia molecule should be the fact that if you push the N through the triangle of the three H you create a state which differs from the original state not just by a rotation. Instead you must additionally exchange two H atoms to get back to the original situation.
Regarding the maser frequency of 22 GHz for water, are you sure about its origin?
http://mathhelpforum.com/pre-calculus/146214-quadratic-equation-word-problem.html
# Thread:
1. ## Quadratic equation word problem
Hey, I'm having trouble with this one. We've only discussed graphs of quadratic equations that start at (0,0), but this one seems to involve the guy jumping outwards and upwards from above a body of water... does this mean I would start my graph somewhere before (0,0)?
A man jumped upward and outward from a window of a blimp and landed in a body of water. He jumped from a height of 120 m, about the height of a 40-storey building. The path of his jump can be represented by the quadratic relationship $h=120+5.5t-4.9t^2$, where h is the height above the water in metres and t is the time in seconds.
a) Find the maximum height of his dive.
(I thought here you would use the vertex formula... this textbook doesn't have the answer in the back of course...)
b) Find the length of time it took him to reach the water.
Any help would be appreciated!
2. It is all in the equation provided:
If you plot it in the (t, h) plane, you find the maximum height to be 121.54 m, and the t-intercept (where h = 0) gives the time at which he hits the water, t = 5.54 s.
You could also solve the equation 120+5.5t-4.9t^2=0 to find the time.
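Worked out in detail: the vertex of the parabola is at $t = \frac{5.5}{2(4.9)} \approx 0.56$ s, which gives the maximum height $h = 120 + 5.5(0.56) - 4.9(0.56)^2 \approx 121.54$ m for part a). For part b), the quadratic formula applied to $4.9t^2 - 5.5t - 120 = 0$ gives $t = \frac{5.5 + \sqrt{5.5^2 + 4(4.9)(120)}}{2(4.9)} \approx 5.54$ s, taking the positive root.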
http://mathoverflow.net/questions/107383/any-result-or-conjecture-of-computaional-complexity-of-formal-languange-with-rati
## Any result or conjecture on the computational complexity of formal languages with rational generating functions?
As we know, every context-free language is in P. Is there any result or conjecture on the computational complexity of formal languages with rational generating functions? And moreover, is there any result or conjecture on the computational complexity of formal languages with algebraic generating functions?
As far as I can tell, a formal language with rational generating function can be arbitrarily hard to recognize. Consider the language with one word of each length $n$ such that each new word describes $n$ more digits of Chaitin's constant... – Qiaochu Yuan Sep 17 at 16:37
@Qiaochu, is the language recursively enumerable? Or is the language a c.e. set? – XL Sep 17 at 18:01
en.wikipedia.org/wiki/Chaitin%27s_constant – Benjamin Steinberg Sep 18 at 13:04
## 1 Answer
To clarify Qiaochu's comment and make it explicit, I claim there are uncountably many languages with a rational generating function (namely, $\dfrac{1}{1-x}$). Only countably many of these can even be recursively enumerable. Namely, let $w\in \lbrace 0,1\rbrace^{\omega}$ be a right infinite word. Let $L(w)$ be the language of prefixes of $w$. Then $L(w)$ has a unique element of length $n$ for each $n$ and one can recover $w$ from $L(w)$. Thus there are uncountable many languages of the form $L(w)$. Since it has one word of each length, its generating function is $$1+x+x^2+\cdots = \dfrac{1}{1-x}.$$
Added to address the OP's comment below
There are trivially sequences $w\in \lbrace 0,1\rbrace^{\omega}$ whose language $L(w)$ is r.e. but not recursive. Namely, we can view $w$ as the characteristic function of a set $A$ of natural numbers. Clearly membership in $L(w)$ is the same as determining membership in $A$ as far as decidability goes, although there is perhaps some complexity blowup since to check if a string of length $n$ belongs to $L(w)$ we must check which of the first $n$ natural numbers belong to $A$ and conversely to check if an integer $n$ is in $A$, we have to look potentially at all bit strings of length $n$. So this seems like a PSPACE blowup.
@Benjamin, yes, usually when we talk about formal languages, we assume that the language is at least a c.e. set. Yes, we can talk about languages that are not c.e. sets, which is in the area of computability or unsolvability. Anyway, you have made some clarifications, thanks a lot. And Qiaochu's construction is really a c.e. set – XL Sep 18 at 14:40
Let me add an edit to address this. – Benjamin Steinberg Sep 18 at 15:15
It seems that languages with such a rational generating function range over all computational complexity classes – XL Sep 18 at 16:31
Yes, that would seem to be the case since one has sequences with arbitrary complexity in computing their prefixes I believe. – Benjamin Steinberg Sep 18 at 17:02
http://physics.stackexchange.com/questions/32310/whats-the-meaning-of-the-general-solution-and-the-particular-solution-in-differ?answertab=votes
# What's the meaning of the general solution and the particular solution in differential equations?
Can anybody cast some physical insight into this? I've been studying differential equations on my own and don't understand how you can have a whole host of general solutions. It seems like a rather curious situation which we don't come across in other areas of mathematics. Is there anything more to the discussion that I'm missing?
To study differential equations, you should look at a numerical book, to learn to simulate them. The analytic methods are generally primitive and misleading outside of one and two dimensional phase spaces, because of chaotic behavior. – Ron Maimon Jul 18 '12 at 17:09
## 3 Answers
The solutions of a homogeneous linear ordinary differential equation form a vector space, ie $0$ is always a valid solution and arbitrary linear combinations of solutions will yield another valid solution. Finding the general solution of such an equation means finding a basis for that vector space.
In contrast, solutions to an inhomogeneous linear ordinary differential equation form an affine space over the space of homogeneous solutions. This means that $0$ is in general not a solution, and the difference between two solutions of the inhomogeneous equation will be a solution of the homogeneous one.
Thus, you only need to find a single inhomogeneous solution: The general solution of the inhomogeneous equation can be expressed of the sum of that particular solution and the general solution of the homogeneous one.
In case of non-linear differential equations, the integration constants (or equivalently, the choice of initial conditions) still parametrize the space of general solutions, and choosing values for these parameters will get you a particular one. However, the simple relation between the general solution of an inhomogeneous equation and the general solution of the corresponding homogeneous one does not hold for arbitrary differential equations.
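A concrete example: for $\dot x - x = 1$, the homogeneous solutions form the one-dimensional vector space $\lbrace C e^t\rbrace$, the constant function $x_p(t) = -1$ is one particular solution of the inhomogeneous equation, and the general solution is $x(t) = C e^t - 1$, with $C$ fixed by the initial condition $x(0)$.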
This answer is giving an answer specific to linear ODEs, while the phenomenon is general. But it's a fine answer for the linear case. – Ron Maimon Jul 19 '12 at 1:55
@Ron: added a small note about the non-linear case – Christoph Jul 19 '12 at 8:03
I don't agree it is a small note--- the reason I wrote a separate answer is because the linear case is misleading, the real reason for the arbitrary constants is not a vector space argument, it's an initial conditions argument, and it works regardless of the linearity of the equation. Neither are nonlinear equations "more complicated", they are just nonlinear. Sometimes they are simpler. – Ron Maimon Jul 19 '12 at 16:27
@Ron: I slightly reworded the last part – Christoph Jul 19 '12 at 16:49
thanks, I appreciate it, +1. – Ron Maimon Jul 19 '12 at 17:02
The differential equation needs initial conditions to simulate on a computer (which you should do once, if you want to understand what they are). A differential equation is a rule on a time-grid of size $\epsilon$ for $x(t+\epsilon)$, given $x(t)$. For example:
$${dx\over dt}= x + x^2 + x^3$$
means
$$x(t+\epsilon) = x(t) + \epsilon ( x(t) + x(t)^2 + x(t)^3)$$
in the limit $\epsilon\rightarrow 0$.
You can use the equation above to step forward in time, but you need to give the value of $x(0)$. Then from this value, you calculate $x(\epsilon)$, then $x(2\epsilon)$, then $x(3\epsilon)$ and so on into the future. A special solution is one for a given initial value, while the general solution is expressed in terms of $x(0)$ or some equivalent arbitrary constant that can be used to find $x(0)$.
The theorem you want is the uniqueness theorem for the initial value problem of ordinary differential equations. The mathematical proof tends to use a less useful approximation technique than lattice stepping, but people who do computer simulations or computer games always use some sort of lattice for ODE's.
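Here is a minimal sketch of such a stepping scheme (forward Euler with a fixed step; treat it as an illustration, since a real simulation would use a higher-order or adaptive method):
```#include <iostream>
using namespace std;
int main()
{
    // Forward Euler for dx/dt = x + x^2 + x^3, starting from x(0) = 0.1.
    // Changing the initial value picks out a different particular solution.
    double x = 0.1;
    double eps = 0.0001;
    for ( double t = 0; t < 1; t += eps )
    {
        x = x + eps * ( x + x * x + x * x * x );
    }
    cout << "x(1) is approximately " << x << endl;
}
```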
To answer your remark about the host of solutions for an ODE: try to solve the equation x+y=1 for (x,y). There is an infinity of solutions, (x,y)=(x, 1-x). But you can write them as follows: (x,y)=(0,1)+x*(1,-1).
(0,1) is a particular solution of the full equation.
(1,-1) is a solution of the homogeneous equation x+y=0, and the x in the expression x*(1,-1) plays the role of the constant C you introduce in the solution of an ODE.
For a linear ODE, the situation is quite similar. If you want a physical meaning, one can say that the homogeneous equation (without the rhs) is used to study instabilities: if any of its solutions is an increasing function of time, then the system is unstable. Any perturbation will blow up with time. Once you know the system is stable, it is worth studying the full equation with a rhs that describes the driving external force. Mathematically the full solution is the sum of the solutions of the homogeneous equation (general solution) plus a solution of the non-homogeneous one (particular solution), exactly as in the example above. Why do we need the general solution if for long enough times it becomes negligible in the stable case? Mathematicians were assigned the task to solve the ODE, not to discuss its interest, but even in physics, sometimes the transients (homogeneous solutions) are important, especially when you have to deal with the boundary conditions. For instance, if you study the flow in a pipe, you must impose a zero velocity at the wall. This can be achieved with a proper choice of the constants which appear in the solution of the homogeneous equation.
This answer is giving an answer specific to linear ODEs, while the phenomenon is general. But it's a fine answer for the linear case. – Ron Maimon Jul 19 '12 at 1:54
http://quant.stackexchange.com/questions/4498/how-to-group-mutual-funds-by-volatility/4508
# How to group mutual funds by volatility?
I want to group Mutual Funds by their volatility.
Ideally, I would like to end up with the mutual funds being attached to different groups:
• High volatility
• Medium volatility
• Low volatility
My question is: what numbers could be considered low, medium, or high volatility?
Maybe some intervals: 0–5% is low, 5–15% is medium, and higher is high...
I'm a little bit confused about how to tackle this problem...
-
I want to inform, not sell my product in this response. There is a tool called FundReveal that analyzes all of the mutual funds available in the US using exactly the approach you are investigating. Full disclosure: I am a cofounder of FundReveal. FundReveal uses standard deviation of daily returns over the time period in question. We consider funds that beat the S&P with risk (standard deviation) and higher Average Daily Returns to be those that are most likely to persist in positive performance. You can try the tool for free at www.fundreveal.com – user3238 Nov 9 '12 at 14:17
What do you call "rescued shares"? Are you trying to compute the volatility of the returns and the classify them? – SRKX♦ Nov 10 '12 at 18:40
Yes! In fact it doesn't matter what I meant by rescued shares, I just want to classify any returns by volatility. What would be a properly insight for this ? – Tomás Ayala Nov 11 '12 at 0:12
Ok I'll edit your question to make it understandable then. – SRKX♦ Nov 11 '12 at 10:19
Remainder of Anthony DuBon's comment: Since you seem to be interested in the relationship between risk and return you might look into the work of Robert Haugen. He focuses primarily on stocks. His book The New Finance challenges the nearly ubiquitous assumption that high return requires high risk securities. He and other proponents of the low risk anomaly are pursuing empirical evidence that the opposite may be true. – Tal Fishman Nov 28 '12 at 18:21
## 1 Answer
What you are looking for is an unsupervised learning algorithm: i.e. an algorithm that will by itself determine the 3 most rational groups from your dataset. This method allows you to choose the boundaries of the groups based on the dataset you provide rather than on some fixed values chosen in advance.
The algorithm I suggest you use is the K-means algorithm. You provide it with the data and the number $k$ of clusters (groups) that you want to have. The algorithm will then split the data into the $k$ groups you asked for. Note that this algorithm can handle points with several features, whereas you will be using only one (volatility).
Here is an idea of how it works in Matlab:
````test=[0 1 2 3 100 105 98 1000 1001 997]';
[idx,C] = kmeans(test,3);
````
The value returned for `idx` is a vector where each point in `test` is assigned a cluster number (representing its group):
````idx =
2
2
2
2
3
3
3
1
1
1
````
You can then look at the variable `C`, which contains the mean of each cluster and could be understood as "the perfect point for each cluster":
````C =
999.3333
1.5000
101.0000
````
So it found three groups in `test`: one around 999.33, one around 1.5, and one around 101.
That should do the trick.
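If you want to build the volatility inputs yourself before clustering, here is a rough Python sketch of the same pipeline; the use of scikit-learn's KMeans, the 252-day annualization, and the placeholder random returns are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

# One column per fund, one row per day (placeholder data for the example)
returns = np.random.randn(250, 10) * 0.01

# Annualized volatility of each fund's daily returns
vol = returns.std(axis=0) * np.sqrt(252)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(vol.reshape(-1, 1))

# Rank clusters by their centers so 0/1/2 read as low/medium/high volatility
order = np.argsort(km.cluster_centers_.ravel())
rank = {cluster: level for level, cluster in enumerate(order)}
levels = np.array([rank[c] for c in labels])  # 0 = low, 1 = medium, 2 = high
```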
-
Thx SRKX I studied once K-means but forgot that could be useful. Thx again for your time. – Tomás Ayala Nov 11 '12 at 19:25
http://mathhelpforum.com/advanced-statistics/181022-weak-convergence-absolutely-continuous-probability-measures.html
|
# Thread:
1. ## Weak convergence of absolutely continuous probability measures
Hi there,
Suppose I have a sequence of probability measures $P_n$ which converges weakly to a probability measure $P$. Then we know that $\lim_{n} P_n(A) = P(A)$ if $A$ is a $P$-continuity set.
Is it true that if, in addition, we know that each $P_n$ and $P$ are absolutely continuous, then $\lim_{n} P_n(A) = P(A)$ for any $A$?
Thank you very much,
Ilan
http://math.stackexchange.com/questions/31826/what-are-natural-numbers/31896
|
# What are natural numbers?
What are the natural numbers?
Is it a valid question at all? My understanding is that a set satisfying Peano axioms is called "the natural numbers" and from that one builds integers, rational numbers, real numbers, etc. But without any uniqueness theorem how can we call it "the" natural numbers? If any set satisfying the axioms is called natural numbers then maybe there is no "natural numbers" as a definite object as we know it in real world, but just a class of objects satisfying certain axioms, like Noetherian rings?
-
All sets satisfying the Peano axioms are isomorphic. When we refer to the natural numbers, we are referring to this isomorphism class. – Alex Becker Apr 9 '11 at 1:15
How could one construct such an isomorphism? 1 should map to 1 and if x maps to y then x+1 should map to to y+1, but should I use induction to define it globally? – ashpool Apr 9 '11 at 1:20
The (first-order) version of the Peano axioms is not categorical, there are "non-standard" models. The question that ashpool asks is a deep and contentious one in the philosophy of mathematics. – André Nicolas Apr 9 '11 at 1:23
@ashpool: yes, you use induction on each side of the isomorphism. – Carl Mummert Apr 9 '11 at 2:33
@ashpool: Please include all the information on the body; your main question appeared only on the subject. – Arturo Magidin Apr 9 '11 at 4:24
## 6 Answers
Ravichandran's answer is right: the natural numbers are the numbers 0, 1, 2, 3, ... . We can directly understand these numbers, based on our inductive definition of how to count in English or another natural language, even before we create an axiom system for them.
The standard list of axioms that we use to characterize the natural numbers was stated by Peano. Several people have brought up first-order logic in answers here, but that isn't what you want to use to establish categoricity (I will come back to that). Dedekind was the first to prove that we can axiomatize the natural numbers in a way that all models of our axioms are isomorphic, using second-order logic with second-order semantics.
Theorem (after Dedekind). Say that we have two structures $(A, 0_A, S_A)$ and $(B, 0_B, S_B)$, each of which is a model of Peano's axioms for the natural numbers with successor, and each of which has the property that every nonempty subset of the domain has a least element. Then there is a bijection $f\colon A \to B$ such that $f(0_A) = 0_B$ and, for all $a \in A$, $f(S_A(a)) = S_B(f(a))$.
The theorem is proved using the axiom of induction repeatedly, but the idea is completely transparent. Suppose that Sisyphus is given a new task. He will count out all the natural numbers in order in Greek, while a fellow tortured soul will count out all the natural numbers in order in English, at the same speed as Sisyphus. Just based on Ravichandran's assessment of what the natural numbers are, the map that sends each number spoken by Sisyphus to the number spoken by his companion at the same time is obviously an isomorphism between the "Greek natural numbers" and the "English natural numbers."
This argument cannot be captured in first-order logic, not even in first-order ZFC. But that isn't the fault of the natural numbers: first order logic can't give a categorical set of axioms for any infinite structure. Dedekind's proof can be cast in ZFC in the sense that it shows that the models of Peano's axioms in a certain model of ZFC are all isomorphic to each other in that model. Moreover, ZFC is sound in the sense that the things it proves about the natural numbers are correct. But as long as we use first-order semantics for ZFC, it can't fully capture the natural numbers.
The key point of second order semantics is that "every nonempty subset" means every nonempty subset. If a structure $(A, 0_A, S_A)$ satisfies a certain finite set of axioms that are all true in $\mathbb{N}$, then $\mathbb{N}$ can be identified with an initial segment of $A$. In this case, if $A - \mathbb{N}$ is nonempty it will not have a least element, and so $A$ will not satisfy the second-order Peano axioms. Again, this argument cannot be completely captured in first-order logic because $\mathbb{N}$ cannot be fully captured.
Even if we think of Peano's axioms as only specifying an isomorphism class of structures, the situation is not like Noetherian rings, where there are many non-isomorphic examples. $\mathbb{N}$ is more like the finite field on two elements. The question of which model of the second-order Peano axioms is really the "natural numbers" is like the question of which isomorphic copy of $F_2$ is "really" $F_2$. It's a fine question for philosophers, but as mathematicians we have a perfectly good idea what $F_2$ is, up to isomorphism, and a good idea what $\mathbb{N}$ is, up to isomorphism. We also have axiom systems that let us prove things about these structures. $F_2$ is easier to apprehend because it's finite, but this only makes $\mathbb{N}$ more interesting.
-
+1 I'm glad you posted this. My answer was prompted by your early comment re: isomorphism! I agree we can prove something in first-order ZFC about isomorphism, e.g. that a smallest infinite ordinal exists, but that this doesn't "fully capture the natural numbers." It also seems that second-order semantics ought to be "more expressive" than first-order semantics, but I have doubts about the possible implication that $\mathbb{N}$ can be "fully captured" through some combination of second-order semantics and set theory. – hardmath Apr 13 '11 at 16:20
The main benefit of second-order semantics is that the Peano axioms are categorical in them, unlike in first-order semantics. Can you make your doubts more concrete, given that theorem? – Carl Mummert Apr 14 '11 at 0:14
My understanding is that, in the constructive sense, the natural numbers are the smallest inductive set. By the Axiom of Infinity, there exists an inductive set, and the intersection of inductive sets is again an inductive set.
To see this, recall that a set $A$ is inductive if $\emptyset\in A$, and for all $a$, if $a\in A$, then $a^+\in A$, where $a^+=a\cup\{a\}$ is the successor of $a$.
Now let $T$ be a nonempty family of inductive sets. So $\cap T$ is also inductive, for clearly $\emptyset\in\cap T$, and if $a\in\cap T$, then $a\in A$ for all $A\in T$ since each $A$ is inductive, so $a^+\in A$ for all $A\in T$, so $a^+\in\cap T$.
You can then define a natural number to be a set which belongs to every inductive set. To see that a set $\omega$ of such sets actually exists, consider $A$ to be an inductive set. Then let $$T=\{K\in\mathscr{P}(A)\ |\ K\textrm{ is inductive}\}$$ So $A\in T$, thus $T\neq\emptyset$. Let $\omega=\bigcap_{K\in T}K$. Indeed, $\omega$ consists exactly of the natural numbers under this definition. For let $n\in\omega$, and let $B$ be an inductive set. Then $A\cap B\in T$, so $n\in A\cap B$, and thus $n\in B$. Conversely, let $n$ be a natural number. Then $n\in K$ for all $K\in T$, so $n\in\omega$. So the usual idea of $\omega$ as the set of natural numbers makes sense.
So the natural numbers are precisely the intersection of all inductive sets, that is, elements which are in every inductive set. Also, this set is unique by Extensionality.
-
Nice detailed description of one set-theoretic definition. But in a sense, this pushes back the OP's question "What are the natural numbers, really?" to "What is the universe of sets, really?". And once we introduce ZFC, we are faced again with non-categoricity. – André Nicolas Apr 9 '11 at 1:39
Oof, perhaps I didn't fully understand what the OP was getting at then. I'm curious to see how this all fleshes out then. – yunone Apr 9 '11 at 1:43
In a "realist" perspective the natural numbers are the counting numbers, something that we can perceive through our rational faculties. In that view the natural numbers have an existence independent of any axiomatization of their properties.
A "formalist" perspective cannot promise much, if anything, about a definite system of natural numbers.
Indeed the Peano axiomatization specifically does not characterize the natural numbers, as it is an essentially incomplete first-order theory (if consistent; Gödel–Rosser).
Nor can it be proven that two set-based models which satisfy the Peano axioms are isomorphic, since a first-order theory with a model of one infinite cardinality has a model of any infinite cardinality (Löwenheim–Skolem).
-
Peano's original axiomatization was a finite list of second-order axioms; the restatement as an infinite set of first-order axioms is a later development. In second-order logic, it can be proved that any two models are isomorphic, because the second-order axiomatization is more restrictive about which structures are models than the first-order axiomatization is. I've tried to address this in a separate answer. – Carl Mummert Apr 9 '11 at 12:09
Actually the Natural numbers are 0,1,2,3, ... up to $\infty$.
These numbers are also whole numbers, not fractions or decimals, and can be used for counting or ordering.
-
Best answer so far – Holowitz Apr 9 '11 at 10:06
I saw this issue discussed in Goldstern and Judah's The Incompleteness Phenomenon, but I'm sure there are other sources and I would welcome competing views. They provide a description of all the countable models of the Peano axioms and prove that there is an uncountable number of countable models.
First order logic is very nice because you have the completeness and compactness theorems. Unfortunately, the naturals are not categorical. I suspect I am not alone in having learned a lot of number theory (arithmetic, prime numbers, unique factorization ...) before seeing any axioms for the naturals and maybe before learning the word "axiom". It feels like the naturals should be unique. To many people there should be "true arithmetic", the set of all true sentences in $\mathbb{N}$. I haven't seen (but haven't looked hard) a claim that you can just ignore the issue like the analysts and push it off to the set theorists because $\mathbb{N}$ is absolute in ZFC. The Handbook of Analysis and Its Foundations says that $\mathbb{R}$ is categorical only because of the second order logic least upper bound axiom, but people ignore the second order situation and get on with life.
-
We often say that Gödel's incompleteness theorem finds a theorem that is "true but unprovable in Peano arithmetic." That's because there are models of Peano where it isn't true, but our intuition says it is true. So, in that sense, the "natural numbers" are an idea beyond the Peano axioms.
For example, let's say you have an integer polynomial $p(x_1,...,x_n)$, and you want to know whether $p=0$ has solutions with $x_i \in \mathbb{Z}$. So, suppose we could show this question was undecidable in Peano arithmetic. Our intuition is that in that case we could obviously never find a specific numeric solution, so we'd need a "non-standard" model for this equation to have a solution. In other words, we'd have reason to say that the undecidability of this equation in the Peano axioms indicates that the equation has no solution in the intuitive "natural numbers."
The resolution of Hilbert's Tenth Problem shows that there are undecidable problems of this sort.
-
http://mathhelpforum.com/calculus/79743-derivative.html
|
# Thread:
1. ## The Derivative
Can someone please show me how to prove the following:
Let f be differentiable on I. Show that if f is monotone increasing on I, then
f '(x) >= 0 for all x in I.
Note: >= is greater or equal to.
My idea:
Suppose that f is monotone increasing, that is, f(x) <= f(c) for all x such that x < c. ... Then I know that afterwards we have to somehow say that [f(x)-f(c)]/(x-c) >= 0.
Can someone explain more and do we need any theorem to support the proof?
Thank you .
2. Originally Posted by zxcv
Can someone please show me how to prove the following:
Let f be differentiable on I. Show that if f is monotone increasing on I, then
f '(x) >= 0 for all x in I.
Note: >= is greater or equal to.
My idea:
Suppose that f is monotone increasing, that is, f(x) <= f(c) for all x such that x < c. ... Then I know that afterwards we have to somehow say that [f(x)-f(c)]/(x-c) >= 0.
Can someone explain more and do we need any theorem to support the proof?
Thank you .
Suppose that for some $x \in I$, $f'(x)<0$.
Let $h>0$; then, as $f$ is monotone increasing:
$\frac{f(x+h)-f(x-h)}{2h} \ge 0$
Hence:
$\lim_{h \to 0^+}\frac{f(x+h)-f(x-h)}{2h} = f'(x) \ge 0$
a contradiction, so there is no such point.
CB
3. Originally Posted by CaptainBlack
Hence:
$\lim_{h \to 0^+}\frac{f(x+h)-f(x-h)}{2h} = f'(x) \ge 0$
a contradiction, so there is no such point.
Can you explain that a bit more. I'm getting lost in the letters.
4. Originally Posted by HeirToPendragon
Can you explain that a bit more. I'm getting lost in the letters.
If $f'(x)<0$ at some point $x$ in $I$, then at that point (this uses the definition of having a derivative at that point):
$\lim_{h \to 0^+}\frac{f(x+h)-f(x-h)}{2h} = f'(x) < 0$
but we have just shown that as $h$ approaches $0$ from the right (using the assumption that $f$ is monotone increasing and the existence of the derivative at the point):
$\lim_{h \to 0^+}\frac{f(x+h)-f(x-h)}{2h} = f'(x) \ge 0$
which is a contradiction.
The only properties it uses are that $f$ is monotone increasing and the definition of a derivative.
CB
5. I understand all that. What I'm confused about is how you know that f'(x) >= 0
Is there a theorem out there or a general rule I'm not understanding?
We know that
$\frac{f(x+h)-f(x-h)}{2h} \ge 0$
But how does that prove that
$\lim_{h \to 0^+}\frac{f(x+h)-f(x-h)}{2h} = f'(x) \ge 0$
What is the reason that f'(x) can't be < 0 there?
6. Originally Posted by HeirToPendragon
I understand all that. What I'm confused about is how you know that f'(x) >= 0
Is there a theorem out there or a general rule I'm not understanding?
We know that
$\frac{f(x+h)-f(x-h)}{2h} \ge 0$
But how does that prove that
$\lim_{h \to 0^+}\frac{f(x+h)-f(x-h)}{2h} = f'(x) \ge 0$
What is the reason that f'(x) can't be < 0 there?
If $f(x)$ is monotone increasing, what is the sign of:
$\frac{f(x+h)-f(x-h)}{2h}$
when $h>0$?
CB
7. What do you mean the sign?
Do you mean is it positive or negative?
It's positive. That makes perfect sense. But what I don't understand is why you know the lim of it is also positive.
If a(x) is positive, does that mean that its limit has to be?
8. Originally Posted by HeirToPendragon
What do you mean the sign?
Do you mean is it positive or negative?
It's positive. That makes perfect sense. But what I don't understand is why you know the lim of it is also positive.
If a(x) is positive, does that mean that its limit has to be?
No, it means the limit cannot be negative.
CB
9. Ok so what is the proof/reasoning for that?
I don't mean to sound annoying, but I really want to know why I'm allowed to just say this.
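The fact being appealed to is the order limit theorem: a one-sided limit of nonnegative quantities, if it exists, is itself nonnegative. A minimal sketch of the standard argument:

```latex
\textbf{Claim.} If $g(h) \ge 0$ for all $h > 0$ and $\lim_{h \to 0^+} g(h) = L$ exists,
then $L \ge 0$.

\textbf{Proof.} Suppose $L < 0$ and set $\varepsilon = -L/2 > 0$. By the definition of
the limit there is a $\delta > 0$ such that $0 < h < \delta$ implies $|g(h) - L| < \varepsilon$,
hence $g(h) < L + \varepsilon = L/2 < 0$, contradicting $g(h) \ge 0$. So $L \ge 0$. $\blacksquare$
```

Applying this with $g(h) = \frac{f(x+h)-f(x-h)}{2h}$ gives $f'(x) \ge 0$.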
http://physics.stackexchange.com/questions/38488/excess-charge-on-an-insulator-and-conductor/38491
|
# Excess charge on an insulator and conductor
So I was recently wondering what happens to the excess charge when it is placed on an insulator or conductor, e.g. by rubbing two objects together. I know that in a conductor the electrons are free to move, whereas in an insulator they are in general not very mobile, so the charge stays in a small region. But why does this happen? I know about the differences between insulator and conductor band structure; how is this related? Do the excess electrons become part of the band structure?
Thanks!
-
## 1 Answer
The localization or delocalization of the excess charge in conductors and insulators can be understood in a way similar to the uncharged case using band theory. Please refer to the first diagram in:
http://en.wikipedia.org/wiki/Work_function
For simplicity let us consider the zero temperature case. The Fermi energy $E_F$ can be thought of as a level which separates the occupied states from the empty states. Now, as you may know, the Fermi energy in a metal lies within a band. In other words, in metals you have half-filled bands; this is why electrons near the Fermi energy are delocalized in metals. In semiconductors, however, the Fermi energy lies within the band gap (as shown in the figure), leaving all the bands either completely filled or empty. As a result, the electrons are localized.
Since the Fermi energy separates the filled and empty states, it can, intuitively, be thought of as the surface of a fluid in a container; the fluid is analogous to the electrons in the system. Now, as you add or remove the fluid, its surface will either rise or drop respectively. Changes in the Fermi energy can be visualized in the same way. There is, however, one caveat: the vacuum energy $E_{VAC}$ will also change, in addition to $E_F$, as excess charge is introduced. $E_{VAC}$ is considered as the energy at which the electron is no longer bound to the solid. If the solid is charged positively or negatively, it will be harder or easier for the electron to escape respectively. As a result, one only considers changes in work function $\Phi$ or electron affinity $E_{ea}$. The former is often used in the case of metals and the latter in the case of semiconductors or insulators.
To sum it all up, excess charges will result in changes in $\Phi$ and $E_{ea}$. After these changes have occurred, through the introduction of excess charges, it's a question of where the Fermi energy sits. Depending on that, the electrons will either be localized or delocalized. For a reasonable value of excess charge the Fermi energy will only move by a small amount, and will still typically lie in one of the bands or in the band gap in a metal or insulator respectively. This is why the excess charge will be delocalized and cover the entire surface of the metal, whereas the excess charge will be localized in an insulator.
If you want to get a better feel for how $E_F$, $E_{VAC}$, $\Phi$, and $E_{ea}$ change as excess charge develops on metals, insulators, and semiconductors, you can take a look at chapter 2 of:
http://www.amazon.com/Field-Effect-Devices-Volume-Edition/dp/0201122987/ref=sr_1_1?ie=UTF8&qid=1348762717&sr=8-1&keywords=field+effect+devices
-
Thanks, this was sort of what I was looking for. I haven't come across Evac so I will read more into that! – Physbox Sep 27 '12 at 20:03
http://www.abstractmath.org/Word%20Press/?tag=family-of-functions
|
# Gyre&Gimble: posts about math, language and other things that may appear in the wabe
## Freezing a family of functions
2011/11/11 — SixWingedSeraph
To manipulate the diagrams in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The Mathematica notebooks used here are listed in the references below.
### Some background
• Generally, I have advocated using all sorts of images and metaphors to enable people to think about particular mathematical objects more easily.
• In previous posts I have illustrated many ways (some old, some new, many recently using Mathematica CDF files) that you can provide such images and metaphors, to help university math majors get over the abstraction cliff.
• When you have to prove something you find yourself throwing out the images and metaphors (usually a bit at a time rather than all at once) to get down to the rigorous view of math [1], [2], [3], to the point where you think of all the mathematical objects you are dealing with as unchanging and inert (not reacting to anything else). In other words, dead.
• The simple example of a family of functions in this post is intended to give people a way of thinking about getting into the rigorous view of the family. So this post uses image-and-metaphor technology to illustrate a way of thinking about one of the basic proof techniques in math (representing the object in rigor mortis so you can dissect it). I suppose this is meta-math-ed. But I don’t want to think about that too much…
• This example also illustrates the difference between parameters and variables. The bottom line is that the difference is entirely in how we think about them. I will write more about that later.
### A family of functions
This graph shows individual members of the family of functions $y=a\sin\,x$ for various values of $a$. Let’s look at some of the ways you can think about this.
• Each choice of $a$ “shows the function for that value of the parameter $a$”. But really, it shows the graph of the function, and in fact only the part between $x=-4$ and $x=4$.
• You can also think of it as showing the function changing shape as $a$ changes over time (as you slide the controller back and forth).
Well, you can graph something changing over time by introducing another axis for time. When you graph vertical motion of a particle over time you use a two-dimensional picture, one axis representing time and the other the height of the particle. Our representation of the function $y=a\sin\,x$ is a two-dimensional object (using its graph), so we represent the function in 3-space, as in this picture, where the slider not only shows the current (graph of the) function for parameter value $a$ but also locates it over $a$ on the $z$ axis.
The picture below shows the surface given by $y=a\sin\,x$ as a function of both variables $a$ and $x$. Note that this graph is static: it does not change over time (no slide bar!). This is the family of functions represented as a rigorous (dead!) mathematical object.
If you click the “Show Curves” button, you will see a selection of the curves in the middle diagram above drawn as functions of $x$ for certain values of $a$. Each blue curve is thus a sine wave of amplitude $a$. Pushing that button illustrates the process going on in your mind when you concentrate on one aspect of the surface, namely its cross-sections in the $x$ direction.
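For readers without the CDF Player, a rough Python/matplotlib stand-in for the static surface and the “Show Curves” view might look like the sketch below; it is an illustration, not the code from [4] or [5].

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid over x and the parameter a, matching the ranges described above
x = np.linspace(-4, 4, 200)
a = np.linspace(-2, 2, 200)
X, A = np.meshgrid(x, a)
Y = A * np.sin(X)  # the whole family y = a sin x, frozen into one surface

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, A, Y, alpha=0.6)

# A few frozen members of the family, as in the "Show Curves" view
for a0 in (-2, -1, 0, 1, 2):
    ax.plot(x, np.full_like(x, a0), a0 * np.sin(x), color="blue")

ax.set_xlabel("x")
ax.set_ylabel("a")
ax.set_zlabel("y")
plt.show()
```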
### References
1. Rigorous view in abstractmath.org
2. Representations II: Dry Bones (post)
3. Representations III: Rigor and Rigor Mortis (post)
4. FamiliesFrozen.nb, FamiliesFrozen.cdf (Mathematica file used to make this post)
5. AnotherFamiliesFrozen.nb, AnotherFamiliesFrozen.cdf (Mathematica file showing another family of functions)
http://mathoverflow.net/questions/21775?sort=oldest
|
## Spectrum of a generic integral matrix.
My collaborators and I are studying certain rigidity properties of hyperbolic toral automorphisms.
These are given by integral matrices A with determinant 1 and without eigenvalues on the unit circle.
We obtain a result under two additional assumptions
1) Characteristic polynomial of the matrix A is irreducible
2) Every circle centered at the origin contains no more than two eigenvalues of A (i.e., no more than two eigenvalues have the same absolute value)
We feel that the second assumption holds for a "generic" matrix. Is it true?
To be more precise, consider the set X of integral hyperbolic matrices which have determinant 1 and irreducible characteristic polynomial. What are the possible ways to speak of a generic matrix from X? Does assumption 2) hold for generic matrices?
Comments:
• Assumption 1) doesn't bother us as it is a necessary assumption.
• Probably it is easier to answer the question when X is the set off all integral matrices. In this case we need to know that hyperbolicity is generic, 2) is generic and how generic is irreducibility.
-
Unless all eigenvalues are collinear, there must be a circle containing 3 of them. Or does 2) mean that no more than two eigenvalues have the same modulus? – Gjergji Zaimi Apr 18 2010 at 22:46
Yes circle centered at origin, you are right, 2) just means that no more than two eigenvalues have the same absolute value. – Andrey Gogolev Apr 18 2010 at 23:07
## 1 Answer
Yes, a generic integer matrix has no more than two eigenvalues of the same norm. More precisely, I will show that matrices with more than two eigenvalues of the same norm lie on an algebraic hypersurface in $\mathrm{Mat}_{n \times n}(\mathbb{R})$. Hence, the number of such matrices with integer entries of size $\leq N$ is $O(N^{n^2-1})$.
Let $P$ be the vector space of monic, degree $n$ real polynomials. Since the map "characteristic polynomial", from $\mathrm{Mat}_{n \times n}(\mathbb{R})$ to $P$ is a surjective polynomial map, the preimage of any algebraic hypersurface is algebraic. Thus, it is enough to show that, in $P$, the polynomials with more than two roots of the same norm lie on a hypersurface. Here are two proofs, one conceptual and one constructive.
Conceptual: Map $\mathbb{R}^3 \times \mathbb{R}^{n-4} \to P$ by `$$\phi: (a,b,r) \times (c_1, c_2, \ldots, c_{n-4}) \mapsto (t^2 + at +r)(t^2 + bt +r) (t^{n-4} + c_1 t^{n-5} + \cdots + c_{n-4}).$$`
The polynomials of interest lie in the image of $\phi$. Since the domain of $\phi$ has dimension $n-1$, the Zariski closure of this image must have dimension $\leq n-1$, and thus must lie in a hypersurface.
Constructive: Let $r_1$, $r_2$, ..., $r_n$ be the roots of $f$. Let `$$F := \prod_{i,j,k,l \ \mbox{distinct}} (r_i r_j - r_k r_l).$$` Note that $F$ is zero for any polynomial in $\mathbb{R}[t]$ with three roots of the same norm. Since $F$ is symmetric, it can be written as a polynomial in the coefficients of $f$. This gives a nontrivial polynomial condition which is obeyed by those $f$ which have roots of the sort which interest you.
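For intuition, one can also probe this numerically. Here is a rough sketch (numpy assumed; the matrix size, entry bound, trial count, and tolerance are arbitrary choices, and floating-point moduli are only compared up to that tolerance):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, trials, tol = 4, 50, 10_000, 1e-8
bad = 0
for _ in range(trials):
    M = rng.integers(-N, N + 1, size=(n, n))
    mods = np.sort(np.abs(np.linalg.eigvals(M)))
    # Do three eigenvalues have (numerically) the same modulus?
    if np.any(mods[2:] - mods[:-2] < tol * max(mods[-1], 1.0)):
        bad += 1

print(f"fraction with >= 3 equal-modulus eigenvalues: {bad / trials:.4f}")
```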
-
Although this does not really affect the answer, there is one more component: one of the roots is real and the other two are complex conjugate. – damiano Apr 20 2010 at 7:22
Good point. So one needs a second component, parameterized by $(t-r)(t^2+at+r^2)(t^{n-3}+\ldots)$ or, from the second perspective, one needs to consider $\prod (r_i r_j - r_k^2)$. – David Speyer Apr 20 2010 at 11:51
Thank you very much! This is really nice, especially the "conceptual proof". I understand that the estimate $O(N^{n^2-1})$ should follow from the fact that "bad" matrices lie on an algebraic hypersurface. This is because all the "folding" occurs in a compact core, outside of which the hypersurface is sufficiently "straight". But is it really so obvious? – Andrey Gogolev Apr 20 2010 at 17:08
http://physics.stackexchange.com/questions/tagged/hamiltonian-formalism+electromagnetism
|
# Tagged Questions
### Factors of $c$ in the Hamiltonian for a charged particle in electromagnetic field
I've been looking for the Hamiltonian of a charged particle in an electromagnetic field, and I've found two slightly different expressions, which are as follows: H=\frac{1}{2m}(\vec{p}-q \vec{A})^2 ...
### Hamiltonian and non conservative force
I have to find the Hamiltonian of a charged particle in a uniform magnetic field; the potential vector is $\vec {A}= B/2 (-y, x, 0)$. I know that $$H=\sum_i p_i \dot q_i -L$$ where $p_i$ is ...
### Advice on classes: Theoretical Mechanics vs E&M II
So I'm having a tough time deciding between courses next semester. I'm a rising 3rd year undergrad math major whose goal is to get a solid understanding of theoretical physics through advanced math ...
### An electron is subjected to an electromagnetic field using the canonical equations solve
So I was given the following vector field: $\vec{A}(t)=\{A_{0x}cos(\omega t + \phi_x), A_{0y}cos(\omega t + \phi_y), A_{0z}cos(\omega t + \phi_z)\}$ Where the amplitudes $A_{0i}$ and phase shifts ...
### Question on 1st order Lagrangian Derivation in Faddeev-Jackiw Formalism
I'm looking at this reference (sorry it's a postscript file, but I can't find a pdf version on the web. This paper describes a similar procedure). The topic is the Faddeev-Jackiw treatment of ...
### To what extent is the “minimal substitution” or “minimal coupling” for the EM vector potential valid?
In all text books (and papers for that matter) about QFT and the classical limit of relativistic equations, one comes across the "minimal substitution" to introduce the magnetic potential into the ...
http://mathoverflow.net/revisions/120096/list
|
# Explicit Casselman theory: reference needed
Let $K$ be a nonarchimedean local field with ring of integers $R_K$, maximal ideal $m_K$ and finite residue field $\bf k$. Let $\pi$ be an admissible irreducible complex representation of ${\rm GL}_2(K)$ with central character $\epsilon$. A fundamental result of Casselman says that there is a largest ideal $J\subseteq R_K$ such that the subspace $W_J$ of vectors $v$ in $\pi$ such that ```$$
\gamma\cdot v=\epsilon(a)v\qquad
\forall\gamma=\left(\begin{array}{cc}
a & b\\
c & d
\end{array}\right)\in{\rm GL}_2(R_K)
\ \text{with}\ c\in J
$$``` is non-trivial and in fact $1$-dimensional. As every expert knows, this result is of paramount importance for the theory of modular forms.
Let $v_0$ be a generator of the $1$-dimensional space $W_J$. In some cases, it is rather easy to obtain $v_0$ explicitly. For instance if $\pi=\pi(\mu_1,\mu_2)$ is a class $1$ principal series representation with trivial central character (for which $J=R_K$) it is immediate to check that any generator of $W_{R_K}$ is of the form ```$$
v_0(g)=|a|^{s_1}|d|^{s_2}|a/d|^{1/2}v_0(1)\quad
\text{where}\quad
g=\left(\begin{array}{cc}
a & *\\
& d
\end{array}\right)r,\quad r\in{\rm GL}_2(R_K).
$$``` and $\mu_i=|\cdot|^{s_i}$, $i=1$, $2$.
My question is whether the generators $v_0$ have been tabulated explicitly anywhere, in particular for the supersingular representations and in other cases in which $J\subseteq m_K^2$.
http://polymathprojects.org/2009/07/27/proposal-deterministic-way-to-find-primes/?like=1&_wpnonce=e3a0b2f395
|
# The polymath blog
## July 27, 2009
### Proposal: deterministic way to find primes
Filed under: polymath proposals,finding primes,research — Terence Tao @ 2:24 am
Here is a proposal for a polymath project:
Problem. Find a deterministic algorithm which, when given an integer k, is guaranteed to find a prime of at least k digits in length, in time polynomial in k. You may assume as many standard conjectures in number theory (e.g. the generalised Riemann hypothesis) as necessary, but avoid powerful conjectures in complexity theory (e.g. P=BPP) if possible.
The point here is that we have no explicit formulae which (even at a conjectural level) can quickly generate large prime numbers. On the other hand, given any specific large number n, we can test it for primality in a deterministic manner in a time polynomial in the number of digits (by the AKS primality test). This leads to a probabilistic algorithm to quickly find k-digit primes: simply select k-digit numbers at random, and test each one in turn for primality. From the prime number theorem, one is highly likely to eventually hit on a prime after about O(k) guesses, leading to a polynomial time algorithm. However, there appears to be no obvious way to derandomise this algorithm.
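For concreteness, a minimal sketch of this probabilistic algorithm in Python, using sympy's isprime as a stand-in for a deterministic polynomial-time test such as AKS:

```python
import random
from sympy import isprime  # stands in for a deterministic test such as AKS

def random_k_digit_prime(k):
    """Randomized search: about O(k) draws suffice by the prime number theorem."""
    lo, hi = 10 ** (k - 1), 10 ** k - 1
    while True:
        n = random.randint(lo, hi)
        if isprime(n):
            return n

print(random_k_digit_prime(50))
```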
Now, given a sufficiently strong pseudo-random number generator – one which was computationally indistinguishable from a genuinely random number generator – one could derandomise this algorithm (or indeed, any algorithm) by substituting the random number generator with the pseudo-random one. So, given sufficiently strong conjectures in complexity theory (I don’t think P=BPP is quite sufficient, but there are stronger hypotheses than this which would work), one could solve the problem.
Cramer conjectured that the largest gap between primes in [N,2N] is of size $O( \log^2 N )$. Assuming this conjecture, then the claim is easy: start at, say, $10^k$, and increment by 1 until one finds a prime, which will happen after $O(k^2)$ steps. But the only real justification for Cramer’s conjecture is that the primes behave “randomly”. Could there be another route to solving this problem which uses a more central conjecture in number theory, such as GRH? (Note that GRH only seems to give an upper bound of $O(\sqrt{N})$ or so on the largest prime gap.)
My guess is that it will be very unlikely that a polymath will be able to solve this problem unconditionally, but it might be reasonable to hope that it could map out a plausible strategy which would need to rely on a number of not too unreasonable or artificial number-theoretic claims (and perhaps some mild complexity-theory claims as well).
Note: this is only a proposal for a polymath, and is not yet a fully fledged polymath project. Thus, comments should be focused on such issues as the feasibility of the problem and its suitability for the next polymath experiment, rather than actually trying to solve the problem right now. [Update, Jul 28: It looks like this caution has become obsolete; the project is now moving forward, though it is not yet designated an official polymath project. However, because we have not yet fully assembled all the components and participants of the project, it is premature to start flooding this thread with a huge number of ideas and comments yet. If you have an immediate and solidly grounded thought which would be of clear interest to other participants, you are welcome to share it here; but please refrain from working too hard on the problem or filling this thread with overly speculative or diverting posts for now, until we have gotten the rest of the project in place.]
See also the discussion thread for this proposal, which will also contain some expository summaries of the comments below, as well as the wiki page for this proposal, which will summarise partial results, relevant terminology and literature, and other resources of interest.
## 133 Comments »
1. This is certainly an interesting problem, from a pure mathematician’s perspective anyway. (From a practical point of view, one would be happy with a randomized algorithm, but there’s no denying that to find a deterministic algorithm for such a basic problem would be a great achievement if it could be done.)
My first reaction to the question of whether it is feasible was that it seems to be the kind of problem where what is needed is a clever idea that comes seemingly out of the blue and essentially cracks the problem in one go. Or alternatively, it might need a massive advance in number theory, such as a proof of Cramer’s conjecture. But on further reflection, I can see that there might be other avenues to explore, such as a clever use of GRH to show that there is some set (not necessarily an interval) that must contain a prime. Even so, it feels to me as though this project might not last all that long before it was time to give up. But you’ve probably thought about it a lot harder and may see a number of interesting angles that I don’t expect, so this thought is not necessarily to be taken all that seriously. And perhaps the magic of Polymath would lead to unexpected lines of thought — in a sense, that is the whole point of doing things collectively.
This raises another issue. It might be that a Polymath project dwindles after a while, but then someone has an idea that suddenly makes it seem feasible again. In such cases, it might be good to have a list that people could subscribe to, where a moderator for a given project could, at his or her discretion, decide to email everyone on the list to tell them that there was a potentially important comment.
Comment by — July 27, 2009 @ 1:47 pm
• Dear Tim,
Actually I’ve only thought about this problem since Bremen, where I mentioned it in my talk. As I mentioned in my post, I doubt a polymath would be able to solve it unconditionally, but perhaps some partial result could be made – for instance, one could replace the primes by some other similarly dense set (almost primes is an obvious candidate, as sieve theory techniques become available, though the resolution of sieves seems too coarse still). Another question is whether this derandomisation result would be implied by P=BPP; I mistakenly thought this to be the case back in Bremen, but realised afterwards that P and BPP refer to decision problems rather than to search problems, and so I was not able to make the argument work properly.
Regarding notification: we already have RSS feeds that kind of do this job already. For instance, if one has a feed aggregator such as Google Reader, one can subscribe to the comments of a given post by following the appropriate link (in the case of this post, it is http://polymathprojects.wordpress.com/2009/07/27/proposal-deterministic-way-to-find-primes/feed/ ).
Comment by — July 27, 2009 @ 2:47 pm
2. Amusingly, the problem occurred to me too when you were giving your talk (not as a polymath project, but just as an interesting question).
I’m trying to think whether there’s a way of using P=BPP to find primes with k digits. I find it easy to make stupid mistakes, so here’s an argument that could be rubbish but feels quite reasonable to me at the time of writing. I’d like to show that there’s a randomized algorithm for finding primes by solving decision problems. So here is the problem that this randomized algorithm solves.
GIVEN: an interval of integers that contains a reasonable number of primes and is partitioned into two subintervals.
WANTED: one of the subintervals that still contains a reasonable number of primes.
The randomized algorithm for solving this would be to take random samples from one of the subintervals in order to estimate, with a high probability of accuracy, the density of primes in that subinterval. That enables us to choose a good subinterval with only a small probability of failure, and in that way one should, it seems to me, be able to get more and more digits of a prime. (Late on in the procedure, one would be doing more random samples than there are points in the interval, because one would have to check all points in the interval. But by that time the size of the interval would be logarithmic.)
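A rough Python sketch of this halving procedure (sympy assumed; the fixed sample count is a simplification, since in practice the number of samples would have to be chosen carefully, and as noted above, late in the procedure one simply checks every point in the interval):

```python
import random
from sympy import isprime

def find_prime_by_bisection(k, samples=2000):
    """Repeatedly keep the subinterval whose sampled prime density looks larger."""
    lo, hi = 10 ** (k - 1), 10 ** k
    while hi - lo > samples:
        mid = (lo + hi) // 2
        hits_left = sum(isprime(random.randint(lo, mid - 1)) for _ in range(samples))
        hits_right = sum(isprime(random.randint(mid, hi - 1)) for _ in range(samples))
        lo, hi = (lo, mid) if hits_left >= hits_right else (mid, hi)
    # The interval is now small enough to check exhaustively
    return next((n for n in range(lo, hi) if isprime(n)), None)
```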
So then the hope would be that by derandomizing one could get a deterministic algorithm for working out more and more digits. There are clearly details to check there, but it feels as though it could work. Or maybe you’ve tried it and the numbers don’t obviously work out.
I thought of RSS feeds, but what I was wondering about was a way of notifying people not necessarily every time there was a new comment, but only when the comment was of a kind that could potentially get a project going again when it appeared to have stalled. But perhaps the frequency of comments would be small enough that RSS feeds would be a perfectly good way of doing this.
Comment by — July 27, 2009 @ 3:52 pm
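A minimal sketch, in Python, of the randomized interval-halving search described above. Everything here is illustrative: sympy’s isprime stands in for a deterministic polynomial-time test (AKS in principle), and the sample count and failure handling are placeholders rather than a tuned analysis.

    import random
    from sympy import isprime

    def density_estimate(lo, hi, samples):
        # Estimate the density of primes in [lo, hi) by random sampling.
        hits = sum(isprime(random.randrange(lo, hi)) for _ in range(samples))
        return hits / samples

    def find_prime(k, samples=2000):
        # Randomized search for a k-digit prime by repeatedly halving an interval.
        lo, hi = 10 ** (k - 1), 10 ** k
        while hi - lo > samples:
            mid = (lo + hi) // 2
            # Keep whichever half looks denser in primes.
            if density_estimate(lo, mid, samples) >= density_estimate(mid, hi, samples):
                hi = mid
            else:
                lo = mid
        # The interval is now small enough to check exhaustively.
        for n in range(lo, hi):
            if isprime(n):
                return n
        return None  # fails with small probability, as discussed above

Derandomizing exactly this sampling step is what the P=BPP discussion that follows is about.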
• I haven’t thought about the problem too deeply yet, but the problem I recall facing was that BPP requires a significant gap between the success rate when the answer is “yes” and when the answer is “no” (this is the “B” in “BPP”), whereas the transition between “lots of primes in an interval” and “few primes in an interval” is too vague to create such a sharp separation. (This is in contrast to, say, polynomial identity testing over finite fields, where there is a big gap between the identity holding all the time, and only holding a small fraction of the time.) But if there was a “locally testable” way of finding whether there were primes in an interval, then we might begin to be in business…
Regarding RSS, perhaps RSS for the discussion thread alone (rather than the research thread) may suffice. This, by the way, is one potential advantage of having a centralised location for polymath activity; anyone who follows this blog on a regular basis will learn about an exciting revival in a previously dormant project, without having to follow that project explicitly.
Comment by — July 27, 2009 @ 5:04 pm
• I’d be slightly (but possibly wrongly) surprised if that was the problem, simply because BPP could certainly detect a difference in density of say $k^{-3}$, where $k$ is the number of digits of $n$. You’d just have to take significantly more than $k^3$ samples for the difference to show up. And a product of factors of size around $1+k^{-3}$ would not get big after $k$ steps.
Comment by — July 27, 2009 @ 6:39 pm
I think that this would be a problem, and it is a well-known one with “semantically”-defined complexity classes. The problem is that in order to use the P=BPP assumption you must first convert the approximation problem into a decision problem which has a randomized algorithm with probability of correctness at least 1/2+1/poly. While you can, in randomized polynomial time, estimate the number of primes in the interval [x,y] to within 1/poly (and this precision should suffice for a binary search), in order to de-randomize it using the P=BPP assumption you will have to transform it into some boolean query such as: “is the number of primes in the interval [x,y] at least t”, which is not (as far as we know) in BPP, since a randomized algorithm will not have correctness probability >1/2+1/poly for t very close to the right number.
Comment by — July 28, 2009 @ 11:56 am
Ah — what I was hoping for was an argument like this. We know that the density in at least one interval is at least $\alpha$. So we use a randomized algorithm to identify an interval with density at least $\alpha -1/p(k)$. We can design this so that if the density is less than $\alpha-1/p(k)$ then with high probability it will not choose that interval, and if it is at least $\alpha$ then it will choose the interval. And if the density is between $\alpha-1/p(k)$ and $\alpha$ then we don’t care which it chooses. But it seems from what you say that a slightly stronger assumption than P=BPP is needed to derandomize this procedure. (Here is where I reveal my ignorance: it seems to me that one could just use pseudorandom bits rather than random bits in performing the above procedure. All that could go wrong is that either the density is less than $\alpha-1/p(k)$ and it accepts the interval, or the density is at least $\alpha$ and it declines it. But in both cases it is behaving differently from a truly random source.)
Comment by — July 28, 2009 @ 1:11 pm
• yes, pseudorandom bits would suffice for this to work. However the existence of a PRG is a stronger assumption than P=BPP and even stronger than the condition that you are looking for (which is sometimes called P=promise-BPP which means that randomized algorithms that sometimes do not have a gap can still be de-randomized as long as the deterministic algorithm is allowed to give an arbitrary answer when the gap is too small.)
Comment by — July 28, 2009 @ 2:09 pm
• Ah — now we’ve got to the heart of what I didn’t know. I had lazily thought that because the heuristic reason for expecting P to equal BPP is that pseudorandom generators probably exist, the two were equivalent.
Comment by — July 28, 2009 @ 2:43 pm
It looks like both the number theory version and the computational complexity problem are interesting and probably very difficult. For a problem like this I would probably first ask some experts in analytic number theory for appropriate NT conjectures, and some real experts in complexity. Probably there are known cases of search problems which admit a randomized polynomial algorithm where, even assuming P=BPP (namely that randomization does not help for decision problems), you cannot derandomize the algorithm. (But maybe this is nonsense; Avi Wigderson is a person to ask.)
Now here is a silly question: if I grant you a polynomial time algorithm for factoring, does this give you a polynomial time deterministic algorithm for coming up with a k-digit prime number?
Comment by — July 27, 2009 @ 6:13 pm
• Hmm, I like this question also; it relates to the proposal of finding almost primes rather than finding primes. So it looks like while the original problem is very hard, there are a large number of ways to “cheat” and make the problem easier. Hopefully there is some variant of the problem which lies in the interesting region between impossibly difficult and utterly trivial…
Hopefully, some experts in NT and/or complexity theory will chime in soon… but certainly the next time I bump into Avi or any other expert, I might ask this question (I actually learned about this problem from Avi, incidentally).
Comment by — July 27, 2009 @ 6:29 pm
• It is tempting to try to mix-and-match various number theory and complexity theory hypotheses to get the result. I would love to have a result that needed both GRH and P!=NP to prove!
Just as a joke, here is one potential approach to an unconditional resolution of the problem:
There are two cases, P=NP, or P!=NP.
1. If P=NP, then presumably the problem of finding a k-digit prime is close enough to being in NP, that it should be in P also (modulo the usual issue about decision problem versus search problem). Done!
2. If P != NP, then one should be able to use an NP-complete problem to generate some sort of really strong pseudorandom number generator (as in Wigderson’s ICM lecture), which could then be used to derandomise the prime search problem. Done!
This is no doubt too silly to actually work, but it would be amusing to have a proof along these lines…
Comment by — July 27, 2009 @ 7:15 pm
• Two small comments about that. If P=NP then finding a k-digit prime is certainly in P, since one can simply get the computer to say yes if there’s a k-digit prime with first digit 1 (a problem in NP), and then 2,3, etc. And then one can move on to the second digit, and so on.
Also, the assertion that there is a pseudorandom generator is generally regarded as stronger than P not equalling NP. You need a 1-way function for it. Often, such results are proved on the assumption that the discrete logarithm problem is hard, but that’s a stronger assumption because discrete log is (probably) not NP complete.
The only way I can think of rescuing the approach is to find k-digit primes by showing that if you can’t, then you can build a pseudorandom generator. Not sure how that would go though!
Comment by — July 27, 2009 @ 7:45 pm
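A sketch of the digit-by-digit search-to-decision reduction just described. The oracle exists_prime_with_prefix is hypothetical: under P=NP it would be a polynomial-time routine deciding “is there a k-digit prime whose decimal expansion begins with these digits?” (a statement whose witness, the prime itself, is checkable in polynomial time by AKS).

    def find_prime_with_oracle(k, exists_prime_with_prefix):
        prefix = ""
        for position in range(k):
            digits = "123456789" if position == 0 else "0123456789"
            for digit in digits:
                if exists_prime_with_prefix(prefix + digit, k):
                    prefix += digit  # this prefix still extends to a k-digit prime
                    break
            else:
                return None  # only happens if no k-digit prime exists at all
        return int(prefix)

The invariant is that the current prefix always extends to some k-digit prime, so each round of at most ten oracle calls makes progress, for O(k) calls in total.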
• Ah, I knew it was too good to be true. But now I’ve learned something about computational complexity, which is great. And now we can assume P!=NP for free. Progress!
A world in which the answer to the above problem is “No” would be very strange – every large constructible integer would necessarily be composite (note that the set of all numbers constructible in poly(k) steps from an algorithm of length O(1) can be built (and tested for primality) in polynomial time). It could well be that one could use this strangeness to do something interesting (particularly if, as Gil suggests, we assume that factoring is in P, at which point in order for the answer to be “No”, all large constructible integers are not only composite, but must be smooth). It sounds from what you said that we might also be able to assume that discrete log is in P also. I’m not sure what we could do with all that, but it might well lead to some assertion that cryptographers widely believe to be false, and then that would be a reasonable “solution” to the problem.
Comment by — July 27, 2009 @ 7:53 pm
Don’t all large constructible numbers have to be not only composite but also surrounded by large gaps free of prime numbers?
Comment by — July 28, 2009 @ 11:22 pm
I suggest looking at the question under the assumption that we have an oracle (or a subroutine) that can factor numbers (at unit cost). (We can also ask the problem under the assumption that factoring is in P, but I am not sure this is equivalent to what I asked.) Since factoring looks so much harder than primality, one can wonder if factoring will help in this problem. Something like: you want a prime number with k digits? Start looking, one after the other, at numbers with k^3 digits, factor them all, and you are guaranteed to find a prime factor with k digits.
It is sort of a nice game: we will give you strong oracles or strong unreasonable assumptions in computational complexity just so that you can believe a slightly less extreme conjecture in analytic number theory, all for the purpose of derandomization.
I think this is referred to as a pigs-can-fly question in CC, but I am not sure if this is the right notion or what its origin is.
Comment by — July 30, 2009 @ 12:33 pm
That sounds quite plausible. After all, the Erdos-Kac theorem already tells us reasonably accurately how the number of prime factors of a random large number near n is distributed. How large an interval do we have to take around n for the theorem to hold? (I am not sufficiently well up on the proof to know the answer to this, but I’m sure there are people here who do.) And if it holds, we would expect not only that a typical number has about $\log\log n$ factors, but also that some of these factors are of size roughly $n^{1/\log\log n}$. Does that follow from their argument?
The weak link in that is that one might have to look at far too many numbers near n for it to work. But somehow it feels as though proving that at least some number in a short interval near n has a typical arrangement of prime factors ought to be a lot easier than proving that at least some number in a short interval near n is prime. This could be quite an interesting problem in itself, though I suppose there’s a good chance that it is either known, or else that despite what I said above, it is in fact just as hard as Cramer’s conjecture.
If this idea has anything going for it, then a great first step would be for somebody to put on the wiki a sketch of a proof of Erdos-Kac, or at least of Turan’s simpler argument that the variance of the number of prime factors is about $\log\log n$ (if I remember correctly). Talking of which, Michael Nielsen and I have written a few short wiki pages about complexity classes. Any additions would be welcome.
Comment by — July 30, 2009 @ 1:48 pm
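For reference, and subject to memory of the normalisations, the two statements mentioned above, with $\omega(n)$ the number of distinct prime factors of $n$: Turan’s estimate is $\sum_{n \leq x} (\omega(n) - \log\log x)^2 = O(x \log\log x)$, and the Erdos-Kac theorem says that $(\omega(n) - \log\log n)/\sqrt{\log\log n}$, for $n$ drawn uniformly from $\{1,\dots,x\}$, converges in distribution to a standard Gaussian as $x \to \infty$.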
I can try to put some things on the Wiki about Erdős-Kac during the week-end. I’m not sure what to expect from it, because the only knowledge about primes that goes in the proof even of the strongest version amounts to the standard zero-free region for the Riemann zeta function.
Comment by — July 30, 2009 @ 3:27 pm
These types of questions/results are described by the phrase “If pigs could whistle then horses could fly”, and apparently the use in computational complexity goes back to Mike Sipser, who wrote me:
“When I taught advanced complexity theory in the 1980s, I used that phrase to describe theorems like “If there is a sparse NP-complete set then P=NP”, and “If some NP-complete set has poly size circuits then PH = $\Sigma_2$”.
I may have heard it myself somewhere before, not sure.
It appeared I think in my complexity lecture notes from that period, but not to my memory in any of my papers. But lots of good people took my class, so maybe one of them wrote something with it? (My 1986 class was especially remarkable: Steven Rudich, Russell Impagliazzo, Noam Nisan, Ronitt Rubinfeld, Moni Naor, Lance Fortnow, John Rompel, Lisa Hellerstein, Sampath Kannan, Rajeev Motwani, Roman Smolensky, Danny Soroker, Valerie King, Mark Gross, and others I’m forgetting were in it)”
We somehow add also number theory conjectures into it, so we are looking at something like “If pigs could whistle assuming RH then horses could fly assuming Cramer’s conjecture.”
Comment by — July 30, 2009 @ 3:37 pm
• A thought that ended up going round in circles: I was wondering if we could reduce to the case where factoring was in P, by assuming factoring was hard and obtaining a pseudorandom number generator. It was tempting to just run RSA iteratively to generate this… but realised that for RSA to work, one first has to generate large primes. Argh! [Note: the same argument also renders discrete log useless, since we can't find large primes to run discrete log on.]
Comment by — July 27, 2009 @ 8:09 pm
• I think that there might be a complexity-theory de-randomization angle of attack here.
One-way functions are not needed for the PRGs here, and a sufficient assumption is that DTIME(2^n) does not have sub-exponential size circuits (this is the Impagliazzo-Wigderson PRG). There may be several strategies for getting a PRG-like entity for this problem with no (or with fewer) complexity assumptions, either by getting rid of the “non-uniformity”, replacing it by diagonalization (e.g. something like http://www.math.ias.edu/~avi/PUBLICATIONS/MYPAPERS/GW02/gw02.pdf ), or by considering a weaker set of tests.
A partial result in this direction (from a de-randomization point of view) would be to reduce the number of random bits needed for finding an n-bit prime. (O(n^2) would be trivial since you expect to find a prime after O(n) random probes, getting to O(n) bits is a nice exercise, and o(n) would be interesting, I think.)
Comment by — July 28, 2009 @ 12:16 pm
• I like having a continuum between trivial and completely solved; it greatly increases the chance of an interesting partial result.
Presumably one can get O(n) simply by choosing one random n-bit number and then incrementing it until one finds a prime? The first moment method tells us that this works in polynomial time with high probability (and if one assumes some standard number theory conjectures, such as the k-tuples conjecture, one can make the success probability 1-o(1), see Emmanuel’s comment #6).
Comment by — July 28, 2009 @ 2:45 pm
• I don’t know whether there is a theorem/conjecture that Pr[there is a prime in the range [x, x+poly(n)]]>1/2 where x is chosen to be a random n bit integer (which is weaker than Cramer’s conjecture that states that the probability is 1).
The simple way to reduce the number of bits is to use what’s called “deterministic amplification”: there are ways to use O(n) random bits to create m=poly(n) random n-bit strings such that, with very high probability, one of them will hit any set of density 1/poly(n) (such as the primes).
The easiest way in this setting is to choose O(n) n-bit integers in a way that is pairwise independent — this requires only O(n) bits. An O(n)-step walk on a constant degree expander labeled with n-bit integers will also do the trick as shown in AKS.
Comment by — July 28, 2009 @ 6:16 pm
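A toy version of the pairwise-independent amplification just described. The construction needs a field to work in, and the sketch below hard-codes the known Mersenne prime $q = 2^{127}-1$ as its modulus (using a known large prime in order to find primes is, of course, part of the joke; the construction referred to above can be done over GF(2^n) instead). Two field elements a, b (about 254 bits here, O(n) in general) yield as many samples as we like.

    import random
    from sympy import isprime

    Q = 2**127 - 1  # a known prime, so arithmetic mod Q is a field

    def pairwise_independent_prime_search(n, m):
        # x_i = a*i + b mod Q is a pairwise-independent sequence, so by
        # Chebyshev some x_i hits the primes (density ~ 1/n among n-bit
        # integers) with high probability once m >> n.
        a = random.randrange(Q)
        b = random.randrange(Q)
        lo = 1 << (n - 1)
        for i in range(m):
            x = (a * i + b) % Q
            candidate = lo + (x % lo)  # fold into the n-bit range (small bias)
            if isprime(candidate):
                return candidate
        return None

This needs n < 127 for the folding to make sense; it is only meant to show the bit accounting.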
• Thanks for the explanation! Regarding primes in short intervals, if we choose x to be a random n-bit integer and let X be the number of primes in [x, x+n], then the expectation of X is comparable to 1 (prime number theorem) and the second moment of X is O(1) (this can be seen from a bit of sieve theory, in fact all moments are bounded). This shows that X is positive with probability >> 1, which is enough for the naive algorithm I mentioned earlier to work.
Comment by — July 31, 2009 @ 5:21 am
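Spelled out, the Cauchy-Schwarz step implicit here is the standard second-moment inequality: $\mathbb{E} X = \mathbb{E}[X \cdot 1_{X > 0}] \leq (\mathbb{E} X^2)^{1/2} \, \mathbb{P}(X > 0)^{1/2}$, so that $\mathbb{P}(X > 0) \geq (\mathbb{E} X)^2 / \mathbb{E} X^2$, which is bounded below by a positive constant when $\mathbb{E} X$ is comparable to 1 and $\mathbb{E} X^2 = O(1)$.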
I may be completely off-base, but wouldn’t Ajtai-Komlos-Szemeredi help reduce the number of bits required? The idea is to lay out an (arbitrary) d-regular expander structure on the integers in the interval [n,n+m] and take a random walk on this graph. The probability of all steps of the walk avoiding the primes (a fairly large subset) should be very small.
Comment by Lior — July 28, 2009 @ 5:19 pm
I don’t follow that suggestion. If you take a random walk until it mixes, then in particular you must be able to visit any vertex, so the walk must contain at least enough information to specify that vertex, which seems to suggest that the number of bits is at least log m. If the random walk doesn’t get to all vertices, then how do you know that the arbitrary structure doesn’t happen to end just at composite numbers?
Comment by — July 28, 2009 @ 5:35 pm
• If you are trying to hit a large set, you don’t need to wait for the walk to mix. The probability of all k steps of an expander walk avoiding a set of density $\delta$ is bounded above by $\exp\{-C\delta^2 k\}$. For comparison, the probability of k uniformly chosen points avoiding the set is $(1-\delta)^k$. In other words, if $\delta$ is fixed and generating a single uniform point costs $n$ random bits then $n+O(k)$ random bits give the same hitting probability as $kn$ bits.
Now that I’m doing the calculation I can see that this bound by itself isn’t sufficient: in the interval [N,N+M] the density of primes is 1/log N, so we end up needing O(n^2) bits (n=log N) just for constant hitting probability. Noam’s comment (6:16) invokes more sophisticated technology I am not familiar with that can deal with sets of polylog density such as the primes.
Comment by Lior — July 28, 2009 @ 10:21 pm
• I still don’t quite follow. Let’s suppose you run the walk for a much smaller time k than it takes to mix. Then the proportion of vertices within k of your starting point will be small, and since the expander graph is arbitrary (your word) then there may simply be no primes within that radius. Or do you make the start point random? That might make things OK. (Or I could just be talking nonsense.)
Comment by — July 28, 2009 @ 10:34 pm
• Sorry — yes, the starting point is chosen uniformly at random, and after that you walk on edges of the expander. Thus k steps cost you n+O(k) random bits — n for the initial point and O(1) for each further step.
Comment by Lior — July 28, 2009 @ 10:52 pm
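A toy version of the walk-based sampler, to make the bit accounting concrete. For the graph it uses the classical 3-regular expander-like family on $\mathbb{Z}_p$ with moves $x \mapsto x+1$, $x-1$, $x^{-1}$, hard-coding the known Mersenne prime $p = 2^{89}-1$, so a uniform start costs about 89 bits and each further step only 2 bits. As noted above, hitting the primes (density $1/\log p$ rather than constant) needs correspondingly more steps.

    import random
    from sympy import isprime

    P = 2**89 - 1  # a known Mersenne prime; vertices are Z_P

    def step(x, choice):
        # One move on the 3-regular graph x -> x+1, x-1, x^(-1).
        if choice == 0:
            return (x + 1) % P
        if choice == 1:
            return (x - 1) % P
        return pow(x, P - 2, P) if x else 0  # modular inverse, fixing 0

    def walk_prime_search(steps):
        x = random.randrange(P)  # ~89 random bits for the starting vertex
        for _ in range(steps):
            if isprime(x):
                return x
            x = step(x, random.randrange(3))  # ~2 random bits per step
        return None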
• O(n) random bits actually suffices to obtain an almost uniform n-bit prime. This is a corollary of my unconditional pseudorandom generator with Nisan, because the algorithm to choose a random n-bit prime uses space O(n). Our PRG uses a random O(S) bit seed and outputs poly(S) bits which appear random to any space(S) machine (S>log n).
http://www.cs.utexas.edu/~diz/pubs/space.ps
Comment by — July 29, 2009 @ 3:08 pm
A similar idea that probably works in practice but is also hard to make deterministic is to take some integer n larger than 2, some set of generators of SL(n,Z), and do a random walk of length about k on SL(n,Z) with these generators, and then take the absolute value of the trace (or some other linear combination of coefficients) of the resulting matrix. One can expect this to be prime (and of size exponential in k) with probability about 1/k; using sieves, one can prove it to be “almost prime” (for a certain sense of “almost”) with similar probability.
Comment by — July 30, 2009 @ 3:20 pm
• Just a thought: If any primality test can be expressed as a constant-degree polynomial over a finite field from the digits of the number-under-test to {0,1}, then the known unconditional pseudorandom generators for polynomials can be used to derandomize finding primes the same way as under the full P=BPP derandomization assumption. So, instead of requiring a general PRG, the proposal is trying to put the primality test into a fully derandomizable subclass of P.
Comment by — August 2, 2009 @ 6:29 am
4. Wow, it’s hard to resist the temptation to keep on thinking about this problem… fortunately (or not), I do have plenty of other things I need to do…
It occurs to me that if all large constructible numbers are smooth, then we may end up violating the ABC conjecture. Taking contrapositives, it is plausible (assuming factoring is easy) that if ABC is true, then we can find moderately large prime numbers in reasonable time (e.g. find primes of at least k digits in subexponential time, which as far as I know is still not known deterministically even on GRH).
Comment by — July 27, 2009 @ 8:46 pm
This problem is sort of cute, but it looks like a better idea would be to discuss it first as “an open problem of the week” and get feedback from people in analytic number theory (and computational complexity, if we allow some assumptions regarding derandomization). Or discuss it privately with more experts in ANT/CC.
The advantage of this problem is that it is a sort of challenge to researchers in one area (analytic number theory) to respond to questions which naturally (well, sort of) arise in another area (you may claim that finding large primes was always on the table, but this specific derandomization and computational complexity formulation wasn’t).
OK, you can claim that if the challenge to derandomize the primality algorithm had been posed to number theorists early enough (say in 1940), this could have led to the results that the AKS primality testing was based on. But reversing the table seems like a long shot.
Comment by — July 27, 2009 @ 9:47 pm
• That’s a good point. I’ll ask around.
It occurs to me that a problem which spans multiple fields may well be a perfect candidate for polymath; witness the confluence of ergodic theorists and combinatorialists for polymath1, for instance. Such projects are almost guaranteed to teach something, even if they don’t succeed in their ostensible goal.
Comment by — July 27, 2009 @ 10:00 pm
• Gil’s comment and this comment were very thought-provoking (to me).
Just to point out what I think everyone knows instinctively: every polymath project (that I have seen proposed) has an objective that is in the complexity class NP.
This leads us to a mathematically natural question: what would a polymath-type project look like whose objective was in P? In other words, a project that was guaranteed to succeed … given commitment and competence on the part of the participants?
We can ask, does the class of P-Poly Projects have any interesting members?
My strong feeling is “yes” … in fact it seems plausible (to me) that P-Poly Projects are an exceptionally interesting and consequential subset of NP-Poly … but I wonder if anyone else is thinking along these lines?
Comment by — July 28, 2009 @ 6:08 pm
6. I never thought about this type of question(s) myself but I would think that the people in algorithmic/computational number theory (e.g., H. Cohen, K. Belabas, Lenstra, etc) would be the ones who have tried hardest to understand the problem — though presumably the standard software packages use fairly brute force techniques (that seems to be what Pari/GP does, with a “sieve” mod 210).
The question of what number theoretic conjecture are permitted makes the problem a bit fuzzy. For instance, how much of “primes in sequences” would be allowed? It’s known that, conditional on uniform versions of the k-tuple conjecture of Hardy-Littlewood, the number of primes between n and n+c(log n), where c is fixed, becomes asymptotically Poisson as n grows. This is independent of the Cramer model, and it is probably quite a solid conjecture. It doesn’t trivially solve the problem, as far as I can see, but it seems to be a strong tool to use.
Comment by — July 27, 2009 @ 10:57 pm
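If I recall the precise statement correctly, this is a theorem of Gallagher: assuming the Hardy-Littlewood k-tuple conjecture uniformly, for each fixed $\lambda > 0$ and integer $m \geq 0$ one has $\frac{1}{N} \# \{ n \leq N : \pi(n + \lambda \log n) - \pi(n) = m \} \to e^{-\lambda} \lambda^m / m!$ as $N \to \infty$, i.e. the number of primes in a random interval of length $\lambda \log n$ is asymptotically Poisson with mean $\lambda$.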
If we take c fixed and let n increase, wouldn’t there be some point beyond which the number of primes in the interval is always nonzero? Then we could just go through the interval n to n+c(log n) and test each element, which would take a polynomial amount of time in terms of the number of digits (which is roughly log n). For the cases before that point we could just look at everything and add a large coefficient, and the result is a polynomial algorithm.
Comment by — July 31, 2009 @ 6:22 pm
7. [...] Filed under: discussion, finding primes — Terence Tao @ 3:09 pm The proposal “deterministic way to find primes” is not officially a polymath yet, but is beginning to acquire the features of one, as we [...]
Pingback by — July 28, 2009 @ 3:09 pm
8. Would P = BQP be a sufficient condition for an algorithm? Quantum computers don’t provide exponential speedup for search problems, but they have the advantage that factoring and discrete log are easy, and there is a polynomial-time speedup.
P = BQP is of course unlikely to hold in the “real world,” but it would imply lots of nice things: P = BPP, discrete log and factoring in P, without being as strong as P = NP or (I believe) P = promise-BPP. In addition, we can obtain some speedup on search problems and counting, although it’s only quadratic in general.
The other nice thing is that without assuming anything stronger, complexity-wise, than P = BQP, it seems impossible to abstract away the number-theoretic details. General search problems have a $\Omega(\sqrt{n})$ lower bound, as does approximate quantum counting — so in particular, the subinterval-density approach wouldn’t work without some number-theoretic trickery. This isn’t true for, e.g., P = NP, where general search problems are easy.
As much as I like complexity, it’s fiendishly hard (quantum doubly so), so take everything I just said with a grain of salt, and please correct me if I’m wrong.
Comment by — July 28, 2009 @ 6:17 pm
9. R. C. Baker and G. Harman, “The difference between consecutive primes,” Proc. Lond. Math. Soc., series 3, 72 (1996) 261–280. MR 96k:11111
Abstract: The main result of the paper is that for all large x, the interval A=[x-x^0.535,x] contains prime numbers.
This is from the abstract at
http://primes.utm.edu/references/refs.cgi?long=BH96
On the other hand, given any specific large number n, we can test it for primality in a deterministic manner in time polynomial in the number of digits (by the AKS primality test).
It looks like this is enough to get the desired result.
Comment by Kristal Cantwell — July 28, 2009 @ 6:45 pm
This won’t work without more tricks, because one would need to test exponentially many integers (with respect to the number of digits) before being sure to find a prime: an interval of length $x^{0.535}$ contains about $10^{0.535 k}$ integers when $x$ has $k$ digits.
From the purely number-theoretic point of view, the question seems to be roughly as follows: can one describe (deterministically), for all large X, a set of integers of size polynomial in log(X), each of them roughly of size X, such that this set is sure to contain a prime?
I think the best (number-theoretic, i.e., not Cramer-model based or similar) known results about this are in the paper “Primes in short intervals” of H. Montgomery and K. Soundararajan (Communications in Math. Phys. 252, 589–617, 2004). They give strong evidence for what the number of primes between X and X+H should look like when H is a function of X of various type, and X grows. Again, the results are convincing conjectures, but the number-theoretic assumptions are very strong.
Comment by — July 28, 2009 @ 8:53 pm
10. In a paper of Impagliazzo,
http://www.cs.ucsd.edu/users/russell/average.ps
five worlds (Algorithmica, Heuristica, Pessiland, Minicrypt, Cryptomania) were identified in complexity theory… perhaps we can determine now whether deterministic primality finding is easy in some of these worlds?
Comment by — July 28, 2009 @ 7:33 pm
(Here is a post reviewing these five worlds.)
Comment by — July 28, 2009 @ 8:22 pm
• If P = NP, we’re done. (If we’re assuming P = BPP, this covers most if not all of Algorithmica; the situation where P $\subsetneq$ NP $\subset$ BPP is sad and confusing.)
Is the Hastad-Impagliazzo-Levin-Luby PRNG strong enough to derandomize checking primes for us? I think it is, but I can’t seem to find the paper. If so, Minicrypt and Cryptomania have unconditional deterministic primality finding.
Heuristica seems approachable: Since factoring is “almost always easy,” and sufficiently large numbers will almost always have some large enough prime factor, a good strategy might be to pick a very large number and factor it. Of course, this is neither deterministic nor guaranteed to be in polynomial time. Can this be made rigorous, say under the assumption that P = BPP?
Pessiland looks like the hardest world; I’m not even sure I could start on a list of conditions under which we might have efficient primality finding.
Comment by — July 28, 2009 @ 9:01 pm
11. Here’s a thesis in French by Pierre Dusart that is worth a nod in the bibliography:
Autour de la fonction qui compte le nombre de nombres premiers
It explicates previous results and includes a proof that for $n \geq 2010760$ there is always a prime between $n$ and $\frac{16598}{16597}n$.
Comment by — July 28, 2009 @ 7:52 pm
12. My ICM paper has several pointers to the literature on pseudorandomness and derandomization:
http://arxiv.org/abs/cs.CC/0601100
The question of deterministically finding large primes is mentioned on page 4. (I didn’t come up with it, I heard it from either Avi Wigderson or Oded Goldreich.)
One remark is that this problem (like many “explicit construction” problems) is a special type of derandomization problem. If you think of it as the problem “given n, output an n-bit prime,” then this is a problem that can be solved, with high probability, in randomized exponential time (exponential because the input n has a log n bit representation, while the running time of the algorithm is polynomial in n), and we would like to solve it in deterministic exponential time.
“Derandomizing” exponential time randomized algorithms is an “easier” task than derandomizing polynomial time ones, in the sense that a universal method to derandomize polynomial time algorithms also implies a universal method to derandomize exponential time algorithms, but the reverse is not clear.
Comment by — July 28, 2009 @ 10:05 pm
13. two quick thoughts,
1) Robin Pemantle and some others have a paper that touches on a related problem, I think: http://www.math.upenn.edu/~pemantle/papers/Preprints/squares080204.pdf
2) One thing to help guide what is a realistic statement to prove (re: constructing a k-bit prime in time poly(k)) might be to look at how that procedure could then be used to factor n>k bit numbers via enumerating primes with <n bits. Or at the very least, it might help rule out statements that are potentially too strong.
Comment by — July 29, 2009 @ 1:44 am
14. Before this project is fully launched, I think it would be good to get some idea of what people think they might be able to contribute to it, so we can see whether there would be a core of people, as there was for DHJ, who were ready to follow it closely and attempt seriously to push the discussion towards a solution of the problem.
My own personal feelings about this particular problem are that I find it quite easy to imagine making preliminary remarks of the kind that have already been made, but quite difficult to imagine getting my teeth into an approach that feels as though it could lead to a solution. (This is in contrast with DHJ, where there seemed to be a lot of avenues to explore right from the beginning, so we didn’t have to wait for someone to have an unexpected idea to get things going properly.) But this feeling could change very rapidly if, as seems quite possible, people started coming up with relevant questions that didn’t seem impossible to answer. For example, as Emmanuel mentioned above, one way of thinking of the problem is as a search for a set of at most $(\log n)^C$ numbers near $n$, at least one of which has to be prime. The most obvious thought is to take a sufficiently long interval, but we don’t know how to prove that it contains a prime, even with the help of GRH. Are there any other proposals? All I can think of is conjectures that seem impossible to prove (such as that if you find a suitably rich supply of very smooth numbers and add 1 to them, then at least one of the resulting numbers is prime) or not strong enough (such as that if you apply the “W-trick” and pass to an arithmetic progression with common difference the product of the first few primes, you’ll get a higher density of primes inside it). But if somebody came up with an idea, or better still, a class of ideas, for constructing such a set, then there would be a more specific and possibly easier problem to think about. Somehow, it doesn’t feel to me as though I’m going to be the one who has the clever idea that really gets things started, so after the initial activity I think I’d find myself waiting for somebody else to have it. So I think I’d be an occasional participator, unless something happened that turned the problem into one that I felt I had lots of ways to attack.
There’s also the possibility that it would go off in a direction that would require a better knowledge of analytic number theory than I have. In that case, I would become a passive (but interested) observer.
Comment by — July 29, 2009 @ 10:22 pm
• On the analytic number theory side I’d be ready to participate as much as I can. I have no competence in complexity theory, although I will try to at least learn the terminology…
Comment by — July 30, 2009 @ 12:51 am
• I agree with Tim, it seems all the approaches boil down to the same thing: find a sufficiently large interval (but not too large) and then prove that this interval contains a prime.
Unfortunately there do not as of yet seem to be any really promising new ideas on how exactly to do this (other than solving some long standing open conjectures).
I would love to be wrong of course and hope the experts can see some serious “cracks” that would allow progress to be made.
Otherwise I see people discussing it for 2-3 weeks and then putting it back on the shelf.
Comment by rweba — July 30, 2009 @ 1:44 am
15. It occurred to me to wonder if the existence of an efficient algorithm to generate primes might imply (and thus be equivalent to) the existence of a pseudorandom generator? The intuition is simply that under sufficiently strong assumptions about the presumed randomness of the primes, it seems like a prime-generating algorithm might be useful as a source of randomness.
If this line of thought can be made to work, one might hope to prove that under some conjecture that implies randomness in the primes, the existence of pseudorandom generators is equivalent to being able to find primes deterministically.
Comment by — July 30, 2009 @ 12:03 pm
This would imply that Cramer’s conjecture on the distribution of primes implies the existence of a pseudorandom generator; is that reasonable?
Comment by — July 30, 2009 @ 12:38 pm
• The deterministically found primes might have special regularity properties (e.g., they might be congruent to 1 modulo a largish deterministically determined number, or satisfy more complicated algebraic conditions) that make them less random than the general primes. In fact, one may suspect that an algorithm to construct primes is likely to extract those with a particular property.
Comment by — July 30, 2009 @ 3:16 pm
Yes, I would think this is asking rather too much…
Comment by Ryan O'Donnell — July 31, 2009 @ 7:20 am
• I wonder about the intuition itself here, and not just whether it can be converted into a solid argument. Suppose the primes are in some sense very random, and we have an algorithm that gives us a prime in $[n,n+m]$ whenever $m$ isn’t too small. The randomness of the primes alone will not make this a pseudorandom generator, because there is a polynomial-time algorithm for recognising primality. So whatever it is that makes it a pseudorandom generator needs to be some additional ingredient. (Of course, one could try to transform the output somehow, but then it seems that what would make it a PRG would come from the transformation.)
Comment by — July 31, 2009 @ 8:33 am
16. Is there any sequence which we can describe by a formula or a simple algorithm on which we can expect that the primes will behave “nicer” (larger density, smaller gaps etc.) than on the sequence of all integers? E.g. sequences like 2^n-1, n!+1
It would be nice if some sort of negative result held (which would be a Dirichlet-type result): that any bounded-complexity attempt to produce a sequence of integers must yield a sequence with density of primes, gaps between primes, etc., upper bounded by the behavior on all the integers.
Comment by — July 30, 2009 @ 3:53 pm
It looks like I do not know how to phrase this question formally. I would like to distinguish, complexity-theoretically, between a sequence of the form a(n)=3n+4 or a(n)=n^2+1 or a(n)=2^n+1 or a(n)=n!+1 or perhaps even a(n) = p_1 p_2…p_n +1 or perhaps even “we choose every integer with probability 1/2 and a(n) is the nth integer”
FROM
a(n) is described by running primality algorithm on the integers one by one and taking the nth prime.
Comment by — July 30, 2009 @ 6:17 pm
So this could be something like “Between p and 2p we can explicitly identify an arithmetic progression which is guaranteed to contain at least one prime within the first log(p) terms.”
This appears very unlikely (I didn’t find any results of that type in a quick search) but perhaps some other clever sequence can be found.
Comment by rweba — July 31, 2009 @ 1:18 am
• In fact, typically if a sequence is chosen so that it satisfies the “obvious” congruence restrictions to contain infinitely many primes, it is expected to contain more than what might be thought. E.g. for arithmetic progressions modulo q, the density of primes is 1/phi(q) (for permitted classes) against 1/q for integers themselves in the same class.
Comment by — July 31, 2009 @ 2:03 am
One possible conjecture would be that if a sequence a(n) is described by a bounded-depth polynomial-size (in log n) arithmetic circuit (even allowing randomization), then the density of primes in the sequence goes to zero.
Comment by — July 31, 2009 @ 7:07 am
17. Perhaps a finite field version would be easier? e.g., given a fixed finite field F, to deterministically find an irreducible polynomial P(t) of degree k (or more) in F[t]? Perhaps this is trivial though (are there explicit formulae to generate irreducible polynomials over finite fields?)
Comment by — July 31, 2009 @ 6:04 am
…and what about polynomials with rational coefficients? In this case factoring is polynomial by a theorem of Lenstra, Lenstra and Lovasz. http://www.math.leidenuniv.nl/~hwl/PUBLICATIONS/1982f/art.pdf
Comment by — July 31, 2009 @ 7:00 am
Well, there is a paper by Adleman and Lenstra titled “Finding irreducible polynomials over finite fields” that does this. And there is a paper by Lenstra and Pomerance titled “Primality testing with Gaussian periods” that references it and is worth looking at (in fact, this paper is where I first learned of the Adleman-Lenstra result, as I heard Carl speak on it at a conference). Of course if you could find irreducible polys of degree exactly k, then you could deterministically produce quadratic non-residues modulo primes, an open problem, using traces.
Comment by Ernie Croot — July 31, 2009 @ 3:13 pm
This is in regard to the problem of finding irreducible polynomials. The best known deterministic algorithm for finding an irreducible polynomial of degree d over a given finite field of size p^r has running time poly(dpr). In particular, if the characteristic p is itself a part of the input then no deterministic polynomial-time algorithm is known. On the other hand, when the characteristic is fixed then the algorithm is efficient in the sense that it has running time poly(size of input + size of output). Over the rational numbers we have very explicit irreducible polynomials: x^m – 2 is irreducible for all m >= 1 (Eisenstein’s irreducibility criterion, http://en.wikipedia.org/wiki/Eisenstein's_criterion).
Comment by — August 1, 2009 @ 4:26 pm
• There’s also a paper of Victor Shoup (“New algorithms for finding irreducible polynomials in finite fields”) which takes only (power of log p)*(power of n) to deterministically find an irreducible polynomial of degree n in F_p[t], GIVEN an oracle that supplies certain non-residues in certain extensions of F_p. I haven’t looked at this in any detail so don’t know whether the ideas are of any use for the present problem.
Comment by — August 6, 2009 @ 4:10 am
18. This is continuing a line of thought initiated by Gil, which I can no longer find (does that say something about me or about the organization of the comments?). The idea was to assume that we have an oracle that will factorize numbers instantly and to use that to find a polynomial-time algorithm for producing primes. (Ah, found it now. I suddenly had the idea of searching for “pigs”, which led me back to Gil’s comment 3 and one of his further comments in the replies to 3.)
The idea is that all we now have to do is find a small set that is guaranteed to have a number with a prime factor of k digits. We know that a “typical” number n has about $\log\log n$ prime factors, so it can reasonably be expected to have prime factors of size about $n^{1/\log\log n}$. So to find a prime near $m$ it makes sense to factorize numbers of size about $n=m^{\log\log m}$ (this function approximately inverts the function $n^{1/\log\log n}$). Note that if $m=10^k$ then $n$ is about $10^{k\log k}$, so the number of digits has not gone up by much, which is good if we are looking for a polynomial-time algorithm.
How might one go about proving that there are very small intervals of numbers that are guaranteed to contain numbers with prime factors of size about $m$? Here is a rough approach. Start with some interval $J$ of numbers near $m$ that’s guaranteed to contain plenty of primes, using the prime number theorem, say. We now want to prove that the iterated product set $J^{\log\log m}$ is very dense — so much so that it has to intersect every interval of width at least $(\log n)^C$.
This is a problem with an additive-combinatorial flavour. We can make it even more so by taking logs. So now we have a set $K=\log J$ (by which I mean $\{\log x:x\in J\}$) and we would like to prove that the iterated sumset $(\log\log m)K$, or more accurately $\lfloor\log\log m\rfloor K$, is very dense. The sort of density we would like is such that it will definitely contain at least one element in the interval $[\log n,\log(n+(\log n)^C)]$, which is roughly $[\log n,\log n+n^{-1}(\log n)^C]$. This may look an alarmingly high density, but in $K$ itself we have lots of pairs of elements that differ by at most $\log m/m$, so it doesn’t seem an unreasonable hope.
The kind of thing that could go wrong is that every prime near $m$ turns out to lie in some non-trivial multidimensional geometric progression, which then stops the iterated sumset spreading out in the expected way. But that seems so incredibly unlikely that there might be some hope of proving that it cannot happen, especially if one is allowed to use plausible conjectures. I’m reminded of results by Szemerédi and Vu connected with conjectures of Erdos about iterating sumsets of integers on the assumption that they don’t all lie in a non-trivial arithmetic progression. I can’t remember offhand what the exact statements are, but they seem to be in similar territory.
I realize that I am slightly jumping the gun by posting a comment like this. My defence is twofold. First, I will be in an almost entirely internet-free zone from tomorrow for a week. Secondly, this comment is indirectly a comment about the suitability of this problem to be a Polymath project, since it shows that there are subproblems that one can potentially get one’s teeth into (though it may be that someone can quickly see why this particular angle is not a fruitful one).
Comment by — July 31, 2009 @ 9:02 am
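A sketch of the strategy above with the factoring oracle taken as given. Here sympy’s factorint merely stands in for the assumed unit-cost factorizer (an actual run would be slow, since fast factoring is precisely the assumption), and the window size is an unanalyzed placeholder.

    import math
    from sympy import factorint as factor_oracle  # stand-in for the oracle

    def prime_via_factoring(k, window=10**4):
        # To find a prime with at least k digits, factor consecutive integers
        # with about k*log(k) digits and look for a large prime factor.
        kk = max(k + 1, int(k * math.log(k)))  # digits of the numbers we factor
        n = 10 ** (kk - 1)
        for m in range(n, n + window):
            for p in factor_oracle(m):  # keys are the prime factors of m
                if p >= 10 ** (k - 1):  # a prime factor with >= k digits
                    return p
        return None

Note this targets the relaxed goal of a prime with at least k digits; proving that some m in the window must have such a factor is exactly the non-smoothness question discussed below.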
• In connection with the above, does anyone know the work of Goldston, Pintz and Yildirim (without the dots on the i’s) well enough to know whether it says more than just that the difference between successive primes is often small? For example, must the difference often be very close to $\pi^{-1}\log p$? This would have some relevance to the additive-combinatorial structure of the set $K$ above.
Comment by — July 31, 2009 @ 9:18 am
• I would have to check the precise details, but from my memory, the Goldston-Pintz-Yildirim results do not give any particular information on the location of the primes, except for giving intervals in which one is guaranteed to find two of them.
Comment by — July 31, 2009 @ 3:32 pm
• This sounds like smooth numbers in short intervals, assuming I haven’t missed the precise dependence of parameters. There is an old result of Balog which says that intervals [x,x+x^(1/2 + epsilon)] contain numbers that are x^(epsilon)-smooth (i.e. all prime factors smaller than about x^epsilon), or thereabouts (so the interval width is quite large — of size x^(1/2)). There are refinements due to Granville, Friedlander, Pomerance, Lagarias, and many others (including myself). It’s a very hard problem. You might check out Andrew Granville’s webpage, as he has a few survey articles on smooth numbers (at least two that I know of).
Comment by Ernie Croot — July 31, 2009 @ 2:04 pm
Presumably there are results of the type “for almost all x, the interval [x,x+h(x)[ contains some type of smooth/friable number” (the smoothness/friability depending on h(x)), with functions h(x) much smaller than x^{1/2}? (I don’t know a reference myself, but this type of variant of gaps of various types is quite standard).
Comment by — July 31, 2009 @ 3:30 pm
• Yes, I think Friedlander has some results along those lines.
Comment by Ernie Croot — July 31, 2009 @ 3:35 pm
• In fact, there are two papers that are relevant: Friedlander and Lagarias “On the distribution in short intervals having no large prime factors”, and Hildebrand and Tenenbaum’s paper “Integers without large prime factors”. The intervals widths are still much larger than a power of log x, though.
Comment by Ernie Croot — July 31, 2009 @ 3:44 pm
• You may well be right that it’s a very hard problem, but if one weakened the aim to finding a prime with at least k digits (rather than exactly) then one wouldn’t care about large prime factors. Indeed, the aim would be more like to find a number that isn’t smooth. But also, we are looking for normal behaviour rather than atypical behaviour: the aim would be to show that in every short interval there is a number that has a prime factor of about the expected size, and for the weaker problem it would be enough to show that there is a number that has not too many prime factors. This is asking for something a lot weaker than $x^\epsilon$ smoothness.
Comment by — July 31, 2009 @ 4:44 pm
• That sounds a lot more hopeful than the smooth numbers in short intervals approach. Though, since you still need to work with quite short intervals, of width (log x)^c, probably the usual “Dirichlet polynomial” methods of Balog and others won’t work.
However, some of the constructions of Lagarias and Friedlander might work, but I doubt it.
I would probably try to find primes with at least k digits by another method.
Comment by Ernie Croot — July 31, 2009 @ 6:07 pm
• Regarding Tim’s nice idea here, at first I had the following thought: Assuming factoring is in P is a very strong assumption (especially considering that the assumption’s negation is the basis of a lot of modern cryptography). Still, we actually *have* factoring algorithms that work on n-digit numbers in time 2^{n^{1/3}} or so. And finding n-digit primes deterministically in time 2^{n^{1/3}} would be pretty great (far better than the 2^{n/2} one gets out of ERH).
But then I realized that the fastish factoring algorithms are almost surely randomized :( And indeed, it seems from a 1-minute literature search that the fastest known *deterministic* factoring algorithms take time like 2^{cn} for some fraction c, even heuristically.
Comment by Ryan O'Donnell — July 31, 2009 @ 5:57 pm
As far as I can see, the Quadratic Sieve isn’t randomized. Heuristics predict that it runs in $e^{c \sqrt{n \log n}}$ steps (where $n$ is the number of digits).
However, it is true that those heuristics are probably more difficult to make rigorous than the ones concerning primes in short intervals.
Comment by David Speyer — August 1, 2009 @ 1:54 pm
Maybe, for a start, we can ask to find (by a polynomial time deterministic algorithm) a prime number with at least k digits. Now continuing Tim’s line of thought (based on taking factoring for free), maybe it is true (provable??) that among any 10 consecutive integers with k^3 digits (say) there is always one with a prime factor having at least k digits?
Comment by — July 31, 2009 @ 7:14 pm
From probabilistic reasoning alone that should probably be false, but if you replace 10 with, say, k^c, then probably what you say is true. My reasoning as to why what you say is false goes as follows: let x = 10^(k^3). We want to show that there is an integer of size about x such that x,x+1,…,x+9 are all (10^k)-smooth. The probability that a random n < x is (10^k)-smooth is something like 10^{-k^2}; so, we would certainly expect there to be 10 in a row that are (10^k)-smooth.
Of course you mean to allow 10 to be a power of k.
Comment by Ernie Croot — July 31, 2009 @ 7:26 pm
Dear Ernie, I see (except that the probability you mention becomes smaller when k is larger, so probably you meant something else). I suppose we go back to the question of showing that among polylog(k) consecutive integers with k^10 digits there is one which is not 10^k-smooth. This looks like a Cramer-type conjecture, but as Tim said maybe it is easier.
Although it is getting further away from the original problem, I am curious what could be a true statement of the form “among T consecutive integers larger than U, one must not be B-smooth”, and how far such a statement is from the boundary of what can be proved.
Comment by — July 31, 2009 @ 7:58 pm
Dear Gil, first regarding your parenthetical comment, I was trying to work out in my head exactly what it is in terms of k. Let’s see if I got it right… the theorem I was thinking of says that the number of $n < x$ that are $\exp(c(\log x \log\log x)^{1/2})$-smooth is about $x\exp(-\frac{1}{2c}(\log x \log\log x)^{1/2})$, and this holds, I believe, even when $c$ is allowed to depend (weakly) on $x$. So, if you take $c$ to be something like $(\log x)^{-1/6}$ times some loglog’s, then you are looking at $\exp((\log x)^{1/3})$-smooth numbers — in other words, the smoothness bound has about the cube root as many digits as $x$. And the number of these $\exp((\log x)^{1/3})$-smooth numbers up to $x$ should be something like $x\exp(-\frac{1}{2}(\log x)^{1/6}(\log x \log\log x)^{1/2})$, which is something like $x\exp(-(\log x)^{2/3})$. So, the probability of picking one of these smooths is something like $\exp(-(\log x)^{2/3})$. Now, if $x = 10^{k^3}$, then this probability is something like $10^{-k^2}$, with some $\log k$ factors thrown into the exponent. So, it seems what I wrote is correct, modulo some lower order terms.
I think that Tim's question is a lot, lot easier than Cramer's conjecture, but both of them seem quite difficult.
As to your last question, probably it is true that if B is a power of log(U), then T is bounded. This is because the number of $(\log x)^c$-smooths up to $x$ grows like a power of $x$; for example, the # of $(\log x)^2$-smooths up to $x$ is something like $x^{1/2}$, so you wouldn’t expect there to be more than 3 of them in a row up to $x$. Proving such a result is probably quite hard, though, again, nowhere *near* as hard as Cramer.
Comment by Ernie Croot — July 31, 2009 @ 8:15 pm
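For reference, the estimate being used in these computations is the standard smooth-number count (Canfield-Erdos-Pomerance): writing $\Psi(x,y)$ for the number of $y$-smooth integers up to $x$ and $u = \log x / \log y$, one has $\Psi(x,y) = x \cdot u^{-u(1+o(1))}$ as $u \to \infty$, uniformly for $y \geq (\log x)^{1+\epsilon}$; in particular $\Psi(x, (\log x)^c) = x^{1 - 1/c + o(1)}$ for fixed $c > 1$.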
• Well, we can be less ambitious than getting a $\geq k$-digit prime in polynomial time, and be happy with getting a $\geq \omega(k) \log k$-digit prime in polynomial time, where $\omega(k) \to \infty$ as $k \to \infty$; this is equivalent to finding a $\geq k$-digit prime in subexponential time $\exp(o(k))$. Note that brute force search (as in Klas’s comment below) gives all primes of $O(\log k)$ digits in size, so we only have to beat this by a little bit.
The task is now to exclude the scenario in which every integer in the interval $[10^k, 10^k + k]$ (say) is $k^{O(1)}$-smooth. It sounds like a sieve would be a promising strategy here.
One can use the “W-trick” to give a small boost here, actually (again related to Klas’s comment). Let W be the product of all the primes less than $k$, then W has size about $\exp(k)$. Instead of working with an interval, we can work with the arithmetic progression $P = \{ Wn+1: 1 \leq n \leq k \}$. The point is that these numbers are already coprime to all primes less than $k$.
So the question is now: can one rule out the scenario in which all elements of P are $k^{O(1)}$-smooth?
Comment by — August 1, 2009 @ 2:23 am
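A small illustration of the W-trick (a sketch only; sympy supplies the small primes and the primality test). The candidates $Wn+1$ are coprime to every prime below $k$, so the “trivial” reason for compositeness is removed; whether one of the first $k$ candidates must be non-smooth, let alone prime, is exactly the open question above.

    from sympy import isprime, primerange

    def w_trick_candidates(k):
        W = 1
        for p in primerange(2, k):
            W *= p  # W ~ exp(k) by the prime number theorem
        return [W * n + 1 for n in range(1, k + 1)]

    # e.g. [c for c in w_trick_candidates(40) if isprime(c)]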
This also seems like an immensely hard problem for sieves. One reason is that sieves are typically insensitive to the starting point of the sieve interval, and in this case we know that [1,k] certainly consists only of y := k^{O(1)}-smooth numbers.
Furthermore, it is hard to imagine such techniques as linear forms in logarithms working, as here one has too many logs: if n, n+1, …, n+k are all y-smooth, then taking logs one has relations like
log(n+1) – log(n) = O(1/n),
and then expressing log(n+1) and log(n) in terms of logs of their respective small prime factors, one gets a linear form. In fact, one gets a system of linear forms (because there are k+1 smooth numbers), but unfortunately the linear forms inequalities are too weak to tell you much, as you use ~ y/log(y) logs. Still, there may be some way to amplify what the usual LLL inequalities give, because you have so many different forms to work with.
There may be a way to use polynomials somehow. For example, if n is y-smooth, and is of size about 10^k, then you can treat n as a special value of a smooth polynomial: for example, say n = p_1 … p_t, p_i T – O(sqrt(T)) or so in the case where the |a_i| <= T.
An F_p analogue of Mason’s theorem might also be helpful here, since if we mod these polynomials out by a small prime they typically are no longer square-free — they will be divisible by lots of linear polynomials to large powers.
Even if we can’t get sufficient control on the size of the coefficients, one might be able to use the special form of these polynomials to conclude that there aren’t too many integers n < 10^k where n,n+1,…,n+k are all y-smooth; and furthermore, it might be possible to say just enough about the structure of these exceptional n’s to know how to avoid them (i.e. pick an n such that n(n+1)…(n+k) is not y-smooth).
Also, we don’t need to restrict ourselves to polynomials of just one variable x. Often one gets much better results when one works with many variables, as in the proof of Roth’s theorem on diophantine approximation.
Comment by Anonymous — August 3, 2009 @ 4:31 pm
Assuming that factoring is free, here is another possibility which seems, from some old computational work, to produce large primes.
1. Find the first k primes p_1,…,p_k. This can easily be done by using a sieve.
2. Let a_k = 1 + p_1 p_2 … p_k. The product can be computed cheaply using the Fourier transform. This is the “product plus one” from Euclid’s proof of the infinitude of primes, which is usually not itself a prime.
3. Find the largest prime factor of a_k. This number oscillates with k but on average seems to have a number of bits which is, at least, linear in k.
I don’t know if one can prove that this actually produces a sequence of large primes, but trivially it does produce a sequence of numbers a_k which are not smooth.
Comment by — July 31, 2009 @ 9:32 pm
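The procedure above in code form, with sympy’s factorint standing in for the assumed free factoring oracle (so only small k are practical to run). No claim is made beyond the comment above: the largest prime factor of the Euclid number a_k is observed, not proven, to have bit length growing linearly in k on average.

    from sympy import prime, factorint

    def euclid_prime(k):
        a_k = 1
        for i in range(1, k + 1):
            a_k *= prime(i)  # step 1: product of the first k primes
        a_k += 1             # step 2: Euclid's "product plus one"
        return max(factorint(a_k))  # step 3: its largest prime factor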
19. Dear Terry,
I have been working on this problem for some time now. I found that there is an association between Mersenne numbers and prime numbers: each Mersenne is linked to a particular prime (or primes), in such a way that for any given Mersenne the associated prime(s) can be determined. Cramer’s conjecture of N and 2N as the maximum distance between two adjacent primes reflects the relationship between two consecutive Mersennes, which is M and 2M + 1. That is, in one instance two consecutive Mersenne primes have the Cramer-type condition of a distance of 2M + 1 (this in fact may turn out to be the only condition).
I am currently working on the algorithm using a combinatorial binary model.
Comment by jaime montuerto — July 31, 2009 @ 4:34 pm
• More on the Mersenne connection: given a range x, how many primes are there in that region? According to Gauss it would be about log2 x. If we rephrase the question to how many Mersennes are there in that region, it would be really about log2 x. Unfortunately the primes associated with those Mersennes are not in that region but far below the region of x, unless it is a Mersenne prime. But if we get all the primes associated with all the Mersennes below x, we can use that to sieve out all the multiples in the region x, for each Mersenne prime cycles according to its Mersenne.
So for all the primes in the region, their Mersenne associates are far up the scale, except for the Mersenne primes in the region. Although the ratio of primes to their Mersennes behaves randomly, it averages out evenly; that reflects the distribution of log2 x, and the number of primes is the number of Mersennes. It is even more tempting to say that the distribution of primes is the distribution of Mersennes.
The subtle connection of randomness is not so much the binary width of the prime number in relation to its Mersenne, nor the number of bits in the prime, but the subtle distribution of bits within the prime number itself. A number whose bits are highly equidistant (say palindromic or cyclotomic in bit distribution) is likely to be composite, while the more random or more unique the distances between them, the more likely it is to be a prime number. My investigation is in this area of grammar-like properties.
That there is a prime between consecutive Mersennes is akin to the conjecture that a prime exists between N and 2N.
Comment by jaime montuerto — July 31, 2009 @ 9:02 pm
20. Not a reply to any specific comment, but another in the rapidly developing tradition of chasing round strange implications amongst all these conjectures. I’m not quite sure of my facts here, so this is an advance warning of possible stupid assertions to follow.
The idea is this. If factorizing is hard, then discrete log is hard. If discrete log is hard, then there is a PRG. If there’s a PRG, then we can use a pseudorandom search for primes with $k$ digits instead of a genuinely random search, and it is guaranteed to work. But this means that there is some set of size about $k^C$ of numbers with $k$ digits each, at least one of which is guaranteed to be prime. But if one thinks what the output of a PRG constructed using the hardness of discrete log is actually like, it is quite concrete (I think). So we have found a very small set of integers, at least one of which is guaranteed to be prime.
As Terry pointed out earlier, there are problems in using the hardness of discrete log because first you need some large primes. Otherwise one might have a programme for finding an indirect proof of existence of a deterministic algorithm: either factorizing is hard, in which case build a PRG, or it is easy, in which case try to find a non-smooth number somewhat bigger than $n$ and factorize it.
Comment by — July 31, 2009 @ 8:42 pm
• I believe the PRG you get from the hardness of discrete log (or RSA for that matter) does not have the right properties for this to work. Namely, if you only use a logarithmic length seed (in k) you only appear random to polylogarithmic (in k) time algorithms. In particular, using this generator to make k bit integers might always give you composites. I do not think there is any easy way around this difficulty, as people have grappled with it before when trying to derandomize BPP.
You could get a PRG with the same properties directly by using multiplication as a one way function, if factoring is actually hard on average, without having to deal with generating primes. I don't have a reference for this, but I'm sure you could find one in the cryptography literature. As I said, I don't think this will get you to logarithmically many random bits, but it may be able to get you to o(k) random bits.
Comment by Paul Christiano — August 1, 2009 @ 12:16 am
• Ah. Well in that case I have to make the rather less interesting observation that proving the existence of any PRG has startling number-theoretic consequences (in the form of very small sets that are guaranteed to contain primes). But (i) there are good Razborov-Rudichy reasons for thinking that proving the existence of a PRG is very hard, and (ii) if we’re afraid of those number-theoretic consequences then we should be afraid of this whole project …
Comment by — August 1, 2009 @ 7:30 am
21. A less ambitious goal might be to show an analog of Cramer’s conjecture for square-free numbers: say that there is always a squarefree number in the interval [n, n+log^2(n)].
This is less ambitious because the squarefree numbers have a constant density in the integers (as opposed to the primes whose density is inverse logarithmic). Is anything along these lines known? Also the analogous problem for finite fields – finding squarefree polynomials – is easy.
Comment by — August 1, 2009 @ 4:46 pm
• There is a result due to Michael Filaseta and Ognian Trifonov, but the interval width is only a power of n, not a power of log n.
Comment by Ernie Croot — August 1, 2009 @ 6:19 pm
• And here also, for almost all intervals, one can get a better result; in fact Bellman and Shapiro ("The distribution of squarefree integers in small intervals", Duke Math. J. 21 (1954), 629–637) show (quite easily: no L-functions or real sieve are involved) that for any fixed function phi(n) which grows to infinity, it is true that for almost all n, the interval [n,n+phi(n)] contains a squarefree number.
I would say that there might be some chance indeed of finding an algorithm to construct squarefree numbers of arbitrary size in polynomial time, even if primes are not accessible.
Comment by — August 2, 2009 @ 12:14 am
• Let me just point out that it is easy to construct square-free numbers of arbitrary size, as you just need to multiply together lots of distinct very small primes.
Comment by Ernie Croot — August 2, 2009 @ 12:49 am
• Of course…
Comment by — August 2, 2009 @ 1:57 am
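Since the thread leaves it implicit, here is what "multiply together lots of very small primes" looks like; the function name and the use of `sympy.nextprime` are incidental choices.

```python
# A product of distinct primes is squarefree, so multiplying consecutive
# small primes gives a squarefree number with at least n bits, in time
# polynomial in n.
from sympy import nextprime

def squarefree_with_at_least_n_bits(n):
    x, p = 1, 1
    while x.bit_length() < n:
        p = int(nextprime(p))   # next small prime; all factors distinct
        x *= p
    return x

print(squarefree_with_at_least_n_bits(256).bit_length())
```

Hitting an exact bit-length, or an exact short interval as in the Bellman-Shapiro statement, is the part that needs real work; this construction only controls the size crudely.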
22. A pure probabilistic approach to identifying a prime number is tuned to using particular tools to show that success is likely after not-too-much time. However, testing one number after another chosen at random does throw away information – it will only be useful information if it can be used to make the next choice better-than-average: could this be possible?
Or alternatively, is there a slightly more expensive algorithm for checking primality which yields such useful information if a number is found to be composite?
Comment by Mark Bennet — August 1, 2009 @ 9:15 pm
• But wouldn’t such an approach apply to most other situations as well? In particular, how would such an approach get the result we want without proving P = RP?
Comment by Jacob Steinhardt — August 2, 2009 @ 4:59 am
• Well, of course, if there were an effective way of using “information” otherwise discarded, it might prove something – that was the point of my post. However, it may be that any gains would be so marginal as to make no difference to the complexity class, or indeed that there is no real information to be extracted.
The various algorithms and proofs have been optimised for other problems – in various places (smooth v non-smooth, for example) it seems clear that the current formulation is asking a different kind of question from that which has previously been addressed. I just wanted to point out that here, and perhaps in other places, there is a potential leakage of information – and where this is the case, there may be opportunities for optimisation or improvement of strategies to deal with slightly different problems.
It may come to nothing, but there are some points to be explored. And there may be some useful progress or insight, even if there is no full proof.
Comment by Mark Bennet — August 2, 2009 @ 9:16 pm
23. I was thinking about the question of whether the finding primes problem can be solved purely by “soft” (or more precisely, relativisable) complexity theory techniques, using no number-theoretic information about the primes, other than that they are dense and lie in P (i.e. testing primality can be done in polynomial time). I think I almost have an argument that this is not possible (at least without additional complexity hypotheses), i.e. given the right oracle, there exist a set of “pseudoprimes” which are dense and in P (and thus can be easily located by probabilistic algorithms), but for which one cannot write down a deterministic algorithm that locates large pseudoprimes in polynomial time. But it needs a slight fix to work properly; perhaps some computational complexity expert will see how to fix it?
Here’s how it (almost) works. Let $S_k$ be the set of k-digit integers which have a Kolmogorov complexity of at least $\sqrt k$, i.e. they require a Turing machine program of at least $\sqrt k$ bits in length to construct. (Nothing special about $\sqrt k$ here; anything between $\log k$ and k, basically, would suffice here.) Let S be the union of all the $S_k$. An easy counting argument shows that S is extremely dense.
Now let A be an oracle that reports membership in S in unit time. Then we now have a very quick way to probabilistically exhibit k-digit elements of S; just select k-digit numbers at random and test them using A until we get a hit.
On the other hand, I want to say that it is not possible to find an deterministic algorithm that can locate a k-digit element of S in time polynomial in k, even with the use of oracle A. Heuristically, the reason for this is that any such algorithm will, in the course of its execution, only encounter integers with a Kolmogorov complexity of $O(\log k)$ (since that integer can be described by running a O(1)-length Turing machine for a time polynomial in k (and this time can be stored as a $O(\log k)$-bit integer)). In particular, one expects that the oracle A will return a negative answer to any query that that algorithm asks for, and so the oracle is effectively useless; as such, it should not be able to reach any high-complexity number, such as an element of S.
As stated, the argument doesn’t quite work, because integers of $O(\log k)$ complexity can respond positively to A if they are very short (specifically, if they have $O(\log^2 k)$ digits). Because of this, the oracle A is not completely useless, and can in principle provide new information to the algorithm that may let it reach numbers that previously had too high complexity to be reached. But it seems to me that there should be some way of fixing the above construction to avoid this problem (perhaps by constructing the $S_k$ in some inductive manner?)
I also don’t know what the status of P=BPP is in this oracle model. After fixing the above oracle, an obvious next step would be to try to modify it further so that P=BPP is true (or false), to rule out the possibility of using P=BPP in a soft manner (as was suggested previously) to solve the problem conditionally.
It seems that the above arguments also suggest that soft methods only allow one to find $O(\log k)$-digit primes in polynomial time, or equivalently to find k-digit primes in exponential time. Thus finding superlogarithmic-digit primes in polynomial time, or k-digit primes in subexponential time, would be a significant advance, and must necessarily use something nontrivial about primes beyond their density and testability.
Perhaps it could also be possible to show that reducing the amount of random bits required to find a prime cannot be made o(k) by similarly soft methods.
Comment by — August 3, 2009 @ 11:43 am
• The question seems to be whether the ability to generate short (length $\log k$) somewhat-random strings gives you the ability to easily generate long (length k) somewhat-random strings, where a string is somewhat random if it has Kolmogorov complexity at least the square root of its length. Playing Devil’s Advocate, one might guess that this is possible, simply by running through all strings of length $\log k$, and consecutively outputting those which are somewhat random. Not that I see how to prove this works.
How might one imagine modifying $S$ to avoid this kind of difficulty?
Comment by — August 3, 2009 @ 6:30 pm
• With access to an oracle of Kolmogorov random strings, one can deterministically construct an n-bit Kolmogorov random string in time polynomial in n.
See for example Lemma 42 in this paper
http://ftp.cs.rutgers.edu/pub/allender/KT.pdf
The idea is quite simple: one keeps adding O(log n) bit suffixes y to the (inductively) constructed random string z. One can argue that there is always a suffix y such that zy is random, as by counting many y are random relative to z and by symmetry of information
C(zy) >= C(z)+C(y|z) – 2log|zy|.
Comment by Troy Lee — August 4, 2009 @ 4:28 pm
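A schematic rendering of this suffix-extension argument may be useful. The oracle for Kolmogorov-random strings is of course uncomputable, so `is_random` below is a hypothetical stub; the point is only the control flow, with the counting/symmetry-of-information lemma guaranteeing that the inner loop always succeeds.

```python
# Schematic only: is_random stands in for the (uncomputable) oracle for
# membership in the set of Kolmogorov-random strings.
from itertools import product

def is_random(s: str) -> bool:
    raise NotImplementedError("hypothetical oracle, not implementable")

def build_random_string(n: int, c: int = 4) -> str:
    z = ""
    suffix_len = max(1, c * n.bit_length())   # O(log n)-bit suffixes
    while len(z) < n:
        for bits in product("01", repeat=suffix_len):
            y = "".join(bits)
            if is_random(z + y):   # some y must work, by counting and
                z += y             # symmetry of information
                break
    return z[:n]
```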
• I am curious to what extent we have a genuine derandomization question or the problem is mainly a problem in number theory which is stated in terms of computational complexity (which is by itself welcome). To try to clarify what I mean here are a few analogs:
a) The possibility to find a deterministic polynomial algorithm for primality is a fairly clear prediction of computational complexity insights in the direction P=BPP.
On the other hand here are some problems where computational complexity is just a way to formulate a problem in NT:
b) Given an oracle which gives in unit time the nth digit of pi find a polynomial algorithm in k to find a digit ’9′ in a place between 10^k and 10^{k+1}.
c) Given an oracle for factoring in unit time, find a polynomial time algorithm in k to find a square-free number with k digits and an odd number of prime factors.
I think our problem is more similar to b) (and even more to c)) than to a). One difference is that primality is polynomial in the number of digits, but I am not sure if this property or some particular aspects of our problem are enough to make a difference in this respect.
Comment by — August 4, 2009 @ 5:00 pm
• I think that the whole point of this oracle construction was showing that any algorithm for generating primes must use some special properties of the primes and not just their density. Question b would be similar and would need to use some special properties of the language “the n’th digit of pi is 9″ in time that is polynomial in log(n). And so would question c (although in this case, I don’t see why we don’t just take the product of k/logk primes, each about logk bits long.)
Comment by — August 4, 2009 @ 7:58 pm
• Regarding c) right, we can reformulate the question and ask to find two consecutive numbers with this property…
Comment by — August 8, 2009 @ 10:01 am
• I think that this proof can be fixed by choosing $S_k$ to be the set of strings that have high Kolmogorov complexity even relative to the oracle containing $S_1 ... S_{k-1}$ (i.e. that a TM with access to this oracle still needs a large description). This way any oracle queries of length strictly less than n do not help in finding strings of length n.
Comment by — August 4, 2009 @ 7:49 pm
• I think this does indeed fix the argument, thanks! I wrote it up at
http://michaelnielsen.org/polymath1/index.php?title=Oracle_counterexample_to_finding_pseudoprimes
So, to solve the problem, we either need to use something special about primes, or to use an argument that is not relativisable.
I wonder if one can modify the oracle to obey or not obey P=BPP; this would show that P=BPP is not directly relevant to the problem.
Comment by — August 4, 2009 @ 10:00 pm
24. Five years ago, Ralf Stephan from the OEIS put together 117 conjectures in this document: http://arxiv.org/PS_cache/math/pdf/0409/0409509v4.pdf
Most of them were elementary and were proved by a group effort, but some weren’t.
His Conjecture 28 was that, for any n>4, the n-bit binary numbers with four non-zero bits include a prime number.
Comment by Michael Peake — August 3, 2009 @ 1:20 pm
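The conjecture is easy to probe numerically; a sketch, using `sympy.isprime` as the primality test. Restricting to odd candidates with the leading bit set loses nothing, since a prime with more than one bit must be odd.

```python
# Empirical check of Conjecture 28: does every bit-length n > 4 admit a
# prime with exactly four nonzero bits?
from itertools import combinations
from sympy import isprime

def four_bit_prime_exists(n):
    top, low = 1 << (n - 1), 1      # force n bits and oddness
    return any(isprime(top | (1 << i) | (1 << j) | low)
               for i, j in combinations(range(1, n - 1), 2))

# prints True iff the conjecture holds for 5 <= n < 64
print(all(four_bit_prime_exists(n) for n in range(5, 64)))
```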
• This seems like a plausible conjecture; setting the first and final bit to equal 1 (to make the number odd) one has about k^2 k-bit numbers of this type, each of which one expects to have about a 1/k chance of being prime, so the failure rate should be exponentially small in k, and so by Borel-Cantelli one expects only a finite number of k for which the conjecture fails. But this is only a heuristic argument; there could be a very unlikely but (to our current knowledge) mathematically possible “conspiracy” in which all of these numbers defy the odds and end up being composite.
This is close to the existing strategy of trying to show that any interval of size (say) $k^{100}$ of k-digit integers contains a prime (cf. Cramer's conjecture). I'm not sure whether using sets such as sparse sums of powers of two is better than intervals, though. It's true that there are some specialised primality tests for such numbers (e.g. the special number field sieve), but I don't see how to use these (and they would also be available for, say, a shortish interval of numbers near $2^k$).
Comment by — August 3, 2009 @ 3:08 pm
• Dear Michael,
Regarding conjecture 28, where the number has 4 binary bits: the smallest bit is 1. If two of the other bits are in ODD positions and one is in an EVEN position, then together with the 1 the number would be a multiple of 3.
And given an odd number of k-bit binary width, regardless of the number of set bits, the probability of it being composite is:
(1 + 2 + … + k + 1) / LCM(1, 2, …, k).
These are the combined probabilities of the smaller primes that might divide that number; they work synergistically together. For instance, the probability of any prime as a factor is 1/phi(p), but this is when a particular prime is analyzed independently, not in combination with other primes. Together, in combination with the other small primes, the probability for a particular prime is 1/J(p). I have to describe this new function: it is simply the order of the Mersenne which this prime divides. The relationship of the prime to this Mersenne is simply J(p) = J(M_p) = order of M_p, which is about the log of M_p. This is actually a specialized form of the Euler function phi and the Carmichael function, and it is simply Fermat's Little Theorem in binary form.
So the numerator of the above probability is the sum of all these Mersenne logs. Used this way, we don't need to know which prime(s) divide the Mersenne concerned, or the probable prime(s) that might divide the number; their combined (synergic) probabilities are simply expressed by their Mersenne associates. Let me elaborate further, with the probabilities:
1/1 by 1, 1/2 by 3, 1/3 by 7, 1/4 by 5, …, 2/x by the M_x/2.
If you look at the probability of 7, it is 1/3, when it should be 1/6, i.e. 1/phi(7) with phi(7) = 6. But that is because the probability 1/2 by 3 has affected all the probabilities.
This is the same reason why the Carmichael function doesn't use multiplication when 'combining' totient functions of products, e.g. phi(a * b) = phi(a) * phi(b); for the Carmichael function, let's call it Y,
Y(a * b) = LCM(a,b). This J function also uses the LCM, i.e. J(a * b) = LCM(a,b), BUT with the proviso that it is only used together with all the other primes concerned. The major difference of the J function from both the phi and Y functions is that the J function can have ODD values; e.g. the order of Mersenne 11 is 11, and the associated primes are 23 and 89.
Comment by jaime montuerto — August 5, 2009 @ 8:07 am
25. I’ve reorganised the wiki page to specify some specific conjectures that are emerging from this project:
* Strong conjecture: one can find a $\geq k$-bit prime deterministically in polynomial time.
* Semi-weak conjecture: One can find a $\geq k$-bit prime probabilistically using $o(k)$ random bits in polynomial time.
* Weak conjecture: One can find a $\geq k$-bit prime deterministically in $\exp(o(k))$ time, or equivalently one can find a $\geq \omega(\log k)$-bit prime deterministically in polynomial time.
One can also pose any of these conjectures in the presence of a factoring oracle (or under the closely related assumption that factoring is in P), thus leading to six conjectures, of which “weak conjecture with factoring” is the easiest.
I’ve also added a section on “how could the conjecture(s) fail?”. Basically, several weird things have to happen in order for any of the conjectures to fail; in particular all large constructible numbers have to be composite (or smooth, if factoring is allowed), and in fact sit inside large intervals of composite or smooth numbers. Also, no pseudorandom number generation is allowed. One also has the peculiar situation in which testing any given number for primality is in P, but testing an interval for the existence of primes cannot be in P (though it lies in NP).
I’ve also realised that most of the facts in number theory we know about primes are only average-case information rather than worst-case information, even with such strong conjectures as GRH, GUE, prime tuples, etc: they say a fair bit about the distribution of primes near generic integers, but are not so effective for the “worst” integers, e.g. the ones inside the largest prime gaps. The whole problem here is that deterministic polynomial time algorithms can only reach the “constructible” k-digit integers, which (by a trivial counting argument) are an extremely sparse subset of the set of all k-digit integers (being only polynomial size in k rather than exponential). As such, there could be a massive “conspiracy” to place all the constructible numbers inside the rare set of “worst-case” integers in the statements of average-case number theory results, thus rendering such results useless. This conspiracy is incredibly unlikely, but we are now in the familiar position of not being able to disprove such a conspiracy. But if it were possible to somehow “amplify” the failure of average-case results for the constructible numbers to imply failure of the average-case results for a much larger set of numbers, then one might be able to get somewhere. For instance, pseudorandom number generators would accomplish this, but of course we would like to use less strong conjectures for this purpose. One reason I like Tim Gowers’ additive combinatorial approach (#18) is because it seems to be trying to achieve precisely this sort of amplification. (But the numerology of the approach will certainly have to be checked to ensure that it looks viable.)
Comment by — August 3, 2009 @ 3:01 pm
26. I have a heuristic argument that “average case” results about the primes, such as that coming from GRH, GUE, the prime tuples conjecture, and the distribution of prime gaps, are effectively useless for this problem. The argument goes as follows: suppose one replaces the set of k-digit primes with the set of k-digit “generic” primes, defined as those k-digit primes with Kolmogorov complexity at least $\sqrt{k}$ (say). Counting arguments show that almost all primes are generic, and thus any average-case statement that is true for primes, is also true for generic primes (unless the size of the exceptional set in the average case statement is extremely small). On the other hand, it is not possible to deterministically locate a k-digit generic prime in polynomial time, because any number obtained in such a fashion has a Kolmogorov complexity of O(log k).
So one is going to have to rely on worst-case number-theory results rather than average-case ones. But now things are much weaker. For instance, the average prime gap between k-digit primes is of size O(k), but the best known upper bound on the largest prime gap is $O(10^{k/2})$ even assuming GRH. The one number-theoretic conjecture I know of (besides Cramer’s conjecture) which has significant worst-case content is the ABC conjecture, but unfortunately it seems the numbers we are dealing with are not quite smooth enough for this conjecture to be relevant…
Comment by — August 4, 2009 @ 1:13 am
• In a similar light it is worth observing that most analytic results about existence of primes give more than one prime wherever they are looked for, even if they are often stated as simply existence results. E.g., universal gaps between primes are obtained by showing that pi(x+y)-pi(x) grows as a function of y, for suitable y=y(x), and even the theorem of Linnik on the smallest prime in an arithmetic progression actually finds an explicit lower bound for the number of primes in such a progression, and selects a suitable limit beyond which the main term dominates the error term. Even the Goldston-Pintz-Yildirim result has this feature. So, as in Terry’s comment, any such result is likely to hold also for “generic” primes in the sense he describes.
Comment by — August 4, 2009 @ 9:10 pm
• Hmm, that’s a little discouraging. On the other hand, things seem to get better with a factoring oracle. One cannot simply now delete the non-generic primes, because one now has to supply a (non-existent) factoring of such non-generic primes into smaller integers, so the previous objections don’t obviously seem to apply.
On a slightly unrelated note, here is another proposal to find k-digit primes in exp(o(k)) time. There is a famous result of Friedlander and Iwaniec that there are infinitely many primes of the form $a^2+b^4$, and in fact for k large enough there exists a k-digit prime of this form. The number of k-digit numbers of this form is about $O((10^k)^{3/4})$, so searching through this set beats the totally trivial bound of $O(10^k)$. Heath-Brown showed a similar result for numbers of the form $a^3+2b^3$, which is a little thinner at $O((10^k)^{2/3})$. So far, this doesn’t beat the algorithm of $O((10^k)^{0.535})$ coming from Harman’s bound on prime gaps (see comment #9) but perhaps it is possible to detect primes in even thinner sets? If we can find primes in explicit sets of size $O((10^k)^{\varepsilon})$ for arbitrarily small $\varepsilon$ then we have resolved the weak (and semi-weak) conjecture.
Comment by — August 4, 2009 @ 11:43 pm
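To make the density count concrete, here is a sketch of exhausting the Friedlander-Iwaniec family among k-digit numbers; the only point is that the candidate set has size about $(10^k)^{3/4}$, so even naive search over it beats brute force over all k-digit integers.

```python
# Search the k-digit numbers of the form a^2 + b^4 for a prime.
from math import isqrt
from sympy import isprime

def friedlander_iwaniec_prime(k):
    lo, hi = 10 ** (k - 1), 10 ** k
    b = 1
    while b ** 4 < hi:
        a = max(1, isqrt(max(lo - b ** 4, 0)))  # start near the k-digit floor
        while a * a + b ** 4 < hi:
            n = a * a + b ** 4
            if n >= lo and isprime(n):
                return n, a, b
            a += 1
        b += 1
    return None

print(friedlander_iwaniec_prime(8))
```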
• Heath-Brown’s result is still the one with the sparsest set of primes, as far as I know. Of course, conjectures on primes represented by irreducible polynomials of arbitrary degree would give the desired result. But that seems likely to be as hard or harder to prove than the Hardy-Littlewood conjecture (which is somewhat analogous but for reducible polynomials).
Comment by — August 5, 2009 @ 1:31 am
• OK. I wonder if it would be too ambitious to try to push beyond Heath-Brown, or at least to identify some reasonable polynomial forms that were sparse but had some chance of breaking the parity barrier due to their special structure. Harold Helfgott managed to break the parity barrier for other cubic forms as well, but this presumably doesn’t lead to an increased density. (I know for sure that my results with Ben Green and Tamar Ziegler on arithmetic or polynomial progressions in primes don’t do this; they all have at least one global average in them, rather than averaging on sparse sets.)
It does seem though that (a sufficiently quantitative version of) Schinzel’s hypothesis H would establish the weak and semi-weak conjectures, by your comments, so I can add to the list of impossibly difficult conjectures which would imply some progress on this problem :-)
Comment by — August 5, 2009 @ 1:52 am
27. This is a little off-the-cuff, so apologies in advance.
To get intuition, I was thinking about possible relaxations of the problem for which one could get partial results. For example, what about the set of numbers n which do not have prime factors smaller than n^{1/3} or polylog(n)? I’m not sure if we have efficient deterministic membership checkers for these sets, but in any case, it might be interesting to think about getting a deterministic algorithm to generate elements of such sets lying in a given interval.
Comment by Arnab Bhattacharyya — August 5, 2009 @ 2:45 am
28. Can you please assist me with some background questions (probably I should have known the answers). (I did find some material on the internet, but it is probably partial.)
1) Regarding Cramer’s conjecture: I recall that it is easy to demonstrate a gap of size (log n)/log log n by looking at a sequence n!+2,…,n!+n. Are longer gaps known (explicitly or not explicitly)?
One direction that I find interesting would be to show that certain low-complexity devices cannot give us a way to obtain primes, and cannot even give us a “richer in primes” environment compared to all natural numbers. Although I do not know how to define “low complexity devices”. I saw some material on primes as values of polynomials http://mathworld.wolfram.com/Prime-GeneratingPolynomial.html (In particular, there is a result of Goldbach that there is no polynomial that always gives you primes.) There are (I suppose famous) conjectures extending Dirichlet’s theorem about infinitely many primes as values of polynomials (Bouniakowsky Conjecture).
2) Are there results or conjectures asserting that the density of prime values of polynomials (in one or more variables) is vanishing? (Or even that there are at most n/log n primes <= n.)
(The n!+2,…,n!+n gap construction extends easily to polynomials.)
3) (Most embarrassing) What are the easiest ways to show that the density of primes in an arithmetic progression is zero (a reverse Dirichlet, sort of), and do they extend to polynomials?
Comment by — August 5, 2009 @ 8:18 am
• Dear Gil,
A nice survey on prime gaps is Sound’s article at http://arxiv.org/abs/math/0605696. The longest prime gap known between primes of size n has the somewhat amusing magnitude of $\log n \frac{\log \log n \log \log \log \log n}{(\log \log \log n)^2}$, a result of Rankin.
Sieve theory can provide upper bounds on the number of primes in nice sets such as polynomial sequences which are only off by a constant factor from the expected count. In particular, the asymptotic density of primes of size n in a polynomial sequence $\{ P(k): k \in {\mathbb Z} \}$ will be $O_P(1/\log n)$, with the implied constant depending on the polynomial P. Schinzel’s hypothesis H asserts that any polynomial P will capture infinitely many primes unless there is an obvious obstruction. By “obvious obstruction” I mean either (a) a modulus q such that P(k) is not coprime to q for all but finitely many k, (b) P(k) is negative for all but finitely many k, or (c) P factors into polynomials of lower degree. These are pretty much the only ways known to stop a sequence from containing any primes.
Comment by — August 5, 2009 @ 2:37 pm
29. I was trying to look at the basic properties of primes to see how difficult it is to get integers with subsets of these properties.
The first type of properties that primes have is that they are not divisible by 2, not divisible by 3, etc. It is trivial to find an n-bit number that satisfies any given one of these conditions. How about satisfying a polynomial number of these? I.e. given a sequence of integers $f_1 ... f_m$ and a desired bit-length $n$, output an n-bit number that is not divisible by any of the $f_i$‘s. (This should be done in time polynomial in the input size and in n.) This problem still seems easy since we just need to multiply a sufficient number of prime numbers that are not factors of any of the $f_i$‘s, and such primes exist, each of length O(log n). However slight variants of this problem already seem to require something more:
Problem 1: given m integers $f_1 ... f_m$, with $f_i \ge 3$ for all i, and a desired length $n$, output an n-bit number x such that neither x nor x+1 is divisible by any of the $f_i$‘s.
Another class of properties that primes have (and that are used in primality tests) comes from Fermat’s little theorem. While these tests don’t exactly characterize the primes (they also capture pseudo-primes), they have the advantage that (assuming ERH) only a polynomial number of them need to be checked. However it is not clear to me how to generate numbers that pass even a single such test:
Problem 2: Find an n-bit integer x such that $2^x \equiv 2 \pmod{x}$.
Comment by — August 5, 2009 @ 9:13 am
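The contrast in Problem 2 is that the condition is cheap to test but apparently hard to hit constructively; a sketch with Python's built-in modular exponentiation:

```python
# 2^x = 2 (mod x) is testable in poly(log x) time; the solutions are the
# primes together with the much rarer base-2 pseudoprimes, and no way is
# known to construct an n-bit solution essentially faster than searching.
def passes_fermat_base2(x: int) -> bool:
    return pow(2, x, x) == 2 % x

def first_nbit_solution(n: int) -> int:
    x = 1 << (n - 1)
    while not passes_fermat_base2(x):
        x += 1
    return x

print(first_nbit_solution(24))
```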
• Dear Noam,
In the above congruence, if x is prime it will divide M_{x-1} (the Mersenne whose order is x-1). I don’t know about its number of bits, but there is a set of possible candidates (there’s more about this).
In general, given an odd number n, these conditions hold:
Say j = n – 1 = (j1 * j2 * j3 * … * ji); we get all its factors, say it has m of them. The Gauss sum m * (m+1) / 2 will then be the number of potential Mersenne orders that are associates of n. So we will have a set of Mersennes {M_j1, M_j1j2, …}, m Mersennes in number.
One of three conditions should hold:
1) n | M_j, i.e. one of these Mersennes is a multiple of n, AND none of the associated primes of any Mersenne divides n.
2) No Mersenne in the list is a multiple of n AND two of the smaller Mersennes’ associated primes divide n.
3) n | M_j, i.e. one of the Mersennes is a multiple of n, AND two of the primes of the Mersennes divide n.
Under the first condition n is prime, under the second n is composite, and under condition 3 n is a Carmichael number.
Let’s look at n = 341.
So j = 340 = (2 * 2 * 5 * 17); get all the combinations of Mersennes.
Mset => {M2, M4, M5, M10, M20, M17, M34, M68, M85, M170, M340}
I don’t know all the associated primes of all these Mersennes, but I know these: M2 => 3, M4 => 5, M5 = 31 => itself, M10 => 11, M20 => 41, M17 => itself, etc.
Now condition 1 (341 | M340) and condition 3 (M5 = 31 | 341) are met, so 341 is a Carmichael number.
Gil Kalai mentioned Bouniakowsky; he discovered the congruence
2^phi(n) ≡ 1 (mod n).
Also the relationship is this: J(n) | n – 1 and J(n) | phi(n), where J(n) is the function given by Fermat’s Little Theorem applied to odd binary numbers. If we change Bouniakowsky’s congruence above into
2^J(n) ≡ 1 (mod n), it is obvious from here that the Mersenne whose order is J(n) is a multiple of n.
Comment by jaime montuerto — August 5, 2009 @ 12:50 pm
• I have to add, about the function: J(n) | n – 1 holds only if n is prime or a pseudoprime.
Comment by jaime montuerto — August 5, 2009 @ 1:08 pm
• These are nice toy problems. For problem 1 (assuming all the $f_i$ prime for simplicity), the Chinese remainder theorem can be used to solve the problem as long as the product of all the $f_i$ is less than $2^n$.
Another strategy is to pick an n-bit number (e.g. $2^n$) and advance it by 1, checking each of the numbers in turn for the desired property. Sieve theory techniques can bound the length one has to search before one is guaranteed a hit. For instance, if all the $f_i$ have magnitude greater than m, then searching 2m+1 consecutive integers will suffice since each modulus $f_i$ can only knock out at most two of the candidates.
The difficulty with Problem 2 seems to be that there is no apparent relationship of much use between the residues $2^x \mod x$ as x varies, nor is there an obvious way to “control” this residue other than to control the prime factorisation of x (in which case one is done anyway). Given how rare pseudoprimes are compared to the primes, I would imagine this problem is essentially as difficult as the original problem.
Comment by — August 5, 2009 @ 2:58 pm
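A sketch of the "advance by 1" strategy for Problem 1. As the sieve bound above indicates, each modulus forbids at most two residues, so the scan ends quickly when the $f_i$ are large relative to their number; with many small $f_i$ one would fall back on the Chinese remainder construction instead.

```python
# Scan consecutive n-bit integers for x with neither x nor x+1 divisible
# by any f_i; each modulus f_i rules out at most 2 residues mod f_i.
def problem_one(fs, n):
    x = 1 << (n - 1)                    # an n-bit starting point
    while any(x % f == 0 or (x + 1) % f == 0 for f in fs):
        x += 1
    return x

print(problem_one([3, 5, 7, 11, 13], 24))
```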
• Dear Terry,
Last night I worked through, with pen and paper, the algorithm I am talking about. I picked Mersenne 100 because it’s the first 3-digit one and 100 is a nice decimal number. Here are the associated primes:
101, 8101, 268501.
I also cheated to check that there really are primes there. All the talk so far strengthens and supports my algorithm. Picking Mersenne 100 was a whim; it could have been any other Mersenne, but in line with your project, slowly building up all the smaller Mersennes will help the later test for sieving out potential composites. This Mersenne algorithm is similar to Euclid’s prime production and has the non-trivial geometric series Tim Gowers pointed out in Paul Erdős’s results. Mersennes have another property others have not pointed out: the property of a Fibonacci sequence. I really don’t know the impact of the Fibonacci sequence, but together the near geometric progression and the Fibonacci-like sequencing give a good mix of the properties you’re looking for.
Besides, it is backed up by powerful theorems and conjectures: Fermat’s Little Theorem, the Euler function, the Carmichael function, Pocklington’s theorem, Legendre’s theorem on Mersennes, Wilson primes, Wagstaff primes, Riesel composite numbers, Euclid’s prime production, Gauss’s logarithmic prime connection. Work of Maurer highlights aspects of this, and Carl Pomerance is studying a logarithmic function with a game-theory aspect to it, which I believe has a connection with my J function. Cramér’s conjecture and Bertrand’s Postulate I see as two sides of a coin with respect to my algorithm.
Comment by jaime montuerto — August 7, 2009 @ 7:49 am
30. I am not a number theorist, but doesn’t a good algorithm already exist in the form of Maurer’s method[1] for generating provable prime numbers? Maurer seeks to generate random primes for cryptographic use and therefore uses an RNG in the algorithm, but this seems (to this naive reader) easy to replace.
[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.2151
Comment by Damien Miller — August 6, 2009 @ 1:22 am
• As far as I can see, the analysis of the algorithm only provides an expected running time which is polynomial, no worst-case analysis (which is what the current project is looking for). Moreover, the running time is computed by using 1/log(2^k) as the probability that an integer is prime, and using this to compute how many iterations are required of a primality test. So this doesn’t seem to be a mathematically rigorous solution.
Comment by — August 6, 2009 @ 3:35 pm
31. Here is an article on Maurer’s method:
http://www.daimi.au.dk/~ivan/ProvablePrimes.pdf
It gives an expected running time of O(k^4/log(k)). See section 5.1 of the above paper.
Comment by — August 6, 2009 @ 3:55 am
32. Here is a link to an article by Maurer about generating prime numbers:
Fast generation of prime numbers and secure public-key cryptographic parameters at:
http://eprints.kfupm.edu.sa/40655/
Comment by — August 6, 2009 @ 4:01 am
33. Here’s a very simple strategy based on a factoring oracle and Euclid’s proof of the infinitude of primes (which is similar to Klas’s earlier suggestion and the W-trick). It isn’t immediately clear to me how to do the time analysis (maybe it is to someone else?):
1) Multiply the first ln(k) primes together and call this A.
2) Factor A. If there is a k-digit prime divisor you’re done. Otherwise multiply A by all of the prime factors of A+1.
3) Repeat until you find a k-digit prime.
The general idea is that if A doesn’t have a large prime factor, it must have a lot of small prime factors. Since every prime divisor of A+1 is distinct from the prime divisors of A, the list of primes you multiply together to get (the next) A should grow very quickly. I imagine this gives some kind of gain over a brute force search?
Comment by Anonymous — August 7, 2009 @ 1:11 am
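A sketch of this iteration, with `sympy.factorint` once again standing in for the assumed factoring oracle and bits in place of digits. The worst-case concern raised in the replies below is visible in the code: nothing forces the largest known factor to grow quickly.

```python
# Absorb the (necessarily new) prime factors of A+1 into A, and stop
# once a k-bit prime factor appears.
from math import log
from sympy import prime, factorint

def grow_until_big_prime(k, max_rounds=100):
    A = 1
    for i in range(1, max(2, round(log(k))) + 1):
        A *= prime(i)                    # seed: roughly the first ln(k) primes
    for _ in range(max_rounds):
        p = int(max(factorint(A)))
        if p.bit_length() >= k:          # "k digits" read as k bits here
            return p
        for q in factorint(A + 1):       # all new, since gcd(A, A+1) = 1
            A *= int(q)
    return None

print(grow_until_big_prime(32))
```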
• There are papers on the prime factors of products of consecutive integers; for instance Balog and Wooley show that, for any fixed positive $\epsilon$, one can find arbitrarily long strings (starting at $n$) consisting of $n^{\epsilon}$-smooth integers. The length of the string grows like $\log\log\log\log n$ in their construction, and they observe that $\log n$ or $(\log n)^2$ is likely to not be possible heuristically, which, if correct, would give a solution to the problem under the assumption that factoring is possible quickly.
The paper is “On strings of consecutive integers with no large prime factors”, J. Austral. Math. Soc. Ser. A 64 (1998), no. 2, 266–276.
Comment by — August 7, 2009 @ 3:19 pm
• I’m sure this algorithm would work well on the average (though I would not be able to prove this), and several other algorithms proposed here also have convincing heuristic arguments that they should succeed for most inputs k. But for the purposes of this particular project, it is the worst-case scenario which one needs to control – how bad would things get if Murphy’s law held and the primes were maximally unfriendly towards one’s algorithm?
The basic issue here is that at the k^th step of this algorithm, one has only excluded poly(k) primes from consideration. In the worst-case scenario, the next number generated by this algorithm could be generated by the first prime that has not already been excluded, which would be only of polynomial size in k rather than exponential. Admittedly, this is an unlikely scenario, but the trick is to figure out how to rule it out completely.
Currently, all of our algorithms, after k steps, are only capable of producing a prime of size as low as k^2 (modulo lower order terms) in the worst case (and this requires GRH). Even getting a prime of size $k^2$ or greater in k steps unconditionally would be a breakthrough, I think.
Comment by — August 7, 2009 @ 3:36 pm
34. I thought I would mention an idea that occurred to me this past week, though maybe it is already implicit in some of the discussion on “soft methods” for finding large primes (I have been away for three days, so have not had the chance to read much about progress on the project).
Assuming that factoring is “easy”, I want to show that either we can find large primes quickly (say sub-exponential in k), or else such number theory conjectures as the “No Siegel Zeros” conjecture are true (and perhaps strong forms of the “Chebotarev’s density theorem”, and many other such conjectures, can be assumed to hold by similar ideas). The hope would be that having such a theorem be true can be used to boost the effectiveness of other potential methods to find large prime numbers (for one thing, if “No Siegel Zeros” holds, then we know that primes are “well-distributed in arithmetic progressions”, a powerful ingredient in potential sieve method approaches to locating primes).
My first suggestion is that if there are infinitely many (sufficiently bad) Siegel zeros, then we know, say, that there are infinitely many moduli M for which all the Jacobi symbols
(M/q) = -1, for primes $q < B := \exp(\sqrt{\log M})$, gcd(q,M)=1.
This means that the Jacobi symbol can be used to “approximate the Möbius function mu”, and since good bounds on “character sums” are much easier to come by than analogous sums involving mu, it can be used to improve the quality of results produced by sieve methods to find primes; indeed, Roger Heath-Brown used this idea to show that if there are just enough Siegel zeros, then the Twin Prime Conjecture holds.
My second suggestion is incomplete, but more direct: suppose that Siegel zeros exist (there are infinitely many of them, say). Then consider the form
$$f(x,y) = x^2 - M y^2,$$
where M is the modulus of a Siegel zero; and consider, for example, f(1,1) = 1 - M. This number cannot be divisible by any small prime q < B, else (M/q) = (1/q) = +1. So, all prime factors of f(1,1) lie in $[B, M^3]$, and therefore if factoring is “easy”, we will have found a large prime!
But how do we locate such a modulus M, assuming they exist? I'm not sure, but it seems like maybe we could use the fact that we have lots of different values of the form f to play around with — we would just need to choose xy coprime to M. Of course we could also try to build a sieve somehow, as numbers M satisfying (M/q)=-1 for primes q < B are quite rare!
Comment by Anonymous — August 9, 2009 @ 12:08 am
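The defining property of the sought modulus is at least easy to test; a sketch using Jacobi symbols via sympy (skipping q = 2, where the symbol is not defined in the same way). The default for B below is my reading of the bound in the comment.

```python
# Test whether M is "Siegel-like": (M/q) = -1 for all odd primes q < B
# coprime to M. Such M are expected to be very rare.
from math import gcd, exp, sqrt, log
from sympy import jacobi_symbol, primerange

def siegel_like(M, B=None):
    if B is None:
        B = int(exp(sqrt(log(M))))       # B = exp(sqrt(log M))
    return all(jacobi_symbol(M, int(q)) == -1
               for q in primerange(3, B) if gcd(M, int(q)) == 1)

# small demo: candidates below 2000 for the toy bound B = 20
print([M for M in range(3, 2000, 2) if siegel_like(M, 20)])
```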
36. This preliminary research thread (which was getting quite lengthy) has now been superseded by the new research thread at
http://polymathprojects.org/2009/08/09/research-thread-ii-deterministic-way-to-find-primes/
All new research comments should now be made at that thread. Participants are of course welcome to summarise, review, or otherwise carry over existing comments from this thread to the new one; this would be a good time to recap any progress that has not been sufficiently emphasised already.
Comment by — August 9, 2009 @ 4:10 am
http://mathoverflow.net/revisions/109312/list
Let me give an example showing that the normality hypothesis is necessary.
Let $Y=\mathbb{P}^1$ with natural $G=\mathbb{G}_m$-action. Let $X$ be the $G$-variety obtained by glueing transversally the two fixed points $0$ and $\infty$. Consider the line bundle $\mathcal{O}(l)$ with $l\neq 0$ on $Y$ and glue the fibers over $0$ and $\infty$ using any linear isomorphism to obtain a line bundle $\mathcal{L}$ on $X$. Suppose that $\mathcal{L}$ has a $G$-linearisation. Pulling it back to $Y$, we obtain a $G$-linearisation of $\mathcal{O}(l)$ on $Y$ such that $G$ acts on the fibers over $0$ and $\infty$ with the same character. However, the description of the $G$-linearisations of $\mathcal{O}(l)$ when $l\neq 0$ shows that this is not possible (more precisely, for any $G$-linearisation of $\mathcal{O}(l)$, the characters through which $G$ acts on the fibers over $0$ and $\infty$ differ by the character $t\mapsto t^l$). This argument shows moreover that no multiple of $\mathcal{L}$ has a $G$-linearisation.
Note that it follows that there is no ample $G$-linearised line bundle on $X$.
As for the second question, the natural map to study is more likely to be $Pic^G(X)\to Pic(X)^G$ where $Pic(X)^G$ denotes the group of line bundles whose class in $Pic(X)$ is $G$-invariant. When $X$ is normal and proper, its Picard group is an extension of a discrete group by an abelian variety so that if $G$ is linear connected, $G$ acts necessarily trivially on $Pic(X)$ and $Pic(X)^G=Pic(X)$. However, when $X$ is not normal, this is not the case anymore (for instance in the above example).
Moreover, as far as I could check, the arguments in Dolgachev's notes extend to show that, if $X$ is an integral proper variety over an algebraically closed field endowed with an action of a connected linear algebraic group $G$, there is an exact sequence: $$0\to K\to Pic^G(X)\to Pic(X)^G\to Pic(G).$$
http://mathoverflow.net/questions/64777/fefermans-extensional-and-intensional-applications-of-the-method-of-arithmetizat
## Feferman’s extensional and intensional applications of the method of arithmetization
At the very beginning of Feferman's Arithmetization of metamathematics in a general setting it can be read:
The method of arithmetization, as developed by Gödel[10], exploits the possibility of defining within a formal theory $\mathcal{T}$, or in arithmetical theories closely related to $\mathcal{T}$, various syntactical and logical notions concerning $\mathcal{T}$. In broad terms, the applications of the method can be classified as being extensional if essentially only numerically correct definitions are needed, or intensional if the definitions must more fully express the notions involved, so that various of the general properties of these notions can be formally derived.
He then proceeds to enumerate results of what he calls the extensional type (Gödel's first incompleteness theorem, non-definability of predicates in formal theories, undecidability of various theories and degrees of unsolvability of various theories), results of intensional type (Gödel's second incompleteness theorem, comparison of theories by relative consistency proofs and ordinal logics), a result of mixed character (the arithmetization of Gödel's completeness theorem for first-order logic), and finally of proofs which are "instances where intensional methods are used to deduce purely extensional results" (the proofs of non-finite axiomatizability of various theories $\mathcal{T}$ obtained by showing $\mathcal{T}$ to be reflexive, i.e. that the consistency of every finite subtheory of $\mathcal{T}$ is provable in $\mathcal{T}$).
I guess that for the trained logician these examples suffice for him or her to get a clear sense of what is meant by intensional and extensional methods and results in this context, but this is not my case. I would be grateful if anyone could help to make these notions precise.
Thank you in advance.
There is no precise definition of these notions in this context, they are used informally. (The terminology refers to intension and extension in semantics, see en.wikipedia.org/wiki/Sense_and_reference.) I don’t know how to explain it other than basically repeating what Feferman wrote: a concept $C$ is arithmetized extensionally by a formula $F$ if the relation defined by $F$ in the standard model $\mathbb N$ gives $C$, and it is arithmetized intensionally if moreover the given theory $T$ proves that $F$ obeys some basic properties that $C$ is expected to have (based on context). – Emil Jeřábek May 12 2011 at 14:58
“Results of intensional type” presumably mean results involving some intensionally arithmetized concept whose choice can affect validity of the result. For example, reflexivity of $T$ is an intensional result because its statement depends on the choice of the arithmetization of consistency: a theory may be reflexive for one choice of the arithmetization of consistency and nonreflexive for another one. – Emil Jeřábek May 12 2011 at 15:08
## 1 Answer
As indicated by Feferman, the key distinction is between two sorts of arithmetical definitions of certain concepts (like "being the Gödel number of a theorem of a given theory"). Suppose I have some set $S$ of integers in mind and I propose a formal definition of it in some theory $T$, i.e., a formula $\phi(x)$ intended to "mean" that $x$ is a member of $S$. There are several ways to make this "intended" explicit. Probably the weakest is that, if $n$ is a natural number and $\bar n$ is a standard numeral for it, then $\phi(\bar n)$ should be provable in $T$ if and only if $n\in S$. Slightly stronger would be requiring that $T$ should prove $\phi(\bar n)$ when $n\in S$ and should refute $\phi(\bar n)$ when $n\notin S$. (My "slightly stronger" presupposes that $T$ is consistent.) Either of these would seem to fit what Feferman calls "numerical correctness" of $\phi(x)$ as a definition of $S$. His concept of "expressing the notions" refers to stronger requirements on $\phi(x)$, but the exact nature of those requirements will depend on $S$ and on the particular application.
Suppose, for example, that $S$ is (in Gödel-numbered form) the set of theorems of some theory, and let me save TeX-coding by identifying formulas with their Gödel numbers. If $S$ contains both the formula $a$ and the implication $a\to b$, then it will also contain $b$; i.e., it is closed under modus ponens. One might want (or need) this property of $S$ to be provable in $T$ under the definition $\phi$, i.e., one might require that $T$ proves the general statement $$\forall x\forall y((\phi(x)\land\phi(I(x,y)))\to\phi(y)),$$ where $I$ refers to a definition of implication via Gödel numbering. This property and similar ones may be needed in order to formalize in $T$ certain arguments (even trivial arguments) about provability. The need for such properties, going beyond numerical correctness, is what makes an argument intensional in Feferman's classification.
http://unapologetic.wordpress.com/2010/05/14/convergence-almost-everywhere/?like=1&source=post_flair&_wpnonce=5b12459335
# The Unapologetic Mathematician
## Convergence Almost Everywhere
Okay, so let’s take our idea of almost everywhere and apply it to convergence of sequences of measurable functions.
Given a sequence $\{f_n\}_{n=1}^\infty$ of extended real-valued functions on a measure space $X$, we say that $f_n$ converges a.e. to the function $f$ if there is a set $E_0\subseteq X$ with $\mu(E_0)=0$ so that $\lim\limits_{n\to\infty}f_n(x)=f(x)$ for all $x\in{E_0}^c$. Similarly, we say that the sequence $f_n$ is Cauchy a.e. if there exists a set $E_0$ of measure zero so that $\{f_n(x)\}$ is a Cauchy sequence of real numbers for all $x\in{E_0}^c$. That is, given $x\notin E_0$ and $\epsilon>0$, there is some natural number $N$ depending on $x$ and $\epsilon$ so that whenever $m,n\geq N$ we have $\lvert f_m(x)-f_n(x)\rvert<\epsilon$.
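A standard example worth keeping in mind: on $X=[0,1]$ with Lebesgue measure, the sequence $f_n(x)=x^n$ satisfies $f_n(x)\to0$ for every $x\in[0,1)$ but $f_n(1)\to1$. Taking $E_0=\{1\}$, which has measure zero, shows that $f_n$ converges a.e. to the zero function even though it does not converge to it everywhere.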
Because the real numbers $\mathbb{R}$ form a complete metric space, being Cauchy and being convergent are equivalent — a sequence of finite real numbers is convergent if and only if it is Cauchy, and a similar thing happens here. If a sequence of finite-valued functions is convergent a.e., then $\{f_n(x)\}$ converges to $f(x)$ away from a set of measure zero. Each of these sequences $\{f_n(x)\}$ is thus Cauchy, and so $\{f_n\}$ is Cauchy almost everywhere. On the other hand, if $\{f_n\}$ is Cauchy a.e. then the sequences $\{f_n(x)\}$ are Cauchy away from a set of measure zero, and these sequences then converge.
We can also define what it means for a sequence of functions to converge uniformly almost everywhere. That is, there is some set $E_0$ of measure zero so that for every $\epsilon>0$ we can find a natural number $N$ so that for all $n\geq N$ and $x\notin E_0$ we have $\lvert f_n(x)-f(x)\rvert<\epsilon$. The uniformity means that $N$ is independent of $x\in{E_0}^c$, but if we choose a different negligible $E_0$ we may have to choose different values of $N$ to get the desired control on the sequence.
As it happens, the topology defined by uniform a.e. convergence comes from a norm: the essential supremum; using this notion of convergence makes the algebra of essentially bounded measurable functions on a measure space $X$ into a normed vector space. Indeed, we can check what it means for a sequence of functions $\{f_n\}$ to converge to $f$ under the essential supremum norm — for any $\epsilon>0$ there is some $N$ so that for all $n\geq N$ we have $\text{ess sup}(f_n-f)<\epsilon$. Unpacking the definition of the essential supremum, this means that there is some measurable set $E_0$ with measure zero so that $\lvert f_n(x)-f(x)\rvert<\epsilon$ for all $x\notin E_0$, which is exactly what we said for uniform a.e. convergence above.
We can also turn around and define what it means for a sequence to be uniformly Cauchy almost everywhere — for any $\epsilon>0$ there is some $N$ so that for all $m,n\geq N$ we have $\text{ess sup}(f_m-f_n)<\epsilon$. Unpacking again, there is some measurable set $E_0$ so that $\lvert f_m(x)-f_n(x)\rvert<\epsilon$ for all $x\notin E_0$. It’s straightforward to check that a sequence that converges uniformly a.e. is uniformly Cauchy a.e., and vice versa. That is, the topology defined by the essential supremum norm is complete, and the algebra of essentially bounded measurable functions on a measure space $X$ is a Banach space.
http://mathhelpforum.com/algebra/52036-inequalities-help.html
1. ## inequalities help!!
hey guys,
$\frac{3x+5}{2x-1} \geq 8$
any ideas? thanks guys
2. Originally Posted by jvignacio
hey guys,
$\frac{3x+5}{2x-1} \geq 8$
any ideas? thanks guys
Note that $\frac{3x+5}{2x-1} = \frac{13/2}{2x-1} + \frac{3}{2} = \frac{1}{2} \left( \frac{13}{2x-1} + 3\right)$.
Substitute this expression and solve the inequality for x.
3. Originally Posted by mr fantastic
Note that $\frac{3x+5}{2x-1} = \frac{13/2}{2x-1} + \frac{3}{2} = \frac{1}{2} \left( \frac{13}{2x-1} + 3\right)$.
Substitute this expression and solve the inequality for x.
ok i have just solved it but not sure if its correct. is the answer
all x's in the intervals (1, negative infinity) and (1/2, positive infinity) ???
4. Originally Posted by jvignacio
ok i have just solved it but not sure if its correct. is the answer
all x's in the intervals (1, negative infinity) and (1/2, positive infinity) ???
What is (1, negative infinity) meant to mean?
And by (1/2, positive infinity) do you mean 1/2 < x < +oo ?
Have you taken some random values of x from your solution and checked to see if they satisfy the original inequality? I have. I took x = 2 from (1/2, positive infinity) (assuming it means what I think it means - see above) and it fails the test. So your solution cannot be correct.
If you show all of your working the mistakes you've made in getting this solution can be pointed out.
By the way - don't bump.
5. Originally Posted by mr fantastic
What is (1, negative infinity) meant to mean?
And by (1/2, positive infinity) do you mean 1/2 < x < +oo ?
Have you taken some random values of x from your solution and checked to see if they satisfy the original inequality? I have. I took x = 2 from (1/2, positive infinity) (assuming it means what I think it means - see above) and it fails the test. So your solution cannot be correct.
If you show all of your working the mistakes you've made in getting this solution can be pointed out.
By the way - don't bump.
whats bump?
ok this is what i did...
$\frac{3x+5}{2x-1}\geq 8$
$\frac{3x+5}{2x-1}-8\geq 0$
$\frac{3x+5}{2x-1}-\frac{8(2x-1)}{2x-1}\geq 0$
$\frac{3x+5 - 8(2x-1)}{2x-1}\geq 0$
$\frac{-13x+13}{2x-1}\geq 0$
then i got -13(1) + 13 = 0 so 1 and every other negative number below 1 for x will satisfy its equal to or greater than 0.
and for 2x-1, i got 2(1/2)-1 = 0 so 1/2 and every other positive number above 1/2 will satisfy its equal to or greater than 0.
6. Originally Posted by jvignacio
whats bump?
ok this is what i did...
$\frac{3x+5}{2x-1}\geq 8$
$\frac{3x+5}{2x-1}-8\geq 0$
$\frac{3x+5}{2x-1}-\frac{8(2x-1)}{2x-1}\geq 0$
$\frac{3x+5 - 8(2x-1)}{2x-1}\geq 0$
$\frac{-13x+13}{2x-1}\geq 0$
[snip]
Case 1: $-13 x + 13 \geq 0$ and $2x - 1 > 0$, that is, $x \leq 1$ and $x > \frac{1}{2}$, that is, $\frac{1}{2} < x \leq 1$.
Case 2: $-13 x + 13 \leq 0$ and $2x - 1 < 0$, that is, $x \geq 1$ and $x < \frac{1}{2}$, which has no solution.
So the solution is $\frac{1}{2} < x \leq 1$.
By the way: http://en.wikipedia.org/wiki/Bump_(Internet)
7. Originally Posted by mr fantastic
Case 1: $-13 x + 13 \geq 0$ and $2x - 1 > 0$, that is, $x \leq 1$ and $x > \frac{1}{2}$, that is, $\frac{1}{2} < x \leq 1$.
Case 2: $-13 x + 13 \leq 0$ and $2x - 1 < 0$, that is, $x \geq 1$ and $x < \frac{1}{2}$, which has no solution.
So the solution is $\frac{1}{2} < x \leq 1$.
By the way: Bump (Internet) - Wikipedia, the free encyclopedia
i get it! thank u
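As a quick machine check of the final answer (a sketch assuming SymPy is installed; not part of the original thread):

```python
# Solve (3x+5)/(2x-1) >= 8 symbolically.
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
sol = solve_univariate_inequality((3*x + 5)/(2*x - 1) >= 8, x)
print(sol)  # an interval equivalent to 1/2 < x <= 1
```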
http://mathoverflow.net/questions/33834?sort=newest
## Geometric models for classifying spaces of $GLn(Fq)$.
The title pretty much says it. In a follow-up to my question about alternating groups, does anyone know of a "geometric" model for $BGL_n(F_q)$? By "geometric" I mean "a space you would have heard about even if you aren't studying classifying spaces"; I mean to exclude general constructions such as standard nerve constructions, infinite joins, intractable quotients of frame bundles, etc.
I have a couple thoughts about possible sources of such models. One is that sometimes models of BAut_C X where C is some category can be of the form "embeddings of X in a universal, contractible C-object." Examples include symmetric groups (BS_n = embeddings of points in R^\infty), general linear groups (linear embeddings of a vector space in a universal vector space (also works for G-actions)), and diffeomorphism groups (subsets of R^\infty diffeomorphic to a fixed manifold = embeddings modulo diffeomorphisms). Of course, universal F_q vector spaces are discrete... – Dev Sinha Jul 31 2010 at 4:22
The other thought is that there might be a poset/ simplicial construction which is geometrically defined - say built from the poset of finite-dimensional subspaces of (F_q)^\infty under inclusion or perhaps from some matroid theory. – Dev Sinha Jul 31 2010 at 5:20
## 1 Answer
Quillen's paper on the Adams conjecture (doi:10.1016/0040-9383(71)90018-8) almost gives an answer. He maps a limit of the spaces BGL_n(F_q) to BU and shows that it is not far from an isomorphism. This is related to the plus construction, but I can't remember the details offhand. The space BU in turn can be described in terms of Grassmannians.
Thanks. As I recall, this result is stable (as you say), not valid at the prime p where q = p^r, and relies on some modular representation theory to construct the map. I am hoping (naively) that there might be a better model waiting to be found. – Dev Sinha Aug 2 2010 at 17:12
http://en.wikipedia.org/wiki/Chaitin's_constant
# Chaitin's constant
In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number)[1] or halting probability is a real number that informally represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.
Although there are infinitely many halting probabilities, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction instead of Chaitin's constant when not referring to any specific encoding.
Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm enumerating its digits.
## Background
The definition of a halting probability relies on the existence of prefix-free universal computable functions. Such a function, intuitively, represents a programming language with the property that no valid program can be obtained as a proper extension of another valid program.
Suppose that F is a partial function that takes one argument, a finite binary string, and possibly returns a single binary string as output. The function F is called computable if there is a Turing machine that computes it.
The function F is called universal if the following property holds: for every computable function f of a single variable there is a string w such that for all x, F(w x) = f(x); here w x represents the concatenation of the two strings w and x. This means that F can be used to simulate any computable function of one variable. Informally, w represents a "script" for the computable function f, and F represents an "interpreter" that parses the script as a prefix of its input and then executes it on the remainder of input. Note that for any fixed w the function f(x) = F(w x) is computable; thus the universality property states that all computable functions of one variable can be obtained in this fashion.
The domain of F is the set of all inputs p on which it is defined. For F that are universal, such a p can generally be seen both as the concatenation of a program part and a data part, and as a single program for the function F.
The function F is called prefix-free if there are no two elements p, p′ in its domain such that p′ is a proper extension of p. This can be rephrased as: the domain of F is a prefix-free code (instantaneous code) on the set of finite binary strings. A simple way to enforce prefix-free-ness is to use machines whose means of input is a binary stream from which bits can be read one at a time. There is no end-of-stream marker; the end of input is determined by when the universal machine decides to stop reading more bits. Here, the difference between the two notions of program mentioned in the last paragraph becomes clear; one is easily recognized by some grammar, while the other requires arbitrary computation to recognize.
The domain of any universal computable function is a computably enumerable set but never a computable set. The domain is always Turing equivalent to the halting problem.
## Definition
Let $P_F$ be the domain of a prefix-free universal computable function $F$. The constant $\Omega_F$ is then defined as
$\Omega_F = \sum_{p \in P_F} 2^{-|p|}$,
where $\left|p\right|$ denotes the length of a string $p$. This is an infinite sum which has one summand for every $p$ in the domain of $F$. The requirement that the domain be prefix-free, together with Kraft's inequality, ensures that this sum converges to a real number between 0 and 1. If $F$ is clear from context then $\Omega_F$ may be denoted simply $\Omega$, although different prefix-free universal computable functions lead to different values of $\Omega$.
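As a toy numerical illustration of this sum (a sketch: the domain below is a made-up finite prefix-free set, not the domain of a real universal machine, whose domain is only computably enumerable):

```python
# Partial sum of 2^(-|p|) over a toy prefix-free "domain".
toy_domain = ["0", "10", "110"]  # prefix-free: no element extends another

omega_toy = sum(2.0 ** -len(p) for p in toy_domain)
print(omega_toy)  # 0.875 -- Kraft's inequality keeps such sums <= 1
```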
## Relationship to the halting problem
Knowing the first $N$ bits of $\Omega$, one could decide the halting problem for all programs of size up to $N$. Let the program $p$ for which the halting problem is to be solved be $N$ bits long. In dovetailing fashion, all programs of all lengths are run until enough have halted to jointly contribute enough probability to match these first $N$ bits. If the program $p$ hasn't halted yet, then it never will, since its contribution to the halting probability would affect the first $N$ bits. Thus, the halting problem would be solved for $p$.
Because many outstanding problems in number theory, such as Goldbach's conjecture, are equivalent to solving the halting problem for special programs (which would basically search for counter-examples and halt if one is found), knowing enough bits of Chaitin's constant would also imply knowing the answer to these problems. But as the halting problem is not generally solvable, and therefore calculating any but the first few bits of Chaitin's constant is not possible, this just reduces hard problems to impossible ones, much like trying to build an oracle machine for the halting problem would be.
## Interpretation as a probability
The Cantor space is the collection of all infinite sequences of 0s and 1s. A halting probability can be interpreted as the measure of a certain subset of Cantor space under the usual probability measure on Cantor space. It is from this interpretation that halting probabilities take their name.
The probability measure on Cantor space, sometimes called the fair-coin measure, is defined so that for any binary string $x$ the set of sequences that begin with $x$ has measure $2^{-|x|}$. This implies that for each natural number $n$, the set of sequences $f$ in Cantor space such that $f(n) = 1$ has measure 1/2, and the set of sequences whose $n$th element is 0 also has measure 1/2.
Let F be a prefix-free universal computable function. The domain P of F consists of an infinite set of binary strings
$P = \{p_1,p_2,\ldots\}$.
Each of these strings $p_i$ determines a subset $S_i$ of Cantor space; the set $S_i$ contains all sequences in Cantor space that begin with $p_i$. These sets are disjoint because $P$ is a prefix-free set. The sum
$\sum_{p \in P} 2^{-|p|}$
represents the measure of the set
$\bigcup_{i \in \mathbb{N}} S_i$.
In this way, $\Omega_F$ represents the probability that a randomly selected infinite sequence of 0s and 1s begins with a bit string (of some finite length) that is in the domain of $F$. It is for this reason that $\Omega_F$ is called a halting probability.
## Properties
Each Chaitin constant Ω has the following properties:
• It is algorithmically random. This means that the shortest program to output the first $n$ bits of $\Omega$ must be of size at least $n-O(1)$. This is because, as in the Goldbach example, those $n$ bits enable us to find out exactly which programs halt among all those of length at most $n$.
• It is a normal number, which means that its digits are equidistributed as if they were generated by tossing a fair coin.
• It is not a computable number; there is no computable function that enumerates its binary expansion, as discussed below.
• The set of rational numbers q such that q < Ω is computably enumerable; a real number with such a property is called a left-c.e. real number in recursion theory.
• The set of rational numbers q such that q > Ω is not computably enumerable.
• Ω is an arithmetical number.
• It is Turing equivalent to the halting problem and thus at level $\Delta^0_2$ of the arithmetical hierarchy.
Not every set that is Turing equivalent to the halting problem is a halting probability. A finer equivalence relation, Solovay equivalence, can be used to characterize the halting probabilities among the left-c.e. reals.
## Uncomputability
A real number is called computable if there is an algorithm which, given n, returns the first n digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number.
No halting probability is computable. The proof of this fact relies on an algorithm which, given the first $n$ digits of $\Omega$, solves Turing's halting problem for programs of length up to $n$. Since the halting problem is undecidable, $\Omega$ cannot be computed.
The algorithm proceeds as follows. Given the first $n$ digits of $\Omega$ and a $k\leq n$, the algorithm enumerates the domain of $F$ until enough elements of the domain have been found so that the probability they represent is within $2^{-(k+1)}$ of $\Omega$. After this point, no additional program of length $k$ can be in the domain, because each of these would add $2^{-k}$ to the measure, which is impossible. Thus the set of strings of length $k$ in the domain is exactly the set of such strings already enumerated.
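The argument can be sketched in code (a hypothetical sketch: `domain_strings_up_to` and the stubbed enumerator are illustrative names; for a real universal machine the enumerator exists but never terminates as a subroutine):

```python
from fractions import Fraction

def domain_strings_up_to(k, omega_bits, enumerate_domain):
    """Recover every domain string of length <= k from the first k bits
    of Omega; enumerate_domain yields domain elements in some
    computably enumerable order."""
    omega_lower = sum(Fraction(b, 2 ** (i + 1))
                      for i, b in enumerate(omega_bits))
    mass, found = Fraction(0), []
    for p in enumerate_domain:
        found.append(p)
        mass += Fraction(1, 2 ** len(p))
        if mass >= omega_lower:
            # Omega < omega_lower + 2^-k <= mass + 2^-k, so any missing
            # string of length <= k (adding >= 2^-k) would overshoot Omega.
            break
    return [p for p in found if len(p) <= k]

# Demo with the toy domain above: Omega_toy = 0.111 in binary.
print(domain_strings_up_to(2, [1, 1], iter(["0", "10", "110"])))  # ['0', '10']
```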
## Incompleteness theorem for halting probabilities
For each specific consistent effectively represented axiomatic system for the natural numbers, such as Peano arithmetic, there exists a constant N such that no bit of Ω after the Nth can be proven to be 1 or 0 within that system. The constant N depends on how the formal system is effectively represented, and thus does not directly reflect the complexity of the axiomatic system. This incompleteness result is similar to Gödel's incompleteness theorem in that it shows that no consistent formal theory for arithmetic can be complete.
## Super Omega
As mentioned above, the first $n$ bits of Gregory Chaitin's constant Omega are random or incompressible in the sense that we cannot compute them by a halting algorithm with fewer than $n-O(1)$ bits. However, consider the short but never halting algorithm which systematically lists and runs all possible programs; whenever one of them halts its probability gets added to the output (initialized by zero). After finite time the first $n$ bits of the output will never change any more (it does not matter that this time itself is not computable by a halting program). So there is a short non-halting algorithm whose output converges (after finite time) onto the first $n$ bits of Omega. In other words, the enumerable first $n$ bits of Omega are highly compressible in the sense that they are limit-computable by a very short algorithm; they are not random with respect to the set of enumerating algorithms. Jürgen Schmidhuber (2000) constructed a limit-computable "Super Omega" which in a sense is much more random than the original limit-computable Omega, as one cannot significantly compress the Super Omega by any enumerating non-halting algorithm.
## References
1. mathworld.wolfram.com, Chaitin's Constant. Retrieved 28 May 2012
• Cristian S. Calude (2002). Information and Randomness: An Algorithmic Perspective, second edition. Springer. ISBN 3-540-43466-6
• Cristian S. Calude, Michael J. Dinneen, and Chi-Kou Shu. Computing a Glimpse of Randomness.
• R. Downey, and D. Hirschfeldt (2010), Algorithmic Randomness and Complexity, monograph in preparation, Springer-Verlag. Preliminary version can be found online.
• Ming Li and Paul Vitányi (1997). An Introduction to Kolmogorov Complexity and Its Applications. Springer. Introduction chapter full-text.
• Jürgen Schmidhuber (2000). Algorithmic Theories of Everything (arXiv: quant-ph/ 0011122). Journal reference: J. Schmidhuber (2002). Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science 13(4):587-612.
http://physics.stackexchange.com/questions/tagged/energy+power
# Tagged Questions
### Mechanics Question: Energy, Work and Power
I'm a pure mathematician by trade, and have been trying to teach myself A-level mechanics. (This is not homework, it is purely self-study.) I've been working through the exercises and have come up ...
### Confused about unit of kilowatt hours
So I am a little confused on how to deal with the Kilowatt hours unit of power, I have only ever used Kilowatts and I have to design a residential fuel cell used as a backup generator for one day. ...
### What are the properties and impediments of a liquid air fueled engine?
I recently came across a very interesting article that suggested the possibility of using liquified gases like air, nitrogen, or oxygen as a power source for cars. It appears that this company is ...
### Energy efficiency Vs power efficiency
I want to understand if there can be any real difference between the requirement of Energy efficiency and the requirement of power efficiency for a physical system. For example for CMOS-based digital ...
### Inefficiency Comparison of Car Air Conditioning vs. Open Windows
On a recent long, hot journey in Spain, I was pondering which was the most efficient way of cooling the car. Which of these would be the most effective? Switching on the air conditioning, thereby ...
### Do you think it’s possible to make a power plant of the described kind?
We are working on a science project and try to engineer a power plant of a new kind. It is called Air HES (air hydroelectric station). The idea is described on our website. Do you think it is ...
### How are the CPU power and temperature caculated/estimated?
From Wikipedia The power consumed by a CPU, is approximately proportional to CPU frequency, and to the square of the CPU voltage: $$P = C V^2 f$$ (where C is capacitance, f is ...
### is it possible to gain from charging cold battery
Is it possible to get higher load of energy into a battery if you charge it when its cold eg. +9C? the battery is emptier, but does it let the power in? Is it safe?
### How can you measure battery output to see if the wattage is accurate? [closed]
When you have a battery and it shows you the milliamperehours and volts. You can use those to calculate the wattage. How can you test the battery to see if its wattage is accurate or if the company ...
### What are some ways that humans could have influence over what sequence a star was in?
How would a society go about either preventing our sun in its primary sequence from going into a Red Giant a billion years from now? Or perhaps, accelerating the process of going from main sequence ...
### Lifetime of battery
If I directly connect two terminals of 3V battery (negative to positive) using copper wire, would it lose all its charge faster compared to another 3V battery that is used to lighten a 1.5V bulb?
### What is a single word that describes the idea of the second time derivative of energy?
I think about position, its time derivative speed, and its second time derivative, acceleration. I would like to identify a single word that can be used as a handle for the second time derivative of ...
### A mixture of motion, circular motion and energy, please help, i didn't understand this problem, can anyone help? [closed]
A 20 W electric motor is being used to catapult a small ball of mass 0.5 kg, through a circular loop. The motor wind a thin thread on a pulley. The thread compresses the spring on which the ball ...
### Charging 12V 150Ah battery
I want to charge a 12V battery of 150Ah with a solar panel. The solar panel specs is 12V, 25 Watt. Can anyone please provide me how to calculate that how much time it will take to charge the battery? ...
### What type of solar energy technology has the most future potential?
In terms of dollars per watt, using theoretical efficiency limits, what technology holds the most promise to become the primary solar energy capture technology? My hunch is carbon-based modules, ...
### How to calculate fuel consumption of car (mpg) from speed and accleration knowing mass, drag coeff and rolling resistance?
How can I calculate the current (instantaneous) mpg of my car if I know the speed and acceleration of the car? From reading various answers for the "car going level or up/down hill" question asked ...
http://mathoverflow.net/questions/78045/inducing-factor-representations
## Inducing factor representations
I have an infinite dimensional Lie group $G$ of the form $\varinjlim G_n$, for example $SU(9,\infty)$ or $Sp(\infty,R)$. I also have a closed subgroup $P$ of $G$ and a factor representation $\chi$ of $P$ (say on the Hilbert space $H_\chi$). I can use the universal enveloping algebra $U(\mathfrak g)$ to form the induced representation $\pi = Ind_P^G(\chi)$ of $G$ on $U(\mathfrak g) \otimes_P\, H_\chi$.
Has anyone studied the properties of $\pi$? For example: if $\chi$ is factorial of type $II_1$, can $\pi$ be described in terms of factor representations? Any pointers or references would be appreciated.
How did you get the letters so big? – Will Jagy Oct 13 2011 at 19:20
By putting ======= under the text. I doubt it was his intention. – darij grinberg Oct 13 2011 at 19:31
I liked it, my near vision keeps getting worse and worse, it may be time for full-time glasses, not just reading glasses for books and the computer screen. I can see how it would not do for a long post. – Will Jagy Oct 13 2011 at 20:14
Welcome to MathOverflow! – Noah Snyder Oct 14 2011 at 4:07
http://math.stackexchange.com/questions/122104/how-to-write-a-first-order-formula-with-unspecific-parameter
# How to write a first order formula with unspecific parameter
Excuse me for the awkward wording. I'm new to logic. What I really mean is this:
Consider the number theory that spawns from the structure $N=\{\mathbb{N},+,\cdot\}$ (equipped with the usual interpretation) using first order logic. I understand that the formal sentences generated this way are capable of expressing all elementary number theory statements in the sense that the statement "$x$ divides $y$" can be expressed as $\exists z(x\cdot z=y)$, so that $N\models \exists z(x\cdot z=y)$ iff $x$ divides $y$.
But there's a kind of statement that I don't know how to express in the system, like "$x$ can be written as the sum of cubes". It is easy to write "$x$ can be written as the sum of $n$ cubes" for a specific $n$ as something like $\exists z_1 z_2 \ldots z_n(x=z_1\cdot z_1\cdot z_1+z_2\cdot z_2\cdot z_2+\ldots+z_n\cdot z_n\cdot z_n)$. But in the first statement it seems one can't follow suit because the "$n$" there is not specific. How can we quantify the unspecific parameter? What are tricks for expressing it?
Edit: By the way, "...that the formal sentences generated this way are capable of expressing all elementary number theory statements..." Is this true?
yes you are right. thank you for pointing that out – Eric Mar 21 '12 at 3:15
## 1 Answer
The trick is to encode sequences of natural numbers by a single natural number. For example, the sequence $x_1, ..., x_n$ can be encoded by the number $p_1^{x_1 + 1} \cdot ... \cdot p_n^{x_n + 1}$, where $p_i$ is the $i$-th prime number. Then you express properties of such an encoding. That is, you construct formulas to express some of the following relations and functions
• $seq(x)$ - $x$ encodes a sequence
• $lh(x)$ - the length of the sequence encoded by $x$
• $(x)_i$ - $i$-th element of the sequence encoded by $x$.
Then the statement ''$x$ can be written as the sum of cubes'' can be expressed by something like this $$\exists y (seq(y) \land x = sum(y) \land \forall (i < lh(y))\, cube((y)_i))$$ where $x = sum(y)$ is the following formula $$\exists z (seq(z) \land lh(z) = lh(y) \land (z)_0 = (y)_0 \land x = (z)_{lh(z) -1} \land \forall (0 < i < lh(z))\, (z)_i = (z)_{i-1} + (y)_i)$$ which expresses the relation "$x$ is the sum of the elements of the sequence encoded by $y$".
But what is $x=sum(y)$? – Eric Mar 20 '12 at 1:17
Edited the answer to explain that. – Levon Haykazyan Mar 20 '12 at 20:54
$cube((x)_i)$ should be $cube((y)_i)$, right? – Eric Mar 21 '12 at 1:25
yes, that's right, thanks. Edited to fix that. – Levon Haykazyan Mar 21 '12 at 14:23
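A small executable sketch of the prime-power coding discussed above (illustrative helper names, hardcoded small primes; plain Python, no libraries assumed):

```python
PRIMES = [2, 3, 5, 7, 11, 13]  # enough primes for short demo sequences

def encode(seq):
    """Code (x_1, ..., x_n) as p_1**(x_1+1) * ... * p_n**(x_n+1)."""
    code = 1
    for p, x in zip(PRIMES, seq):
        code *= p ** (x + 1)
    return code

def lh(code):
    """Length of the coded sequence (valid codes use consecutive primes)."""
    n = 0
    while n < len(PRIMES) and code % PRIMES[n] == 0:
        n += 1
    return n

def elem(code, i):
    """i-th element (0-based): exponent of the (i+1)-th prime, minus 1."""
    e = 0
    while code % PRIMES[i] == 0:
        code //= PRIMES[i]
        e += 1
    return e - 1

c = encode([3, 0, 5])  # 2**4 * 3**1 * 5**6 = 750000
print(c, lh(c), [elem(c, i) for i in range(lh(c))])  # 750000 3 [3, 0, 5]
```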
http://www.reference.com/browse/wiki/Stress_(physics)
# Stress (physics)
Stress is a measure of the average amount of force exerted per unit area. It is a measure of the intensity of the total internal forces acting within a body across imaginary internal surfaces, as a reaction to external applied forces and body forces. It was introduced into the theory of elasticity by Cauchy around 1822. Stress is a concept based on that of a continuum. In general, stress is expressed as

$$\sigma = \frac{F}{A},$$

where
$\sigma$ is the average stress, also called engineering or nominal stress, and
$F$ is the force acting over the area $A$.
The SI unit for stress is the pascal (symbol Pa), which is a shorthand name for one newton (force) per square metre (unit area). The unit for stress is the same as that of pressure, which is also a measure of force per unit area. Engineering quantities are usually measured in megapascals (MPa) or gigapascals (GPa). In Imperial units, stress is expressed in pounds-force per square inch (psi) or kilopounds-force per square inch (ksi).
As with force, stress cannot be measured directly but is usually inferred from measurements of strain and knowledge of elastic properties of the material. Devices capable of measuring stress indirectly in this way are strain gauges and piezoresistors.
## Stress as a tensor
In its full form, linear stress is a rank-two tensor quantity, and may be represented as a 3x3 matrix. A tensor may be seen as a linear vector operator - it takes a given vector and produces another vector as a result. In the case of the stress tensor $\sigma_{ij}$, it takes the vector normal to any area element and yields the force (or "traction") acting on that area element. In matrix notation:

$$F_i=\sum_{j=1}^3 \sigma_{ij} A_j$$

where $A_j$ are the components of the vector normal to a surface area element with a length equal to the area of the surface element, and $F_i$ are the components of the force vector (or traction vector) acting on that element. Using index notation, we can eliminate the summation sign, since repeated indices are implicitly summed. Thus:

$$F_i=\sigma_{ij} A_j$$
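A minimal numerical sketch of this contraction (made-up numbers, assuming NumPy is available):

```python
import numpy as np

# Stress tensor sigma_ij and area vector A_j in arbitrary consistent units.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, 30.0]])
A = np.array([0.0, 0.0, 2.0])  # surface normal scaled by the element's area

F = sigma @ A                  # F_i = sigma_ij A_j
print(F)                       # -> [ 0. 10. 60.]
```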
Just as is the case with a vector (which is actually a rank-one tensor), the matrix components of a tensor depend upon the particular coordinate system chosen. As with a vector, there are certain invariants associated with the stress tensor, whose value does not depend upon the coordinate system chosen (or the area element upon which the stress tensor operates). For a vector, there is only one invariant - the length. For a tensor, there are three - the eigenvalues of the stress tensor, which are called the principal stresses. It is important to note that the only physically significant parameters of the stress tensor are its invariants, since they are not dependent upon the choice of the coordinate system used to describe the tensor.
If we choose a particular surface area element, we may divide the force vector by the area (stress vector) and decompose it into two parts: a normal component acting normal to the stressed surface, and a shear component, acting parallel to the stressed surface. An axial stress is a normal stress produced when a force acts parallel to the major axis of a body, e.g. a column. If the forces pull the body, producing an elongation, the axial stress is termed tensile stress. If, on the other hand, the forces push the body, reducing its length, the axial stress is termed compressive stress. Bending stresses, e.g. produced on a bent beam, are a combination of tensile and compressive stresses. Torsional stresses, e.g. produced on twisted shafts, are shearing stresses.
In the above description, little distinction is drawn between the "stress" and the "stress vector" since the body which is being stressed provides a particular coordinate system in which to discuss the effects of the stress. The distinction between "normal" and "shear" stresses is slightly different when considered independently of any coordinate system. The stress tensor yields a stress vector for a surface area element at any orientation, and this stress vector may be decomposed into normal and shear components. The normal part of the stress vector averaged over all orientations of the surface element yields an invariant value, and is known as the hydrostatic pressure. Mathematically it is equal to the average value of the principal stresses (or, equivalently, the trace of the stress tensor divided by three). The normal stress tensor is then the product of the hydrostatic pressure and the unit tensor. Subtracting the normal stress tensor from the stress tensor gives what may be called the shear tensor. These two quantities are true tensors with physical significance, and their nature is independent of any coordinate system chosen to describe them. In fact, the extended Hooke's law is basically the statement that each of these two tensors is proportional to its strain tensor counterpart, and the two constants of proportionality (elastic moduli) are independent of each other. Note that in rheology the normal stress tensor is called extensional stress, and in acoustics it is called longitudinal stress.
Solids, liquids and gases have stress fields. Static fluids support normal stress but will flow under shear stress. Moving viscous fluids can support shear stress (dynamic pressure). Solids can support both shear and normal stress, with ductile materials failing under shear and brittle materials failing under normal stress. All materials have temperature-dependent variations in stress-related properties, and non-Newtonian materials have rate-dependent variations.
## Cauchy's stress principle
Cauchy's stress principle asserts that when a continuum body is acted on by forces, i.e. surface forces and body forces, there are internal reactions (forces) throughout the body acting between the material points. Based on this principle, Cauchy demonstrated that the state of stress at a point in a body is completely defined by the nine components $\sigma_{ij}$ of a second-order Cartesian tensor called the Cauchy stress tensor, given by

$$\sigma_{ij}= \begin{bmatrix} \mathbf{T}^{(\mathbf{e}_1)} \\ \mathbf{T}^{(\mathbf{e}_2)} \\ \mathbf{T}^{(\mathbf{e}_3)} \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix} \equiv \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix} \equiv \begin{bmatrix} \sigma_x & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_y & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_z \end{bmatrix}$$
where
$\mathbf{T}^{(\mathbf{e}_1)}$, $\mathbf{T}^{(\mathbf{e}_2)}$, and $\mathbf{T}^{(\mathbf{e}_3)}$ are the stress vectors associated with the planes perpendicular to the coordinate axes,
$\sigma_{11}$, $\sigma_{22}$, and $\sigma_{33}$ are normal stresses, and
$\sigma_{12}$, $\sigma_{13}$, $\sigma_{21}$, $\sigma_{23}$, $\sigma_{31}$, and $\sigma_{32}$ are shear stresses.
The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a 6-dimensional vector of the form

$$\boldsymbol{\sigma} = \begin{bmatrix}\sigma_1 & \sigma_2 & \sigma_3 & \sigma_4 & \sigma_5 & \sigma_6 \end{bmatrix}^T \equiv \begin{bmatrix}\sigma_{11} & \sigma_{22} & \sigma_{33} & \sigma_{23} & \sigma_{31} & \sigma_{12} \end{bmatrix}^T$$

The Voigt notation is used extensively in representing stress-strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software.
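A small sketch of packing a symmetric stress tensor into this Voigt ordering (assuming NumPy; `to_voigt` is an illustrative name):

```python
import numpy as np

def to_voigt(s):
    """Return [s11, s22, s33, s23, s31, s12] for a symmetric 3x3 tensor."""
    return np.array([s[0, 0], s[1, 1], s[2, 2], s[1, 2], s[2, 0], s[0, 1]])

sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, 30.0]])
print(to_voigt(sigma))  # -> [50. 20. 30.  5.  0. 10.]
```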
Derivation of Cauchy's stress tensor

Consider a body subjected to surface forces $\mathbf{F}$ and body forces $\mathbf{f}$ per unit of volume, with an imaginary plane dividing the body into two segments (Figure 1). A small area $\Delta A$ in one of the segments, passing through a point $P$, and with a normal vector $\mathbf{n}$, is acted upon by a force $\Delta F$ resulting from the action of the material on one side of the area (right segment) onto the other side (left segment). The distribution of force on $\Delta A$ is, however, not always uniform, as there may be a moment $\Delta M$ at $P$ due to the force $\Delta F$, as shown in the figure. As $\Delta A$ becomes very small and tends to zero, the ratio $\Delta F / \Delta A$ becomes $dF/dA$, and the moment $\Delta M$ vanishes. The vector $dF/dA$ is defined as the stress vector $\mathbf{T}^{(\mathbf{n})}$ at point $P$ associated with a plane with normal vector $\mathbf{n}$:

$$\mathbf{T}^{(\mathbf{n})}= \lim_{\Delta A \to 0} \frac{\Delta F}{\Delta A} = \frac{dF}{dA}$$

By Newton's third law, the stress vectors acting upon opposite sides of the same surface are equal in magnitude and of opposite direction. Thus,

$$- \mathbf{T}^{(\mathbf{n})}= \mathbf{T}^{(- \mathbf{n})}$$

The stress vector, not necessarily being perpendicular to the plane on which it acts, can be resolved into two components: one normal to the plane, called normal stress, and the other parallel to this plane, called the shearing stress. The latter can be further decomposed into two mutually perpendicular vectors.

The state of stress at a point would be defined by all the stress vectors $\mathbf{T}^{(\mathbf{n})}$ associated with all planes (an infinite number of planes) that pass through that point. However, by just knowing the stress vectors on three mutually perpendicular planes, the stress vector on any plane passing through that point can be found through coordinate transformation equations. Assuming a material element (Figure 2) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. $\mathbf{T}^{(\mathbf{e}_1)}$, $\mathbf{T}^{(\mathbf{e}_2)}$, and $\mathbf{T}^{(\mathbf{e}_3)}$, can be decomposed into components in the direction of the three coordinate axes:

$$\mathbf{T}^{(\mathbf{e}_1)}= \sigma_{11} \mathbf{e}_1 + \sigma_{12} \mathbf{e}_2 + \sigma_{13} \mathbf{e}_3$$
$$\mathbf{T}^{(\mathbf{e}_2)}= \sigma_{21} \mathbf{e}_1 + \sigma_{22} \mathbf{e}_2 + \sigma_{23} \mathbf{e}_3$$
$$\mathbf{T}^{(\mathbf{e}_3)}= \sigma_{31} \mathbf{e}_1 + \sigma_{32} \mathbf{e}_2 + \sigma_{33} \mathbf{e}_3$$
In index notation this is

$$\mathbf{T}^{(\mathbf{e}_i)}= \sigma_{ij} \mathbf{e}_j$$

The nine components $\sigma_{ij}$ of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which completely defines the state of stress at a point and is given by

$$\sigma_{ij}= \begin{bmatrix} \mathbf{T}^{(\mathbf{e}_1)} \\ \mathbf{T}^{(\mathbf{e}_2)} \\ \mathbf{T}^{(\mathbf{e}_3)} \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix} \equiv \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix} \equiv \begin{bmatrix} \sigma_x & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_y & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_z \end{bmatrix}$$

where
$\sigma_{11}$, $\sigma_{22}$, and $\sigma_{33}$ are normal stresses, and
$\sigma_{12}$, $\sigma_{13}$, $\sigma_{21}$, $\sigma_{23}$, $\sigma_{31}$, and $\sigma_{32}$ are shear stresses.

The first index $i$ indicates that the stress acts on a plane normal to the $x_i$ axis, and the second index $j$ denotes the direction in which the stress acts. A stress component is positive if it acts in the positive direction of the coordinate axes, and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction.
### Relationship between stress vector and stress tensor

The stress vector $\mathbf{T}^{(\mathbf{n})}$ at any point associated with a plane of normal vector $\mathbf{n}$ can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components of the stress tensor $\sigma_{ij}$. In tensor form this is:

$$T_j^{(n)}= \sigma_{ij}n_i$$
Derivation of the stress vector as a function of the stress tensor

For this, we consider a tetrahedron with three faces oriented in the coordinate planes, and with an infinitesimal area $dA$ oriented in an arbitrary direction specified by a normal vector $\mathbf{n}$ (Figure 3). The stress vector on this plane is denoted by $\mathbf{T}^{(\mathbf{n})}$. The stress vectors acting on the faces of the tetrahedron are denoted as $\mathbf{T}^{(\mathbf{e}_1)}$, $\mathbf{T}^{(\mathbf{e}_2)}$, and $\mathbf{T}^{(\mathbf{e}_3)}$, and are by definition the components of the stress tensor $\sigma_{ij}$. From equilibrium of forces, i.e. Newton's second law, we have

$$\mathbf{T}^{(\mathbf{n})}\,dA - \mathbf{T}^{(\mathbf{e}_1)}\,dA_1 - \mathbf{T}^{(\mathbf{e}_2)}\,dA_2 - \mathbf{T}^{(\mathbf{e}_3)}\,dA_3 = \rho \left(\frac{h}{3}\,dA \right) \mathbf{a}$$

where the right-hand side represents the body forces acting on the tetrahedron: $\rho$ is the density, $\mathbf{a}$ is the acceleration, and $h$ is the height of the tetrahedron, considering the plane $\mathbf{n}$ as the base. The areas of the faces of the tetrahedron perpendicular to the axes can be found by projecting $dA$ onto each face (dot product):

$$dA_1= \left(\mathbf{n} \cdot \mathbf{e}_1 \right)dA = n_1\,dA$$
$$dA_2= \left(\mathbf{n} \cdot \mathbf{e}_2 \right)dA = n_2\,dA$$
$$dA_3= \left(\mathbf{n} \cdot \mathbf{e}_3 \right)dA = n_3\,dA$$

Thus, taking the limit when $h \to 0$ and replacing the previous equations, we have

$$\begin{aligned} \mathbf{T}^{(\mathbf{n})} &= \mathbf{T}^{(\mathbf{e}_1)}n_1 + \mathbf{T}^{(\mathbf{e}_2)}n_2 + \mathbf{T}^{(\mathbf{e}_3)}n_3 \\ &= \sum_{i=1}^3 \mathbf{T}^{(\mathbf{e}_i)}n_i \\ &= \left(\sigma_{ij}\mathbf{e}_j \right)n_i \\ &= \sigma_{ij}n_i\mathbf{e}_j \end{aligned}$$

or, equivalently,

$$T_j^{(n)}= \sigma_{ij}n_i$$

In matrix form we have

$$\begin{bmatrix} T^{(n)}_1 & T^{(n)}_2 & T^{(n)}_3 \end{bmatrix}=\begin{bmatrix} n_1 & n_2 & n_3 \end{bmatrix} \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}$$

This equation expresses the components of the stress vector acting on an arbitrary plane with normal vector $\mathbf{n}$ at a given point in terms of the components of the stress tensor, $\sigma_{ij}$, at that point.
### Transformation rule of the stress tensor

It can be shown that the stress tensor is a second-order tensor; that is, under a change of the coordinate system from an $x_i$ system to an $x'_i$ system, the components $\sigma_{ij}$ in the initial system are transformed into the components $\sigma'_{ij}$ in the new system according to the tensor transformation rule:

$$\sigma'_{ij}=a_{im}a_{jn}\sigma_{mn}$$

where $a_{ij}$ is a rotation matrix. In matrix form this is

$$\begin{bmatrix} \sigma'_{11} & \sigma'_{12} & \sigma'_{13} \\ \sigma'_{21} & \sigma'_{22} & \sigma'_{23} \\ \sigma'_{31} & \sigma'_{32} & \sigma'_{33} \end{bmatrix}=\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}\begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{bmatrix}$$

An easy visualization of this transformation for 2D and 3D stresses under simple rotations is Mohr's circle.
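A quick numerical check of this rule (a sketch, assuming NumPy; the 30-degree rotation about the $z$ axis is an arbitrary choice):

```python
import numpy as np

theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
a = np.array([[  c,   s, 0.0],   # rotation matrix a_ij
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  0.0],
                  [ 0.0,  0.0, 30.0]])

sigma_p = a @ sigma @ a.T        # sigma'_ij = a_im a_jn sigma_mn
print(np.trace(sigma), np.trace(sigma_p))  # traces agree (an invariant)
```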
### Normal and shear stresses

The magnitude of the normal stress component, $\sigma_n$, of any stress vector $\mathbf{T}^{(\mathbf{n})}$ acting on an arbitrary plane with normal vector $\mathbf{n}$ at a given point, in terms of the components of the stress tensor $\sigma_{ij}$, is the dot product of the stress vector and the normal vector, thus

$$\begin{aligned} \sigma_n &= \mathbf{T}^{(\mathbf{n})}\cdot \mathbf{n} \\ &=T^{(n)}_i n_i \\ &=\sigma_{ij}n_i n_j \end{aligned}$$

The magnitude of the shear stress component, $\tau_n$, can then be found using the Pythagorean theorem, thus

$$\begin{aligned} \tau_n &=\sqrt{(T^{(n)})^2-\sigma_n^2} \\ &= \sqrt{T^{(n)}_i T^{(n)}_i-\sigma_n^2} \end{aligned}$$
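A numerical sketch of this decomposition (made-up numbers, assuming NumPy):

```python
import numpy as np

sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  0.0],
                  [ 0.0,  0.0, 30.0]])
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # unit normal of the plane

T = sigma @ n                        # traction vector, T_i = sigma_ij n_j
sigma_n = T @ n                      # normal component, sigma_ij n_i n_j
tau_n = np.sqrt(T @ T - sigma_n**2)  # shear component via Pythagoras
print(sigma_n, tau_n)                # -> 45.0 15.0
```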
## Equilibrium equations and symmetry of the stress tensor
When a body is in equilibrium the components of the stress tensor in every point of the body satisfy the equilibrium equations,
$$
sigma_{ji,j}+ F_i = 0
Derivation of equilibrium equations
Consider a continuum body (see Figure 4) occupying a volume $V$, having a surface area $S$, with defined traction or surface forces $T_i^{(n)}$ acting on every point of the body surface, and body forces $F_i$ per unit of volume on every point within the volume $V$. If the body is in equilibrium the resultant force acting on the volume is zero, thus:
$\int_S T_i^{(n)}\, dS + \int_V F_i\, dV = 0$
By definition the stress vector is $T_i^{(n)} = \sigma_{ji}n_j$, then
$\int_S \sigma_{ji}n_j\, dS + \int_V F_i\, dV = 0$
Using Gauss's divergence theorem to convert a surface integral to a volume integral gives
$\int_V \sigma_{ji,j}\, dV + \int_V F_i\, dV = 0$
$\int_V \left(\sigma_{ji,j} + F_i\right) dV = 0$
For an arbitrary volume the integrand vanishes, and we have the equilibrium equations
$\sigma_{ji,j} + F_i = 0$
At the same time, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, i.e.
$\sigma_{ij} = \sigma_{ji}$
Derivation of symmetry of the stress tensor
Summing moments about point O (Figure 4) the resultant moment is zero as the body is in equilibrium. Thus,
$M_O = \int_S (\mathbf{r}\times\mathbf{T})\, dS + \int_V (\mathbf{r}\times\mathbf{F})\, dV = 0$
$0 = \int_S \varepsilon_{ijk}x_j T_k^{(n)}\, dS + \int_V \varepsilon_{ijk}x_j F_k\, dV$
where $\mathbf{r}$ is the position vector and is expressed as
$\mathbf{r} = x_j\mathbf{e}_j$
Knowing that $T_k^{(n)} = \sigma_{mk}n_m$ and using Gauss's divergence theorem to change from a surface integral to a volume integral, we have
$0 = \int_S \varepsilon_{ijk}x_j\sigma_{mk}n_m\, dS + \int_V \varepsilon_{ijk}x_j F_k\, dV = \int_V \left(\varepsilon_{ijk}x_j\sigma_{mk}\right)_{,m} dV + \int_V \varepsilon_{ijk}x_j F_k\, dV = \int_V \left(\varepsilon_{ijk}x_{j,m}\sigma_{mk} + \varepsilon_{ijk}x_j\sigma_{mk,m}\right) dV + \int_V \varepsilon_{ijk}x_j F_k\, dV = \int_V \varepsilon_{ijk}x_{j,m}\sigma_{mk}\, dV + \int_V \varepsilon_{ijk}x_j\left(\sigma_{mk,m} + F_k\right) dV$
The second integral is zero as it contains the equilibrium equations. This leaves the first integral, where $x_{j,m} = \delta_{jm}$, therefore
$\int_V \varepsilon_{ijk}\sigma_{jk}\, dV = 0$
For an arbitrary volume V, we then have
$\varepsilon_{ijk}\sigma_{jk} = 0$
which is satisfied at every point within the body. Expanding this equation we have
$\sigma_{12}=\sigma_{21}$, $\sigma_{23}=\sigma_{32}$, and $\sigma_{13}=\sigma_{31}$
or in general
$\sigma_{ij} = \sigma_{ji}$
This proves that the stress tensor is symmetric.
However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This is also the case when the Knudsen number is close to one, $K_n \rightarrow 1$, e.g. in a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers.
## Principal stresses and stress invariants
The components $\sigma_{ij}$ of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the length of the vector is a physical quantity (a scalar) and is independent of the coordinate system chosen to represent the vector. Similarly, every second rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors. When the coordinate system is chosen to coincide with the eigenvectors of the stress tensor, the stress tensor is represented by a diagonal matrix:
$\sigma_{ij} = \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix}$
where $\sigma_1$, $\sigma_2$, and $\sigma_3$ are the principal stresses. These principal stresses may be combined to form three other commonly used invariants, $I_1$, $I_2$, and $I_3$, which are the first, second and third stress invariants, respectively. The first and third invariant are the trace and determinant respectively, of the stress tensor. Thus, we have
$I_1 = \sigma_1 + \sigma_2 + \sigma_3$
$I_2 = \sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1$
$I_3 = \sigma_1\sigma_2\sigma_3$
Because of its simplicity, working and thinking in the principal coordinate system is often very useful when considering the state of the elastic medium at a particular point.
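Numerically, the principal stresses are simply the eigenvalues of the (symmetric) stress matrix. A minimal sketch, with an assumed stress state:

```python
import numpy as np

sigma = np.array([[ 50.0, 30.0,  20.0],
                  [ 30.0, -20.0, -10.0],
                  [ 20.0, -10.0,  10.0]])   # assumed symmetric stress state, MPa

# Principal stresses and principal directions.
vals, vecs = np.linalg.eigh(sigma)          # eigenvalues in ascending order

# The three invariants, computable in any coordinate system.
I1 = np.trace(sigma)
I2 = 0.5 * (np.trace(sigma)**2 - np.trace(sigma @ sigma))
I3 = np.linalg.det(sigma)
print(vals)        # sigma_3 <= sigma_2 <= sigma_1
print(I1, I2, I3)
```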
Derivation of principal stresses and stress invariants
At every point in a stressed body there are at least three planes, called principal planes, with normal vectors $\mathbf{n}$, called principal directions, where the corresponding stress vector is parallel to (in the same direction as) the normal vector, and where there are no shear stresses $\tau_n$. Thus,
$\mathbf{T}^{(\mathbf{n})} = \lambda \mathbf{n} = \sigma_n \mathbf{n}$
where $\lambda$ is a constant of proportionality, and in this particular case corresponds to the magnitudes $\sigma_n$ of the normal stress vectors or principal stresses.
Knowing that $T_i^{(n)} = \sigma_{ij}n_j$ and $n_i = \delta_{ij}n_j$, we have
$T_i^{(n)} = \lambda n_i$
$\sigma_{ij}n_j = \lambda n_i$
$\sigma_{ij}n_j - \lambda n_i = 0$
$\left(\sigma_{ij} - \lambda\delta_{ij}\right)n_j = 0$
This is a homogeneous system, i.e. equal to zero, of three linear equations where $n_j$ are the unknowns. To obtain a nontrivial (non-zero) solution for $n_j$, the determinant of the coefficient matrix must be equal to zero, i.e. the system is singular. Thus,
$\left|\sigma_{ij} - \lambda\delta_{ij}\right| = \begin{vmatrix} \sigma_{11} - \lambda & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} - \lambda & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} - \lambda \end{vmatrix} = 0$
Expanding the determinant leads to the characteristic equation
$\left|\sigma_{ij} - \lambda\delta_{ij}\right| = -\lambda^3 + I_1\lambda^2 - I_2\lambda + I_3 = 0$
where
$I_1 = \sigma_{11} + \sigma_{22} + \sigma_{33} = \sigma_{kk}$
$I_2 = \begin{vmatrix} \sigma_{22} & \sigma_{23} \\ \sigma_{32} & \sigma_{33} \end{vmatrix} + \begin{vmatrix} \sigma_{11} & \sigma_{13} \\ \sigma_{31} & \sigma_{33} \end{vmatrix} + \begin{vmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{vmatrix} = \sigma_{11}\sigma_{22} + \sigma_{22}\sigma_{33} + \sigma_{11}\sigma_{33} - \sigma_{12}^2 - \sigma_{23}^2 - \sigma_{13}^2 = \frac{1}{2}\left(\sigma_{ii}\sigma_{jj} - \sigma_{ij}\sigma_{ji}\right)$
$I_3 = \det(\sigma_{ij}) = \sigma_{11}\sigma_{22}\sigma_{33} + 2\sigma_{12}\sigma_{23}\sigma_{31} - \sigma_{12}^2\sigma_{33} - \sigma_{23}^2\sigma_{11} - \sigma_{13}^2\sigma_{22}$
$I_1$, $I_2$ and $I_3$ are the first, second, and third stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen.
The characteristic equation has three real roots $\lambda$, i.e. not imaginary, due to the symmetry of the stress tensor. The three roots $\lambda_1 = \sigma_1$, $\lambda_2 = \sigma_2$, and $\lambda_3 = \sigma_3$ are the eigenvalues or principal stresses, and they are the roots of the characteristic polynomial. For each eigenvalue, there is a non-trivial solution for $n_j$ in the equation $\left(\sigma_{ij} - \lambda\delta_{ij}\right)n_j = 0$. These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation of the coordinate system.
If we choose a coordinate system with axes oriented to the principal directions, then the normal stresses will be the principal stresses. Thus, we have
$I_1 = \sigma_1 + \sigma_2 + \sigma_3$
$I_2 = \sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1$
$I_3 = \sigma_1\sigma_2\sigma_3$
## Stress deviator tensor
The stress tensor $\sigma_{ij}$ can be expressed as the sum of two other stress tensors:
1. a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, $p\delta_{ij}$, which tends to change the volume of the stressed body; and
2. a deviatoric component called the stress deviator tensor, $s_{ij}$, which tends to distort it.
$\sigma_{ij} = s_{ij} + p\delta_{ij}$
where $p$ is the mean stress given by
$p = \frac{\sigma_{kk}}{3} = \frac{\sigma_{11}+\sigma_{22}+\sigma_{33}}{3} = \tfrac{1}{3}I_1$
The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the stress tensor:
$s_{ij} = \sigma_{ij} - \frac{\sigma_{kk}}{3}\delta_{ij}$
$\left[\begin{matrix} s_{11} & s_{12} & s_{13} \\ s_{21} & s_{22} & s_{23} \\ s_{31} & s_{32} & s_{33} \end{matrix}\right] = \left[\begin{matrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{matrix}\right] - \left[\begin{matrix} p & 0 & 0 \\ 0 & p & 0 \\ 0 & 0 & p \end{matrix}\right] = \left[\begin{matrix} \sigma_{11}-p & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22}-p & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33}-p \end{matrix}\right]$
### Invariants of the stress deviator tensor
As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor $s_{ij}$ are the same as the principal directions of the stress tensor $\sigma_{ij}$. Thus, the characteristic equation is
$\left| s_{ij} - \lambda\delta_{ij}\right| = \lambda^3 - J_1\lambda^2 - J_2\lambda - J_3 = 0$
where $J_1$, $J_2$ and $J_3$ are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of $s_{ij}$ or its principal values $s_1$, $s_2$, and $s_3$, or alternatively, as a function of $\sigma_{ij}$ or its principal values $\sigma_1$, $\sigma_2$, and $\sigma_3$. Thus,
$J_1 = s_{kk} = 0$
$J_2 = \tfrac{1}{2}s_{ij}s_{ji} = -s_1 s_2 - s_2 s_3 - s_3 s_1 = \tfrac{1}{6}\left[(\sigma_{11} - \sigma_{22})^2 + (\sigma_{22} - \sigma_{33})^2 + (\sigma_{33} - \sigma_{11})^2\right] + \sigma_{12}^2 + \sigma_{23}^2 + \sigma_{31}^2 = \tfrac{1}{6}\left[(\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2\right] = \tfrac{1}{3}I_1^2 - I_2$
$J_3 = \det(s_{ij}) = \tfrac{1}{3}s_{ij}s_{jk}s_{ki} = s_1 s_2 s_3 = \tfrac{2}{27}I_1^3 - \tfrac{1}{3}I_1 I_2 + I_3$
Because $s_{kk} = 0$, the stress deviator tensor is in a state of pure shear.
A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as
$\sigma_e = \sqrt{3J_2} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]}$
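A minimal sketch of the hydrostatic/deviatoric split and the von Mises stress, with an assumed stress state:

```python
import numpy as np

sigma = np.array([[100.0, 40.0,  0.0],
                  [ 40.0, 60.0,  0.0],
                  [  0.0,  0.0, 20.0]])   # assumed stress state, MPa

p = np.trace(sigma) / 3.0                 # mean (hydrostatic) stress
s = sigma - p * np.eye(3)                 # stress deviator tensor, trace(s) = 0

J2 = 0.5 * np.tensordot(s, s)             # J2 = (1/2) s_ij s_ji for symmetric s
sigma_vm = np.sqrt(3.0 * J2)              # von Mises equivalent stress
print(p, J2, sigma_vm)
```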
## Octahedral stresses
Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes, i.e. having direction cosines equal to $|1/\sqrt{3}|$, is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called octahedral normal stress $\sigma_{oct}$ and octahedral shear stress $\tau_{oct}$, respectively.
Knowing that the stress tensor of point O (Figure 6) in the principal axes is
$\sigma_{ij} = \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix}$
the stress vector on an octahedral plane is then given by:
$\mathbf{T}_{oct}^{(\mathbf{n})} = \sigma_{ij}n_i\mathbf{e}_j = \sigma_1 n_1\mathbf{e}_1 + \sigma_2 n_2\mathbf{e}_2 + \sigma_3 n_3\mathbf{e}_3 = \tfrac{1}{\sqrt{3}}\left(\sigma_1\mathbf{e}_1 + \sigma_2\mathbf{e}_2 + \sigma_3\mathbf{e}_3\right)$
The normal component of the stress vector at point O associated with the octahedral plane is
$\sigma_{oct} = T^{(n)}_i n_i = \sigma_{ij}n_i n_j = \sigma_1 n_1 n_1 + \sigma_2 n_2 n_2 + \sigma_3 n_3 n_3 = \tfrac{1}{3}(\sigma_1 + \sigma_2 + \sigma_3)$
which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes. The shear stress on the octahedral plane is then
$\tau_{oct} = \sqrt{T^{(n)}_i T^{(n)}_i - \sigma_n^2} = \left[\tfrac{1}{3}(\sigma_1^2 + \sigma_2^2 + \sigma_3^2) - \tfrac{1}{9}(\sigma_1 + \sigma_2 + \sigma_3)^2\right]^{1/2} = \tfrac{1}{3}\left[(\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2\right]^{1/2}$
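Given the principal stresses, both octahedral quantities follow directly; the values below are assumed for the example:

```python
import numpy as np

s1, s2, s3 = 120.0, 55.0, -85.0   # assumed principal stresses, MPa

sigma_oct = (s1 + s2 + s3) / 3.0  # octahedral normal stress = mean stress
tau_oct = np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0
print(sigma_oct, tau_oct)
```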
## Analysis of stress
All real objects occupy a three-dimensional space. However, depending on the loading condition and viewpoint of the observer, the same physical object can alternatively be treated as one-dimensional or two-dimensional, thus simplifying the mathematical modelling of the object.
### Uniaxial stress
If two of the dimensions of the object are very large or very small compared to the others, the object may be modelled as one-dimensional. In this case the stress tensor has only one component and is indistinguishable from a scalar. One-dimensional objects include a piece of wire loaded at the ends and viewed from the side, and a metal sheet loaded on the face and viewed up close and through the cross section.
When a structural element is elongated or compressed, its cross-sectional area changes by an amount that depends on the Poisson's ratio of the material. In engineering applications, structural members experience small deformations and the reduction in cross-sectional area is very small and can be neglected, i.e., the cross-sectional area is assumed constant during deformation. For this case, the stress is called engineering stress or nominal stress. In some other cases, e.g., elastomers and plastic materials, the change in cross-sectional area is significant, and the stress must be calculated assuming the current cross-sectional area instead of the initial cross-sectional area. This is termed true stress and is expressed as
$\sigma_\mathrm{true} = (1 + \varepsilon_e)\sigma_e$
where
$\varepsilon_e$ is the nominal (engineering) strain, and
$\sigma_e$ is the nominal (engineering) stress.
The relationship between true strain and engineering strain is given by
$\varepsilon_\mathrm{true} = \ln(1 + \varepsilon_e)$.
In uniaxial tension, true stress is then greater than nominal stress. The converse holds in compression.
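A minimal sketch of the conversion on assumed tensile-test data:

```python
import numpy as np

eps_e = np.array([0.00, 0.02, 0.05, 0.10])     # assumed engineering strains
sig_e = np.array([0.0, 250.0, 310.0, 345.0])   # assumed engineering stresses, MPa

sig_true = (1.0 + eps_e) * sig_e               # true stress
eps_true = np.log(1.0 + eps_e)                 # true strain
print(sig_true)   # in tension, true stress >= nominal stress
print(eps_true)
```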
### Plane stress
A state of plane stress exists when one of the principal stresses is zero, i.e. the stresses with respect to the thin dimension are zero. This usually occurs in structural elements where one dimension is very small compared to the other two, i.e. the element is flat or thin, and the stresses are negligible with respect to the smaller dimension as they are not able to develop within the material and are small compared to the in-plane stresses. Therefore, the face of the element is not acted on by loads and the structural element can be analyzed as two-dimensional, e.g. thin-walled structures such as plates subject to in-plane loading or thin cylinders subject to pressure loading. The stress tensor can then be approximated by:
$\sigma_{ij} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & 0 \\ \sigma_{21} & \sigma_{22} & 0 \\ 0 & 0 & 0 \end{bmatrix}.$
The corresponding strain tensor is:
$\varepsilon_{ij} = \begin{bmatrix} \varepsilon_{11} & \varepsilon_{12} & 0 \\ \varepsilon_{21} & \varepsilon_{22} & 0 \\ 0 & 0 & \varepsilon_{33} \end{bmatrix}$
in which the non-zero $\varepsilon_{33}$ term arises from the Poisson's effect. This strain term can be temporarily removed from the stress analysis to leave only the in-plane terms, effectively reducing the analysis to two dimensions.
### Plane strain
If one dimension is very large compared to the others, the principal strain in the direction of the longest dimension is constrained and can be assumed as zero, yielding a plane strain condition. In this case, though all principal stresses are non-zero, the principal stress in the direction of the longest dimension can be disregarded for calculations, allowing a two-dimensional analysis of stresses, e.g. a dam analyzed at a cross section loaded by the reservoir.
## Mohr's circle for stresses
Mohr's circle is a graphical representation of any 2-D stress state and was named for Christian Otto Mohr. Mohr's circle may also be applied to three-dimensional stress. In this case, the diagram has three circles, two within a third.
Mohr's circle is used to find the principal stresses, maximum shear stresses, and principal planes. For example, if the material is brittle, the engineer might use Mohr's circle to find the maximum component of normal stress (tension or compression); and for ductile materials, the engineer might look for the maximum shear stress.
## Alternative measures of stress
The Cauchy stress is not the only measure of stress that is used in practice. Other measures of stress include the first and second Piola-Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
### Piola-Kirchhoff stress tensor
In the case of finite deformations, the Piola-Kirchhoff stress tensors are used to express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations or rotations, the Cauchy and Piola-Kirchhoff tensors are identical. These tensors take their names from Gabrio Piola and Gustav Kirchhoff.
#### 1st Piola-Kirchhoff stress tensor
Whereas the Cauchy stress tensor, $\sigma_{ij}$, relates forces in the present configuration to areas in the present configuration, the 1st Piola-Kirchhoff stress tensor, $K_{Lj}$, relates forces in the present configuration with areas in the reference ("material") configuration. $K_{Lj}$ is given by
$K_{Lj} = J X_{L,i}\,\sigma_{ij}$
where $J$ is the Jacobian, and $X_{L,i}$ is the inverse of the deformation gradient.
Because it relates different coordinate systems, the 1st Piola-Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The 1st Piola-Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the 1st Piola-Kirchhoff stress tensor will vary with material orientation.
The 1st Piola-Kirchhoff stress is energy conjugate to the deformation gradient.
#### 2nd Piola-Kirchhoff stress tensor
Whereas the 1st Piola-Kirchhoff stress relates forces in the current configuration to areas in the reference configuration, the 2nd Piola-Kirchhoff stress tensor $S_{IJ}$ relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the current configuration.
$S_{IJ} = J X_{I,k} X_{J,l}\,\sigma_{kl}$
This tensor is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the 2nd Piola-Kirchhoff stress tensor will remain constant, irrespective of material orientation.
The 2nd Piola-Kirchhoff stress tensor is energy conjugate to the Green-Lagrange finite strain tensor.
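With the index conventions above, both Piola-Kirchhoff tensors follow from the Cauchy stress and the deformation gradient. A minimal sketch; the deformation gradient and stress state are assumed values for the example:

```python
import numpy as np

# Assumed deformation gradient (a stretch plus a shear) and Cauchy stress.
F = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0]])
sigma = np.array([[80.0, 25.0,  0.0],
                  [25.0, 40.0,  0.0],
                  [ 0.0,  0.0, 10.0]])   # MPa

J = np.linalg.det(F)
Finv = np.linalg.inv(F)            # X_{L,i}, the inverse deformation gradient

K = J * Finv @ sigma               # 1st PK: K_Lj = J X_{L,i} sigma_ij
S = J * Finv @ sigma @ Finv.T      # 2nd PK: S_IJ = J X_{I,k} X_{J,l} sigma_kl
print(np.allclose(S, S.T))         # True: S is symmetric, K in general is not
```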
http://physics.stackexchange.com/questions/71/what-is-an-intuitive-explanation-of-gouy-phase?answertab=votes
What is an intuitive explanation of Gouy phase?
In laser resonators, higher order modes (i.e. TEM01, etc) accumulate phase faster than the fundamental TEM00 mode. This extra phase is called Gouy phase. What is an intuitive explanation of this effect?
Gouy predicted and then experimentally verified the existence of this effect long before the existence of lasers. How did he do it, and what motivated him to think about it?
Nice question in optics ! – Cedric H. Nov 2 '10 at 22:29
I could never get my head around this. – zeristor Nov 4 '10 at 14:56
2 Answers
Several sources link to this paper: S. Feng, H. G. Winful, Physical origin of the Gouy phase shift, Optics Letters, 26, 485 (2001), which tries to give an intuitive explanation of the Gouy phase. Briefly, the point is that convergent waves going through the focus have finite spatial extent in the transverse plane. The uncertainty relation then induces some distribution over the transverse and consequently longitudinal wave vectors. It is claimed that the net effect of this distribution over wave vectors is an overall phase shift, which is larger for higher modes. However, to see that one really needs to look into the formulas.
A naive explanation says that a beam after its focal point is inverted, not only in the sense of its spatial distribution but also in the sense of the direction of the electric field vector (minus sign = adding $\pi$ to the phase). This is perfectly compatible with the fact that even beam profiles change the phase by $\pi$ and odd ones do not.
However, this explanation says nothing about the behaviour of the phase near the focal point.
http://physics.stackexchange.com/questions/tagged/gravity?page=2&sort=votes&pagesize=50
# Tagged Questions
Gravity is an attractive force that affects and is affected by all mass and - in general relativity - energy, pressure and stress. Prefer newtonian-gravity or general-relativity if sensible.
3answers
1k views
### Nature of gravity: gravitons, curvature of space-time or both?
General relativity tells us that what we perceive as gravity is curvature of space-time. On the other hand (as I understand it) gravity can be understood as a force between objects which are ...
3answers
391 views
### infinite grid of planets with newtonian gravity
Assuming only Newtonian gravity, suppose that the universe consists of an infinite number of uniform planets, uniformly distributed in a two-dimensional grid infinite in both directions and not moving ...
1answer
716 views
### Why does gravity need to be quantised?
The electroweak and strong forces seem to be completely different types of forces to gravity. The latter is geometric while the former are not (as far as I'm aware!). So why should they all be ...
3answers
560 views
### Why space expansion affects matter?
If space itself is expanding, then why would it have any effect on matter (separates distant galaxies)? Space is "nothing", and if "nothing" becomes bigger "nothing" it's still a "nothing" that ...
3answers
752 views
### Imagine a long bar floating in space. What force does it exert on itself in the middle due to gravity?
Problem If you had a long bar floating in space, what would be the compressive force at the centre of the bar, due to the self-weight of both ends? Diagram - what is the force at point X in the ...
2answers
585 views
### Does a photon exert a gravitational pull?
I know a photon has zero rest mass, but it does have plenty of energy. Since energy and mass are equivalent does this mean that a photon (or more practically, a light beam) exerts a gravitational pull ...
5answers
325 views
### What happens to light and mass in the center of a black hole?
I know that black holes are "black" because nothing can escape it due to the massive gravity, but I am wondering if there are any theories as to what happens to the light or mass that enters a black ...
3answers
708 views
### Gravity on a doughnut-shaped/Möbius planet
How different would the effects of gravity be if the planet we're on is in the shape of a torus (doughnut-shaped)? For an (approximately) spherical planet, it's slightly clear that objects would tend ...
3answers
884 views
### Why can't General Relativity be written in terms of physical variables?
I am aware that the field in General Relativity (the metric, $g_{\mu\nu}$) is not completely physical, as two metrics which are related by a diffeomorphism (~ a change in coordinates) are physically ...
2answers
587 views
### If two ultra-relativistic billiard balls just miss, will they still form a black hole?
This forum seems to agree that a billiard ball accellerated to ultra-relativistic speeds does not turn into a black hole. (See recent question "If a 1kg mass was accelerated close to the speed of ...
1answer
275 views
### Measurement of kaluza-klein radion field gradient?
I've been very impressed to learn about kaluza-klein theory and compactification strategies. I would like to read more about this but in the meantime i'm curious about 2 different points. I have the ...
3answers
488 views
### Questions about the Solar System
Most images you see of the solar system are 2D and all planets orbit in the same plane. In a 3D view, are really all planets orbiting in similar planes? Is there a reason for this? I'd expect that ...
6answers
509 views
### Why is there an escape velocity?
I've been trying for days, but I just can't understand why escape velocities exist. I've searched the web and even this site, and although I've read many explanations, I haven't been able to truly ...
4answers
323 views
### What makes the stars that are farther from the nucleus of the galaxy go faster than those in the middle?
It makes no sense that stars that have a bigger radius and apparently less angular speed ($\omega$) go faster than the ones near the center.
6answers
1k views
### Is Newton's Law of Gravity consistent with General Relativity?
By 'Newton's Law of Gravity', I am referring to The magnitude of the force of gravity is proportional to the product of the mass of the two objects and inversely proportional to their distance ...
4answers
274 views
### Would there be time dilation at the point where two gravitational fields cancel each other out?
My question is very simple, and most likely a stupid one: One observer is at a point in space were the gravitational force form massive bodies (or a single massive body) cancel each-other out. The ...
5answers
340 views
### How Does Dark Matter Form Lumps?
As far as we know, the particles of dark matter can interact with each other only by gravitation. No electromagnetics, no weak force, no strong force. So, let's suppose a local slight concentration ...
5answers
1k views
### Why is gravitation force always attractive?
Why is the gravitation force always attractive? Is there a way to explain this other than the curvature of space time? PS: If the simple answer to this question is that mass makes space-time curve ...
6answers
457 views
### How Does Hubble's Expansion Affect Two Rope-Tied Galaxies?
Suppose we have two galaxies that are sufficiently far apart so that the distance between them increases due to Hubble's expansion. If I were to connect these two galaxies with a rope, would there be ...
2answers
252 views
### Wavefunction collapse and gravity
If gravity can be thought of as both a wave (the gravitational wave, as predicted to exist by Albert Einstein and certain calculations) and a particle (the graviton), would it make sense to apply ...
5answers
3k views
### Stephen Hawking says universe can create itself from nothing, but how exactly?
Stephen Hawking says in his latest book The Grand Design that, Because there is a law such as gravity, the universe can and will create itself from nothing. Is it not circular logic? I mean, how ...
1answer
713 views
### Why does water flow out of an upside-down bottle? (Rayleigh Taylor Instability)
I am currently reading the excellent book An Indispensable Truth: How Fusion Power Can Save the Planet by Francis F. Chen and I came across this explanation. The Rayleigh–Taylor Instability ...
3answers
260 views
### Field created by varying Gravitational field
Changing Electric Field causes Magnetic filed and changing Magnetic Field causes Electric Field. Is there anything similar in relation to Gravitational Field? What sort of field is created by varying ...
3answers
462 views
### Evidence for black hole event horizons
I know that there's a lot of evidence for extremely compact bodies. But is there any observation from which we can infer the existence of an actual horizon? Even if we are able to someday resolve ...
0answers
385 views
### Would Portal-style portals transmit gravity? [closed]
In the video game Portal, there are often puzzles which must be solved by gaining a large amount of momentum. Typically, this is accomplished by putting one portal on the ground and another directly ...
3answers
546 views
### Why is a black hole black?
In general relativity (ignoring Hawking radiation), why is a black hole black? Why nothing, not even light, can escape from inside a black hole? To make the question simpler, say, why is a ...
3answers
464 views
### Why is ski jumping not suicidal?
At least on television, ski jumpers seem to fall great vertical distances before they hit the ground - at least a few dozen meters, though I couldn't find exact distances via a quick search. And yet ...
6answers
605 views
### What prevents the accumulation of charge in a black hole?
What prevents a static black hole from accumulating more charge than its maximum? Is it just simple Coulomb repulsion? Is the answer the same for rotating black holes? Edit What I understand from ...
2answers
484 views
### Is dark matter repulsive to dark matter? Why?
I think I saw in a video that if dark matter wasn't repulsive to dark matter, it would have formed dense massive objects or even black holes which we should have detected. So, could dark matter be ...
3answers
81 views
### How would two equally massed stars orbit?
In an empty universe, except for two equally massed stars, how would they orbit? Or, for another example, if the earth suddenly grew to be the mass of the sun, how would they orbit, or interact? Would ...
3answers
211 views
### Is the “Great Attractor” an indicator of the “Multiverse”?
I have heard a bit about the Great Attractor (the gravitational anomaly that seems to be "sweeping" our universe in one direction). Someone (and forgive me, I do not recall the specifics) has ...
5answers
698 views
### Two orbiting planets in perpendicular planes
Inspired by this question. Can a 3 body problem, starting with two planets orbiting a larger one (so massive it may be taken to stand still) in perpendicular planes, be stable? Is there known an ...
3answers
265 views
### How can Voyager 1 escape gravity of moons and planets?
I think this one is pretty simple so excuse me for my ignorance. But since most planets in our solar system are very well tied to their orbit around the sun or orbit around their planet (for moons), I ...
4answers
533 views
### Why is there a search for an exchange particle for gravity?
Here's a question on something I've been wondering about for quite some time. (I am not a physicist.) If I understand correctly, according to Einstein's General Theory of Relativity, mass results in ...
2answers
231 views
### Is there a good chance that gravitational waves will be detected in the next years?
Is there a good chance that gravitational waves will be detected in the next years? Theoretical estimates on the size of the effect and the sensitivity of the newest detectors should permit a ...
4answers
404 views
### Why do we still need to think of gravity as a force?
Firstly I think shades of this question have appeared elsewhere (like here, or here). Hopefully mine is a slightly different take on it. If I'm just being thick please correct me. We always hear ...
3answers
445 views
### What is the exact gravitational force between two masses including relativistic effects?
I was wondering if there is a closed-form formula for the force between two masses $m_1$ and $m_2$ if relativistic effects are included. My understanding is that the classic formula \$G \frac{m_1 ...
4answers
796 views
### Effects of space mining on Earth's orbit
I was reading a post about space mining, specially lunar mining. I was thinking about what would change in Earth's orbit if we start bringing tons of rocks to it? I mean, in a huge scale. So, would ...
1answer
124 views
### Gravitationally bound systems in an expanding universe
This isn't yet a complete question; rather, I'm looking for a qual-level question and answer describing a gravitationally bound system in an expanding universe. Since it's qual level, this needs a ...
1answer
329 views
### Can a black hole form due to Lorentz contraction? [duplicate]
Possible Duplicate: If a 1kg mass was accelerated close to the speed of light would it turn into a black hole? Imagine, a rod of length L is moving with velocity approaching the speed of ...
1answer
148 views
### Mass Needed to Clear an Orbital Neighborhood
In 2006 the IAU deemed that Pluto was no longer a planet because it fails to "clear" the neighborhood around its Kuiper Belt orbit. Presumably, this is because Pluto (1.305E22 kg) has insufficient ...
2answers
2k views
### Why does Venus rotate the opposite direction as other planets?
Given: Law of Conservation of Angular Momentum. Reverse spinning with dense atmosphere (92 times > Earth & CO2 dominant sulphur based). Surface same degree of aging all over. Theoretical large ...
1answer
148 views
### Why is Einstein gravity not renormalizable at two loops or more?
(I found this related Phys.SE post: Why is GR renormalizable to one loop?) I want to know explicitly how it comes that Einstein-Hilbert action in 3+1 dimensions is not renormalizable at two loops or ...
2answers
235 views
### Would dark matter absorb gravitational waves?
Would the vast and seemingly diffuse clouds of dark matter floating around our galaxy (and most others) absorb gravitational waves? Is this perhaps why we haven't detected any yet?
1answer
332 views
### Is this a quaternion representation of the equations of motion of General Relativity?
In The Quaternion Group and Modern Physics by P.R. Girard, the quaternion form of the general relativistic equation of motion is derived from \$du'/ds = (d a / d s ) u {a_c}^* + a u ( d {a_c}^* / ...
1answer
601 views
### String theory and trace anomaly in semiclassical gravity?
what does string theory have to say about the trace anomaly in the expectation value of the stress energy tensor of massless quantum fields on a curved background and its interpretation as the ...
3answers
2k views
### Why is Higgs Boson given the name “The God Particle”?
Higgs Boson (messenger particle of Higgs field) accounts for inertial mass, not gravitational mass. So, how could it account for formation of universe as we know it today? I think, gravity accounts ...
5answers
3k views
### How does gravity work underground?
Would the effect of gravity on me change if I were to dig a very deep hole and stand in it? If so, how would it change? Am I more likely to be pulled downwards, or pulled towards the edges of the ...
8answers
448 views
### Gravity theories with the equivalence principle but different from GR
Einstein's general relativity assumes the equivalence of acceleration and gravitation. Is there a general class of gravity theories that have this property but disagree with general relativity? Will ...
7answers
1k views
### How does Newtonian gravitation conflict with special relativity?
In the Wikipedia article Classical Field Theory (Gravitation), it says After Newtonian gravitation was found to be inconsistent with special relativity, . . . I don't see how Newtonian ...
http://unapologetic.wordpress.com/2008/06/24/the-category-of-matrices-iv/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician
## The Category of Matrices IV
Finally, all the pieces are in place to state the theorem I’ve been driving at for a while now:
The functor that we described from $\mathbf{Mat}(\mathbb{F})$ to $\mathbf{FinVec}(\mathbb{F})$ is an equivalence.
To show this, we must show that it is full, faithful, and essentially surjective. The first two conditions say that given natural numbers $m$ and $n$ the linear transformations $\mathbb{F}^m\rightarrow\mathbb{F}^n$ and the $n\times m$ matrices over $\mathbb{F}$ are in bijection.
But this is just what we’ve been showing! The vector spaces of $n$-tuples come with their canonical bases, and given these bases every linear transformation gets a uniquely-defined matrix. Conversely, every matrix defines a unique linear transformation when we’ve got the bases to work with. So fullness and faithfulness are straightforward.
Now for essential surjectivity. This says that given any finite-dimensional vector space $V$ we have some $n$ so that $V\cong\mathbb{F}^n$. But we know that every vector space has a basis, and for $V$ it must be finite; that’s what “finite-dimensional” means! Let’s say that we’ve got a basis $\left\{f_i\right\}$ consisting of $n$ vectors.
Now we just line up the canonical basis $\left\{e_i\right\}$ of $\mathbb{F}^n$ and define linear transformations by $S(e_i)=f_i$ and $T(f_i)=e_i$. Remember that we can define a linear transformation by specifying its values on a basis (which can all be picked independently) and then extending by linearity. Thus we do have two well-defined linear transformations here. But just as clearly we see that for any $v\in V$ we have
$S(T(v))=S(T(v^ie_i))=v^iS(T(e_i))=v^iS(f_i)=v^ie_i=v$
and a similar equation holds for every $n$-tuple in $\mathbb{F}^n$. Thus $S$ and $T$ are inverses of each other, and are the isomorphism we need.
This tells us that the language of linear transformations between finite-dimensional vector spaces is entirely equivalent to that of matrices. But we gain some conceptual advantages by thinking in terms of finite-dimensional vector spaces. One I can point to right here is how we can tell the difference between a vector space and its dual. Sure, they’ve got the same dimension, and so there’s some isomorphism between them. Still, when we’re dealing with both at the same time they behave differently, and it’s valuable to keep our eye on that difference.
On the other hand, there are benefits to matrices. For one thing, we can actually write them down and calculate with them. A lot of people are — surprise! — interested in using mathematics to solve problems. And the problems that linear algebra is most directly applicable to are naturally stated in terms of matrices.
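To make the translation concrete, here is a small numerical sketch (the matrix is an arbitrary example, not from the post): a matrix simply is a linear map between tuple spaces, and the matrix is recovered from the map's values on the canonical basis.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])      # a 2x3 matrix over R: a map R^3 -> R^2

def T(v):
    return A @ v                     # the corresponding linear transformation

# Fullness/faithfulness in action: the matrix comes back from the values
# of T on the canonical basis e_1, e_2, e_3.
E = np.eye(3)
A_recovered = np.column_stack([T(E[:, i]) for i in range(3)])
print(np.array_equal(A, A_recovered))   # True
```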
What the theorem tells us is that none of this matters. We can translate problems from the category of matrices to the category of vector spaces and back, and nothing is lost in the process.
http://physics.stackexchange.com/questions/18321/invariants-of-a-tensor
# Invariants of a tensor [closed]
I asked (almost) the same question in the math exchange.
I'm teaching a course, and I need a simple and intuitive proof that the invariants of a matrix ($3\times3$, but it doesn't matter) can be expressed as linear combinations of traces of its power. When I say "invariance" I mean under orthonormal transformation of the axes $A\to Q A Q^{-1}$ for orthonormal $Q$.
For example, that the only linear invariant scalar is the trace, that every quadratic invariant scalar is a combination of $\operatorname{tr}(A)^2$ and $\operatorname{tr}(A^2)$, and that every cubic invariant is a combination of $\operatorname{tr}(A)^3$, $\operatorname{tr}(A^2)\operatorname{tr}(A)$, and $\operatorname{tr}(A^3)$.
The proof for the linear case is trivial (and intuitive) but I can't find a generalization for the quadratic case: say that $f(A)$ is a scalar invariant that is linear in the entries of $A$. That means $$f=\sum_{ij}C_{ij}A_{ij}$$ where $C$ is some matrix. $C$ should be unchanged when applying an infinitesimal rotation to $A$. This means (here there are two lines of algebra) that $C$ commutes with the generators of $SO(n)$. The only $C$ that does that is the identity. QED
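As a quick numerical sanity check of the claim (a sketch, not a proof): traces of powers are indeed unchanged under an orthogonal change of axes, here with a random matrix and a random orthogonal transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))                    # an arbitrary 3x3 matrix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal matrix

B = Q @ A @ Q.T                                # the same tensor in rotated axes
for k in (1, 2, 3):
    tA = np.trace(np.linalg.matrix_power(A, k))
    tB = np.trace(np.linalg.matrix_power(B, k))
    print(k, tA, tB)                           # the pairs agree up to rounding
```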
Hi yohBS, and welcome to Physics Stack Exchange! We strongly discourage cross-posting questions unless you have had the question at one site for some time without getting any response. In any case, this is really a mathematical question and accordingly seems to be off topic here. – David Zaslavsky♦ Dec 15 '11 at 16:33
Agreed with David. This really contains no physics, which would be the chief reason for not putting it on Physics.SE in my book. – Mark S. Everitt Dec 15 '11 at 16:38
## closed as off topic by David Zaslavsky♦ Dec 15 '11 at 16:33
Questions on Physics Stack Exchange are expected to relate to physics within the scope defined in the FAQ. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about closed questions here.
http://math.stackexchange.com/questions/62045/is-complex-plane-more-than-just-a-topological-space-with-mathbbr2-topology
# Is Complex plane more than just a topological space with $\mathbb{R}^2$ topology?
The set of complex numbers is a field as well as a nice topological space homeomorphic to $\mathbb{R}^2$. But why such a particular interest for this space? For instance, what is more special about it than any other $\mathbb{R}^n$? I agree that each function on a complex domain will look like a single variable function and may have a nice definition for a derivative with respect to that single variable, whereas in $\mathbb{R}^3$ we cannot(?) have such a single variable to differentiate a function with. In particular, holomorphic functions are as many times differentiable as we want. But is that all? I believe there is more to it but couldn't just get a picture of it. Any insights? Thanks
Any other $\mathbb R^n$ (for $n>2$) is not a field, to begin with. That's a pretty enormous difference in my book. Being the algebraic closure of $\mathbb R$ is no trifling matter, either. – Henning Makholm Sep 5 '11 at 17:57
Complex analysis has a completely different feel to it than real analysis. – KCd Sep 5 '11 at 18:05
But a single complex variable is equivalent to two real variables, i.e., the homeomorphism, even diffeomorphism, between the two is given by $f: \mathbb C \rightarrow \mathbb R^2 : z=x+iy \mapsto (x,y)$. And, as Hardy mentioned, unlike $\mathbb R^2$, there is a product in $\mathbb C$, so that the complexes are an algebra over the reals. The definition of analyticity is also stricter than that of 'real-differentiability': the first are analytic when they satisfy Cauchy-Riemann, but functions $f:\mathbb R^2 \rightarrow \mathbb R^2$ must only satisfy the limit definition of differentiability. – gary Sep 5 '11 at 19:30
It is one of few division algebras over the reals, the quaternions and the octonions being the only other two. And they are not even commutative. – Sven Sep 5 '11 at 20:51
– Dinesh Sep 6 '11 at 4:54
## 2 Answers
I'll post this as an answer since I'm not allowed to comment.
As a response to your comment "I never knew multiplicative structure of C was responsible for holomorphicity", you have to use the field structure of the complex numbers to define the derivative via limits of quotients.
You might also wonder whether one can define such derivatives via quotients in other $\mathbb{R}^n$ for n>2. However, in order to define a quotient one needs an algebra structure in which any nonzero element has an inverse, i.e., a division algebra structure. Due to classical theorems of Frobenius and Hurwitz this forces n=4 or 8, and the corresponding algebra is not commutative (or even associative in the case n=8), which is undesirable if one wants to do calculus.
thanks.. can you also give the reference for those $n=4,8$ theorems of Hurwitz and and Frobenius? – Dinesh Sep 8 '11 at 20:52
The Wikipedia articles are well written: Frobenius Theorem and Hurwitz Theorem – Lucas Kaufmann Sep 12 '11 at 20:19
oh thanks :) I was too lazy to try to search them. – Dinesh Sep 13 '11 at 4:38
I think you got the main point: The definition of complex differentiable (holomorphic) functions uses the multiplicative structure of $\mathbb{C}$ in an essential way. It's not just a differentiable function from $\mathbb{R}^2 \to \mathbb{R}^2$. There are no higher-dimensional field extensions of $\mathbb{R}$, so we don't have an analog in higher dimensions.
Well..I never knew multiplicative structure of $\mathbb{C}$ was responsible for holomorphicity. – Dinesh Sep 5 '11 at 18:36
You can define holomorphicity without the multiplicative structure but it isn't nearly as natural. – Sven Sep 5 '11 at 20:36
http://en.wikipedia.org/wiki/Laplace's_equation
Laplace's equation
In mathematics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace who first studied its properties. This is often written as:
$\Delta\varphi = 0 \qquad\mbox{or}\qquad \nabla^2 \varphi = 0$
where ∆ = ∇² is the Laplace operator and φ is a scalar function. In general, ∆ = ∇² is the Laplace–Beltrami or Laplace–de Rham operator.
Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. Solutions of Laplace's equation are called harmonic functions.
The general theory of solutions to Laplace's equation is known as potential theory. The solutions of Laplace's equation are the harmonic functions, which are important in many fields of science, notably the fields of electromagnetism, astronomy, and fluid dynamics, because they can be used to accurately describe the behavior of electric, gravitational, and fluid potentials. In the study of heat conduction, the Laplace equation is the steady-state heat equation.
Definition
In three dimensions, the problem is to find twice-differentiable real-valued functions f, of real variables x, y, and z, such that
In Cartesian coordinates
$\Delta f = \frac{\partial^2 f}{\partial x^2 } + \frac{\partial^2 f}{\partial y^2 } + \frac{\partial^2 f}{\partial z^2 } = 0.$
In cylindrical coordinates,
$\Delta f=\frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial f}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 f}{\partial \phi^2} + \frac{\partial^2 f}{\partial z^2} =0$
In spherical coordinates,
$\Delta f = \frac{1}{\rho^2}\frac{\partial}{\partial \rho} \left(\rho^2 \frac{\partial f}{\partial \rho}\right) + \frac{1}{\rho^2 \sin\theta} \frac{\partial}{\partial \theta} \left(\sin\theta \frac{\partial f}{\partial \theta}\right) + \frac{1}{\rho^2 \sin^2\theta} \frac{\partial^2 f}{\partial \varphi^2} =0.$
In Curvilinear coordinates,
$\Delta f =\frac{\partial}{\partial \xi^j}\left(\frac{\partial f}{\partial \xi^k}g^{ki}\right) + \frac{\partial f}{\partial \xi^j} g^{jm}\Gamma^n_{mn} =0,$
or
$\Delta f = \frac{1}{\sqrt{|g|}} \frac{\partial}{\partial \xi^i}\!\left(\sqrt{|g|}g^{ij} \frac{\partial f}{\partial \xi^j}\right) =0, \qquad (g=\mathrm{det}\{g_{ij}\}).$
This is often written as
$\nabla^2 f = 0$
or, especially in more general contexts,
$\Delta f = 0,$
where ∆ = ∇² is the Laplace operator or "Laplacian"
$\Delta f = \nabla^2 f =\nabla \cdot \nabla f =\operatorname{div}\operatorname{grad} f,$
where ∇ ⋅ = div is the divergence, and ∇ = grad is the gradient.
If the right-hand side is specified as a given function, h(x, y, z), i.e., if the whole equation is written as
$\Delta f = h$
then it is called "Poisson's equation".
The Laplace equation is also a special case of the Helmholtz equation.
Boundary conditions
Figure: Laplace's equation on an annulus (r = 2 and R = 4) with Dirichlet boundary conditions u(r=2) = 0 and u(r=4) = 4 sin(5θ).
The Dirichlet problem for Laplace's equation consists of finding a solution φ on some domain D such that φ on the boundary of D is equal to some given function. Since the Laplace operator appears in the heat equation, one physical interpretation of this problem is as follows: fix the temperature on the boundary of the domain according to the given specification of the boundary condition. Allow heat to flow until a stationary state is reached in which the temperature at each point on the domain doesn't change anymore. The temperature distribution in the interior will then be given by the solution to the corresponding Dirichlet problem.
The Neumann boundary conditions for Laplace's equation specify not the function φ itself on the boundary of D, but its normal derivative. Physically, this corresponds to the construction of a potential for a vector field whose effect is known at the boundary of D alone.
Solutions of Laplace's equation are called harmonic functions; they are all analytic within the domain where the equation is satisfied. If any two functions are solutions to Laplace's equation (or any linear homogeneous differential equation), their sum (or any linear combination) is also a solution. This property, called the principle of superposition, is very useful, e.g., solutions to complex problems can be constructed by summing simple solutions.
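A minimal numerical sketch of the Dirichlet problem described above (grid size, iteration count, and boundary data are assumed for the example): Jacobi iteration drives the grid toward the steady state, each interior point relaxing to the average of its neighbours, which is the discrete form of the mean-value property of harmonic functions.

```python
import numpy as np

n = 50
u = np.zeros((n, n))
u[0, :] = 1.0        # assumed boundary data: top edge held at temperature 1

for _ in range(5000):
    # Each interior point becomes the average of its four neighbours;
    # the fixed edges act as the Dirichlet boundary condition.
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2])

print(u[n // 2, n // 2])   # steady-state value at the centre of the square
```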
Laplace equation in two dimensions
The Laplace equation in two independent variables has the form
$\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} \equiv \psi_{xx} + \psi_{yy} = 0.$
Analytic functions
The real and imaginary parts of a complex analytic function both satisfy the Laplace equation. That is, if z = x + iy, and if
$f(z) = u(x,y) + iv(x,y),$
then the necessary condition that f(z) be analytic is that the Cauchy-Riemann equations be satisfied:
$u_x = v_y, \quad v_x = -u_y.$
where ux is the first partial derivative of u with respect to x.
It follows that
$u_{yy} = (-v_x)_y = -(v_y)_x = -(u_x)_x.$
Therefore u satisfies the Laplace equation. A similar calculation shows that v also satisfies the Laplace equation.
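This is easy to check symbolically for any concrete analytic function; a sketch using SymPy, taking $f(z) = e^z$ as the example:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(x + sp.I * y)            # an analytic function of z = x + iy
u, v = sp.re(f), sp.im(f)

# Cauchy-Riemann equations and harmonicity of the real part:
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))         # 0
print(sp.simplify(sp.diff(v, x) + sp.diff(u, y)))         # 0
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0
```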
Conversely, given a harmonic function, it is the real part of an analytic function, f(z) (at least locally). If a trial form is
$f(z) = \varphi(x,y) + i \psi(x,y),$
then the Cauchy-Riemann equations will be satisfied if we set
$\psi_x = -\varphi_y, \quad \psi_y = \varphi_x.$
This relation does not determine ψ, but only its increments:
$d \psi = -\varphi_y\, dx + \varphi_x\, dy.$
The Laplace equation for φ implies that the integrability condition for ψ is satisfied:
$\psi_{xy} = \psi_{yx},$
and thus ψ may be defined by a line integral. The integrability condition and Stokes' theorem implies that the value of the line integral connecting two points is independent of the path. The resulting pair of solutions of the Laplace equation are called conjugate harmonic functions. This construction is only valid locally, or provided that the path does not loop around a singularity. For example, if r and θ are polar coordinates and
$\varphi = \log r,$
then a corresponding analytic function is
$f(z) = \log z = \log r + i\theta.$
However, the angle θ is single-valued only in a region that does not enclose the origin.
The close connection between the Laplace equation and analytic functions implies that any solution of the Laplace equation has derivatives of all orders, and can be expanded in a power series, at least inside a circle that does not enclose a singularity. This is in sharp contrast to solutions of the wave equation, which generally have less regularity.
There is an intimate connection between power series and Fourier series. If we expand a function f in a power series inside a circle of radius R, this means that
$f(z) = \sum_{n=0}^\infty c_n z^n,$
with suitably defined coefficients whose real and imaginary parts are given by
$c_n = a_n + i b_n.$
Therefore
$f(z) = \sum_{n=0}^\infty \left[ a_n r^n \cos n \theta - b_n r^n \sin n \theta\right] + i \sum_{n=1}^\infty \left[ a_n r^n \sin n\theta + b_n r^n \cos n \theta\right],$
which is a Fourier series for f. These trigonometric functions can themselves be expanded, using multiple angle formulae.
Fluid flow
Let the quantities u and v be the horizontal and vertical components of the velocity field of a steady incompressible, irrotational flow in two dimensions. The condition that the flow be incompressible is that
$u_x + v_y=0,$
and the condition that the flow be irrotational is that
$\nabla \times \mathbf{V}=v_x - u_y =0.$
If we define the differential of a function ψ by
$d \psi = v dx - u dy,$
then the incompressibility condition is the integrability condition for this differential: the resulting function is called the stream function because it is constant along flow lines. The first derivatives of ψ are given by
$\psi_x = v, \quad \psi_y=-u,$
and the irrotationality condition implies that ψ satisfies the Laplace equation. The harmonic function φ that is conjugate to ψ is called the velocity potential. The Cauchy-Riemann equations imply that
$\varphi_x=-u, \quad \varphi_y=-v.$
Thus every analytic function corresponds to a steady incompressible, irrotational fluid flow in the plane. The real part is the velocity potential, and the imaginary part is the stream function.
Electrostatics
According to Maxwell's equations, an electric field (u,v) in two space dimensions that is independent of time satisfies
$\nabla \times (u,v) = v_x -u_y =0,$
and
$\nabla \cdot (u,v) = \rho,$
where ρ is the charge density. The first Maxwell equation is the integrability condition for the differential
$d \varphi = -u\, dx -v\, dy,$
so the electric potential φ may be constructed to satisfy
$\varphi_x = -u, \quad \varphi_y = -v.$
The second of Maxwell's equations then implies that
$\varphi_{xx} + \varphi_{yy} = -\rho,$
which is the Poisson equation.
It is important to note that the Laplace equation can be used in three-dimensional problems in electrostatics and fluid flow just as in two dimensions.
Laplace equation in three dimensions
Fundamental solution
A fundamental solution of Laplace's equation satisfies
$\Delta u = u_{xx} + u_{yy} + u_{zz} = -\delta(x-x',y-y',z-z'),$
where the Dirac delta function δ denotes a unit source concentrated at the point (x′, y′, z′). No function has this property, but it can be thought of as a limit of functions whose integrals over space are unity, and whose support (the region where the function is non-zero) shrinks to a point (see weak solution). It is common to take a different sign convention for this equation than one typically does when defining fundamental solutions. This choice of sign is often convenient to work with because −Δ is a positive operator. The definition of the fundamental solution thus implies that, if the Laplacian of u is integrated over any volume that encloses the source point, then
$\iiint_V \nabla \cdot \nabla u \, dV =-1.$
The Laplace equation is unchanged under a rotation of coordinates, and hence we can expect that a fundamental solution may be obtained among solutions that only depend upon the distance r from the source point. If we choose the volume to be a ball of radius a around the source point, then Gauss' divergence theorem implies that
$-1= \iiint_V \nabla \cdot \nabla u \, dV = \iint_S \frac{du}{dr} \, dS = \left.4\pi a^2 \frac{du}{dr}\right|_{r=a}.$
It follows that
$\frac{du}{dr} = -\frac{1}{4\pi r^2},$
on a sphere of radius r that is centered around the source point, and hence
$u = \frac{1}{4\pi r}.$
Note that, with the opposite sign convention (used in Physics), this is the potential generated by a point particle, for an inverse-square law force, arising in the solution of Poisson equation. A similar argument shows that in two dimensions
$u = -\frac{\log(r)}{2\pi}.$
where log(r) denotes the natural logarithm. Note that, with the opposite sign convention, this is the potential generated by a pointlike sink (see point particle), which is the solution of the Euler equations in two-dimensional incompressible flow.
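Both fundamental solutions are easily checked to be harmonic away from the source point; a minimal symbolic sketch:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

r3 = sp.sqrt(x**2 + y**2 + z**2)
u3 = 1 / (4 * sp.pi * r3)                       # three dimensions
print(sp.simplify(sum(sp.diff(u3, v, 2) for v in (x, y, z))))  # 0

r2 = sp.sqrt(x**2 + y**2)
u2 = -sp.log(r2) / (2 * sp.pi)                  # two dimensions
print(sp.simplify(sp.diff(u2, x, 2) + sp.diff(u2, y, 2)))      # 0
```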
Green's function
A Green's function is a fundamental solution that also satisfies a suitable condition on the boundary S of a volume V. For instance,
$G(x,y,z;x',y',z')$
may satisfy
$\nabla \cdot \nabla G = -\delta(x-x',y-y',z-z') \qquad \hbox{in } V,$
$G = 0 \quad \text{if } (x,y,z) \text{ lies on } S.$
Now if u is any solution of the Poisson equation in V:
$\nabla \cdot \nabla u = -f,$
and u assumes the boundary values g on S, then we may apply Green's identity, (a consequence of the divergence theorem) which states that
$\iiint_V \left[ G \, \nabla \cdot \nabla u - u \, \nabla \cdot \nabla G \right]\, dV = \iiint_V \nabla \cdot \left[ G \nabla u - u \nabla G \right]\, dV = \iint_S \left[ G u_n -u G_n \right] \, dS. \,$
The notations $u_n$ and $G_n$ denote normal derivatives on $S$. In view of the conditions satisfied by $u$ and $G$, this result simplifies to
$u(x',y',z') = \iiint_V G f \, dV + \iint_S G_n g \, dS. \,$
Thus the Green's function describes the influence at (x′, y′, z′) of the data f and g. For the case of the interior of a sphere of radius a, the Green's function may be obtained by means of a reflection (Sommerfeld, 1949): the source point P at distance ρ from the center of the sphere is reflected along its radial line to a point P' that is at a distance
$\rho' = \frac{a^2}{\rho}. \,$
Note that if P is inside the sphere, then P' will be outside the sphere. The Green's function is then given by
$\frac{1}{4 \pi R} - \frac{a}{4 \pi \rho R'}, \,$
where R denotes the distance to the source point P and R' denotes the distance to the reflected point P'. A consequence of this expression for the Green's function is the Poisson integral formula. Let ρ, θ, and φ be spherical coordinates for the source point P. Here θ denotes the angle with the vertical axis, which is contrary to the usual American mathematical notation, but agrees with standard European and physical practice. Then the solution of the Laplace equation inside the sphere is given by
$u(P) =\frac{1}{4\pi} a^3\left(1-\frac{\rho^2}{a^2}\right) \iint \frac{g(\theta',\varphi') \sin \varphi'}{(a^2 + \rho^2 - 2 a \rho \cos \Theta)^{\frac{3}{2}}} d\theta' \, d\varphi',$
where
$\cos \Theta = \cos \varphi \cos \varphi' + \sin\varphi \sin\varphi'\cos(\theta -\theta').$
A simple consequence of this formula is that if u is a harmonic function, then the value of u at the center of the sphere is the mean value of its values on the sphere. This mean value property immediately implies that a non-constant harmonic function cannot assume its maximum value at an interior point.
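The mean value property is easy to test numerically; a Monte Carlo sketch for the harmonic function $u = x^2 - y^2$, with an arbitrary centre and radius:
```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(200000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)    # uniform points on a sphere
centre, radius = np.array([1.0, 2.0, 0.5]), 0.3
pts = centre + radius * v

u = pts[:, 0]**2 - pts[:, 1]**2                  # harmonic: u_xx + u_yy + u_zz = 0
print(u.mean())                                  # ~ -3.0, the mean over the sphere
print(centre[0]**2 - centre[1]**2)               # -3.0, the value at the centre
```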
Electrostatics
In free space the Laplacian of any electrostatic potential must equal zero, since ρ (the charge density) is zero in free space.
Taking the gradient of the electric potential we get the electrostatic field
$E=-\nabla V$
Taking the divergence of the electrostatic field, we obtain Poisson's equation, that relates charge density and electric potential
$\nabla^2V = -\frac{\rho}{\varepsilon_0}$
In the particular case of the empty space (ρ = 0) Poisson's equation reduces to Laplace's equation for the electric potential.
By a uniqueness theorem, once a potential is shown to satisfy Laplace's equation (that is, its second derivatives sum to zero, as holds in free space) and to take the correct values on the boundaries, that potential is uniquely determined.
A potential that does not satisfy Laplace's equation together with the boundary conditions is not a valid electrostatic potential.
See also
• Spherical harmonic
• Quadrature domains
• Potential theory
• Potential flow
• Bateman transform
• Earnshaw's theorem uses the Laplace equation to show that stable static ferromagnetic suspension is impossible
• Vector Laplacian
References
• Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 0-8218-0772-2.
• Petrovsky, I. G. (1967). Partial Differential Equations. Philadelphia: W. B. Saunders.
• Polyanin, A. D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-299-9.
• Sommerfeld, A. (1949). Partial Differential Equations in Physics. New York: Academic Press.
http://math.stackexchange.com/questions/tagged/ring-theory+algebraic-geometry
# Tagged Questions
### Integral homomorphism induces a closed map on spectra
I'm trying to prove the following: Let $f:A\rightarrow B$ be an integral homomorphism (i.e. $B/f(A)$ is an integral extension). Consider $f^{*}: \operatorname{Spec}B \rightarrow$ ...
### Classification of local Artin (commutative) rings which are finite over an algebraically closed field.
A result in deformation theory states that if every morphism $Y=\operatorname{Spec}(A)\rightarrow X$ where $A$ is a local Artin ring finite over $k$ can be extended to every $Y'\supset Y$ where $Y'$ ...
### Localization of $K[x,y|x^2-y^3]$ and $K[x,y|xy]$ at $\langle x,y\rangle$ and $\{\text{non-zero-divisors}\}$ (exercise in SICA)
In Greuel & Pfister's A Singular Introduction to Commutative Algebra, p. 38, there is written: So we have rings \begin{array}{l l} R_1:= K[x,y|x^2\!-\!y^3], & R_4:= K[x,y|xy],\\ R_2:= ...
### Showing that $\mathbb{C}[x,y]^{\mu_n}$ and $\mathbb{C}[x,y,z]/(xy-z^n)$ are isomorphic as rings
The problem: Let $\mu_n$ act on $\mathbb{C}[u,v]$ with weights $(1,-1)$. I would like to show that the rings $\mathbb{C}[u,v]^{\mu_n}$ and $\mathbb{C}[x,y,z]/(xy-z^n)$ are isomorphic. Explanation of ...
### Bijection between hom sets of $k$ - algebras
Let $R:= k[x_1,\ldots,x_r]$, $S:= k[x_{r+1},\ldots,x_{r+s}]$ and $Q:= k[x_1,\ldots,x_{r+s}]$. Let $I \subseteq R$ and $J \subseteq S$ be ideals. I have in texts in algebraic geometry that for any $k$ ...
### Where do I use the fact that $F$ is algebraically closed in this proof?
I have to do the following. Let $F$ be an algebraically closed field. $I\in F[X_1,...,X_n]$ an ideal. Denote by $S(I)$ the subset in $F^n$ consisting of all $n$-tuples $(a_1,...,a_n)\in F^n$ such that ...
### Morphisms from the group variety of $n$-th roots
In Milne - Lectures on étale cohomology, example 6.10 i came across the following. We fix a variety $X$ and work in the category $Var/X$ of varieties over $X$ (so with fixed morphisms to $X$!) and ...
### A valuation ring
In Qing Liu, Algebraic Geometry and Arithmetic Curves, page 116, exemple 4.1.8, one has $\mathcal{O}_K$ a discrete valuation ring with uniformizing parameter $t$, $P\in\mathcal{O}_K[S]$ an Eisenstein ...
### Some elementary facts
What is the simplest and most conceptual proof of some basic facts of algebraic geometry? 1) Hilbert's Nullstellensatz 2) Regular functions on a projective variety - only constants 3) elimination ...
### Questions about subalgebras of finitely generated $k$-algebras
Let $k$ be a field (if necessary assume $k$ to be algebraically closed). Let $A$ be a finitely generated $k$-algebra and let $B$ be a subalgebra of $A$. Remark that $B$ doesn't have to be noetherian, ...
### What is Proj $\mathbb{C}[x,y][z]/\langle xz-yz\rangle$?
Assuming that $x,y$ have weight $0$ and $z$ has weight $1$, $$R= \mathbb{C}[x,y][z]/\langle xz-yz\rangle = \mathbb{C}[x,y]\oplus ( \oplus_{i\geq 1}\mathbb{C}[x]z^i),$$ what closed subvariety is ...
### Graded rings and their localizations
Let $A$ be a $\mathbb{Z}_{\geq 0}$-graded ring, $f \in A$ - homogenious, and $I \subset A$ - homogenious ideal. Let $A_f$ be its localization, and $A_{(f)}$ - subring of elements of degree 0. How to ...
### Discrete Valuation Rings
Let $V = \mathbb A^1(k)$ ($k$ is an algebraically closed field), $\Gamma(V) = k[X]$ and let $K = k(V) = k(X)$. Prove that for each $a \in k = V$, $\mathcal{O}_a(V) := \{f\in K(V): f$ is defined at ...
### Arithmetic progressions of units in a domain
Let $R$ be a domain with unity, and suppose that $R^\times$ has finite rank as an abelian group. Can $R^\times$ contain infinitely long arithmetic progressions? Can $R^\times$ contain arithmetic ...
### Limits of subrings and surjectivity
Let $A$ be a ring and let $\mathcal{F}$ be the inductive system of subrings of $A$ which are of finite type over $\mathbb{Z}$: $\mathcal{F} = \{ \mathbb{Z}[a_1,\dots,a_n] \subseteq A \mid n \geq 0,$ ...
### Ring homomorphism and affine scheme
How to describe all ring homomorphisms $f: A \rightarrow B$, such that corresponding affine scheme morphism $f: Spec \, B \rightarrow Spec \, A$ is open immersion?
### Injectivity of Homomorphism in Localization
Let $\alpha:A\to B$ be a ring homomorphism, $Q\subset B$ a prime ideal, $P=\alpha^{-1}Q\subset A$ a prime ideal. Consider the natural map $\alpha_Q:A_P\to B_Q$ defined by ...
### Krull dimension and transcendence degree
What is the simplest proof of the fact that an integral algebra $R$ over a field $k$ has the same Krull dimension as its transcendence degree $\operatorname{deg.tr}_k R$? Is it possible to use only Noether normalization ...
### Spectrum of $\mathbb{Z}[x]$
Can someone point me towards a resource that proves that the spectrum of $\mathbb{Z}[x]$ consists of ideals $(p,f)$ where $p$ prime or zero and $f$ irred mod $p$? In particular I remember this can be ...
### Irreducible Components of the Prime Spectrum of a Quotient Ring and Primary Decomposition
Recently I encountered a problem (the first exercise from chapter four of Atiyah & McDonald's Introduction to Commutative Algebra) stating that if $\mathfrak{a}$ is a decomposable ideal of $A$ (a ...
### Finite presentation of algebra of invariants
(1) Let $R$ be a ring, let $A$ be a finitely presented $R$-algebra, and let $G$ be a finite group of $R$-automorphisms of $A$. Is the algebra of invariant $A^G$ finitely presented over $R$? I can ...
### Invertible elements in the ring $K[x,y,z,t]/(xy+zt-1)$
I would like to know how big is the set of invertible elements in the ring $$R=K[x,y,z,t]/(xy+zt-1),$$ where $K$ is any field. In particular whether any invertible element is a (edit: scalar) multiple ...
### Are minimal prime ideals in a graded ring graded?
Let $A=\oplus A_i$ be a graded ring. Let $\mathfrak p$ be a minial prime in $A$, is $\mathfrak p$ an graded ideal? Intuitively, this means the irreducible components of a projective variety are also ...
### Understanding the image under the map $\mathbb{C}[t]\stackrel{f^*}{\rightarrow} \dfrac{\mathbb{C}[x,y]}{\langle xy\rangle}$ given by $t\mapsto x+y$
Consider the map of affine schemes $$\operatorname{Spec}\left( \dfrac{\mathbb{C}[x,y]}{\langle xy\rangle }\right)\stackrel{f}{\rightarrow} \operatorname{Spec}\mathbb{C}[t]$$ whose corresponding ...
### The closure of $\overline{\{x\}}$ being irreducible and relating the generic point to its associated irreducible scheme
If $x$ is a point in $X$ where $X$ is a scheme, we write $\overline{\{ x\}}$ for the closure of $x$ in $X$. $\mathbf{Question \;1}$: I am a bit confused why $\overline{\{ x\}}$ is irreducible. ...
### Tensoring is thought as both restricting and extending?
I hope these questions are not too trivial. Let $I$ be an ideal in $R$. Write $I'\subseteq R[t]$. Then the notion of tensoring (R[t]/I')\otimes_{\,\mathbb{C}[t]} \mathbb{C}[t]/\langle t-c ...
### Complete intersection but not a domain
This may be a rather trivial question but here it goes. I am looking for a scheme defined by (more than one irreducible and reduced) homogeneous equation in a polynomial ring that is a complete ...
### Difference between $\left< x\right> \cap \left< x,y\right>^2$ and $\left< x,y\right>^3$
Consider the ideals $I = \left< x\right> \cap \left< x,y\right>^2 = \left<x^3,x^2y, xy^2\right>$ and $J=\left< x,y\right>^3=\left< x^3, x^2y, xy^2, y^3 \right>$ in ...
### Understanding $Bl_{\mathcal{I}}(k^4)/S_2$ where $\mathcal{I}$ is defined by $(x_1-x_2,x_3-x_4)$
Let $k_4=Spec(k[x_1, x_2, x_3,x_4])$ and $\mathcal{I}$ is the ideal sheaf defined by $(x_1-x_2,x_3-x_4)$. Then $$Bl_{\mathcal{I}}(k^4) = Proj (\oplus_{i\geq 0} I^i t^i)$$ where ...
### Degree 1 elements in a graded ring from a blow-up perspective
This may be an elementary question but I hope this question will benefit others as much as myself. Let $k^4 = Spec \; k[x_1, x_2, x_3, x_4]$. Writing $Bl_{\mathcal{I}}(k^4)$ as \$R = ...
### understanding a graded ring in geometric terms
Consider a ring $k[x_1,x_2,z]$ where the variables $x_1$ and $x_2$ have degree $0$ and $z$ has degree 1 and $k$ is an algebraically closed field. It is clear that \$Proj(k[x_1,x_2,z])=k^2 \times ...
### What if $\operatorname{char}\mathbb{K}$ is not $0$ or if $\mathbb{K}$ is not algebraically closed? (Nullstellensatz)
Given a field $\mathbb{K}$ which is algebraically closed and of characteristic 0, we can say exactly what the maximal ideals of $\mathbb{K}[x_1,\dots,x_n]$ are and they correspond to points in ...
### On limits, schemes and Spec functor
I have several related questions: Do there exist colimits in the category of schemes? If not, do there exist just direct limits? Do there exist limits? If not, do there exist just inverse limits? ...
### Integral closure in the total ring of fractions
My question is linked with normalization of reduced algebraic curves that are not necessarily irreducible. Let $(A,\mathfrak{m})$ be a local reduced noetherian ring with Krull dimension $1$, let ...
### can singular points become nonsingular after a base change
Let $X$ be a normal surface over a field $k$. Assume that $X$ is singular. Does there exist a field extension $L/k$ (finite or infinite) such that $X_L$ is nonsingular? The answer is no in general. ...
### When is a local algebra reduced?
Let $k$ be a field and let $A$ be a local $k$-algebra which has finite dimension over $k$. Let $\mathfrak{m}$ be the maximal ideal of $A$ and let $k' = A / \mathfrak{m}$ be the residue field. For ...
### Is the functor $\mbox{Rings}\rightarrow \mbox{Sets}$ given by $R \mapsto \{\pm 1 \in R\}$ corepresentable?
Is the function $\mbox{Rings}\rightarrow\mbox{Sets}$ given by $R\mapsto \{\pm 1\in R\}$ corepresentable? Of course this might be problematic in characteristic 2 since this set is then a singleton, ...
### Subrings of formal series rings
Let $k$ be a field and $A = k[[x_1, \dots, x_n ]]$ be the ring of formal series in $n$ variables. Consider $g_1, \dots, g_m \in A$ such that $g_1(0) = \cdots = g_m(0) = 0$. For every \$f \in k[[t_1, ...
### Is the number of prime ideals of a zero-dimensional ring stable under base change?
Let $A$ be a zero-dimensional ring of finite type over a field $k$ and let $X= \textrm{Spec} \ A$ be its spectrum. Note that $X$ is a finite set. Suppose that $k\subset K$ is a finite field extension ...
### What is the connection between the definition of complete intersection variety and complete intersection ring?
An algebraic variety is called a complete intersection if its defining ideal is generated by codimension many polynomials. A Noetherian local ring $R$ is called a complete intersection if its ...
### Minimal systems of generators for commutative rings
Let $S$ be some base ring (a commutative ring or even just a field), and $R$ a commutative ring containing $S$ which is finitely generated (as an algebra) over $S$. What conditions guarantee that any ...
http://unapologetic.wordpress.com/2008/11/05/coalgebras/
# The Unapologetic Mathematician
## Coalgebras
Okay, back to business. We’re about to need a little more algebraic structure floating around. This is something that’s always present, but many approaches don’t explicitly mention it until much later. Since I’m taking a categorical view of things, it’s easier to show what’s really going on right away.
Remember that an $\mathbb{F}$-algebra is a monoid object in the category of vector spaces over $\mathbb{F}$. Dually, an $\mathbb{F}$-coalgebra is a comonoid object in the category of vector spaces over $\mathbb{F}$. That’s all well and good, but what’s a comonoid object? We’ve mentioned them before, but let’s be more explicit this time around.
Remember that a monoid object was a functor from a certain category we cooked up to mirror the axioms of a monoid. We gave that category objects $M^{\otimes n}$ indexed by the natural numbers, corresponding to lists of monoid elements. We have a map $\mu:M\otimes M\rightarrow M$ corresponding to multiplication, and a map $\iota:\mathbf{1}\rightarrow M$ picking out the unit in the monoid.
So a comonoid object will be a functor from the dual of this category! That is, we’ve still got all the same objects, but now we have a “comultiplication” arrow $\Delta:C\rightarrow C\otimes C$, and a “counit” arrow $\epsilon:C\rightarrow\mathbf{1}$.
Now, the model category describing monoid objects isn't just objects and arrows. We also have the relations that make a monoid a monoid: the associative law $\mu\circ(\mu\otimes1_M)=\mu\circ(1_M\otimes\mu)$, and the left and right unit laws $\mu\circ(1_M\otimes\iota)=1_M=\mu\circ(\iota\otimes1_M)$.
Dually, we must have dual relations for comonoid objects. We have a coassociative law $(\Delta\otimes1_C)\circ\Delta=(1_C\otimes\Delta)\circ\Delta$, and left and right counit laws $(1_C\otimes\epsilon)\circ\Delta=1_C=(\epsilon\otimes1_C)\circ\Delta$.
We could write these down in terms of commuting diagrams, but it’s even more instructive to look at “string diagrams” like we did before. This makes the sense of what’s going on all the clearer.
So a coalgebra is a comonoid object in the category of vector spaces over $\mathbb{F}$. That is, it's an $\mathbb{F}$-vector space $C$, equipped with a linear comultiplication $\Delta$ and a linear counit $\epsilon$, which satisfy the coassociative and counit laws. I'll admit that this seems an extremely quirky structure to discuss, so an example is in order. The one we care most about right now is the group algebra. Yes, it turns out to also be a coalgebra!
To really wrap our heads around it, let’s start with a finite group $G$. Then we get a finite-dimensional vector space $\mathbb{F}[G]$, with a basis $e_g$ indexed by elements of $G$. Let’s forget, for the moment, that we have a multiplication and a unit. Instead, we define the comultiplication by $\Delta(e_g)=e_g\otimes e_g$ for each basis element. We also define the counit by $\epsilon(e_g)=1$ for each element $g\in G$. Both of these maps extend by linearity.
Now, let’s check the coassociative property. It suffices to check it on basis elements, because the extensions by linearity have to agree. In this case we have
$\begin{aligned}\left[\Delta\otimes1_{\mathbb{F}[G]}\right]\left(\Delta(e_g)\right)=\left[\Delta\otimes1_{\mathbb{F}[G]}\right](e_g\otimes e_g)=e_g\otimes e_g\otimes e_g\\=\left[1_{\mathbb{F}[G]}\otimes\Delta\right](e_g\otimes e_g)=\left[1_{\mathbb{F}[G]}\otimes\Delta\right]\left(\Delta(e_g)\right)\end{aligned}$
Similarly, we can check the right counit law:
$\begin{aligned}\left[1_{\mathbb{F}[G]}\otimes\epsilon\right]\left(\Delta(e_g)\right)=\left[1_{\mathbb{F}[G]}\otimes\epsilon\right](e_g\otimes e_g)=e_g\\=\left[\epsilon\otimes1_{\mathbb{F}[G]}\right](e_g\otimes e_g)=\left[\epsilon\otimes1_{\mathbb{F}[G]}\right]\left(\Delta(e_g)\right)\end{aligned}$
and the left counit law is similar. Thus these maps do indeed describe the structure of a coalgebra.
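Here is a minimal computational sketch of these laws, realizing $\mathbb{F}[G]$ for $G=\mathbb{Z}/2\mathbb{Z}$ as $\mathbb{R}^2$ and encoding $\Delta$ and $\epsilon$ as matrices; the basis ordering and the use of numpy's `kron` for the tensor product are implementation choices:
```python
import numpy as np

# R[Z/2Z] with basis e_0, e_1; Delta(e_g) = e_g (x) e_g, eps(e_g) = 1.
Delta = np.zeros((4, 2))
Delta[0, 0] = 1.0          # e_0 -> e_0 (x) e_0
Delta[3, 1] = 1.0          # e_1 -> e_1 (x) e_1
eps = np.ones((1, 2))
I = np.eye(2)

# Coassociativity: (Delta (x) 1) Delta == (1 (x) Delta) Delta
print(np.allclose(np.kron(Delta, I) @ Delta, np.kron(I, Delta) @ Delta))

# Counit laws: (eps (x) 1) Delta == 1 == (1 (x) eps) Delta
print(np.allclose(np.kron(eps, I) @ Delta, I))
print(np.allclose(np.kron(I, eps) @ Delta, I))
```
All three checks print `True`, which is exactly the string-diagram identities written out in coordinates.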
Posted by John Armstrong | Algebra, Category theory
http://math.stackexchange.com/questions/314078/what-are-the-densities-of-branches-of-the-euclidean-tree
# What are the densities of branches of the euclidean tree?
The Euclidean algorithm shows how all coprime pairs of positive integers can be uniquely obtained from the pair $(1,1)$ by applying the two operations $(a,b) \to (a+b,b)$ and $(a,b) \to (a,a+b)$.
(or speaking with rationals, all the positive rationals $x=b/a$ can be obtained from $1$ by applying the two operations $x \to x/(x+1)$ and $x \to x+1$).
Furthermore, we know that the natural density of coprime pairs among the pairs of positive integers is $6/\pi^2$.
This brings up the question: does the set of all descendants of some pair $(a,b)$ have a natural density $d(a,b)$, and if so, what is it?
If we allow starting from any pair of positive reals, we have that $d(ka,kb) = d(a,b)/k$ whenever those densities exist, which suggests that we can simply look for a function $d(1,b/a) = f(b/a)$.
We have, for symmetry reasons, $f(x) = f(1/x)/x$. We also have from the tree construction, the functional equation $f(x) = f(x+1) + f(x/(x+1))/(x+1)$.
Using the symmetry equation, we can rewrite this to get the nicer functional equation: $f(x) + f(1/x) = f(1/(x+1)) + f(x/(x+1))$.
So, is there anything interesting we can say about these functional equations? How many continuous (or even differentiable) solutions does the system have? Does the density we started with have a nice closed-form expression?
A simplification: let $f(x) = g(x) / \sqrt{x}$. Then the first functional equation says $g(x) = g(1/x)$ and the second says $$g(x) \sqrt{x+1} = g(x+1) \sqrt{x} + g((x+1)/x)$$ – Hurkyl Feb 25 at 17:17
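The putative densities can at least be probed numerically. The following brute-force sketch (with an arbitrary cutoff $n$, and counting $(a,b)$ itself as a descendant) walks each coprime pair back up the tree, using the fact that every pair other than $(1,1)$ has the unique parent $(x-y,y)$ or $(x,y-x)$:
```python
from math import gcd

def is_descendant(x, y, a, b):
    # Every coprime pair other than (1,1) has the unique parent
    # (x - y, y) if x > y, else (x, y - x); walk back until we hit
    # (a, b) or the root (1, 1).
    while (x, y) != (a, b) and x != y:
        if x > y:
            x -= y
        else:
            y -= x
    return (x, y) == (a, b)

def density_estimate(a, b, n=300):
    # Fraction of all pairs in [1, n]^2 that are coprime descendants
    # of (a, b); a crude proxy for the natural density d(a, b).
    hits = sum(1 for x in range(1, n + 1) for y in range(1, n + 1)
               if gcd(x, y) == 1 and is_descendant(x, y, a, b))
    return hits / n**2

for pair in [(1, 1), (1, 2), (2, 1), (2, 3)]:
    print(pair, density_estimate(*pair))  # (1,1) gives ~ 6/pi^2 ~ 0.608
```
For $(1,1)$ this reproduces the density $6/\pi^2$ of all coprime pairs, and by symmetry the two subtrees under $(1,2)$ and $(2,1)$ split that mass equally.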
http://math.stackexchange.com/questions/26645/classifying-singular-points-as-local-min-max-or-saddle-points
# Classifying singular points as local min, max or saddle points
I want to determine if a singular point is a local min, max or saddle point.
We are dealing with singular points so we cannot use the hessian matrix.
What I have written (and I think I must have missed something) is:
Say we have a function $f(x,y,z)$. To show that $(2,2,2)$ is a saddle point, we want to show :
$\forall \epsilon >0$, we want to find $k,w,h$ such that if $k^2+w^2+h^2<\epsilon^2$ then
And here is where what I have writen is incomplete:
$f(2+k, 2+w, 2+h) > f(2,2,2)$
and, at some other such nearby point, the value is $< f(2,2,2)$.
Is this correct as $f(2-k, 2-w, 2-h)<f(2,2,2)$?
Edit: I'm not sure how standard these terms are, so to be clear: a singular point here is a point at which the partial derivatives of the function do not exist.
## 1 Answer
A saddle point is neither a local max nor a min. For $(2,2,2)$ to be a saddle point we need the following: for every $\epsilon>0$ there exist $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2)$ such that $d_k=\sqrt{(2-x_k)^2+(2-y_k)^2+(2-z_k)^2}<\epsilon$ for $k=1,2$ and $f(x_1,y_1,z_1)<f(2,2,2)<f(x_2,y_2,z_2)$.
I.e., no matter how small a ball we put around the domain point $(2,2,2)$, there are points in that ball producing both a larger and a smaller value of $f$.
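As a concrete illustration (an example chosen here, not taken from the question), let $f(x,y,z)=|x-2|-|y-2|$: the partial derivatives do not exist at $(2,2,2)$, yet every small ball around that point contains values of both signs, so it is a saddle in the above sense:
```python
import random

f = lambda x, y, z: abs(x - 2) - abs(y - 2)   # singular at (2,2,2); f(2,2,2) = 0

eps = 1e-3
above = below = False
for _ in range(1000):
    # random point of a small cube contained in the eps-ball around (2,2,2)
    k, w, h = (random.uniform(-eps / 2, eps / 2) for _ in range(3))
    val = f(2 + k, 2 + w, 2 + h)
    above = above or val > 0
    below = below or val < 0
print(above and below)   # True: values of both signs arbitrarily close by
```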
http://mathoverflow.net/questions/123502/largest-number-of-k-arithmetic-progressions-without-a-k1-arithmetic-progressio
Largest number of k-arithmetic progressions without a (k+1)-arithmetic progression
Suppose $A \subseteq \{1,\dots,n\}$ does not contain any arithmetic progressions of length $k+1$. What is the largest number of $k$-term arithmetic progressions that $A$ can have? (One may also wish to put some lower or upper bound on the size of $A$.) We can work over $\mathbb{Z}_p$ if it makes the answer any easier. The "degenerate" case $k=2$ asks for the largest size of a set without 3-term arithmetic progressions, and it is known that there exist such $A$'s of almost linear size.
Nb. for graphs (i.e. asking for the maximal number of $l$-cliques a graph can contain before it contains a $k$-clique) there are explicit bounds. – Marcin Kotowski Mar 4 at 17:25
Dear Marcin, this is a nice and natural question, but I doubt if much is known. Did you look at the number of $k$-term APs in Behrend-type examples of sets without $(k+1)$-term arithmetic progressions? This looks like the best shot for an answer presently. – Gil Kalai Mar 6 at 7:27
1 Answer
Let $B\geq2k$ and let $$A=\left\{\sum_{i=0}^na_iB^i:n=0,1,\dots;\ a_i=0,1,\dots,k-1\right\}.$$ It's not hard to show that $A$ has no $(k+1)$-term arithmetic progression. Using the density Hales-Jewett theorem we get that any subset $B'\subset A$ with positive relative density has a $k$-term arithmetic progression.
I don't know the best bounds on the density Hales-Jewett, but I think there are some from the polymath proof, so in principle this would give an answer to your question.
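The construction is easy to test by brute force for small parameters; here is a sketch with the arbitrary choices $k=3$, $B=6$ and four base-$B$ digits:
```python
from itertools import product

def build_A(B, k, digits):
    # Numbers whose base-B expansion uses only the digits 0, ..., k-1.
    return sorted(sum(d * B**i for i, d in enumerate(t))
                  for t in product(range(k), repeat=digits))

def has_ap(S, length):
    s = set(S)
    return any(all(a + j * d in s for j in range(length))
               for a in S for d in range(1, max(S) + 1))

A = build_A(B=6, k=3, digits=4)   # B >= 2k with k = 3; |A| = 81
print(has_ap(A, 3))               # True: plenty of 3-term progressions
print(has_ap(A, 4))               # False: no 4-term progression
```
The reason no $(k+1)$-term progression survives is visible at the least significant nonzero base-$B$ digit of the common difference: adding it $k$ times must push some digit out of $\{0,\dots,k-1\}$.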
I upvote the answer, but the conclusion "any subset B⊂A with positive relative density has a k-long arithmetic progression." seems too weak to say anything about the number of such k-progressions in the whole set. – Marcin Kotowski Mar 4 at 16:23
I think one can in principle use a Varnavides-type argument, but I am not sure how that would work out exactly (I am using the term "Varnavides-type" from a section on this blog post: terrytao.wordpress.com/2008/02/10/…) – Joel Moreira Mar 4 at 18:48
http://mathoverflow.net/questions/76069/finiteness-of-etale-cohomology-groups/76070
## Finiteness of étale Cohomology Groups
Mr. Milne, in "Étale Cohomology", gives the following proposition (p.224, Corollary VI.2.8):
Proposition: Let $F$ be a constructible sheaf on $X_{et}$, the small étale site of $X$, with $X$ proper over a field $k$. Then $H^{i}(X,F)$ is finite for $i\geq0$. (false?)
He deduces it via Hochschild-Serre from the statement that, on the big étale site of $X$, constructible sheaves are stable under higher direct images of proper morphisms (p.223, Theorem VI.2.1).
My Question is: Is there a "basic" proof of the proposition, which doesn't involve other Grothendieck topologies than the small étale sites (and possible the Zariski-topology)?
Thanks!
Edit: Actually, Milne himself states in his course notes (http://www.jmilne.org/math/CourseNotes/lec.html, Remark 17.9) that the proposition is wrong for fields $k$ that are not separably closed, giving the example of $X=Spec(\mathbb{Q})$ and $F=(\mathbb{Z}/2\mathbb{Z})_X$. Moreover, he gives the desired proof in the small étale site for $X$ proper over separably closed fields (the same notes, 17.5-17.8). Please accept my apologies if I stole your time.
As a new question arises: where did the proof in "Étale Cohomology" go wrong, or did I misunderstand something?
## 2 Answers
(This was going to be a comment, but it's too long.)
I don't see how the big étale site appears, even in the proof of cor. VI.2.8 of Milne. It seems like he's just base changing to the separable closure of the field but using small étale sites all the time. (Though I'm not that familiar with Milne, as I use SGA 4 and 4 1/2 as references.)
Anyway, I think the problem with the proof of corollary VI.2.8 is when he says it just follows from the Hochschild-Serre spectral sequence. I would say the Hochschild-Serre spectral sequence gives you a spectral sequence
$$E_2^{pq}=H^p(Gal(k_s/k),H^q(X\otimes k_s,F))\Longrightarrow H^{p+q}(X,F)$$
($k_s$ is the separable closure of $k$ as in Milne)
Then Milne VI.2.1 tells you that the groups $H^q(X\otimes k_s,F)$ are finite, but it doesn't follow that their $Gal(k_s/k)$-cohomology is finite too. In fact Milne himself gives a counterexample in the part of his notes that you quote.
(Does that make sense ?)
IMHO that makes perfect sense, and that's the answer to the OP's post-edit question. +1 – Joël Sep 21 2011 at 19:23
This mistake appears in James Milne's list of errata at http://www.jmilne.org/math/Books/add/ECPUP.pdf.
Thanks, I was not even aware of this list ! – Alex Sep 21 2011 at 19:47
http://en.wikipedia.org/wiki/Minimum_description_length
# Minimum description length
The minimum description length (MDL) principle is a formalization of Occam's Razor in which the best hypothesis for a given set of data is the one that leads to the best compression of the data. MDL was introduced by Jorma Rissanen in 1978.[1] It is an important concept in information theory and learning theory.[2][3][4]
## Overview
Any set of data can be represented by a string of symbols from a finite (say, binary) alphabet.
"[The MDL Principle] is based on the following insight: any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally." (Grünwald, 1998)[5]
To select the hypothesis that captures the most regularity in the data, scientists look for the hypothesis with which the best compression can be achieved. In order to do this, a code is fixed to compress the data, most generally with a (Turing-complete) computer language. A program to output the data is written in that language; thus the program effectively represents the data. The length of the shortest program that outputs the data is called the Kolmogorov complexity of the data. This is the central idea of Ray Solomonoff's idealized theory of inductive inference.
### Inference
However, this mathematical theory does not provide a practical way of reaching an inference. The most important reasons for this are:
• Kolmogorov complexity is uncomputable: there exists no algorithm that, when input an arbitrary sequence of data, outputs the shortest program that produces the data.
• Kolmogorov complexity depends on what computer language is used. This is an arbitrary choice, but it does influence the complexity up to some constant additive term. For that reason, constant terms tend to be disregarded in Kolmogorov complexity theory. In practice, however, where often only a small amount of data is available, such constants may have a very large influence on the inference results: good results cannot be guaranteed when one is working with limited data.
MDL attempts to remedy these, by:
• Restricting the set of allowed codes in such a way that it becomes possible (computable) to find the shortest codelength of the data, relative to the allowed codes, and
• Choosing a code that is reasonably efficient, whatever the data at hand. This point is somewhat elusive and much research is still going on in this area.
Rather than "programs", in MDL theory one usually speaks of candidate hypotheses, models or codes. The set of allowed codes is then called the model class. (Some authors refer to the model class as the model.) The code is then selected for which the sum of the description of the code and the description of the data using the code is minimal.
One of the important properties of MDL methods is that they provide a natural safeguard against overfitting, because they implement a tradeoff between the complexity of the hypothesis (model class) and the complexity of the data given the hypothesis.
## Example of MDL
A coin is flipped 1,000 times and the numbers of heads and tails are recorded. Consider two model classes:
• The first is a code that represents outcomes with a 0 for heads or a 1 for tails. This code represents the hypothesis that the coin is fair. The code length according to this code is always exactly 1,000 bits.
• The second consists of all codes that are efficient for a coin with some specific bias, representing the hypothesis that the coin is not fair. Say that we observe 510 heads and 490 tails. Then the code length according to the best code in the second model class is shorter than 1,000 bits.
For this reason a naive statistical method might choose the second model as a better explanation for the data. However, an MDL approach would construct a single code based on the hypothesis, instead of just using the best one. To do this, it is simplest to use a two-part code in which the element of the model class with the best performance is specified. Then the data is specified using that code. A lot of bits are needed to specify which code to use; thus the total codelength based on the second model class could be larger than 1,000 bits. Therefore the conclusion when following an MDL approach is inevitably that there is not enough evidence to support the hypothesis of the biased coin, even though the best element of the second model class provides better fit to the data.
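To put rough numbers on this example, here is a back-of-the-envelope sketch; treating the parameter cost as $\tfrac{1}{2}\log_2 n$ bits is one standard asymptotic choice for encoding a single real parameter, not the only possible code:
```python
from math import log2

n, heads = 1000, 510
tails = n - heads

fair_bits = n * 1.0                                   # 1 bit per flip

p = heads / n                                         # maximum-likelihood bias
ml_bits = -(heads * log2(p) + tails * log2(1 - p))    # ~ 999.7 bits

param_bits = 0.5 * log2(n)                            # ~ 5 bits for the bias
print(fair_bits, ml_bits, ml_bits + param_bits)       # 1000.0  ~999.7  ~1004.7
```
On these rough numbers the two-part biased-coin code is longer than the fair-coin code, matching the conclusion above.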
## MDL Notation
Central to MDL theory is the one-to-one correspondence between code length functions and probability distributions. (This follows from the Kraft-McMillan inequality.) For any probability distribution $P$, it is possible to construct a code $C$ such that the length (in bits) of $C(x)$ is equal to $-\log_2 P(x)$; this code minimizes the expected code length. Vice versa, given a code $C$, one can construct a probability distribution $P$ such that the same holds. (Rounding issues are ignored here.) In other words, searching for an efficient code reduces to searching for a good probability distribution, and vice versa.
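As a tiny illustration of this correspondence, the ideal code lengths $-\log_2 P(x)$ for a toy distribution satisfy the Kraft equality, so a prefix code with exactly these lengths exists:
```python
from math import log2

P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
lengths = {x: -log2(p) for x, p in P.items()}
print(lengths)                               # a: 1 bit, b: 2, c: 3, d: 3
print(sum(2**-l for l in lengths.values()))  # 1.0, so the Kraft bound is met
```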
## Related concepts
MDL is very strongly connected to probability theory and statistics through the correspondence between codes and probability distributions mentioned above. This has led researchers such as David MacKay to view MDL as equivalent to Bayesian inference: code length of the model and code length of model and data together in MDL correspond to prior probability and marginal likelihood respectively in the Bayesian framework.[6]
While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov normalized maximum likelihood code, which plays a central role in current MDL theory, but has no equivalent in Bayesian inference. Furthermore, Rissanen stresses that we should make no assumptions about the true data generating process: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense.[7][8] In the last mentioned reference Rissanen bases the mathematical underpinning of MDL on the Kolmogorov structure function.
According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe priors that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called objective Bayesian analysis; there, however, the motivation is usually different.[9]
### Other Systems
MDL was not the first information-theoretic approach to learning; as early as 1968 Wallace and Boulton pioneered a related concept called Minimum Message Length (MML). The difference between MDL and MML is a source of ongoing confusion. Superficially, the methods appear mostly equivalent, but there are some significant differences, especially in interpretation:
• MML is a fully subjective Bayesian approach: it starts from the idea that one represents one's beliefs about the data generating process in the form of a prior distribution. MDL avoids assumptions about the data generating process.
• Both methods make use of two-part codes: the first part always represents the information that one is trying to learn, such as the index of a model class (model selection), or parameter values (parameter estimation); the second part is an encoding of the data given the information in the first part. The difference between the methods is that, in the MDL literature, it is advocated that unwanted parameters should be moved to the second part of the code, where they can be represented with the data by using a so-called one-part code, which is often more efficient than a two-part code. In the original description of MML, all parameters are encoded in the first part, so all parameters are learned.
## References
Rissanen, J. (1978). "Modeling by shortest data description". Automatica 14 (5): 465–471. doi:10.1016/0005-1098(78)90005-5.
2. "Minimum Description Length". University of Helsinki. Retrieved 2010-07-03.
3. Grünwald, P. (June 2007). "the Minimum Description Length principle". MIT Press. Retrieved 2010-07-03.
4. Grünwald, P (April 2005). "Advances in Minimum Description Length: Theory and Applications". MIT Press. Retrieved 2010-07-03.
5. Grünwald, Peter. "MDL Tutorial". Retrieved 2010-07-03.
6. MacKay, David (2003). "Information Theory, Inference, and Learning Algorithms". Cambridge University Press. Retrieved 2010-07-03.
7. Rissanen, Jorma. "Homepage of Jorma Rissanen". Retrieved 2010-07-03.
8. Rissanen, J. (2007). "Information and Complexity in Statistical Modeling". Springer. Retrieved 2010-07-03.
9. Nannen, Volker. "A short introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length.". Retrieved 2010-07-03.
## Further reading
• Minimum Description Length on the Web, by the University of Helsinki. Features readings, demonstrations, events and links to MDL researchers.
• Homepage of Jorma Rissanen, containing lecture notes and other recent material on MDL.
• Homepage of Peter Grünwald, containing his very good tutorial on MDL.
• J. Rissanen, Information and Complexity in Statistical Modeling, Springer, 2007.
• Peter Grünwald, The Minimum Description Length Principle, MIT Press, 2007. ISBN 0-262-07262-9.
• David MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
http://mathhelpforum.com/calculus/21666-related-rates-problems.html
# Thread: Related Rates Problems
1. ## Related Rates Problems
1. A spotlight on the ground shines on the wall 12 m away. If a man 2 m tall walks from the spotlight toward the building at a speed of 1.6 m/s, how fast is the length of his shadow on the building decreasing when he is 4 m from the building?
2. A boat is pulled into a dock by a rope attached to the bow of the boat and passing through a pulley on the dock that is m higher than the bow of the boat. If the rope is pulled in at a rate of 1 m/s, how fast is the boat approaching the dock when it is 8 m from the dock.
So far related rates problems have been pretty easy since we have been using geometric figures. I can't come up with an equation for either of these so I can solve for the change of rate. Can someone help me come up with the equations and show me how to do it in case I get something similar to these in the future?
2. Originally Posted by FalconPUNCH!
1. A spotlight on the ground shines on the wall 12 m away. If a man 2 m tall walks from the spotlight toward the building at a speed of 1.6 m/s, how fast is the length of his shadow on the building decreasing when he is 4 m from the building?
So far related rates problems have been pretty easy since we have been using geometric figures. I can't come up with an equation for either of these so I can solve for the change of rate. Can someone help me come up with the equations and show me how to do it in case I get something similar to these in the future?
1) using similar triangles, you have
$\frac{x}{2}=\frac{12}{y}$
which implies that
$xy=24$
and
$\frac{dy}{dt} x + \frac{dx}{dt} y =0$
use the given and evaluate at x=12-4=8 (note that if x=8, then y=3)
for 2) what do you mean by "... m higher than..."?
3. Hello, FalconPUNCH!
You left out a measurement in #2.
. . I'll pick a convenient value . . .
2. A boat is pulled into a dock by a rope attached to the bow of the boat and
passing through a pulley on the dock that is 6 m higher than the bow of the boat.
If the rope is pulled in at a rate of 1 m/s,
how fast is the boat approaching the dock when it is 8 m from the dock?
Code:
``` * P
* |
R * |
* | 6
* |
* |
B * * * * * * *
x```
The boat is at $B$, the pulley is at $P.$
The length of the rope is: . $R \,=\, BP$ . and . $\frac{dR}{dt} \,=\, -1\text{ m/s}$
From Pythagoras, we have: . $x^2 + 6^2 \:=\:R^2$
Differentiate with respect to time: . $2x\left(\frac{dx}{dt}\right) \:=\:2R\left(\frac{dR}{dt}\right)$
. . and we have: . $\frac{dx}{dt} \:=\:\frac{R}{x}\left(\frac{dR}{dt}\right)$
Can you finish it now?
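Here is a quick symbolic check of the last step (assuming the 6 m height chosen above and evaluating at $x = 8$, where $R = \sqrt{64+36} = 10$):
```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)            # distance from the boat to the dock
R = sp.sqrt(x**2 + 6**2)           # rope length (6 m pulley height)

dxdt = sp.symbols('dxdt')
eq = sp.Eq(sp.diff(R, t).subs(sp.Derivative(x, t), dxdt).subs(x, 8), -1)
print(sp.solve(eq, dxdt))          # [-5/4]: the boat approaches at 1.25 m/s
```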
4. Oh sorry I left out 1 m. Thanks for your help. I understand now.
http://mishabucko.wordpress.com/2013/01/13/a-question-regarding-two-sequences/
Seeking objectiveness
A question regarding two sequences
As you know, I have been working with the R-sequence for some time. Some time ago I found an interesting feature: part of the R-sequence (3, 9, 10, 27, 28, 30, 81) resembles https://oeis.org/A060140, which by definition consists of the numbers of the form 9x+1 that occupy the same positions in S that 1 occupies in the infinite Fibonacci word (https://oeis.org/A003849). How interesting that the prime numbers might be closely connected to the Fibonacci sequence!
To get a more general picture of the universe of numbers, we would have to understand more about the Fibonacci sequence and the rationale for its potential connection with the primes. Still, I wanted to share that.
For all integers $r$ find all $b=[b_1, b_2,..., b_n]$, $b_i \in \{0,1\}$, such that $r = \sum_{i=1}^{n}{b_i p_i}$, where $p_i$ are consecutive primes, for positive integers $i,n$.
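For small $n$ this is a plain subset-sum search over the first $n$ primes; a brute-force sketch (the bound inside `primerange` is an arbitrary cap, assuming $n$ is small):
```python
from sympy import primerange

def representations(r, n):
    # All 0/1 vectors b over the first n primes with sum(b_i * p_i) == r.
    primes = list(primerange(2, 10000))[:n]   # cap assumes n is small
    found = []
    def rec(i, rem, chosen):
        if rem == 0:
            found.append(chosen + [0] * (n - i))
            return
        if i == n or rem < 0:
            return
        rec(i + 1, rem - primes[i], chosen + [1])   # use p_i
        rec(i + 1, rem, chosen + [0])               # skip p_i
    rec(0, r, [])
    return found

print(representations(10, 5))   # [[1, 1, 1, 0, 0], [0, 1, 0, 1, 0]]
```
For $r=10$ and $n=5$ this finds the two representations $10 = 2+3+5$ and $10 = 3+7$.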
Below, the R-sequence sketched in R.
Below, in blue, you can see $x/\log x$ from the PNT together with the R-sequence. The number of elements in the R-sequence is the same as the number of primes, hence the idea of approximating the PNT with the R-sequence.
Below, I started to look for the actual approximation of the PNT using the R-sequence. The point is that we can see a striking resemblance between the number of primes and the R-sequence, and the R-sequence itself is more embedded in the life of numbers than the infamous PNT. Below is part of the research I conducted, where I tried to use a linear approximation instead of a logarithmic one. That could work only if we were summing over a number of such approximations, knowing exactly how to do it.
Below, you can see how the R-sequence looks in comparison to the PNT formula.
http://math.stackexchange.com/questions/16479/direct-sum-of-modules
# Direct sum of modules
Let $R$ be a PID, $M$ an $R$-module, and suppose $M$ is the direct sum of $M_1, \dots, M_k$ where $M_i \leq M$ for $1 \leq i \leq k$.
Now let $N$ be a submodule of $M$ .
Is it true that $N$ is the direct sum of $N_1,\dots,N_t$ where $N_i \lt N$ and $N_{k-t+1} \le M_i$?
-
I have tried to format it. Please correct if there were mistakes. – Aryabhata Jan 5 '11 at 21:04
There are no mistakes. Thank you for your help. Next time I'll try to do it myself – t.k Jan 7 '11 at 9:28
## 3 Answers
For a general module M over a PID R, its submodules need not even be isomorphic to direct sums of submodules of its proper direct summands. For instance, the Z-module Q^10 is a direct sum of proper submodules, but it has directly indecomposable submodules of rank 10 (so they cannot be written as any non-trivial direct sum). Since the torsion-free rank of a submodule can only decrease, this means M = Q⊕Q^9 is a counterexample.
If we try to do what the question asks, and get a direct sum decomposition from submodules of the direct summands, then things go wrong even for M=Z/2Z ⊕ Z/2Z and the submodule N={(x,x):x in Z/2Z }. N is directly indecomposable, and is not a submodule of the specified direct summands of M (though it is itself a summand in a different decomposition).
-
Is $N \simeq \mathbb{Z}/2\mathbb{Z}$, via $(x,x) \mapsto x$ – Juan S Jun 3 '11 at 23:33
@Qwirk: yes, exactly. – Jack Schmidt Jun 3 '11 at 23:35
and so if $M = M_1 \oplus M_2 = \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$, then $N \cap M_1 = N \cap M_2 = \mathbb{Z}/2\mathbb{Z}$? (I am trying to reconcile all the answers here - specifically Prometheus' comment that $N \cap M_1 = 0$) – Juan S Jun 3 '11 at 23:43
@Qwirk, the problem is ≅-isomorphism is not the same as set equality. M1 = { (0,0), (1,0) }, M2 = { (0,0), (0,1) } and N = { (0,0), (1,1) }, so M1∩N = M2∩N = { (0,0) }. Rasmus's and Prometheus's examples are the same as this one, using C or Z instead of my Z/2Z. – Jack Schmidt Jun 3 '11 at 23:47
ahh, got it, thank you! – Juan S Jun 4 '11 at 0:51
Take $R=\mathbb{Z}$, $M=\mathbb{Z} \oplus \mathbb{Z}= M_1 \oplus M_2$ and $N=\mathbb{Z}\cdot (1,1)$.
$N\cap M_1 =N\cap M_2 = 0$ so it can't be a direct sum of submodules of $M_1, M_2$
-
If I understand the question correctly then $$\mathbb C\cdot(1,1)\subset \mathbb C\oplus\mathbb C$$ is a counterexample (here I have chosen $R=\mathbb C$).
-
|
http://crypto.stackexchange.com/questions/1005/is-there-a-secret-sharing-scheme-which-allows-delegation-re-sharing-without-reco/1009
|
# Is there a secret sharing scheme which allows delegation/re-sharing without reconstructing the original secret?
EDIT: Ilmari Karonen's answer below, while not exactly what I want, gives a very good idea of what I am trying to accomplish.
Are there any known secret sharing schemes that allow new parties to be read in on a portion of the secret, preferably without all parties having to be online at the same time? I don't want to have to keep the original trusted authority around. The question is: is this possible without reconstructing the original secret?
My idea is that I divide a secret between $N$ of $k$ people at time $T$. At time $T+1$, I want to change it to $n$ of $k'$ people. This may or may not be strictly possible, but is there something possible along those lines?
-
Read in by whom? Someone who knows the entire secret? Or must it be possible to read them in without anyone learning or using the entire secret? – David Schwartz Oct 20 '11 at 22:01
Are the $n$ and $N$ in the last paragraph the same number, or one smaller/bigger than the other? – Paŭlo Ebermann♦ Oct 20 '11 at 23:50
I know this is an older question, but I found something recently and then came across your question. Thought you might be interested. csis.gmu.edu/faculty/desmedt%207-25-97.pdf and cs.cmu.edu/~wing/publications/CMU-CS-01-155.pdf – mikeazo♦ Mar 6 at 14:04
## 4 Answers
A trivial example showing that this is possible, at least in some cases, is the $n$-out-of-$n$ secret sharing scheme based on modular addition. Let $s \in \mathbb Z / m \mathbb Z$ be the secret, and construct $n$ shares of it by picking $x_1, \dotsc, x_{n-1}$ randomly from $\mathbb Z / m \mathbb Z$ and letting $x_n = s - (x_1 + \dotsm + x_{n-1}) \mod m$. Thus, the secret can be reconstructed by calculating $s = x_1 + \dotsm + x_n \mod m$.
In this scheme, anyone who holds a share $x_i$ can further split their share into $j$ subshares $\xi_1, \dotsc, \xi_j$ in the same way, such that $x_i = \xi_1 + \dotsm + \xi_j \mod m$. If they then discard the original share $x_i$, they'll have expanded the number of shares from $n$ to $n+j-1$. Further, this expansion is completely transparent to the other participants, in that reconstructing the original secret still requires merely adding up all the $n+j-1$ shares modulo $m$.
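To make this construction concrete, here is a minimal Python sketch of the additive scheme and the share-splitting step; the modulus, the function names, and the use of a 61-bit Mersenne prime are my own choices, not part of any standard:

```python
import secrets

M = 2**61 - 1  # the modulus m; any m larger than the secret works

def share(s, n, m=M):
    """n-out-of-n additive sharing: s = sum(shares) mod m."""
    xs = [secrets.randbelow(m) for _ in range(n - 1)]
    xs.append((s - sum(xs)) % m)
    return xs

def reconstruct(shares, m=M):
    return sum(shares) % m

s = 123456789
shares = share(s, 3)
# The holder of shares[2] splits it into 2 subshares and discards the original:
expanded = shares[:2] + share(shares[2], 2)
assert reconstruct(expanded) == s  # reconstruction is unchanged: add everything mod m
```

Note how splitting a share is literally just re-sharing it with the same primitive, which is why the expansion is transparent to the other participants.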
I can't right now think of any obvious way to devise a $k$-out-of-$n$ secret sharing scheme that could be similarly expanded into a $(k+j)$-out-of-$(n+j)$ scheme by some subset of fewer than $k$ participants, but I wouldn't be surprised if one did exist.
Addendum: Now that I'm not quite as tired as I was when I first wrote the answer above, I see that the trick I used does not generalize the way the OP apparently wants. In particular, we can prove the following:
Lemma 1: It is not possible to expand an effective $(k,n)$ threshold secret sharing scheme into an effective $(j,m)$ threshold scheme, where $m-j > n-k$, without access to at least $k$ shares.
Proof: Already given by Dilip Sarwate. Essentially, if the holders of $k-1$ shares could do this, they could assign all the $m-n$ new shares to themselves, and so obtain $k+m-n-1 \ge j$ new shares, which would let them recover the secret under the expanded scheme and thus break the original scheme.
Lemma 2: It is not possible to expand an effective $(k,n)$ threshold secret sharing scheme into an effective $(j,m)$ threshold scheme, where $j > k$, without access to at least $n-k+1$ shares.
Proof: As above, if the holders of $n-k$ shares could do this, then the holders of the remaining $k$ shares could still recover the secret, thus breaking the new scheme.
Put together, these lemmata yield the following theorem:
Theorem: It is not possible to expand an effective $(k,n)$ threshold secret sharing scheme into an effective $(j,m)$ threshold scheme, where $m > n$, without access to at least $k$ or $n-k+1$ shares.
For lemma 1, the lower bound of $k$ shares is tight, as shown by poncho. For lemma 2, my example above shows the tightness of the lower bound for the specific case of $k=n$; I'm not sure whether or not it can be tightened further for $k < n$.
Of course, if you're willing to allow more general secret sharing schemes, where not all shares are equivalent, then various kinds of expansion are indeed possible. In particular, it's always possible for any shareholder(s) to further share their own shares with any number of people using any secret sharing scheme of their choosing. These derived shares will not, however, generally be equivalent to the original ones.
-
Thanks. This is along the lines of what I was looking for. The question of course is can it work for non n out of n schemes? – imichaelmiers Oct 24 '11 at 16:33
This scheme gives the holder of any share the power to unilaterally change the scheme. Now more shares have to be present before the secret can be recovered by the group, and the other group members (who did not participate in the share splitting) may well object. If the $i$-th share holder splits his share but retains his original share in addition to his portion of the split share, then the $k$ original share holders can still recover the secret while the new holders of the split shares essentially become second-class citizens. Just something for the OP to think about. – Dilip Sarwate Oct 24 '11 at 21:38
I changed a $j$ into a $k$ in the Proof of Lemma 1, which is what I think was intended. Please roll back if you did mean to say $j$. – Dilip Sarwate Dec 4 '12 at 11:43
@Dilip: I did mean to say $j$. The hypothetical expanded scheme requires $j$ shares to reconstruct the secret, while the original scheme is supposed to require $k$. If the holders of $k-1$ shares under the original scheme could somehow obtain $j$ or more shares under the new scheme, that would break the original scheme, contradicting the assumption that the original scheme was secure. – Ilmari Karonen Dec 4 '12 at 14:18
But if the holders of $k-1$ shares can get together and construct just one additional share (instead of $j > 1$) without knowing the secret, they have broken the original scheme which has the property that knowledge of $k-1$ or fewer shares is insufficient to reconstruct the secret but $k$ (or more) shares suffice to reconstruct the secret. Yes, getting more than one new share also breaks the original scheme, and indeed the expanded scheme is also broken in that any $k-1$ shareholders can construct more shares and break the new scheme. – Dilip Sarwate Dec 4 '12 at 14:25
Suppose we have a $(k, n)$ threshold scheme, meaning that there are $n$ shares of a secret distributed to different parties, and any $k$ shares can be used to re-create the secret. A new person joins the club and wants to have a share of the secret too. I contend that the secret must be available to a trusted party who can create the extra share. Because if someone with little knowledge of the secret (or even as much knowledge as a cabal of $k-1$ shareholders trying to break the scheme) could create a new share of the secret, then this someone could repeat the process many times until $k$ shares are available, and thus recover the secret via the standard reconstruction technique.

Now, it is possible to have a mathematical algorithm or computer program that will take $k$ (or more) shares and create new shares without the secret being explicitly reconstructed, that is, none of the quantities used internally or stored anywhere (register or memory cell or disk or tape drive) will actually be the secret itself, and so the new share creation process is safe from the casual eavesdropper. However, a savvy opponent who can view the process or get a core dump from the processor will be able to recreate the secret.
-
Agreed. Hence why I said along those lines. Obviously if a subset of k people can generate a fresh full (1/n) secret share, then they could colude and beat the system. However, what if they could create a share that didn't have a full share of the information (b/c they don't have it themselves).But when combined with others ( issued by a different subset), could reconstruct the full share. It seems that we certainly would have left the realm of information theoretically secure schemes, but it still might be possible from a computationally secure standpoint. – imichaelmiers Oct 23 '11 at 21:15
For a $(k,n)$ threshold scheme, $k$ people can re-create the secret; in fact they recover not just the secret but also the polynomial $f(x)$ with randomly chosen coefficients for $x$, $x^2, \ldots, x^{k-1}$. Remember that $f_0$ is the secret and that $(\alpha_i, f(\alpha_i))$ are the shares. So once $k$ people have recovered $f(x)$, they can create additional shares. They might even (possibly inadvertently) re-create shares $(\alpha_j, f(\alpha_j))$ that are in the possession of other people not currently present and hand them out to their friends waiting outside the meeting room. – Dilip Sarwate Oct 24 '11 at 17:33
Actually, this appears to be quite straight-forward.
I'll give you an example of this using Shamir's original method: in Shamir's method, the trusted party which generates the shares picks a random polynomial, with the secret as the constant term, then evaluates that polynomial at various nonzero elements, and distributes the pairs $(e, P(e))$ as the shares. Now, to implement what you are requesting, the trusted party wouldn't discard the polynomial, but instead store it somewhere secret. When a request comes in for another share, he'd pick a fresh nonzero element $f$, and generate a fresh share $(f, P(f))$.
Obviously, this is just extending the share distribution task over time, and so it can be done by any secret sharing method. In addition, we don't even need the trusted party to do this (at least, with Shamir's method); if we can get people with $N$ separate shares, those shares are enough to allow us to reconstruct the polynomial, and create fresh shares. The only tricky part might be finding shares that haven't already been distributed; one obvious way around this is to use a large field (say, one with $\ge 2^{128}$ elements), and pick fresh shares randomly.
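As a concrete illustration of the dealer-side bookkeeping described above, here is a hedged Python sketch; the prime field, the Horner evaluation, and all names are my assumptions rather than anything prescribed by Shamir's paper:

```python
import secrets

P_FIELD = 2**127 - 1  # a Mersenne prime; the choice of field is an assumption

def make_polynomial(secret, k):
    """Random degree-(k-1) polynomial with the secret as constant term."""
    return [secret] + [secrets.randbelow(P_FIELD) for _ in range(k - 1)]

def eval_poly(coeffs, x):
    """Evaluate the polynomial at x via Horner's rule, mod P_FIELD."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P_FIELD
    return acc

def fresh_share(coeffs, used):
    """A dealer who kept the polynomial mints a share at an unused nonzero x."""
    while True:
        f = 1 + secrets.randbelow(P_FIELD - 1)
        if f not in used:
            used.add(f)
            return (f, eval_poly(coeffs, f))

poly = make_polynomial(secret=42, k=3)  # any 3 shares recover 42
used = set()
shares = [fresh_share(poly, used) for _ in range(5)]  # issued over time
```

As a comment below points out, the stored coefficient list contains the secret in the clear, so this convenience comes at a real cost.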
-
Yeah that obviously works. I wanted to do this without having any trusted authorities around or anyone learning the secret. – imichaelmiers Oct 20 '11 at 22:40
Well, someone who can generate fresh shares obviously can get the secret (by the simple expedient of generating N distinct fresh shares, and use the newly minted shares to recover the secret). Hence, I don't see any alternative to either keeping the secret around explicitly (that is, keeping the parameters that the trusted party used), or doing it implicitly by having N people with shares cooperate to generate fresh ones. – poncho Oct 21 '11 at 3:34
"Now, to implement what you are requesting, and trusted party wouldn't discard the polynomial, but instead store it somewhere secret." Perhaps it is worth pointing out that by storing the coefficients of the polynomial, the trusted party is actually storing the secret itself "in the clear" as the constant term of the polynomial being stored. If the anticipated number of future requests for shares is fairly small compared to the number who must be present for the secret to be recovered, storing a few $(f, P(f)$ to meet sucg future requests needs might be a better solution. – Dilip Sarwate Oct 22 '11 at 1:03
The Lagrange interpolation polynomial $L(x)$ is evaluated at $x=0$ to get the secret (the constant term). It can just as easily be evaluated at any $x$ to get another share. This requires that the threshold $k$ number of shares are present, just as they would be required to evaluate $L(0)$ to get the secret.
This approach does not require one to store the polynomial or re-construct the secret. As others have pointed out, however, there is a possibility that this will generate shares that have already been distributed, unless the $x_i$ of each distributed share is recorded and not repeated. This may or may not be important, depending on your security needs, as it can sometimes be ok to distribute identical shares.
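A small self-contained Python sketch of this trick: evaluating the interpolating polynomial at $t=0$ yields the secret, while any other $t$ mints a fresh share. The prime field and the example polynomial are mine, purely for illustration:

```python
P = 2**127 - 1  # any prime larger than the secret and the share x-values works

def lagrange_eval(points, t, p=P):
    """Evaluate the interpolating polynomial through `points` at x = t in GF(p).
    t = 0 recovers the secret; any other t produces a fresh share."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (t - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p  # modular inverse (Python 3.8+)
    return total

# Three shares of the hypothetical polynomial 42 + 7x + 11x^2 mod P:
pts = [(x, (42 + 7*x + 11*x*x) % P) for x in (1, 2, 3)]
assert lagrange_eval(pts, 0) == 42            # L(0) is the secret
new_share = (10, lagrange_eval(pts, 10))      # L(10) is a fresh share
assert new_share[1] == (42 + 7*10 + 11*10*10) % P
```

The caveat in the answer is visible here too: nothing stops the assembled parties from running `lagrange_eval(pts, 0)` while they are at it.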
(Edit: This clearly does not meet the OP's requirement that not all original parties have to be online at the same time. The threshold number of parties need to be present to make a new share. However, this does away with the original trusted authority.
From the standpoint that the secret must never be calculated or visible, this works only if the threshold number of parties calculate $L(x_i)$ for share $x_i$. The obvious caveat is that there is nothing stopping these parties from calculating $L(0)$ while they're at it, so it's not 100% secure.
Just another way to think about the problem.)
-
How is the Lagrange interpolation polynomial created in order to evaluate it at some $x_i$? – Dilip Sarwate Dec 8 '12 at 23:23
In the typical programming scenario, it wouldn't be created then evaluated. It would just be evaluated at xi by summing the values of the basis polynomials evaluated at xi. The same programming construct the parties use to evaluate L(0) can be used to evaluate L(xi). Not sure if you need me to recreate the steps or the polynomial here, or if you are getting at something else? – ampersand Dec 9 '12 at 12:26
|
http://math.stackexchange.com/questions/115264/how-can-one-define-in-terms-of-equations-independence-of-elements-in-an-algebr?answertab=active
|
# How can one define, in terms of equations, independence of elements in an algebraic structure defined by identities?
I have been thinking about free algebraic structures. I know the definition by a universal property. But there is a common interpretation that a free structure is one generated by a set of "independent" elements, or elements that are in "no relation" to one another. I have not found a rigorous treatment of this interpretation in the books I have read.
What is a relation between elements of a structure? I understand it's a kind of equation that those elements satisfy, the kind being specific to the kind of structure. $\mathbb Z$ is the free group generated by the one-element set, $\{x_1\}=\{1\}$. There is an equation this set satisfies:
$$x_1=x_1.$$
So it's not enough for a set of generators to satisfy any equation in order not to be independent. I know from group theory that a set $X$ of elements of a group $G$ is a basis of a free subgroup of $G$ iff there is no finite product of elements of $X\cup X^{-1},$ without sequences of the form $x_ix_i^{-1}$ and $x_i^{-1}x_i$ in it, such that this product is equal to $1$. I hope this is true at least, as this is not a theorem I have read. But I think this is correct.
There's a lot of equations this definition doesn't take into account. Brackets are not considered, no terms of the form $((x_i^{-1})^{-1})^{-1}$ and the like are allowed. And the right-hand side has to be $1$.
For semigroups, this changes. I have not seen a definition of a "free basis" for semigroups given in these terms but I think it should say what follows. A subset $X$ of a semigroup $S$ is a set of free generators iff the equality of two finite products of elements of $X$ implies the equality of the words over $X$ used to write those products.
So in this case, the right-hand side doesn't have to be $1$ (which doesn't change for monoids, even though $1$ exists in them). But brackets are still not considered.
For magmas, I think the definition should be this.
A subset $X$ of a magma $M$ is a free basis iff no equation is satisfied by its elements with the exception of equating identical (correct) strings of variables and brackets.
With more complex structures like modules, it gets (in a sense) even more complicated (that is, more equations are not considered). The definition in this case is that of linear independence, which is the only definition of independence I've encountered.
A great many equations are not considered in this definition. The right-hand side has to be $0$; there must be at least one coefficient not equal to $0$; the same element of the basis may appear only once; the coefficients are just elements of the ring, not sums, products, or additive inverses of elements of the ring; brackets are not considered.
I would like to know if there is a general definition that would allow me to find out which equations I should consider when I want to define an independent set of elements of some specific type of structure not mentioned above. (I think it's important to reduce the scope of this question to structures whose axioms are identities, unlike fields for example.)
-
One reason you might run into some circular problems: the equations that specific commutative rings satisfy are polynomials, elements of free commutative rings; the equations that groups satisfy are reduced words, elements of free groups. You might like "varieties". They start with some idea of "absolutely free" that is either explicitly constructed or a universal property, but then all the relatively free definitions are clearly non-circular. "Free abelian group" is a group whose set of identities is the normal subgroup generated by xy=yx. – Jack Schmidt Mar 1 '12 at 16:42
I second Arturo's recommendation of Bergman's superb textbook. It is probably the best place to learn about the various ways of defining and constructing free algebras. It contains the answers to your questions (and much more). – Gone Mar 1 '12 at 17:16
@ArturoMagidin Thank you! – user23211 Mar 3 '12 at 16:21
## 1 Answer
Supposing you stay within the realm of structures $S$ for which a "free $S$ on a set $X$" is defined (by a universal property), there is a precise sense one can give to a set $Y$ of elements of any structure $S_0$ to be independent, but you might find it a bit circular and therefore disappointing. The sense is as follows: let $X$ be a copy of $Y$ (detached from the structure $S_0$ in which $Y$ was contained), let $F$ be the free $S$ on $X$, and consider the unique morphism $f:F\to S_0$ that sends each element of $X$ to the corresponding (because $X$ is a copy of $Y$) element of $Y$ (which exists by the universal property). Then the set $Y$ is by definition independent in $S_0$ if $f$ is injective. So for the free $S$ on any $X$ itself, $X$ is trivially independent.
Every expression in the language of structures $S$ involving (apart from constants of the language) only elements of $X$ designates a unique element in the free structure $F$, and by replacing the elements of $X$ by their counterparts in $Y$ the expression also designates an element in $S_0$. The injectivity requirement now says that two expressions designating the same element of $S_0$ (an equation between these expressions) necessarily already designate the same element in $F$ (such equations are those you call "not considered"; these are precisely the equations that always hold in structures $S$).
You may check that for instance in a free group an expression such as $(x^{-1})^{-1}$ just designates $x$, so the equation $x=(x^{-1})^{-1}$ is one that is not considered, and in the end no expression that contains the inverse of an inverse needs to be considered, since one gets an equivalent expression (in any group, including in a free group) by dropping both inverses. Similarly words that contain a letter multiplied directly by its inverse can be simplified, as can expressions with redundant parentheses. So all elements of the free group on $X$ can be described by strings over $X\cup X^{-1}$ (no parentheses, no nested inverses) without any occurrence of a letter next to its inverse (on either side). Also equations of the form $E_1=E_2$ can be replaced by $E_1(E_2)^{-1}=e$ (which has as consequence that to test a morphism for being injective it suffices to consider its kernel: the inverse image of $e$); this explains why you need only consider equations with one member the identity, at least for groups. If you look closely, you will find that all the peculiarities of equations that need to be considered for different structures can be explained by the details of their "free structures", and the properties of their morphisms.
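To make the reduction concrete, here is a toy Python sketch of free reduction on words over $X\cup X^{-1}$; encoding a letter as a (generator, exponent) pair with exponent $\pm 1$ is my choice:

```python
def reduce_word(word):
    """Free reduction: cancel adjacent pairs x x^{-1} and x^{-1} x.
    A single left-to-right pass with a stack suffices."""
    stack = []
    for letter, exp in word:
        if stack and stack[-1] == (letter, -exp):
            stack.pop()          # adjacent inverse pair cancels
        else:
            stack.append((letter, exp))
    return stack

# x y y^{-1} x^{-1} x reduces to x:
w = [('x', 1), ('y', 1), ('y', -1), ('x', -1), ('x', 1)]
print(reduce_word(w))  # [('x', 1)]
```

The reduced (stack) form is exactly the normal form for elements of the free group, which is what makes independence testable expression by expression.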
-
Thank you for your answer. Yes, I do find this definition a bit disappointing because it uses the concept of a free structure. I would like to be able to define a free structure as a structure generated by a set of free generators. Although not necessarily any kind of structure. (Actually, I'm interested the most in what happens when we consider "vector spaces" over "fields" but omit the existence of additive inverses in both the "fields" and the "vector spaces".) Would that be impossible? – user23211 Mar 1 '12 at 15:06
My answer gives a way to determine what "independent" should mean whenever a "free $S$" is meaningful. You can keep it a secret that you looked into the construction details of a free $S$ to find out; in concrete cases the resulting characterisation of "independent" is stand-alone. However you cannot expect something that is uniform over all kinds of structures, since as you remarked the details do vary. Another option is using logic: a set is independent if the only relations that hold are those implied by the axioms of the structure $S$. – Marc van Leeuwen Mar 1 '12 at 18:00
|
http://mathhelpforum.com/differential-geometry/131368-if-f-continuous-b.html
|
# Thread:
1. ## if f is continuous on [a,b]....
Prove that if $f$ is continuous on $[a,b]$ with $f(x) \geq 0$ for all $x \in [a,b]$, then there is $c \in [a,b]$ such that $f(c)=\frac{1}{b-a} \int_a^b f^2$
2. Originally Posted by flower3
Prove that if $f$ is continuous on $[a,b]$ with $f(x) \geq 0$ for all $x \in [a,b]$, then there is $c \in [a,b]$ such that $f(c)=\frac{1}{b-a} \int_a^b f^2$
This is false: take $f(x)=10\,,\,\,[a,b]=[0,1]\Longrightarrow \frac{1}{1-0}\int\limits_0^1 10^2\,dx=100\neq 10=f(c)\,\,\forall\,c\in [0,1]$ .
The claim though is true if instead of $f^2$ we put $f$ in the integral: it is just Rolle's theorem.
Tonio
3. Originally Posted by flower3
Prove that if $f$ is continuous on $[a,b]$ with $f(x) \geq 0$ for all $x \in [a,b]$, then there is $c \in [a,b]$ such that $f(c)=\frac{1}{b-a} \int_a^b f^2$
I agree with tonio, except I think he meant the MVT. Let $F(x)=\int_a^x f(t)\,dt$. By the FTC this is continuous on $[a,b]$ and differentiable on $(a,b)$, so by the MVT there exists some $c\in(a,b)$ such that $f(c)=F'(c)=\frac{1}{b-a}\left\{\int_a^b f(t)\,dt-\int_a^a f(t)\,dt\right\}=\frac{1}{b-a}\int_a^b f(t)\,dt$
|
http://math.stackexchange.com/questions/216387/linear-algebra-problem-involving-the-characteristic-of-a-field/216399
|
# Linear algebra problem involving the characteristic of a field
I'm having trouble with the following problem:
Let $\tau_A: F^2\times F^2 \rightarrow F$ be a symmetric bilinear form given by $\tau_A (v,w)=v^tAw$, $\forall v,w\in F^2$ and $A=\begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix}$, where $F$ is a field.
Suppose the characteristic of $F$ is not equal to $2$. Prove that there exists a basis $\mathcal{B}=\{v_1,v_2\}$ of $F^2$ such that $\tau_A (v_1,v_1)=\tau_A (v_2,v_2)=0$
So far I've tried seeing if I could milk anything out of the non-degeneracy of $\tau_A$ (so that $(F^2,\ \tau_A)$ is an inner product space), but got stuck. I also split this problem into two cases: $Char(F)=0$ and $Char(F)=p$, but wasn't able to get anywhere. I have no experience dealing with the characteristic of a field, so I think conceptually I'm having a hard time understanding why it would matter in a problem like this.
Any tips or solutions (preferably as elementary as possible) would be appreciated! Thanks in advance!
-
## 2 Answers
We can construct the vectors $v_1,v_2$ explicitly: $$v_1=(1,1),\qquad v_2=(1,-1).$$
It is clear that $\tau_A(v_1,v_1)= \tau_A(v_2, v_2)=0$. But when are they a basis?
They are a basis exactly when $1$ and $-1$ are different, i.e. exactly when $\mathrm{char}(F)\neq 2$.
On the other hand if $\mathrm{char}(F)=2$ then $A=I$ the identity matrix. So $\tau_A((x,y),(x,y))=0$ implies $x^2+y^2=(x+y)^2=0$, so $x=-y=y$.
Then those vectors form a vector space of dimension $1$, so you cannot find a basis for your vector space of dimension $2$.
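For completeness, the two evaluations asserted at the start of this answer amount to the routine computation

$$\tau_A\big((x,y),(x,y)\big) = x^2 - y^2, \qquad \tau_A(v_1,v_1) = 1^2 - 1^2 = 0, \qquad \tau_A(v_2,v_2) = 1^2 - (-1)^2 = 0.$$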
-
Just proceed straightforwardly: if $v=(x,y)$ then $\tau_A(v,v)=0$ means $x^2=y^2$, so just take as basis $\{(1,1),(1,-1)\}$. Note that this is not a basis if the characteristic is 2.
-
|
http://mathoverflow.net/questions/88871/why-is-this-rational/88872
|
## Why is this rational? [closed]
Why is $\ln(\pi^{\pi})$ rational?
-
## 1 Answer
It is not known if it is rational. It's suspected to be transcendental over $\mathbb Q$. It is known that $e^\pi$ is transcendental, but none of $e^e,\pi^e,\pi^\pi$ is known to be irrational or transcendental.
-
|
http://mathoverflow.net/questions/302/which-came-first-the-fibonacci-numbers-or-the-golden-ratio/121448
|
## Which came first: the Fibonacci Numbers or the Golden Ratio?
I know that the Fibonacci numbers converge to a ratio of .618, and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the...significance?
-
The ratio doesn't converge to $.618$, it converges to $\frac{\sqrt5-1}2$. – Kevin O'Bryant Jun 30 2010 at 16:16
I don't understand why this question has so many down-votes. Any down-voters care to explain themselves? – Kevin O'Bryant Jun 30 2010 at 16:17
@Kevin: ...or, depending upon exactly what you mean, the successive ratios converge to $\varphi = \frac{\sqrt{5}+1}{2}$. (FWIW, the latter is usually defined to be the golden ratio, not its reciprocal.) I didn't downvote, but I think that at least the second question asking about the significance of the golden ratio is not a good one for our site. I expect that most research mathematicians have heard more than enough about $\varphi$. That pretty much goes for me, although I wouldn't mind watching Donald Duck in Mathemagic Land once more for old times' sake. – Pete L. Clark Jun 30 2010 at 17:28
The chicken or the egg: that is the question. – Victor Protsak Jul 1 2010 at 5:46
My daughter (aged something like 6 at the time - a long time ago) told me: "well God didn't say 'let there be eggs'". The question becomes 'which is the chicken?' – Mark Bennet Mar 16 2011 at 19:41
## 8 Answers
The golden ratio in mathematics dates back to the Pythagoreans, circa 500 BC, it's true. But the Fibonacci numbers also have a long heritage going back to Pingala in India circa 200 BC.
However, the mystical claims about the golden ratio and Fibonacci numbers going back hundreds of millions of years in biology and showing up in every piece of ancient art and architecture seem to date back only to Pacioli in the 16th century AD.
-
David, I agree with your criticism about the Fibonnaci numbers not "...showing up in every piece of ancient art and architecture...". But why is it 'mystical' that patterns related to the Fibonacci numbers that have certain useful phenomenological properties (with respect to things like leaf/petal arrangements/staggerings for example), were discovered by evolutionary algorithms? My 'hundreds of millions of years' number simply comes from the divergence of angiospermae (flowering plants) from gymnospermae approximately ~200-250 million years ago. – Mensen Nov 4 2009 at 20:54
Golden ratio came first. Wikipedia has a rather thorough article on it. It's not nearly as pervasive in nature or architecture as people like to say it is. It will show up in anything with regular pentagons, though.
-
Yeah, Golden ratio been here from ancient times. – Ilya Nikokoshev Oct 11 2009 at 19:53
As previous answers have pointed out, both the golden ratio and the Fibonacci numbers go back thousands of years. However, I believe the connection between the two was discovered around 1730. At that time, Daniel Bernoulli and Abraham de Moivre independently came up with the generating function for the Fibonacci numbers, and the resulting formula for the $n$th Fibonacci number in terms of the golden ratio.
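For reference, the formula they found, giving the $n$th Fibonacci number in terms of the golden ratio (Binet's formula), is

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}, \qquad \varphi = \frac{1+\sqrt{5}}{2}, \quad \psi = \frac{1-\sqrt{5}}{2} = -\varphi^{-1},$$

from which the limit of consecutive ratios $F_{n+1}/F_n \to \varphi$ follows immediately, since $|\psi| < 1$.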
-
What about Kepler? He clearly states the relation in "Hexagonal snow" (1611). – Victor Protsak Jul 1 2010 at 5:08
Thanks for drawing my attention to this, Victor. It seems that Kepler should get credit for guessing that the golden ratio is the limiting ratio of consecutive Fibonacci numbers, but apparently this was from numerical experiments (I can't find his "Snowflake" book right now). I still think that Daniel Bernoulli and Abraham de Moivre found the first proofs. – John Stillwell Jul 1 2010 at 5:32
Translation from the Russian edition, at the end of the section "Regular solids based on number five and their genesis from divine proportions", after stating the relation, Kepler says: "I omit all further arguments that I could give in confirmation of these most pleasant reasonings. A special place would be required for them". I don't know whether the notion of limit Kepler possessed was robust enough to allow for a possibility of a proof, but his manner of expression leaves little doubt that he understood the precise relationship between the "extreme and mean ratio" and $F_{n+1}/F_n.$ – Victor Protsak Jul 1 2010 at 5:57
The book A Mathematical History of the Golden Number by Roger Herz-Fischler is an exhaustive study of nearly all references to the golden ratio, from the earliest times, and is available as a free e-book. As has been pointed out by others, the golden ratio is older than the Fibonacci numbers. On page 53, Herz-Fischler notes that a pentagram appears as "a pot mark on a jar" dating from 3100 BC in Egypt.
-
I cannot see the page in question in google, but unless there is evidence that the golden ratio was used in the construction, it might be a bit of a stretch. The fact that the golden ratio appears in a figure does not mean it was discovered. – Thierry Zell Aug 15 2010 at 12:33
The section I quoted is at the bottom of page 53; the link I gave should direct you to that page. You have to scroll down the page to see it. You are right that the mere use of a pentagram does not prove that the ancient Egyptians were aware of the golden ratio and its significance. But if you read further in that section, entitled "Examples Before Pythagoras (before c. -550)," the author lists many examples from various times and geographical locations, that show at least some understanding of the golden ratio. I think the earliest evidence he cites is from 4,500 B.C. in Palestine (page 57), – Marko Amnell Aug 15 2010 at 16:52
(continued): which suggests that prehistoric cultures may have had some familiarity with the golden ratio. In any case, the question was just about which came first, the golden ratio or the Fibonacci numbers. The golden ratio was definitely understood in the ancient world. The first unequivocal mention of it appears to be by Euclid in The Elements. There is evidence that the Fibonacci numbers were understood in ancient India, with Wikipedia citing a date as early as 200 B.C. That is 100 years after Euclid, but close enough that one could claim the question is not settled. – Marko Amnell Aug 15 2010 at 17:05
The golden ratio came first in nature, long before humans evolved to think about Fibonacci numbers.
-
Are you sure that the Fibonacci numbers didn't appear in nature? Please see the comments attached to other answers. – S. Carnahan♦ Feb 11 at 8:08
The golden ratio was used extensively in ancient art, but the man named Fibonacci (Leonardo of Pisa) lived around 1200 AD. It's possible that the Fibonacci series was known before Fibonacci but I'm not aware of this. So I think it's safe to assume the golden ratio is older.
-
The Fibonacci sequence (not series!) was known in ancient India, long before Fibonacci. – David Eppstein Oct 22 2009 at 20:43
We've have Fibonacci numbers as long as we've had rabbits.. :) – userN Jun 30 2010 at 18:42
Somewhat more accurately, we've had Fibonacci numbers as long as we've had bees. – Michael Lugo Jun 30 2010 at 20:11
The golden ratio was used extensively in ancient art ... try this ... Misconceptions about the Golden Ratio, George Markowsky, College Math Journal: Volume 23, Number 1, Pages: 2-19 1992 – Gerald Edgar Jul 1 2010 at 1:13
We've had Fibonacci numbers as long as we've had pineapples. Or sunflowers. – Todd Trimble Feb 11 at 12:29
What is the significance? Most of the nice properties of the golden mean can be attributed to the fact that its continued fraction coefficients are uniformly bounded, as will be true in particular for any periodic continued fraction, which is to say any quadratic irrational, such as arises as the spectral radius of an indecomposable two-term linear recurrence relation. Among such continued fractions, the unique one with the minimum possible upper bound of 1 naturally exhibits these effects most prominently, and it arises from (arguably) the simplest such recurrence.
-
The answer for either of these is "hundreds of millions of years" due to their emergence/use in biological development programs, the self-assembly of symmetrical viral capsids (the adenovirus for example), and maybe even protein structure. Because of their close relationship I'd be hard pressed to say which 'came first'.
If you google for it, you'll find plenty of books and papers. However, be extremely careful about examples without a well-explained functional role... there are an arbitrarily large number of coincidences out there if you're looking for them, and humans excel at numerology.
-
But I suspect the question was about human, conscious discovery of these concepts, insofar as existing records can show us. Otherwise this becomes partly a philosophical question; I may as a platonist argue that both concepts are eternal. – Jonas Meyer Dec 25 2009 at 1:14
|
http://physics.stackexchange.com/questions/5456/the-speed-of-gravity/5520
|
# The speed of gravity?
Sorry for the layman question, but it's not my field.
Suppose this thought experiment is performed. Light takes 8 minutes to go from the surface of the Sun to Earth. Imagine the Sun is suddenly removed. Clearly, for the remaining 8 minutes, we won't see any difference.
However, I am wondering about the gravitational effect of the Sun. If the propagation of the gravitational force travels at the speed of light, then for 8 minutes the Earth will continue to follow an orbit around nothing. If, however, gravity is due to a distortion of spacetime, this distortion will cease to exist as soon as the mass is removed, and thus the Earth will fly off along the orbit's tangent.
What is the state of the art of research for this thought experiment? I am pretty sure this is knowledge that can be inferred from observation.
-
Gravity is due to a distortion in spacetime, and, perturbations of the gravitational field do travel at the speed of light. There is no contradiction between these two aspects as your 3rd para seems to suggest. – user346 Feb 19 '11 at 15:55
## 6 Answers
Since general relativity is a local theory just like any good classical field theory, the Earth will respond to the local curvature which can change only once the information about the disappearance of the Sun has been communicated to the Earth's position (through the propagation of gravitational waves).
So yes, the Earth would continue to orbit what should've been the position of the Sun for 8 minutes before flying off tangentially. But I should add that such a disappearance of mass is unphysical anyway since you can't have mass-energy just poofing away or even disappearing and instantaneously appearing somewhere else. (In the second case, mass-energy would be conserved only in the frame of reference in which the disappearance and appearance are simultaneous - this is all a consequence of GR being a classical field theory).
A more realistic situation would be some mass configuration shifting its shape non-spherically in which case the orbits of satellites would be perturbed but only once there has been enough time for gravitational waves to reach the satellite.
-
Gravitational influences do propagate at the speed of light, not instantaneously.
The question of what would happen if the Sun instantly disappeared is actually a funny one in general relativity. The equations of general relativity imply as a mathematical consequence that energy must be locally conserved. Therefore, there is no valid solution to the equations that describes the Sun suddenly disappearing (since that scenario violates local energy conservation).
(A similar statement holds in electromagnetism, by the way: charge conservation is a logical consequence of Maxwell's equations, so if someone asks you what the electric field does when a charge suddenly disappears, there is no correct answer.)
But you can sensibly ask what would happen if the Sun suddenly changed its mass distribution -- if it exploded, say, sending its mass in different directions at high speeds. The answer is that the Earth's orbit wouldn't change for 8 minutes.
-
Beat you to it with the same answer :) – dbrane Feb 18 '11 at 22:18
Uh, I don't like this discussion of energy conservation. There is no such implication made by GR. And there is also no problem in using discontinuous stress-energy tensor which would in turn imply discontinuity in curvature (in the very same way as happens at characteristic surfaces of gravitational waves). And the very same stuff can be said about EM. – Marek Feb 18 '11 at 22:29
I'm talking about local energy conservation, $T^{\mu\nu}_{;\nu}=0$. It follows from the Einstein equation as an identity. There's no solution corresponding to the Sun disappearing, because the Sun's disappearance would violate it. – Ted Bunn Feb 18 '11 at 23:09
Oh, and @Marek, if you really think there's no problem finding solutions to the field equations with sources that don't satisfy the conservation laws, then tell me this: what is a solution to Maxwell's equations corresponding to a point charge $q$ at the origin, which abruptly disappears at $t=0$ -- that is, $\rho({\bf r},t)=q\delta({\bf r})\theta(-t)$, ${\bf J}=0$? – Ted Bunn Feb 19 '11 at 16:24
A followup: I agree that it would be better to insert "locally" before "conserved" in my answer. I'm aware of the problems with global energy conservation in GR. I'm making that edit now. – Ted Bunn Feb 19 '11 at 17:23
All observations are consistent with standard GR so far, but I don't think the speed of gravity, in particular, has ever been measured.
Experimental measurement of the speed of gravity was quite a controversy a few years ago, when a paper came out claiming that the speed of gravity was very close to $c$ as measured by the Shapiro delay. To see papers on the subject, google shapiro+speed+gravity:
Clifford Will is an expert in the area and says that there was no measurement. He has a website on the subject that gives links to the various papers:
http://wugrav.wustl.edu/people/CMW/SpeedofGravity.html
My guess is that the Will side won. But academia means "never having to admit you were wrong". Here's a pair of dueling papers on the subject, published in the same journal at the same time (that date to after Clifford Will last updated his page above):
Class.Quant.Grav. 22 (2005) 5181-5186, Sergei M. Kopeikin, Comment on 'Model-dependence of Shapiro time delay and the "speed of gravity/speed of light" controversy'
http://arxiv.org/abs/gr-qc/0510048
Class.Quant.Grav. 22 (2005) 5187-5190, S. Carlip, Reply to "Comment on 'Model-dependence of Shapiro time delay and the speed of gravity/speed of light controversy'"
http://arxiv.org/abs/gr-qc/0510056
-
Your question was first asked by Laplace. The following is from the Wikipedia article on "The speed of gravity"
"Laplace
The first attempt to combine a finite gravitational speed with Newton's theory was made by Laplace in 1805. Based on Newton's force law he considered a model in which the gravitational field is defined as a radiation field or fluid. Changes in the motion of the attracting body are transmitted by some sort of waves.[4] Therefore, the movements of the celestial bodies should be modified in the order v/c, where v is the relative speed between the bodies and c is the speed of gravity. The effect of a finite speed of gravity goes to zero as c goes to infinity, but not as 1/c^2 as it does in modern theories. This led Laplace to conclude that the speed of gravitational interactions is at least 7×10^6 times the speed of light. This velocity was used by many in the 19th century to criticize any model based on a finite speed of gravity, like electrical or mechanical explanations of gravitation.
From a modern point of view, Laplace's analysis is incorrect. Not knowing about Lorentz invariance of static fields, Laplace assumed that when an object like the Earth is moving around the Sun, the attraction of the Earth would not be toward the instantaneous position of the Sun, but toward where the Sun had been if its position was retarded using the relative velocity (this retardation actually does happen with the optical position of the Sun, and is called annual solar aberration). Putting the Sun immobile at the origin, when the Earth is moving in an orbit of radius R with velocity v presuming that the gravitational influence moves with velocity c, moves the Sun's true position ahead of its optical position, by an amount equal to vR/c, which is the travel time of gravity from the sun to the Earth times the relative velocity of the sun and the Earth. The pull of gravity (if it behaved like a wave, such as light) would then be always displaced in the direction of the Earth's velocity, so that the Earth would always be pulled toward the optical position of the Sun, rather than its actual position. This would cause a pull ahead of the Earth, which would cause the orbit of the Earth to spiral outward. Such an outspiral would be suppressed by an amount v/c compared to the force which keeps the Earth in orbit; and since the Earth's orbit is observed to be stable, Laplace's c must be very large. In fact, as is now known, it may be considered to be infinite, since as a static influence, it is instantaneous at distance, when seen by observers at constant transverse velocity.
In a field equation consistent with special relativity (i.e., a Lorentz invariant equation), the attraction between static charges is always toward the instantaneous position of the charge (in this case, the "gravitational charge" of the Sun), not the time-retarded position of the Sun. When an object is moving at a steady speed, the effect on the orbit is of order v^2/c^2, and the effect preserves energy and angular momentum, so that orbits do not decay. The attraction toward an object moving with a steady velocity is towards its instantaneous position with no delay, for both gravity and electric charge."
-
And they used to think that gravity was the longitudinal wave to go with the transverse wave of electromagnetism (i.e. as in the P and S waves of seismology). – Carl Brannen Feb 22 '11 at 3:53
The fact that the distortion travels 'as soon as' a mass is removed (or not) is not implied in any way by gravity being due to a distortion of spacetime. In fact, distortions of spacetime are as limited to traveling at the speed of light as any other physical influence.
-
I feel like this question is being asked wrong and/or it is being interpreted wrong for what you're actually asking. It is understood that the propagation of anything cannot exceed 'c', but I don't think propagation is necessary to answer the question, or to create a valid thought experiment. First off, gravity is not fully understood by any mainstream science, and a lot of the paradoxical problems inherent in our currently accepted understanding tend to leave many scratching their heads. I'm no physicist, or scientist for that matter, but this has been on my mind for a very long time and I decided to throw it out here and let you all tear it to pieces, or at least lead me in a better direction lol.
The question: how would the sudden disappearance of the sun affect gravitation, and would it follow 'c' or happen instantaneously?
My answer is: both.
Let's look at gravity in a couple of different ways to explain why I believe this. I see a lot of references to gravity as a wave... I assume this is because of the apparent "propagation" that occurs within a gravitationally active region. I accept that any physical change made by object A that could affect object B must travel to object B no faster than 'c'. So yeah, sun goes poof, we wait the 8 minutes before gravity is released.

Here's where I go left... That "wave" isn't necessary to get information from A to B instantly. Look at it backwards: mass is the force (cause), gravity is the result of that force (effect). I don't view gravity as we observe it as a force, but as the released energy of another force... displacement. The region that would see a net change if the sun went poof would be space-time.

Look at it in a simplified way: I stand at one end of a field and you at the other with two cans and a string. Pull it taut and yell into it... the vibrations travel down the string to my can at the speed of sound and I can hear it. For the sake of this example, let's assume the speed of sound represents 'c', the sound wave represents gravity, and the string represents space-time. Everything works just as you would expect. Now, I would ask you to make a constant humming noise into the can. Several milliseconds later, I begin to hear it. Suddenly, you pass out from humming instead of breathing and drop the can. Again, I must wait several milliseconds before I realize something has happened and you've stopped. What I failed to realize was that I already had that information: as the can left your hand (the pull of your gravity), the gravitational constant in local space-time was changed (the tension on the string went slack). Does this not happen instantly?

Granted, I know of no device that can measure the gravitational constant in a specific region of space-time, but is this not a method of reading the net effect of a massive and sudden gravitational change? What if I lay at the bottom of a pool with an air hose and blow bubbles? The bubbles travel to the surface at (hypothetical) 'c', but the bubbles themselves displace the water, causing it to slightly rise in apparent volume. Does this increase in net volume not happen the instant the bubble displaces the water?
Bottom line, I agree that if the sun vanished, it would take 8 minutes for a change in its gravitational influence on the Earth to be observed, but I believe that the net effect on the region of space-time between the earth and sun could be observed instantly using the proper equipment to detect those changes.
-
This looks more like a separate (though related) question than an answer. I suggest you make it into a question. – Wouter Mar 3 at 11:07
|
http://mathhelpforum.com/algebra/115690-adding-monster-rational-expression.html
|
# Thread:
1. ## Adding a monster of a rational expression
I am having a hard time with this; I am not sure if it's just the sheer size or something in my calculations. I am grateful to anyone who wants to take this on:
$\frac{4a^2-20a}{a^2+2a-35} + \frac{3a-6}{3a^2-10a+8}$
So far all I have is the factored form of the two denominators:
$(a+7)(a-5)$ and $(3a-4)(a-2)$
After this, the expression seems too huge for me to deal with.
2. Originally Posted by Charchar
I am having a hard time with this; I am not sure if it's just the sheer size or something in my calculations. I am grateful to anyone who wants to take this on:
$\frac{4a^2-20a}{a^2+2a-35} + \frac{3a-6}{3a^2-10a+8}$
So far all I have is the factored form of the two denominators:
$(a+7)(a-5)$ and $(3a-4)(a-2)$
After this, the expression seems too huge for me to deal with.
$\frac{4a^2 -20a}{a^2+2a-35} + \frac{3a-6}{3a^2-10a+8}$
$\frac{4a(a-5)}{(a+7)(a-5) } + \frac{3(a-2)}{(3a-4)(a-2)}$
$\frac{4a}{a+7} + \frac{3}{(3a-4)}$
It should be simple now. Is everything clear?
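If you want to double-check the cancellation, here is a quick sympy sketch (the variable names are arbitrary):

```python
# Verify that the original sum and the reduced sum agree.
from sympy import symbols, simplify

a = symbols('a')

original = (4*a**2 - 20*a)/(a**2 + 2*a - 35) + (3*a - 6)/(3*a**2 - 10*a + 8)
reduced = 4*a/(a + 7) + 3/(3*a - 4)

print(simplify(original - reduced))  # prints 0, so the two forms agree
```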
3. ahhh, I see clearly now. Thank you very much for your help!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507105946540833, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/88939/list
|
## Return to Question
4 deleted 3 characters in body
Can anyone give me a relatively simple proof or some reference for the following fact? (I know that there is a proof of this theorem in Gerard J. Murphy's book "$C^*$-Algebras and Operator Theory", but I'm sure that there should be a simpler proof.)
Every hereditary $C^*$-subalgebra of a simple $C^*$-algebra is also simple!
Maybe this is easy for someone, but it has confused me for a long time. I am a novice!
3 added 9 characters in body; edited title
# Question about hereditary $C^*$-algebra.
Can anyone give me a relatively simple proof or some reference for the following fact? (I know that there is a proof of this theorem in Gerard J. Murphy's book "$C^*$-Algebras and Operator Theory", but I'm sure that there should be a simpler proof.)
Every hereditary $C^*$-subalgebra of a simple $C^*$-algebra is also simple!
Maybe this is easy for someone, but it has confused me for a long time. I am a novice!
2 edited title
1
# Simple hereditary C*-algebra.
Can anyone give me a relatively simple proof or some reference for the following fact? (I know that there is a proof of this theorem in Gerard J. Murphy's book "C*-Algebras and Operator Theory", but I'm sure that there should be a simpler proof.)
Every hereditary C*-subalgebra of a simple C*-algebra is also simple!
Maybe this is easy for someone, but it has confused me for a long time. I am a novice!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255759716033936, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/10042/list
|
## Return to Answer
2 added clarification of handwavy thing; deleted 87 characters in body
Probably you already figured this out since it's been almost a month, but anyway...
If you have an arbitrary (i.e. possibly non-split) reductive group $G$ and an Iwahori $I$ and you want to write the function-theoretic Hecke algebra $\mathcal{H}(G;I)$ as one of these other combinatorial Hecke algebras, you should do the following: take $W$ to be the abstract extended affine Weyl group, take $S$ to be a base of the affine Weyl group $W_{aff} \subset W$ and form the combinatorial Hecke algebra in the usual way (e.g. chapter 7 in Humphreys's little grey book), except that when you are choosing the parameters $a_s$ and $b_s$ for the relations, you should replace $q$ with the index $[IsI:I]$.
I believe this non-split situation was done by Macdonald, but I'm not sure.
Edit - I realized that "usual way" above requires some clarification:
First, you write $W$ as the internal semidirect product $W_{aff} \rtimes \Omega$, where $\Omega$ is the subgroup of elements stabilizing the base alcove. The length function $\ell$ extends from $W_{aff}$ to $W$ by simply taking the length of the projection of an element to $W_{aff}$ (so now there are nontrivial elements of length zero). The Bruhat order also extends, but that's not necessary now. Then the "usual way" means:
• label your basis using all elements of $W$ (not just $W_{aff}$),
• interpret the relation "$T_{w_1 w_2}=T_{w_1}T_{w_2}$ whenever $\ell(w_1 w_2)=\ell(w_1)+\ell(w_2)$ for all $w_1, w_2 \in W$" verbatim, but using the extended length function, and
• interpret the relation "$(T_s)^2=\ldots$ for all $s\in S$" verbatim, not worrying that $S$ does not generate $W$.
1
Probably you already figured this out since it's been almost a month, but anyway...
If you have an arbitrary (i.e. possibly non-split) reductive group $G$ and an Iwahori $I$ and you want to write the function-theoretic Hecke algebra $\mathcal{H}(G;I)$ as one of these other combinatorial Hecke algebras, you should do the following: take $W$ to be the abstract extended affine Weyl group, take $S$ to be a base of $W$ and form the combinatorial Hecke algebra in the usual way (e.g. chapter 7 in Humphreys's little grey book), except that when you are choosing the parameters $a_s$ and $b_s$ for the relations, you should replace $q$ with the index $[IsI:I]$.
I believe this non-split situation was done by Macdonald, but I'm not sure.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9153440594673157, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/142470/prove-that-contextfreelanguage-regularlanguage-is-always-a-context-free-lang
|
# Prove that [ContextFreeLanguage - RegularLanguage] is always a context free language, but the opposite is false
Let $L$ be a context-free language and $R$ a regular language. Show that $L-R$ is always context-free, but $R-L$ need not be. (Hint: try to connect both automata.)
The above hint did not help me :(
Are you sure that you've stated this correctly? $L\setminus R$ should be context-free, but it need not be regular. – Brian M. Scott May 8 '12 at 3:13
Thanks, Brian, my mistake – marcos May 8 '12 at 3:25
Phew! Good, it makes sense now. Can you show the second part? It's easier than the first. – Brian M. Scott May 8 '12 at 3:27
## 2 Answers
The harder part is showing that $L\setminus R$ is always context-free. The hint actually is useful, once you figure out what to do with it. Let me point you in the right direction by talking about a simpler situation. Suppose that you have finite-state automata $M_1$ and $M_2$ for two regular languages, with state sets $S_1$ and $S_2$, respectively. Make a new FSA $M$ whose state set is $S_1\times S_2$; in $M$ there will be a transition from a state $\langle s_1,s_2\rangle$ to $\langle s_1',s_2'\rangle$ on input $\alpha$ iff $M_1$ has a transition $s_1\stackrel{\alpha}\longrightarrow s_1'$ and $M_2$ has a transition $s_2\stackrel{\alpha}\longrightarrow s_2'$. In essence, $M$ runs $M_1$ and $M_2$ simultaneously on the same input. That makes it easy to adjust the acceptor states of $M$ so that accepts exactly those words that are accepted by $M_1$ but not by $M_2$; I'll leave the details for you to think about. (You'll also have to figure out what the initial state should be, but that's very easy once you get the idea of how $M$ works.) Then you just have to modify the idea so that one of the automata is a pushdown automaton.
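Here is a minimal sketch of that product construction in the finite-state case; the representation of a DFA (a transition dictionary, a start state, and a set of accepting states) is one arbitrary choice:

```python
# Product automaton accepting L(dfa1) minus L(dfa2).
# A DFA is a triple (delta, start, accept) with delta[(state, symbol)] -> state.

def difference_dfa(dfa1, dfa2, alphabet):
    d1, s1, acc1 = dfa1
    d2, s2, acc2 = dfa2
    delta, states, frontier = {}, set(), [(s1, s2)]
    while frontier:                      # explore reachable state pairs
        p = frontier.pop()
        if p in states:
            continue
        states.add(p)
        for c in alphabet:               # run both machines on the same symbol
            q = (d1[(p[0], c)], d2[(p[1], c)])
            delta[(p, c)] = q
            frontier.append(q)
    # Accept exactly when the first machine accepts and the second rejects.
    accept = {p for p in states if p[0] in acc1 and p[1] not in acc2}
    return delta, (s1, s2), accept
```

For $L\setminus R$ itself, the same idea applies with a pushdown automaton for $L$ in place of the first DFA; only the finite-state component changes.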
The second part is much easier. Two hints:
• If $\Sigma$ is the alphabet, is $\Sigma^*$ a regular language?
• Is the class of context-free languages closed under complementation?
Cool, I think I got the 1st part: <Ln,Rn> will be accepting on the combined automaton, only if Ln is accepting on the original automaton and Rn is not. – marcos May 8 '12 at 4:39
@mpm: That's right. This trick of making one automaton do the work of two is pretty useful. – Brian M. Scott May 8 '12 at 4:43
Hints: express $R-L$ more basically in set-theoretic terms. Notice anything about what you get in terms of things you know about CFLs? Try some very simple $R$ (always a good tactic, at least to start).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468101263046265, "perplexity_flag": "head"}
|
http://windowsontheory.org/2012/09/05/from-discrepancy-to-privacy-and-back/
|
## From Discrepancy to Privacy and Back
by Kunal Talwar
Different areas of mathematics, and theoretical computer science, freely borrow tools and ideas from each other and these interactions, like barters, can make both parties richer. In fact it’s even better than barter: the objects being ideas, you don’t have to actually give them up in an exchange.
And so it is no surprise that differential privacy has used tools from several other fields, including complexity, cryptography, learning theory and high-dimensional geometry. Today I want to talk about a little giving back. A small interest payment, if you will. Below I will describe how differential privacy tools helped us resolve a question of Alon and Kalai.
Discrepancy
This particular instance deals with discrepancy theory, more specifically with the Erdős Discrepancy Problem (EDP), which polymath5 has been discussing here.
In 1932, Erdös conjectured:
Conjecture[Problem 9 here] For any constant ${C}$, there is an ${n}$ such that the following holds. For any function ${f:[n] \rightarrow \{-1, +1\}}$, there exists an ${a}$ and a $k \leq n/a$ such that
$\displaystyle |\sum_{i = 1}^{k}{f(ia)}| > C.$
For any ${a,k}$, the set ${S_{a,k} = \{ia: 0\leq i \leq k\}}$ is an arithmetic progression containing ${0}$; we call such a set a Homogeneous Arithmetic Progression (HAP). The conjecture above says that for any red-blue coloring of [n], there is some HAP which has a lot more red than blue (or vice versa).
In modern language, this is a question about discrepancy of HAPs. So let me define discrepancy first. Let ${U}$ be a universe and let ${\mathcal{S}}$ denote a family of subsets of ${U}$. A coloring ${f}$ of ${U}$ is an assignment ${f: U \rightarrow \{+1,-1\}}$. The discrepancy of a set ${S_i}$ under the coloring is simply ${| \sum_{j \in S_i} f(j) |}$; the imbalance of the set. The discrepancy of a set system under a coloring is the maximum discrepancy of any of the sets in the family. The minimum of this quantity over all colorings is then defined to be the discrepancy of the set system. We want a coloring in which all sets in ${\mathcal{S}}$ are as close to balanced as possible. In other words, if ${A}$ denotes the set-element incidence matrix of the set system, then ${\mathsf{disc}(A) = \min_{x \in \{-1,1\}^n} |Ax|_{\infty}}$.
Thus the conjecture says that the discrepancy of HAPs over ${[n]}$ grows with ${n}$.
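To make the definitions concrete, here is a brute-force sketch that computes the discrepancy of the HAP family on $[n]$ by trying all $2^n$ colorings (feasible only for tiny $n$):

```python
from itertools import product

def discrepancy(universe, sets):
    # min over colorings f: universe -> {-1,+1} of max_S |sum_{j in S} f(j)|
    elems = sorted(universe)
    best = float('inf')
    for signs in product((-1, 1), repeat=len(elems)):
        f = dict(zip(elems, signs))
        best = min(best, max(abs(sum(f[j] for j in S)) for S in sets))
    return best

n = 10
# HAPs {a, 2a, ..., ka} for every a and every k with ka <= n
haps = [tuple(a * i for i in range(1, k + 1))
        for a in range(1, n + 1) for k in range(1, n // a + 1)]
print(discrepancy(range(1, n + 1), haps))
```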
This post deals with the related concept of Hereditary Discrepancy. The discrepancy can often be small by accident: even though the set system is complex enough to contain a high discrepancy set system in it, it can have discrepancy zero. The hereditary discrepancy measures the maximum discrepancy of any set system “contained” in ${(U,\mathcal{S})}$.
Formally, given a set system ${(U,\mathcal{S})}$, and a subset ${V \subseteq U}$, the restriction of ${(U,\mathcal{S})}$ to ${V}$ is another set system ${(V,\mathcal{S}_{|V})}$, where ${\mathcal{S}_{|V} = \{ S_i \cap V : S_i \in \mathcal{S}\}}$. If we think of the set system as a hypergraph on ${U}$, then the restriction is the induced hypergraph on ${V}$. The hereditary discrepancy ${\mathsf{herdisc}(U,\mathcal{S})}$ is the maximum discrepancy of any of its restrictions. In matrix language, ${\mathsf{herdisc}(A)}$ is simply ${\max_{A'} \mathsf{disc}(A')}$ where the maximum is taken over all submatrices of ${A}$.
Some examples. Let ${n}$ denote ${|U|}$. A totally unimodular matrix ${A}$ gives us a set system with hereditary discrepancy at most ${1}$. An arbitrary collection of ${n}$ sets has discrepancy at most ${O(\sqrt{n})}$. Note that a random coloring will give discrepancy about ${O(\sqrt{n \log n})}$. In a famous paper, Spencer showed the ${O(\sqrt{n})}$ upper bound, and Bansal, and recently Lovett and Meka, gave constructive versions of this result.
Privacy
Given a vector ${y \in \Re^n}$, and ${0}$-${1}$ matrix ${A}$, consider the problem of outputting a vector ${z}$ such that ${|z - Ay|_{\infty}}$ is small, and yet the distribution of ${z}$ is differentially private, i.e., for ${y}$ and ${y'}$ that are close, the distributions of the corresponding ${z}$'s are close. If you are a regular reader of this blog, you must be no stranger to differential privacy. For the purposes of this post, a mechanism ${M : \Re^n \rightarrow \Re^m}$ satisfies differential privacy if for any ${y,y' \in \Re^n}$ such that ${|y-y'|_1 \leq 1}$, and any (measurable) ${S \subseteq \Re^m}$:
$\displaystyle \Pr[M(y) \in S] \leq 2 \Pr[M(y') \in S] .$
Thus if ${y,y'}$ are close in ${\ell_1}$, the distributions ${M(y),M(y')}$ are close in ${L_{\infty}}$.
Researchers have studied the question of designing good mechanisms for specific matrices ${A}$. Here by good, we mean that the expected value of the error say ${|z-Ay|_\infty}$, or ${|z-Ay|_2}$ is as small as possible. It is natural to also prove lower bounds on the error needed to answer specific queries of interest.
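For concreteness, the textbook example of such a mechanism is the Laplace mechanism: perturb each coordinate of ${Ay}$ with Laplace noise scaled to the ${\ell_1}$-sensitivity of ${A}$. A sketch, with ${\epsilon = \log 2}$ matching the factor-${2}$ definition above:

```python
import numpy as np

def laplace_mechanism(A, y, eps=np.log(2), rng=None):
    # If |y - y'|_1 <= 1, then |Ay - Ay'|_1 is at most the largest column
    # l1-norm of A, so noise of scale (sensitivity / eps) per coordinate
    # gives the multiplicative e^eps = 2 guarantee in the definition above.
    rng = np.random.default_rng() if rng is None else rng
    sensitivity = np.abs(A).sum(axis=0).max()   # max column l1-norm
    return A @ y + rng.laplace(scale=sensitivity / eps, size=A.shape[0])
```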
A particular set of queries of interest is the following. The coordinates of the vector ${y}$ are associated with the hypercube ${\{0,1\}^d}$. Think of ${d}$ binary attributes people may have, and for ${v \in \{0,1\}^d}$, ${y_{v}}$ denotes the number of people with ${v}$ as their attribute vector. For each subcube, defined by fixing ${k}$ of the ${d}$ bits, we look at the counting query corresponding to that subcube: i.e. ${\sum_{v \in \mbox{ subcube}} y_v}$. This corresponds to dot product with a vector ${a}$ with ${a_v}$ being one on the subcube, and zero elsewhere. Thus we may want to ask how many people have a ${0}$ in the first and second attribute, and a ${1}$ in the fourth. Consider the matrix ${A}$ defined by all the possible subcube queries.
Subcubes defined by fixing ${k}$ bits are ${k}$-juntas to some people, contingency table queries to others. These queries being important from a statistical point of view, Kasiviswanathan, Rudelson, Smith and Ullman showed lower bounds on the amount of error any differentially private mechanism must add, for any constant ${k}$. When ${k}$ is ${\Omega(d)}$, their work suggests that one should get a lower bound that is ${2^{\Omega(d)}}$.
The connection
So what has discrepancy got to do with privacy? Muthukrishnan and Nikolov showed that if ${A}$ has large hereditary discrepancy, then any differentially private mechanism must incur large expected squared error. In fact, one can go back and check that nearly all known lower bounds for differentially private mechanism are really hereditary discrepancy lower bounds in disguise. Thus there is a deep connection between ${\mathsf{herdisc}(A)}$ and the minimum achievable error for ${A}$.
For the EDP, it is natural to ask how large the hereditary discrepancy is. Alon and Kalai show that it is ${\tilde{\Omega}(\sqrt{\log n})}$ and at most ${n^{O(\frac{1}{\log\log n})}}$. They also showed that for constant ${\epsilon}$, it is possible to delete an ${\epsilon}$ fraction of the integers in ${[n]}$, so that the remaining set system has hereditary discrepancy at most polylogarithmic in ${n}$. Gil guessed that the truth is closer to the lower bound.
Alex Nikolov and I managed to show that this is not the case. Since hereditary discrepancy is, well, hereditary, a lower bound on the hereditary discrepancy of a submatrix is also a lower bound on the hereditary discrepancy of whole matrix. In the EDP matrix on ${[n]}$, we will first find our subcubes-juntas-contingency-tables matrix ${A}$ above as a submatrix; one for which ${d}$ is ${\Theta(\frac{\log n}{\log \log n})}$. Having done that, it would remain to prove a lower bound for ${\mathsf{herdisc}(A)}$ itself.
The first step is done as follows: associate each dimension ${i \leq d}$ with the ${2i}$th and the ${(2i+1)}$th primes. A point ${v = (v_1,v_2,\ldots,v_d)}$ in the hypercube is naturally associated with the integer ${f(v)=\Pi_{i} p_{2i+v_i}}$. A subcube query can be specified by a vector ${q \in \{0,1,\star\}^d}$: ${q}$ is set to the appropriate ${0}$-${1}$ value for the coordinates that we fix, and ${\star}$ for the unconstrained coordinates. We can associate a subcube ${q}$ with the integer ${f(q)=\Pi_{i: q_i \neq \star} p_{2i+q_i}}$. It is easy to see that ${x}$ is in the subcube corresponding to ${q}$ if and only if ${f(q)}$ divides ${f(x)}$. Thus if we restrict ourselves to the integers ${\{f(v): v \in \{0,1\}^d\}}$ and HAPs corresponding to ${\{f(q): q \in \{0,1,\star\}^d\}}$, we have found a submatrix of the EDP matrix that looks exactly like our contingency tables matrix ${A}$. Thus the hereditary discrepancy of the EDP matrix is at least as large as that of this submatrix that we found.
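A small sketch of this encoding (the exact indexing into the primes is one arbitrary injective choice):

```python
import math
from sympy import prime  # prime(k) is the k-th prime; prime(1) == 2

def f_point(v):
    # v in {0,1}^d; dimension i (1-based) contributes the (2i + v_i)-th prime
    return math.prod(int(prime(2 * i + vi)) for i, vi in enumerate(v, start=1))

def f_query(q):
    # q in {0,1,'*'}^d; only the fixed coordinates contribute
    return math.prod(int(prime(2 * i + qi))
                     for i, qi in enumerate(q, start=1) if qi != '*')

# x lies in the subcube q iff f_query(q) divides f_point(x):
assert f_point((0, 1, 0)) % f_query((0, '*', 0)) == 0
```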
Lower bounds for private mechanisms for ${A}$ can be derived in many ways. For constant ${k}$, the results of Kasiviswanathan et al. referred to above suffice and it is likely that they can be pushed to get the ${2^{\Omega(d)}}$ lower bounds we are shooting for. A very different approach of extracting from weak random sources also implies such a lower bound. It is likely that from these, one could get a lower bound on ${\mathsf{herdisc}}$ of the kind we need.
However, given what we know about ${A}$, we can in fact remove the privacy scaffolding and get a simpler direct proof of the lower bound of ${2^{\Omega(d)}}$ on ${\mathsf{herdisc}(A)}$, and can write a proof of the lower bound without any mention of privacy. This implies that the hereditary discrepancy for the EDP is at least ${2^{\Omega(d)} = n^{\Omega(\frac{1}{\log\log n})}}$, which matches the upper bound up to a constant in the exponent. A brief note with the proof is here.
Of course the EDP itself is wide open. Head on here to help settle the conjecture.
Many thanks to Alex Nikolov for his contribution to the content and the writing of this post.
### 13 Comments
1. September 6, 2012 12:57 am
Great post. Reminds me of a talk Avi once gave titled something like “why it’s worthwhile sitting in talks in different fields.” I should add that sometimes fields don’t only borrow ideas but also borrow the researchers from other fields. So beware guys …
2. September 11, 2012 12:19 pm
Where did you define EDP?
• Kunal Talwar
September 11, 2012 3:31 pm
My bad. EDP is the Erdős discrepancy problem that we started the post with. Now fixed.
3. Gil Kalai
September 11, 2012 7:04 pm
Great! Just saw the post.
4. September 13, 2012 5:18 pm
Very nice post! I have always thought about constructing DP lower bounds as equivalent to showing the existence of large packings around the optimal value. For example, our sample complexity bounds for learning in [1] and statistics [2] are all based on this concept of packings. Do you know if there’s a formal relationship between the two concepts?
[1] Sample Complexity Bounds for Differentially Private Learning
Kamalika Chaudhuri and Daniel Hsu, COLT 2011
[2] Convergence Rates for Differentially Private Statistical Estimation
Kamalika Chaudhuri and Daniel Hsu, ICML 2012
• Kunal Talwar
September 14, 2012 5:56 pm
Thanks for the pointers.
For (eps,0)-differential privacy, pretty much all lower bounds I know of are based on packing arguments. For linear queries, Moritz and I showed [1] that this is tight up to a polylog factor. We can go a little further than packing-based lower bounds [2] by "adding up" packing-based lower bounds over orthogonal subspaces. It is a good question if packing-based LBs are nearly tight in some more general setting.
For (eps,delta)-differential privacy, the picture is quite different. Packing based lower bounds only apply when delta is really small, and do not apply for say polynomially small delta. In fact, you can sometimes get (eps,1/poly(n))-DP mechanisms with accuracy significantly better than the packing based lower bounds for (eps,0) [3].
So in most settings, for proving non-trivial lower bounds of (eps,delta), one must go beyond packings. Many of the lower bounds go via lower bounds for some variant of blatant non-privacy. As this post mentions, these are often lower bounds on discrepancy, or on the smallest eigenvalue of an associated matrix.
[1] Hardt and Talwar, The Geometry of Differential Privacy. http://arxiv.org/abs/0907.3754
[2] Bhaskara et al. Unconditional Differentially Private Mechanisms for linear queries. http://www.cs.princeton.edu/~bhaskara/files/privacy.pdf
[3] Anindya De, Lower bounds on Differential Privacy, http://arxiv.org/abs/1107.2183
• Adam Smith
September 19, 2012 12:59 pm
Another paper from the same period that uses the packing technique:
A. Beimel, S. P. Kasiviswanathan, and K. Nissim. Bounds on the Sample Complexity for Private Learning and Private Data Release. In the Seventh Theory of Cryptography Conference, 2010. http://www.cs.bgu.ac.il/~beimel/Papers/BKN.pdf
• Kunal Talwar
September 19, 2012 10:23 pm
The references [2] and [3] were flipped. Just fixed that.
Also, just to clarify, I did not mean to suggest that [1] introduced packing arguments. There are earlier papers with a packing argument, and certainly people in the community knew of it independently well before it appeared in print. What I find surprising is that in many settings, it is tight or nearly tight for (eps,0). Our failure to come up with more clever lower bounding techniques is, in many cases, not our fault.
5. Alec Edgington
October 7, 2012 6:46 am
The definition of EDP given above is not quite correct. The problem is to bound all partial sums of the function along HAPs, not just the sums that go all the way up to $n$.
(Indeed, the conjecture as stated is trivially false: one can take $C=1$ and define $f$ by $f(m) = +1$ if $m \leq n/2$ and $f(m) = -1$ if $m > n/2$.)
(The definition given in the paper is correct …)
• Kunal Talwar
October 7, 2012 4:19 pm
Thanks for the correction. Now fixed.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 133, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9288803339004517, "perplexity_flag": "head"}
|
http://mathlesstraveled.com/2009/12/12/irrationality-of-pi-the-unpossible-function/
|
Explorations in mathematical beauty
## Irrationality of pi: the unpossible function
Posted on December 12, 2009 by Brent
Recall from my last post what we are trying to accomplish: by assuming that $\pi$ is a rational number, we are going to define an unpossible function! So, without further ado:
Suppose $\pi = \frac{a}{b}$, where $a$ and $b$ are positive integers. Define the function $f$ like this:
$\displaystyle f(x) = \frac{x^n(a - bx)^n}{n!}.$
(In case you’ve forgotten, $n!$, pronounced “n factorial,” is the product of all the numbers from 1 to $n$.) “OK… but… what is $n$?” I hear you ask. Good question. The short answer is, it doesn’t matter: $n$ can be any positive integer. We will show a bunch of things that are true about $f$ no matter what $n$ is. Later, we will see that we get a contradiction only for values of $n$ which are “big enough.” But that’s OK; since everything we prove up to that point will be true no matter what $n$ is, we can pick a value of $n$ which is as big as we like.
Let’s explore some properties of $f(x)$. First, it’s easy to see that $f(0) = \frac{0^n a^n}{n!} = 0$. It’s not too hard to see that $f(\pi) = 0$ as well (remembering that $\pi = a/b$, of course, which means that $a-b\pi = a - a = 0$):
$\begin{array}{rcl} f(\pi) & = & \frac{\pi^n (a - b\pi)^n}{n!} \\ & = & \frac{\pi^n 0^n}{n!} = 0. \end{array}$
So $f(x)$ has zeros at $x = 0$ and $x = \pi$. But more is true: in fact, $f(x)$ is symmetric (a mirror reflection of itself) around the line $x = \pi/2$. That is,
$f(x) = f(\pi - x) = f(a/b - x).$
Let’s prove this:
$\begin{array}{rcl} f(a/b - x) & = & \frac{(a/b - x)^n(a - b(a/b - x))^n}{n!} \\ & = & \frac{(a/b - x)^n(a - a + bx)^n}{n!} \\ & = & \frac{(a/b - x)^n b^n x^n}{n!} \\ & = & \frac{(a - bx)^n x^n}{n!} = f(x). \end{array}$
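If you'd like a machine check of these identities, here is a quick sympy verification, with a concrete $n$ fixed so that the algebra stays polynomial:

```python
from sympy import symbols, factorial, simplify

x, a, b = symbols('x a b', positive=True)
n = 3  # any fixed positive integer works for this check
f = x**n * (a - b*x)**n / factorial(n)

print(f.subs(x, 0))                      # 0
print(simplify(f.subs(x, a/b)))          # 0
print(simplify(f.subs(x, a/b - x) - f))  # 0, i.e. f(x) = f(a/b - x)
```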
“I don’t see what’s so unpossible about $f$ so far,” you say? Patience! (Of course, it isn’t really $f$ itself which is the problem; the problem is our insistence that $f$ is actually defined in terms of the “numerator” and “denominator” of $\pi$…)
Next time, we’ll see that the derivatives of $f$ also have some special properties.
### 7 Responses to Irrationality of pi: the unpossible function
1. Jack says:
I like this very-short style actually, it is quite nice just to read little bits. I hope you’ll be doing this with other interesting proofs too.
2. Brent says:
Jack: I’d love to do it with other interesting proofs too. If you find any interesting ones you’d like to see explained, feel free to send me a link!
3. Dave says:
I’m with you so far! I would like to have been a fly on the wall when Niven came up with the f(x) function. I took a look at the original paper and just as I suspected, the function seems to have arrived out of thin air. And so we can observe it and dissect it, but I can only imagine what his process was like in coming up with that function. Do you have any insight on that process?
4. Brent says:
Dave: I know exactly what you mean! This is a problem I have with the way math papers are often written in general: they present only the finished results and throw away everything used to get there in the first place—which is often the most interesting part! As for this paper in particular, I’m starting to get a bit of intuition as to the process Niven might have used to arrive at f(x); hopefully my intuition will continue to grow as I write more, and I’ll be able to give some insight later. But it will have to wait a few posts, since you have to see where we’re going with this function before you can appreciate why it is what it is.
5. Pingback: Irrationality of pi: derivatives of f « The Math Less Traveled
6. Pingback: Irrationality of pi: curiouser and curiouser « The Math Less Traveled
7. Pingback: Irrationality of pi: derivatives of f | The Math Less Traveled
Comments are closed.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484502673149109, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/48622/has-the-mathematical-content-of-grothendiecks-recoltes-et-semailles-been-used/48946
|
## Has the mathematical content of Grothendieck’s “Récoltes et Semailles” been used?
This question is partly motivated by this one.
## Motivation
Grothendieck's "Récoltes et Semailles" has been cited on various occasions on this forum. See for instance the answers to this question or this one. However, these citations reflect only one aspect of "Récoltes et Semailles", namely the nontechnical reflexion about Mathematics and mathematical activity. Putting aside the wonderful "Clef du Yin et du Yang", whis is a great reading almost unrelated to Mathematics, I remember reading in "Récoltes et Semailles" a bunch of technical mathematical reflexions, almost all of which were above my head due to my having but a smattering of algebraic geometry. However, I recall for instance reading Grothendieck's opinion that standard conjectures were false, and claiming he had in mind a few related conjectures (which he doesn't state precisely) which might turn out to be the right ones. I still don't even know what the standard conjectures state and thus didn't understand anything, but I know many people are working hard to prove these conjectures. I've thus often wondered what was the value of Grothendieck's mathematical statements (which are not limited to standard conjectures) in "Récoltes et Semailles".
The questions I'd like to ask here are the following:
Have the mathematical parts of "Récoltes et Semailles" proved influential? If so, is there any written evidence of it, or any account of the development of the mathematical ideas that Grothendieck has expressed in this text? If the answer to the first question is negative, what are the difficulties involved in implementing Grothendieck's ideas?
## Idle thoughts
In the latter case, I could come up with some possible explanations:
1. Those who could have developed and spread these ideas didn't read "Récoltes et Semailles" seriously and thus nobody was aware of their existence.
2. Those people took the mathematical content seriously but it was beyond anyone's reach to understand what Grothendieck was trying to get at because of the idiosyncratic writing style.
Should one of these two suppositions be backed by evidence, I'd appreciate a factual answer.
3. The ideas were already outdated or have been proven wrong.
If this is the case, I'd appreciate a reference.
## Epanorthosis
Given that "Pursuing Stacks" and "Les Dérivateurs" were written approximately in 1983 and 1990 respectively and have proved influential (see Maltsiniotis's page for the latter text, somewhat less known), I would be surprised should Grothendieck's mathematical ideas expressed around 1985 be worthless.
4. Récoltes et Semailles is written in a French far beyond the reach of an average mathematician, who may be able to understand a proof but not a heuristic treatise with philosophical and spiritual undercurrents. – darij grinberg Dec 8 2010 at 10:52
I am still not sure I interpret you question correctly, so I will refrain from editing it myself. But a few more changes possible: (a) the word "influential" is, to a certain extent, subjective. For a specialist in PDEs, certainly the answer is a resounding no. If you are seeking reference request as Thierry interprets, then it would be much better to use a neutral word like "implemented" or "used". That way the question becomes purely factual. (b) If it is a reference request, please tag it as such. – Willie Wong Dec 8 2010 at 12:53
In view of your comment, I'd add (c) a better way to ask your third question is "If the answer to the first is negative, what are the mathematical difficulties involved in implementing G's ideas?" That way the question is targeted and no longer open to Idle Speculation. (You may also want to remove or rephrase that section in your question.) – Willie Wong Dec 8 2010 at 12:56
Perhaps a natural pre-question is "What are the mathematical ideas in R&S that do not appear elsewhere in Grothendieck's work?" – jc Dec 9 2010 at 14:43
Dear jc: To answer this question, one would have to have read not only "Pursuing Stacks" and "Les Dérivateurs", but also the letters Grothendieck have sent to various mathematicians and which may have influenced them. Therefore I thought I had better ask the question the way I did, but sure enough I wish someone could answer yours. – Jonathan Chiche Dec 9 2010 at 21:19
## 2 Answers
Begging your pardon for indulging in some personal history (perhaps personal propaganda), I will explain how I ended up applying Récoltes et Semailles. I do apologize in advance for interpreting the question in such a self-centered fashion!
I didn't come anywhere near to reading the whole thing, but I did spend many hours dipping into various portions while I was a graduate student. Serge Lang had put his copy into the mathematics library at Yale, a very cozy place then for hiding among the shelves and getting lost in thoughts or words. Even the bits I read of course were hard going. However, one thing was clear even to my superficial understanding: Grothendieck, at that point, was dissatisfied with motives. Even though I wasn't knowledgeable enough to have an opinion about the social commentary in the book, I did wonder quite a bit if some of the discontent could have a purely mathematical source.
A clue came shortly afterwards, when I heard from Faltings Grothendieck's ideas on anabelian geometry. I still recall my initial reaction to the section conjecture: 'Surely there are more splittings than points!', to which Faltings replied with a characteristically brief question: 'Why?' Now I don't remember if it's in R&S as well, but I did read somewhere or hear from someone that Grothendieck had been somewhat pleased that the proof of the Mordell conjecture came from outside of the French school. Again, I have no opinion about the social aspect of such a sentiment (assuming the story true), but it is interesting to speculate on the mathematical context.
There were in Orsay and Paris some tremendously powerful people in arithmetic geometry. Szpiro, meanwhile, had a very lively interest in the Mordell conjecture, as you can see from his writings and seminars in the late 70's and early 80's. But somehow, the whole thing didn't come together. One suspects that the habits of the Grothendieck school, whereby the six operations had to be established first in every situation where a problem seemed worth solving, could be enormously helpful in some situations, and limiting in some others. In fact, my impression is that Grothendieck's discussion of the operations in R&S has an ironical tinge. [This could well be a misunderstanding due to faulty French or faulty memory.] Years later, I had an informative conversation with Jim McClure at Purdue on the demise of sheaf theory in topology. [The situation has changed since then.] But already in the 80's, I did come to realize that the motivic machinery didn't fit in very well with homotopy theory.
To summarize, I'm suggesting that the mathematical content of Grothendieck's strong objection to motives was inextricably linked with his ideas on homotopy theory as appeared in 'Pursuing Stacks' and the anabelian letter to Faltings, and catalyzed by his realization that the motivic philosophy had been of limited use (maybe even a bit of an obstruction) in the proof of the Mordell conjecture. More precisely, motives were inadequate for the study of points (the most basic maps between schemes!) in any non-abelian setting, but Faltings' pragmatic approach using all kinds of Archimedean techniques may not have been quite Grothendieck's style either. Hence, arithmetic homotopy theory.
Correct or not, this overall impression was what I came away with from the reading of R&S and my conversations with Faltings, and it became quite natural to start thinking about a workable approach to Diophantine geometry that used homotopy groups. Since I'm rather afraid of extremes, it was pleasant to find out eventually that one had to go back and find some middle ground between the anabelian and motivic philosophies to get definite results.
This is perhaps mostly a story about inspiration and inference, but I can't help feeling like I did apply R&S in some small way. (For a bit of an update, see my paper with Coates here.)
Added, 14 December: I've thought about this question on and off since posting, and now I'm quite curious about the bit of R&S I was referring to, but I no longer have access to the book. So I wonder if someone knowledgeable could be troubled to give a brief summary of what it is Grothendieck really says there about the six operations. I do remember there was a lot, and this is a question of mathematical interest.
Thanks for the story and the links! I too sense an ironic undertone in the R&S parts on the six operations. Would it be ok to say that the motive of anabelian studies is to see how far "arithmetic" and "topology" coincide? In view of the speculations on "fractional motives", which pop up sometimes as maybe-spaces with non-integer dimensions: could something anabelian/homotopic connect with that? – Thomas Riepe Dec 10 2010 at 17:30
I can't say I understand much of this, but I really enjoyed it. Thanks for posting it. – Deane Yang Dec 10 2010 at 22:34
Deane: I'm glad you enjoyed it. Happy holidays and all that to you and your family. Thomas: I don't know much about fractional motives. I wouldn't quite put matters the way you did, but I have no real objection to that formulation either. – Minhyong Kim Dec 11 2010 at 12:19
Dear Minhyong: I don't know much of the underlying Mathematics either but I enjoyed it too, thanks a lot for sharing your experience. Now I've got two answers quite different in nature and in tone and since I'm equally pleased with both of them perhaps I should just let them benefit the community and don't decide to officially accept one of them. – Jonathan Chiche Dec 11 2010 at 14:18
I honestly liked it! Mathematics is not just about the problems, it is also about the (eventual) solutions, and the ones who solve them of course! And the hierarchy goes on... solutions lead to more problems... until such time that all complexities are simplified... – Jose Arnaldo Dris Dec 11 2010 at 14:39
I would dare to say that yes, R&S has proved influential in the mathematical sense. At least it made Grothendieck's "Esquisse d'un programme" more visible, and it is clear that the topics there, like anabelian geometry or the new foundations for homotopical algebra, have been two avenues of research of great interest recently. As for "tame topology", my impression is that the topic has not taken off, but I may be wrong about this. [Edit: I am wrong: see Thierry Zell's comment after this.]
Also it is clear that motives have won a renewed interest since the 90's and the importance of his visions about this (though perhaps not specific details) is amply explained in R&S.
On another front he has expressed the interest in D-modules as a central topic in the cohomology of algebraic varieties together with the philosophy of "six operations" and "cohomological coefficients" that has produced a lot of results and extensions, including, for instance, $p$-adic and logarithmic versions.
A topic that perhaps has not been so intensely pursued is his point of view on the cohomology of singular spaces. According to R&S, there should be a theory of crystals and a theory of co-crystals over any (reasonable) scheme. With smoothness assumptions (over a regular base, say) they should agree (a sort of "Poincaré duality"), but in the general case there should be a relationship (related to the nature of the singularity). These ideas are presented in a series of footnotes in the 4th part of R&S.
It seems to me that this line of research has not been pursued, mainly for two reasons. Grothendieck himself expressed the possibility of using resolution of singularities and simplicial techniques (or variants) to study the cohomology of a singular variety, reducing it to its resolution and resolutions of certain open subsets. This was accomplished successfully by Deligne in his "Théorie de Hodge". However, the lack of advances in the characteristic $p$ case gives sense to the R&S approach, but it seems that mathematicians have other priorities. On the other hand, the big panoply of new objects (algebraic spaces, stacks, derived algebro-geometric objects) has possibly drained people from working on these questions.
Another topic from R&S that has not been addressed is: What is the correct definition of D-module (or crystal) over $\mathrm{Spec}(\mathbb{Z})$? I have no doubt that this is a really hard question to tackle. The advances so far have been small and have used a great deal of machinery; I am thinking of the various generalizations of De Rham-Witt theory to mixed characteristic situations.
"As for "tame topology" my impression is that he topic has not taken off". Some of the inspirations for the theory of o-minimal structures can be traced back to Grothendieck's ideas; early trailblazers like Lou van den Dries and Angus MacIntyre are on the record about this. Now, is the theory developing the way Grothendieck envisioned, that's another story, but the topic has definitely taken off. – Thierry Zell Dec 9 2010 at 14:03
Thanks Thierry for correcting my misconception. It would be interesting to know Grothendieck's ideas in greater detail. No doubt the emphasis would be on scheme-like categorical constructions and "dévissage"-style properties. Are you familiar with the work of his students from Montpellier, such as Ladegallerie, Magloire and others? – Leo Alonso Dec 9 2010 at 15:20
Leo: I am not familiar with the Montpellier people. To expand on my last comment, the theory of o-minimal structures has been very successful, especially in the constructions of new and unexpectedly tame structures (and applications). But the transfer results between tame theories that are sketched in "Esquisse" are not really there yet, and I'm not sure if it's because it's too early or because it's not really what researchers in the field are after. (Moreover, there are some negative results, since there is no largest tame category available, but even partial results don't seem here yet.) – Thierry Zell Dec 9 2010 at 17:18
Dear Leo: Thank you very much for this informative answer, which makes clear that some mathematicians have studied the mathematical part in R&S. Given that there are some texts developing these ideas, do you know if any of them explicitly mentions R&S as a mathematical source? As regards the development of homotopical algebra, I'd rather incline to think that it stems from "Pursuing Stacks" and "Les Dérivateurs" rather than R&S, since I've never come across R&S as a reference in texts developing homotopical algebra à la Grothendieck. But I've come across a very small portion of them only. – Jonathan Chiche Dec 9 2010 at 21:38
Jonathan: There are a few transcriptions of parts of R&S at the beginning of some papers about motives, like a leiv-motiv. I vaguely recall also to have spotted R&S in some bibliographies but I can't recall any example now. The controversial aspect of Grothendieck's opinions has had the effect of making the math in R&S less visible and people less inclined to cite it in their publications. – Leo Alonso Dec 10 2010 at 10:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9691567420959473, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Group_velocity
|
Group velocity
[Animation: Frequency dispersion in groups of gravity waves on the surface of deep water. The red dot moves with the phase velocity, and the green dots propagate with the group velocity. In this deep-water case, the phase velocity is twice the group velocity: the red dot overtakes two green dots when moving from the left to the right of the figure. New waves seem to emerge at the back of a wave group, grow in amplitude until they are at the center of the group, and vanish at the wave group front. For surface gravity waves, the water particle velocities are in most cases much smaller than the phase velocity.]
[Animation: A wave with the group velocity and phase velocity going in different directions: the group velocity is positive, while the phase velocity is negative.]
The group velocity of a wave is the velocity with which the overall shape of the waves' amplitudes — known as the modulation or envelope of the wave — propagates through space.
For example, imagine what happens if a stone is thrown into the middle of a very still pond. When the stone hits the surface of the water, a circular pattern of waves appears. It soon turns into a circular ring of waves with a quiescent center. The ever expanding ring of waves is the wave group, within which one can discern individual wavelets of differing wavelengths traveling at different speeds. The longer waves travel faster than the group as a whole, but they die out as they approach the leading edge. The shorter waves travel more slowly and they die out as they emerge from the trailing boundary of the group.
Definition and interpretation
Definition
[Figure: Solid line: a wave packet. Dashed line: the envelope of the wave packet. The envelope moves at the group velocity.]
The group velocity vg is defined by the equation:[1][2][3][4]
$v_g \ \equiv\ \frac{\partial \omega}{\partial k}\,$
where ω is the wave's angular frequency (usually expressed in radians per second), and k is the angular wavenumber (usually expressed in radians per meter).
The function ω(k), which gives ω as a function of k, is known as the dispersion relation. If ω is directly proportional to k, then the group velocity is exactly equal to the phase velocity. Otherwise, the envelope of the wave will become distorted as it propagates. This "group velocity dispersion" is an important effect in the propagation of signals through optical fibers and in the design of high-power, short-pulse lasers.
Note: The above definition of group velocity is only useful for wavepackets, that is, pulses that are localized in both real space and frequency space. Because waves at different frequencies propagate at differing phase velocities in dispersive media, a pulse with a large frequency range (a narrow envelope in space) changes shape while traveling, making group velocity an unclear or useless quantity.
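As a concrete example: for deep-water gravity waves the dispersion relation is $\omega(k) = \sqrt{gk}$, so $v_g = \tfrac{1}{2}\sqrt{g/k} = v_p/2$, the factor of two seen in the first animation above. A quick numerical sketch:

```python
import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2

def omega(k):
    return np.sqrt(g * k)     # deep-water dispersion relation

k = 2 * np.pi / 100.0         # wavenumber of a 100 m wave
v_p = omega(k) / k            # phase velocity
dk = 1e-8
v_g = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # numerical domega/dk

print(v_p / v_g)              # ~2.0: the phase speed is twice the group speed
```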
Derivation
One derivation of the formula for group velocity is as follows.[5][6]
Consider a wave packet as a function of position x and time t: α(x,t). Let A(k) be its Fourier transform at time t=0:
$\alpha(x,0)= \int_{-\infty}^\infty dk \, A(k) e^{ikx},$
By the superposition principle, the wavepacket at any time t is:
$\alpha(x,t)= \int_{-\infty}^\infty dk \, A(k) e^{i(kx-\omega t)},$
where ω is implicitly a function of k. We assume that the wave packet α is almost monochromatic, so that A(k) is nonzero only in the vicinity of a central wavenumber k0. Then, linearization gives:
$\omega(k) \approx \omega_0 + (k-k_0)\omega'_0$
where $\omega_0=\omega(k_0)$ and $\omega'_0=\frac{\partial \omega(k)}{\partial k} |_{k=k_0}$. Then, after some algebra,
$\alpha(x,t)= e^{it(\omega'_0 k_0-\omega_0)}\int_{-\infty}^\infty dk \, A(k) e^{ik(x-\omega'_0 t)}.$
The factor in front of the integral has absolute value 1. Therefore,
$|\alpha(x,t)| = |\alpha(x-\omega'_0 t, 0)|, \,$
i.e. the envelope of the wavepacket travels at velocity $\omega'_0=(d\omega/dk)_{k=k_0}$. This explains the group velocity formula.
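This conclusion is easy to check numerically: synthesize a narrow-band packet from a Gaussian spectrum $A(k)$, evolve each mode by its phase $e^{-i\omega(k)t}$, and compare the measured envelope speed with $\omega'_0$. A sketch with an illustrative dispersion relation:

```python
import numpy as np

def omega(k):                        # illustrative dispersion relation
    return np.sqrt(k)

x = np.linspace(-50.0, 350.0, 4096)
k = np.linspace(0.5, 1.5, 512)       # narrow band around k0 = 1
k0 = 1.0
A = np.exp(-((k - k0) / 0.05) ** 2)  # Gaussian spectrum A(k)

def packet(t):
    # alpha(x, t) = sum over k of A(k) exp(i(k x - omega(k) t))
    return np.exp(1j * (np.outer(x, k) - omega(k) * t)) @ A

t = 200.0
peak = x[np.argmax(np.abs(packet(t)))]
print(peak / t, 1 / (2 * np.sqrt(k0)))  # measured speed vs omega'(k0) = 0.5
```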
Higher order terms in dispersion
[Figure: Distortion of wave groups by higher-order dispersion effects, for surface gravity waves on deep water (with vg = ½vp). Shown is the superposition of three wave components – with respectively 22, 25 and 29 wavelengths fitting in a periodic horizontal domain of 2 km length – whose amplitudes are respectively 1, 2 and 1 metre.]
Part of the previous derivation is the assumption:
$\omega(k) \approx \omega_0 + (k-k_0)\omega'_0$
If the wavepacket has a relatively large frequency spread, or if the dispersion $\omega(k)$ has sharp variations (such as due to a resonance), or if the packet travels over very long distances, this assumption is not valid. As a result, the envelope of the wave packet not only moves, but also distorts. Loosely speaking, different frequency-components of the wavepacket travel at different speeds, with the faster components moving towards the front of the wavepacket and the slower moving towards the back. Eventually, the wave packet gets stretched out.
The next-higher term in the Taylor series (related to the second derivative of $\omega(k)$) is called group velocity dispersion.
Physical interpretation
The group velocity is often thought of as the velocity at which energy or information is conveyed along a wave. In most cases this is accurate, and the group velocity can be thought of as the signal velocity of the waveform. However, if the wave is travelling through an absorptive medium, this does not always hold. Since the 1980s, various experiments have verified that it is possible for the group velocity of laser light pulses sent through specially prepared materials to significantly exceed the speed of light in vacuum. However, superluminal communication is not possible in this case, since the signal velocity remains less than the speed of light. It is also possible to reduce the group velocity to zero, stopping the pulse, or have negative group velocity, making the pulse appear to propagate backwards. However, in all these cases, photons continue to propagate at the expected speed of light in the medium.[7][8][9][10]
Anomalous dispersion happens in areas of rapid spectral variation with respect to the refractive index. Therefore, negative values of the group velocity will occur in these areas. Anomalous dispersion plays a fundamental role in achieving backward propagating and superluminal light. Anomalous dispersion can also be used to produce group and phase velocities that are in different directions.[8] Materials that exhibit large anomalous dispersion allow the group velocity of the light to exceed c and/or become negative.[10][11]
History
The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877.[12]
Other expressions
For light, the refractive index n, vacuum wavelength λ0, and wavelength in the medium λ, are related by
$\lambda_0=\frac{2\pi c}{\omega}, \;\; \lambda = \frac{2\pi}{k} = \frac{2\pi v_p}{\omega}, \;\; n=\frac{c}{v_p}=\frac{\lambda_0}{\lambda},$
with vp = ω/k the phase velocity.
The group velocity, therefore, satisfies:
$v_g = \frac{c}{n + \omega \frac{\partial n}{\partial \omega}} = \frac{c}{n - \lambda_0 \frac{\partial n}{\partial \lambda_0}} = v_p \left( 1+\frac{\lambda}{n} \frac{\partial n}{\partial \lambda} \right) = v_p - \lambda \frac{\partial v_p}{\partial \lambda} = v_p + k \frac{\partial v_p}{\partial k}.$
In three dimensions
See also: Plane wave
For waves traveling through three dimensions, such as light waves, sound waves, and matter waves, the formulas for phase and group velocity are generalized in a straightforward way:[13]
One dimension: $v_p = \omega/k, \quad v_g = \frac{\partial \omega}{\partial k}, \,$
Three dimensions: $\mathbf{v}_p = \hat{\mathbf{k}} \frac{\omega}{|\mathbf{k}|}, \quad \mathbf{v}_g = \vec{\nabla}_{\mathbf{k}} \, \omega \,$
where $\vec{\nabla}_{\mathbf{k}} \, \omega$ means the gradient of the angular frequency $\omega$ as a function of the wave vector $\mathbf{k}$, and $\hat{\mathbf{k}}$ is the unit vector in direction k.
If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions.
Matter-wave group velocity
See also: Matter wave
Albert Einstein first explained the wave–particle duality of light in 1905. Louis de Broglie hypothesized that any particle should also exhibit such a duality. The velocity of a particle, he concluded (a conclusion that may be questioned today; see above), should always equal the group velocity of the corresponding wave. De Broglie deduced that if the duality equations already known for light were the same for any particle, then his hypothesis would hold. This means that
$v_g = \frac{\partial \omega}{\partial k} = \frac{\partial (E/\hbar)}{\partial (p/\hbar)} = \frac{\partial E}{\partial p}$
where E is the total energy of the particle, p is its momentum, ħ is the reduced Planck constant. For a free non-relativistic particle it follows that
$\begin{align} v_g &= \frac{\partial E}{\partial p} = \frac{\partial}{\partial p} \left( \frac{1}{2}\frac{p^2}{m} \right),\\ &= \frac{p}{m},\\ &= v. \end{align}$
where $m$ is the mass of the particle and v its velocity.
Also in special relativity we find that
$\begin{align} v_g &= \frac{\partial E}{\partial p} = \frac{\partial}{\partial p} \left( \sqrt{p^2c^2+m^2c^4} \right),\\ &= \frac{pc^2}{\sqrt{p^2c^2 + m^2c^4}},\\ &= \frac{p}{m\sqrt{\left(\frac{p}{mc}\right)^2+1}},\\ &= \frac{p}{m\gamma},\\ &= \frac{mv\gamma}{m\gamma},\\ &= v. \end{align}$
where m is the mass of the particle, c is the speed of light in a vacuum, $\gamma$ is the Lorentz factor, and v is the velocity of the particle regardless of wave behavior.
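The relativistic computation can be verified symbolically; a short sympy sketch:

```python
from sympy import symbols, sqrt, diff, simplify

p, m, c = symbols('p m c', positive=True)
E = sqrt(p**2 * c**2 + m**2 * c**4)

v_g = diff(E, p)                          # group velocity dE/dp
v = p / (m * sqrt((p / (m * c))**2 + 1))  # particle velocity p/(m*gamma)
print(simplify(v_g - v))                  # 0: the two expressions agree
```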
Group velocity (equal to an electron's speed) should not be confused with phase velocity (equal to the product of the electron's frequency multiplied by its wavelength).
Both in relativistic and non-relativistic quantum physics, we can identify the group velocity of a particle's wave function with the particle velocity. This identification has been verified to high accuracy, and the relation has been demonstrated explicitly for particles as large as molecules.
See also
• Wave propagation
• Dispersion (optics) for a full discussion of wave velocities
• Phase velocity
• Front velocity
• Group delay and phase delay
• Signal velocity
• Slow light
• Wave propagation speed
• Defining equation (physics)
References
Notes
1. Brillouin, Léon (2003) [1946], Wave Propagation in Periodic Structures: Electric Filters and Crystal Lattices, Dover, p. 75, ISBN 978-0-486-49556-9
2. Lighthill, James (2001) [1978], Waves in fluids, Cambridge University Press, p. 242, ISBN 978-0-521-01045-0
3. Griffiths, David J. (1995). Introduction to Quantum Mechanics. Prentice Hall. p. 48.
4. David K. Ferry (2001). Quantum Mechanics: An Introduction for Device Physicists and Electrical Engineers (2nd ed.). CRC Press. pp. 18–19. ISBN 978-0-7503-0725-3.
5. Gehring, George M.; Schweinsberg, Aaron; Barsi, Christopher; Kostinski, Natalie; Boyd, Robert W. (2006), "Observation of a Backward Pulse Propagation Through a Medium with a Negative Group Velocity", Science 312 (5775): 895–897, Bibcode:2006Sci...312..895G, doi:10.1126/science.1124524, PMID 16690861
6. Dolling, Gunnar; Enkrich, Christian; Wegener, Martin; Soukoulis, Costas M.; Linden, Stefan (2006), "Simultaneous Negative Phase and Group Velocity of Light in a Metamaterial", Science 312 (5775): 892–894, Bibcode:2006Sci...312..892D, doi:10.1126/science.1126021, PMID 16690860
7. Schweinsberg, A.; Lepeshkin, N. N.; Bigelow, M.S.; Boyd, R. W.; Jarabo, S. (2005), "Observation of superluminal and slow light propagation in erbium-doped optical fiber", Europhysics Letters 73 (2): 218–224, Bibcode:2006EL.....73..218S, doi:10.1209/epl/i2005-10371-0
8. Bigelow, Matthew S.; Lepeshkin, Nick N.; Shin, Heedeuk; Boyd, Robert W. (2006), "Propagation of smooth and discontinuous pulses through materials with very large or very small group velocities", Journal of Physics: Condensed Matter 18 (11): 3117–3126, Bibcode:2006JPCM...18.3117B, doi:10.1088/0953-8984/18/11/017
9. Withayachumnankul, W.; Fischer, B. M.; Ferguson, B.; Davis, B. R.; Abbott, D. (2010), "A Systemized View of Superluminal Wave Propagation", Proceedings of the IEEE 98 (10): 1775–1786, doi:10.1109/JPROC.2010.2052910
10. Brillouin, Léon (1960), Wave Propagation and Group Velocity, New York: Academic Press Inc., OCLC 537250
Further reading
• Tipler, Paul A. (2003), Modern Physics (4th ed.), New York: W. H. Freeman and Company, ISBN 0-7167-4345-0, 223 p.
• Biot, M. A. (1957), "General theorems on the equivalence of group velocity and energy transport", Physical Review 105 (4): 1129–1137, Bibcode:1957PhRv..105.1129B, doi:10.1103/PhysRev.105.1129
• Whitham, G. B. (1961), "Group velocity and energy propagation for three-dimensional waves", Communications on Pure and Applied Mathematics 14 (3): 675–691, doi:10.1002/cpa.3160140337
• Lighthill, M. J. (1965), "Group velocity", IMA Journal of Applied Mathematics 1 (1): 1–28, doi:10.1093/imamat/1.1.1
• Bretherton, F. P.; Garrett, C. J. R. (1968), "Wavetrains in inhomogeneous moving media", Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences 302 (1471): 529–554, Bibcode:1968RSPSA.302..529B, doi:10.1098/rspa.1968.0034
• Hayes, W. D. (1973), "Group velocity and nonlinear dispersive wave propagation", Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences 332 (1589): 199–221, Bibcode:1973RSPSA.332..199H, doi:10.1098/rspa.1973.0021
• Whitham, G. B. (1974), Linear and nonlinear waves, Wiley, ISBN 0471940909
http://mathhelpforum.com/differential-geometry/145711-completeness-vector-space-print.html
# Completeness of Vector Space
• May 20th 2010, 08:22 AM
ejgmath
Completeness of Vector Space
Let $H$ be the space of sequences $(x(1),x(2),x(3),...)$ such that $\sum^{\infty}_{j=1}\frac{|x(j)|^2}{j^2}<\infty$, with inner product $\langle x,y\rangle=\sum^{\infty}_{j=1}\frac{x(j)\overline{y(j)}}{j^2}$ and norm $\|x\|=\langle x,x\rangle^{1/2}$.
I need to show that $H$ is complete with respect to the inner product i.e. a Hilbert Space. I am assuming that $\mathbb{C}$ is complete.
I know that I need to show that every Cauchy sequence in H is convergent. I can see that this follows relatively easily from the definition of H.
So I have taken a sequence $x_{n}$; to show each coordinate sequence is Cauchy I need $|x_{n}(j)-x_{m}(j)|<\epsilon$. But I can't see how to show that, given that the inner product on $H$ has a $j^2$ in the denominator. Any help would be great. Thanks
• May 20th 2010, 12:46 PM
Focus
Quote:
Originally Posted by ejgmath
Let $H$ be the space of sequences $(x(1),x(2),x(3),...)$ such that $\sum^{\infty}_{j=1}\frac{|x(j)|^2}{j^2}<\infty$, with inner product $\langle x,y\rangle=\sum^{\infty}_{j=1}\frac{x(j)\overline{y(j)}}{j^2}$ and norm $\|x\|=\langle x,x\rangle^{1/2}$.
I need to show that $H$ is complete with respect to the inner product i.e. a Hilbert Space. I am assuming that $\mathbb{C}$ is complete.
I know that I need to show that every Cauchy sequence in H is convergent. I can see that this follows relatively easily from the definition of H.
So I have taken a sequence $x_{n}$; to show each coordinate sequence is Cauchy I need $|x_{n}(j)-x_{m}(j)|<\epsilon$. But I can't see how to show that, given that the inner product on $H$ has a $j^2$ in the denominator. Any help would be great. Thanks
You can just get rid of the j. Suppose that $x_n$ is Cauchy, then
$\sum^{\infty}_{j=1}\frac{|x_n(j)-x_m(j)|^2}{j^2} = \|x_n-x_m\|^2 \rightarrow 0.$
In particular for each j
$\frac{|x_n(j)-x_m(j)|^2}{j^2} \leq \|x_n-x_m\|^2 \rightarrow 0,$
which can only happen if the numerator on the left-hand side goes to zero (I am taking $m,n$ to infinity).
• May 20th 2010, 06:15 PM
ejgmath
Thanks for the reply. Okay, so I was already assuming that $x_n$ was Cauchy; the bit I was confused about was the inequality
$\frac{|x_n(j)-x_m(j)|}{j}\leq\|x_n-x_m\|<\epsilon,$
which is needed to show that $x_n(j)$ is also Cauchy, for each $j$.
My question is: If you have the $\frac{1}{j}$ in the inequality, does that still satisfy the definition of Cauchy?
• May 20th 2010, 06:27 PM
Focus
Quote:
Originally Posted by ejgmath
Thanks for the reply. Okay, so I was already assuming that $x_n$ was Cauchy; the bit I was confused about was the inequality
$\frac{|x_n(j)-x_m(j)|}{j}\leq\|x_n-x_m\|<\epsilon,$
which is needed to show that $x_n(j)$ is also Cauchy, for each $j$.
My question is: If you have the $\frac{1}{j}$ in the inequality, does that still satisfy the definition of Cauchy?
Why would it? You are considering j fixed, so if you really are worried about it just pick $\frac{\epsilon}{j}$.
A formal way would be to say, fix j, and let epsilon be greater than zero, then there exists an N such that for all n,m>N;
$\frac{|x_n(j)-x_m(j)|^2}{j^2} \leq \|x_n-x_m\|^2 < \frac{\epsilon^2}{j^2}$
Thus for n,m>N
$|x_n(j)-x_m(j)| < \epsilon,$
i.e. $x_n(j)$ is Cauchy.
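To finish the completeness proof from here (a standard sketch, not spelled out in the thread): since $\mathbb{C}$ is complete, $x(j):=\lim_{n\to\infty}x_n(j)$ exists for every $j$. Given $\epsilon>0$, pick $N$ with $\|x_n-x_m\|<\epsilon$ for all $n,m>N$. Then for every finite $J$,
$\sum_{j=1}^{J}\frac{|x_n(j)-x_m(j)|^2}{j^2}<\epsilon^2,$
and letting $m\to\infty$ in this finite sum gives $\sum_{j=1}^{J}\frac{|x_n(j)-x(j)|^2}{j^2}\leq\epsilon^2$. As $J$ is arbitrary, $\|x_n-x\|\leq\epsilon$, so $x_n-x\in H$, hence $x=x_n-(x_n-x)\in H$ and $x_n\to x$ in $H$. Therefore $H$ is complete.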
http://stats.stackexchange.com/questions/31088/are-inconsistent-estimators-ever-preferable/31095
# Are inconsistent estimators ever preferable?
Consistency is obviously a natural and important property of estimators, but are there situations where it may be better to use an inconsistent estimator rather than a consistent one?
More specifically, are there examples of an inconsistent estimator which outperforms a reasonable consistent estimator for all finite $n$ (with respect to some suitable loss function)?
There is an interesting tradeoff in performance between consistency of model selection and parameter consistency in estimation problems using the lasso and its (many!) variants. This is detailed, e.g., in Bühlmann and van der Geer's recent text. – cardinal Jun 25 '12 at 18:46
Thanks @cardinal, I'll check that text out! – MånsT Jun 25 '12 at 18:48
Wouldn't the argument in my, now deleted, answer still hold? Namely: in small samples it is better to have an unbiased estimator with low variance. Or can one show that a consistent estimator always has lower variance than any other unbiased estimator? – Bob Jansen Jun 25 '12 at 18:53
Perhaps, @Bootvis! Do you have an example of an inconsistent estimator with low MSE? – MånsT Jun 25 '12 at 18:56
@Bootvis: If you happen to look at the extensive comments on an answer to a recent question asking about consistency vs. unbiasedness, you will see that a consistent estimator can have arbitrarily wild behavior of both the variance and bias (even, simultaneously!). That should remove all doubt regarding your comment. – cardinal Jun 25 '12 at 18:58
## 1 Answer
This answer describes a realistic problem where a natural consistent estimator is dominated (outperformed for all possible parameter values for all sample sizes) by an inconsistent estimator. It is motivated by the idea that consistency is best suited for quadratic losses, so using a loss departing strongly from that (such as an asymmetric loss) should render consistency almost useless in evaluating the performance of estimators.
Suppose your client wishes to estimate the mean of a variable (assumed to have a symmetric distribution) from an iid sample $(x_1, \ldots, x_n)$, but they are averse to either (a) underestimating it or (b) grossly overestimating it.
To see how this might work out, let us adopt a simple loss function, understanding that in practice the loss might differ from this one quantitatively (but not qualitatively). Choose units of measurement so that $1$ is the largest tolerable overestimate and set the loss of an estimate $t$ when the true mean is $\mu$ to equal $0$ whenever $\mu \le t\le \mu+1$ and equal to $1$ otherwise.
The calculations are particularly simple for a Normal family of distributions with mean $\mu$ and variance $\sigma^2 \gt 0$, for then the sample mean $\bar{x}=\frac{1}{n}\sum_i x_i$ has a Normal$(\mu, \sigma^2/n)$ distribution. The sample mean is a consistent estimator of $\mu$, as is well known (and obvious). Writing $\Phi$ for the standard normal CDF, the expected loss of the sample mean equals $1/2 + \Phi(-\sqrt{n}/\sigma)$: $1/2$ comes from the 50% chance that the sample mean will underestimate the true mean and $\Phi(-\sqrt{n}/\sigma)$ comes from the chance of overestimating the true mean by more than $1$.
The expected loss of $\bar{x}$ equals the blue area under this standard normal PDF. The red area gives the expected loss of the alternative estimator, below. They differ by replacing the solid blue area between $-\sqrt{n}/(2\sigma)$ and $0$ by the smaller solid red area between $\sqrt{n}/(2\sigma)$ and $\sqrt{n}/\sigma$. That difference grows as $n$ increases.
An alternative estimator given by $\bar{x}+1/2$ has an expected loss of $2\Phi(-\sqrt{n}/(2\sigma))$. The symmetry and unimodality of normal distributions imply its expected loss is always better than that of the sample mean. (This makes the sample mean inadmissible for this loss.) Indeed, the expected loss of the sample mean has a lower limit of $1/2$ whereas that of the alternative converges to $0$ as $n$ grows. However, the alternative clearly is inconsistent: as $n$ grows, it converges in probability to $\mu+1/2 \ne \mu$.
Blue dots show loss for $\bar{x}$ and red dots show loss for $\bar{x}+1/2$ as a function of sample size $n$.
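A short numerical check of the two expected-loss formulas above (a sketch, not part of the original answer; $\sigma$, the sample sizes, and the replication count are arbitrary choices):
```python
# Monte Carlo estimates of the expected loss versus the closed forms
# 1/2 + Phi(-sqrt(n)/sigma) for xbar and 2*Phi(-sqrt(n)/(2*sigma)) for xbar + 1/2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, reps = 0.0, 1.0, 200_000

def loss(t, mu):
    # 0 if mu <= t <= mu + 1, else 1 (the loss function defined above)
    return np.where((t >= mu) & (t <= mu + 1.0), 0.0, 1.0)

for n in (1, 4, 16, 64):
    xbar = rng.normal(mu, sigma / np.sqrt(n), size=reps)
    print(n,
          loss(xbar, mu).mean(),             # simulated loss of the sample mean
          0.5 + norm.cdf(-np.sqrt(n) / sigma),
          loss(xbar + 0.5, mu).mean(),       # simulated loss of the shifted estimator
          2 * norm.cdf(-np.sqrt(n) / (2 * sigma)))
```
The simulated losses track the closed forms: the shifted estimator's loss decays to zero, while the sample mean's loss stays above 1/2.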
(+1) Your comment "consistency is best suited for quadratic losses" interests me also but it's not blatantly obvious to me (and perhaps others) where that comes from. Clearly convergence in $L_2$ is best suited for quadratic losses and $L_2$ convergence implies convergence in probability but what is the motivation for this quote in the context of almost sure convergence a.k.a. "strong consistency"? – Macro Jun 25 '12 at 20:45
@Macro The thinking is somewhat indirect and not intended to be rigorous but I believe it is natural: quadratic loss implies minimizing variance which (via Chebyshev) leads to convergence in probability. Whence, a heuristic for finding a counterexample should focus on losses which are so far from quadratic that such manipulations are unsuccessful. – whuber♦ Jun 25 '12 at 21:48
The inconsistent estimator may be better for many values of n but I would think that it would not maintain an advantage over a consistent estimator for sufficiently large n. – Michael Chernick Jun 25 '12 at 22:21
I don't understand the basis of your comment, @Michael: look at the last graphic. The expected loss for the consistent estimator decreases to $1/2$ while that of the inconsistent estimator decreases (exponentially) to $0$: it is thus exponentially better than the consistent one as $n$ grows large. – whuber♦ Jun 25 '12 at 23:47
@Michael OK, thank you for explaining that. In this context, with a non-quadratic loss, an "advantage" is not expressed terms of bias. One might criticize this loss function, but I don't want to reject it outright: it models situations where, for instance, the data are measurements of an item manufactured to certain tolerances and it would be disastrous (as in Shuttle o-ring failure or business bankruptcy disastrous) for the true mean to fall outside those tolerances. – whuber♦ Jun 26 '12 at 2:24
http://en.wikibooks.org/wiki/F_Sharp_Programming/Higher_Order_Functions
# F Sharp Programming/Higher Order Functions
A higher-order function is a function that takes another function as a parameter, or a function that returns another function as a value, or a function which does both.
### Familiar Higher Order Functions
To put higher order functions in perspective, if you've ever taken a first-semester course on calculus, you're undoubtedly familiar with two functions: the limit function and the derivative function.
The limit function is defined as follows:
$\lim_{x \to p}f(x) = L$
The limit function, `lim`, takes another function `f(x)` as a parameter, and it returns a value `L` to represent the limit.
Similarly, the derivative function is defined as follows:
$deriv(f(x))=\lim_{h\to 0}{f(a+h)-f(a)\over h}=f'(x)$
The derivative function, `deriv`, takes a function `f(x)` as a parameter, and it returns a completely different function `f'(x)` as a result.
In this respect, we can correctly assume the limit and derivative functions are higher-order functions. If we have a good understanding of higher-order functions in mathematics, then we can apply the same principles in F# code.
In F#, we can pass a function to another function just as if it was a literal value, and we call it just like we call any other function. For example, here's a very trivial function:
```let passFive f = (f 5)
```
In F# notation, `passFive` has the following type:
`val passFive : (int -> 'a) -> 'a`
In other words, `passFive` takes a function `f`, where `f` must take an `int` and return any generic type `'a`. Our function `passFive` has the return type `'a` because we don't know the return type of `f 5` in advance.
```open System
let square x = x * x
let cube x = x * x * x
let sign x =
if x > 0 then "positive"
else if x < 0 then "negative"
else "zero"
let passFive f = (f 5)
printfn "%A" (passFive square) // 25
printfn "%A" (passFive cube) // 125
printfn "%A" (passFive sign) // "positive"
```
These functions have the following types:
```val square : int -> int
val cube : int -> int
val sign : int -> string
val passFive : (int -> 'a) -> 'a
```
Unlike many other languages, F# makes no distinction between functions and values. We pass functions to other functions in the exact same way that we pass ints, strings, and other values.
#### Creating a Map Function
A map function converts one type of data to another type of data. A simple map function in F# looks like this:
```let map item converter = converter item
```
This has the type `val map : 'a -> ('a -> 'b) -> 'b`. In other words, `map` takes two parameters: an item `'a`, and a function that takes an `'a` and returns a `'b`; `map` returns a `'b`.
Let's examine the following code:
```open System
let map x f = f x
let square x = x * x
let cubeAndConvertToString x =
let temp = x * x * x
temp.ToString()
let answer x =
if x = true then "yes"
else "no"
let first = map 5 square
let second = map 5 cubeAndConvertToString
let third = map true answer
```
These functions have the following signatures:
```val map : 'a -> ('a -> 'b) -> 'b
val square : int -> int
val cubeAndConvertToString : int -> string
val answer : bool -> string
val first : int
val second : string
val third : string
```
The `first` function passes a datatype `int` and a function with the signature `(int -> int)`; this means the placeholders `'a` and `'b` in the map function both become `int`s.
The `second` function passes a datatype `int` and a function `(int -> string)`, and `map` predictably returns a `string`.
The `third` function passes a datatype `bool` and a function `(bool -> string)`, and `map` returns a `string` just as we expect.
Since our generic code is typesafe, we would get an error if we wrote:
```let fourth = map true square
```
The `true` argument constrains the function parameter to the type `(bool -> 'b)`, but the `square` function has the type `(int -> int)`, so the call is ill-typed.
#### The Composition Function (`<<` operator)
In algebra, the composition function is defined as `compose(f, g, x) = f(g(x))`, denoted f o g. In F#, the composition function is defined as follows:
```let inline (<<) f g x = f (g x)
```
Which has the somewhat cumbersome signature `val << : ('b -> 'c) -> ('a -> 'b) -> 'a -> 'c`.
If I had two functions:
f(x) = x^2
g(x) = -x/2 + 5
And I wanted to model f o g, I could write:
```open System
let f x = x*x
let g x = -x/2.0 + 5.0
let fog = f << g
Console.WriteLine(fog 0.0) // 25
Console.WriteLine(fog 1.0) // 20.25
Console.WriteLine(fog 2.0) // 16
Console.WriteLine(fog 3.0) // 12.25
Console.WriteLine(fog 4.0) // 9
Console.WriteLine(fog 5.0) // 6.25
```
Note that `f << g` doesn't compute a value; it returns another function, so `fog` is itself a function whose signature is `(float -> float)`.
Of course, there's no reason why the compose function needs to be limited to numbers; since it's generic, it can work with any datatype, such as `int array`s, `tuple`s, `string`s, and so on.
There also exists the `>>` operator, which similarly performs function composition, but in reverse order. It is defined as follows:
```let inline (>>) f g x = g (f x)
```
This operator's signature is as follows: `val >> : ('a -> 'b) -> ('b -> 'c) -> 'a -> 'c`.
The advantage of doing composition using the `>>` operator is that the functions in the composition are listed in the order in which they are called.
```let gof = f >> g
```
This will first apply `f` and then apply `g` on the result.
### The `|>` Operator
The pipeline operator, `|>`, is one of the most important operators in F#. The definition of the pipeline operator is remarkably simple:
```let inline (|>) x f = f x
```
Let's take 3 functions:
```let square x = x * x
let add x y = x + y
let toString x = x.ToString()
```
Let's also say we had a complicated function which squared a number, added five to it, and converted it to a string. Normally, we'd write this:
```let complexFunction x =
toString (add 5 (square x))
```
We can improve the readability of this function somewhat using the pipeline operator:
```let complexFunction x =
x |> square |> add 5 |> toString
```
`x` is piped to the `square` function; the result is piped to the `add 5` function, and finally to the `toString` function.
### Anonymous Functions
Until now, all functions shown in this book have been named. For example, the function above is named `add`. F# allows programmers to declare nameless, or anonymous functions using the `fun` keyword.
```let complexFunction =
2 (* 2 *)
|> ( fun x -> x + 5) (* 2 + 5 = 7 *)
|> ( fun x -> x * x) (* 7 * 7 = 49 *)
|> ( fun x -> x.ToString() ) (* 49.ToString = "49" *)
```
Anonymous functions are convenient and find a use in a surprising number of places.
#### A Timer Function
```open System
let duration f =
let timer = new System.Diagnostics.Stopwatch()
timer.Start()
let returnValue = f()
printfn "Elapsed Time: %i" timer.ElapsedMilliseconds
returnValue
let rec fib = function
| 0 -> 0
| 1 -> 1
| n -> fib (n - 1) + fib (n - 2)
let main() =
printfn "fib 5: %i" (duration ( fun() -> fib 5 ))
printfn "fib 30: %i" (duration ( fun() -> fib 30 ))
main()
```
The `duration` function has the type `val duration : (unit -> 'a) -> 'a`. This program prints:
```Elapsed Time: 0
fib 5: 5
Elapsed Time: 24
fib 30: 832040
```
Note: the actual duration to execute these functions will vary from machine to machine.
### Currying and Partial Functions
A fascinating feature in F# is called "currying", which means that F# does not require programmers to provide all of the arguments when calling a function. For example, let's say we have a function:
```let add x y = x + y
```
`add` takes two integers and returns another integer. In F# notation, this is written as `val add : int -> int -> int`
We can define another function as follows:
```let addFive = add 5
```
`addFive` calls the `add` function with one of its parameters, so what is the return value of this function? That's easy: `addFive` returns another function which is waiting for the rest of its arguments. In this case, `addFive` returns a function that takes an `int` and returns another `int`, denoted in F# notation as `val addFive : (int -> int)`.
You call `addFive` just in the same way that you call other functions:
```open System
let add x y = x + y
let addFive = add 5
Console.WriteLine(addFive 12) // prints 17
```
#### How Currying Works
The function `let add x y = x + y` has the type `val add : int -> int -> int`. F# uses the slightly unconventional arrow notation to denote function signatures for a reason: arrow notation is intrinsically connected to currying and anonymous functions. Currying works because, behind the scenes, F# converts function parameters to a style that looks like this:
```let add = (fun x -> (fun y -> x + y) )
```
The type `int -> int -> int` is semantically equivalent to `(int -> (int -> int))`.
When you call `add` with no arguments, it returns `fun x -> fun y -> x + y` (or equivalently `fun x y -> x + y`), another function waiting for the rest of its arguments. Likewise, when you supply one argument to the function above, say 5, it returns `fun y -> 5 + y`, another function waiting for the rest of its arguments, with all occurrences of x being replaced by the argument 5.
Currying is built on the principle that each argument actually returns a separate function, which is why calling a function with only part of its parameters returns another function. The familiar F# syntax that we've seen so far, `let add x y = x + y`, is actually a kind of syntactic sugar for the explicit currying style shown above.
#### Two Pattern Matching Syntaxes
You may have wondered why there are two pattern matching syntaxes:
Traditional syntax:
```let getPrice food =
    match food with
    | "banana" -> 0.79
    | "watermelon" -> 3.49
    | "tofu" -> 1.09
    | _ -> nan
```
Shortcut syntax:
```let getPrice2 = function
    | "banana" -> 0.79
    | "watermelon" -> 3.49
    | "tofu" -> 1.09
    | _ -> nan
```
Both snippets of code are identical, but why does the shortcut syntax allow programmers to omit the `food` parameter in the function definition? The answer is related to currying: behind the scenes, the F# compiler converts the `function` keyword into the following construct:
```let getPrice2 =
(fun x ->
match x with
| "banana" -> 0.79
| "watermelon" -> 3.49
| "tofu" -> 1.09
| _ -> nan)
```
In other words, F# treats the `function` keyword as an anonymous function that takes one parameter and returns one value. The `getPrice2` function actually returns an anonymous function; arguments passed to `getPrice2` are actually applied and evaluated by the anonymous function instead.
http://physics.stackexchange.com/questions/tagged/mathematical-physics?page=4&sort=active&pagesize=50
# Tagged Questions
DO NOT USE THIS TAG just because your question involves math! If your question is on simplification of a mathematical expression, please ask it at math.stackexchange.com Mathematical physics is the mathematically rigorous study of the foundations of physics, and the application of advanced ...
2answers
44 views
### Examples of heterotic CFTs
I'm trying to get a global idea of the world of conformal field theories. Many authors restrict attention to CFTs where the algebras of left and right movers agree. I'd like to increase my intuition ...
1answer
165 views
### Lorentz Invariant Equation of Motion for Scalar Field
I'm trying to understand why you can't write down a first order equation of motion for a scalar field in special relativity. Suppose $\phi(x)$ is a scalar field and $v^{\mu}$ a 4-vector. According to my ...
2answers
447 views
### What is the symmetry that corresponds to conservation of position?
We know that conserved quantities are associated with certain symmetries. For example conservation of momentum is associated with translational invariance, and conservation of angular momentum is ...
1answer
249 views
### Diffeomorphisms, Isometries And General Relativity
Apologies if this question is too naive, but it strikes at the heart of something that's been bothering me for a while. Under a diffeomorphism $\phi$ we can push forward an arbitrary tensor field $F$ ...
1answer
279 views
### Representations of Lorentz Group
I'd be grateful if someone could check that my exposition here is correct, and then venture an answer to the question at the end! $SO(3)$ has a fundamental representation (spin-1), and tensor product ...
1answer
112 views
### What is the physical meaning of a flux of gravitational field in classics?
I've stumbled upon an answer to a question about square power in Newton's law of gravity. After reading it I got a question whether the flux of gravitational field has actually any physical meaning. ...
2answers
148 views
### Gaussian type integral with negative power of variable in integrand
How can we compute the integral $\int_{-\infty}^\infty t^n e^{-t^2/2} dt$ when $n=-1$ or $-2$? It is a problem (1.11) in Prof James Nearing's course Mathematical Tools for Physics. Can a situation ...
0answers
192 views
### Interesting Math Topics Useful for Physics [closed]
What are some interesting, but less popular, math topics that are useful for physics that can be self-studied? Specifically, topics that might ultimately be useful in high energy theory (even if it is ...
14answers
4k views
### Number theory in Physics
As a Graduate Mathematics student, my interests lies in Number theory. I am curious to know if Number theory has any connections or applications to physics. I have never even heard of any applications ...
0answers
85 views
### Asymptotic limit of the two kink solution of the sine-gordon equation
I am reading a paper on the sine-gordon model. The solution for a two kink solution is given as: ...
0answers
36 views
### Minimal strings and topological strings
In http://arxiv.org/abs/hep-th/0206255 Dijkgraaf and Vafa showed that the closed string partition function of the topological B-model on a Calabi-Yau of the form $uv-H(x,y)=0$ coincides with the free ...
1answer
256 views
### AGT conjecture and WZW model
In 2009 Alday, Gaiotto and Tachikawa conjectured an expression for the Liouville theory conformal blocks and correlation functions on a Riemann surface of genus g and n punctures as the Nekrasov ...
1answer
54 views
### Miura transform for W-algebras of exceptional type
Miura transform for W-algebras of classical types can be found in e.g. Sec. 6.3.3 of Bouwknegt-Schoutens. Is there a similar explicit Miura transform for W-algebras of exceptional types, say, E6? It's ...
6answers
201 views
### Which QFTs were rigorously constructed?
Which QFTs have mathematically rigorous constructions a la AQFT? I understand there are many such constructions in 2D, in particular 2D CFT has been extensively studied mathematically. But even in 2D ...
0answers
63 views
### Divergence calculation of a lie algebra valued quantity having spinor indices
I am reading this paper by E. Weinberg - Fundamental monopoles and multimonopole solutions for arbitrary simple gauge groups. I am having a problem with a calculation. I don't have much experience ...
1answer
169 views
### Expectation value calculation for a weird operator
In the paper "Fundamental monopoles and multimonopole solutions for arbitrary simple gauge groups" by E. Weinberg, I am not able to see one of the calculations. The author states (eqn 3.26) $\langle\ldots$
2answers
243 views
### WKB method of approximation
Would it be legitimate to use the WKB approximation for a particle in a spherically symmetric Gaussian potential? $$V(r)~=~V_0(1-e^{-r^2/a^2}).$$ I'm not sure when to use which approximation ...
2answers
142 views
### Topological twists of SUSY gauge theory
Consider $N=4$ super-symmetric gauge theory in 4 dimensions with gauge group $G$. As is explained in the beginning of the paper of Kapustin and Witten on geometric Langlands, this theory has 3 ...
11answers
827 views
### Negative probabilities in quantum physics
Negative probabilities are naturally found in the Wigner function (both the original one and its discrete variants), the Klein paradox (where it is an artifact of using a one-particle theory) and the ...
2answers
201 views
### Angular Momentum Operators Non-Degenerate
Typically one writes simultaneous eigenstates of the angular momentum operators $J_3$ and $J^2$ as $|j,m\rangle$, where $$J^2|j,m\rangle = \hbar^2 j(j+1)|j,m\rangle, \qquad J_3|j,m\rangle = \hbar\ldots$$
1answer
102 views
### How Exactly Does Linear Regge Trajectories Imply Stability?
(for a more muddled version, see physics.stackexchange: http://physics.stackexchange.com/questions/14020/whats-with-mandelstams-argument-that-only-linear-regge-trajectories-are-stable) There is a ...
1answer
146 views
### What will happen when measuring unmeasurable object?
There is a set called the Vitali Set which is not Lebesgue measurable. Analogously, there also exists a Vitali set $Y$ in $\mathbb R^3$ which is a subset of $[0,1]^3$ and $|Y\cap q|=1$ for all $q\in\ldots$
5answers
129 views
### Other processes than formal power series expansions in quantum field theory calculations
I am not sure if this question is too naive for this site, but here it goes. In QFT calculations, it seems that everything is rooted in formal power series expansions, i.e. , what dynamical systems ...
0answers
33 views
### What is the importance of studying degeneration on $M_g$
Let $M_g$ be the moduli space of smooth curves of genus $g$. Let $\overline{M_g}$ be its compactification; the moduli space of stable curves of genus $g$. It seems to be important in physics to study ...
5answers
196 views
### Where do theta functions and canonical Green functions appear in physics
In the beginning of Section 5 in his article, Wentworth mentions a result of Bost and proves it using the spin-1 bosonization formula. This result provides a link between theta functions, canonical ...
3answers
1k views
### Use of advanced mathematics in astronomy, like topology, abstract algebra, or others
I know that topology, abstract algebra, K-theory, Riemannian geometry and others, can be used in physics. Are some of these areas used in astronomy, and are some astronomical theories based on them? ...
1answer
82 views
### Metric interpretation of self-adjoint extensions?
I am wondering if beyond physical interpretation, the one dimensional contact interactions (self-adjoint extensions of the the free Hamiltonian when defined everywhere except at the origin) have a ...
6answers
771 views
### The Role of Rigor
The purpose of this question is to ask about the role of mathematical rigor in physics. In order to formulate a question that can be answered, and not just discussed, I divided this large issue into ...
2answers
29 views
### Extensions of DHR superselection theory to long range forces
For Haag-Kastler nets $M(O)$ of von-Neumann algebras $M$ indexed by open bounded subsets $O$ of the Minkowski space in AQFT (algebraic quantum field theory) the DHR (Doplicher-Haag-Roberts) ...
1answer
350 views
### Onsager's Regression Hypothesis, Explained and Demonstrated
Onsager's 1931 regression hypothesis asserts that "…the average regression of fluctuations will obey the same laws as the corresponding macroscopic irreversible process". (Here are the links to ...
0answers
139 views
### Hypersingular Boundary Operator in Physics
This has been a question I've been asking myself for quite some time now. Is there a physical Interpretation of the Hypersingular Boundary Operator? First, let me give some motivation why I think ...
3answers
63 views
### Status of local gauge invariance in axiomatic quantum field theory
In his recent review (Sergio Doplicher, "The principle of locality: Effectiveness, fate, and challenges", J. Math. Phys. 51, 015218 (2010)), Doplicher mentions an important open ...
1answer
175 views
### Mermin-Wagner theorem in the presence of hard-core interactions
It seems quite common in the theoretical physics literature to see applications of the "Mermin-Wagner theorem" (see wikipedia or scholarpedia for some limited background) to systems with hard-core ...
6answers
301 views
### Applications of delay differential equations
Being interested in the mathematical theory, I was wondering if there are up-to-date, nontrivial models/theories where delay differential equations play a role (PDE-s, or more general functional ...
1answer
66 views
### Are possible gauge fields in a Lagrangian theory always determined by the structure of the charged degrees of freedom?
An elementary example to explain what I mean. Consider introducing a classical point particle with a Lagrangian $L(\mathbf{q},\dot{\mathbf{q}}, t)$. The most general gauge transformation is $L\ldots$
0answers
101 views
### Spherical tensor [closed]
These equations are out of Sakurai and Napolitano Modern Quantum Mechanics. I'm trying to show that $T_q^{(2)}$--which is defined as the bilinear of two vector operators $V_i$ and $W_j$--transforms ...
0answers
72 views
### Does the attached “poster” work as a hook into the arXiv paper cited, “Nonlinear Wightman fields”? [closed]
"Nonlinear Wightman fields" are my current response to a wish to do interacting quantum field theory differently, no matter how successful what we currently do may be. The following image of a single ...
3answers
544 views
### Canonical Commutation Relations
Is it logically sound to accept the canonical commutation relation (CCR) $$[x,p]~=~i\hbar$$ as a postulate of quantum mechanics? Or is it more correct to derive it given some form for $p$ in the ...
4answers
302 views
### Why the Hamiltonian and the Lagrangian are used interchangeably in QFT perturbation calculations
Whenever one needs to calculate correlation functions in QFT using perturbations one encounters the following expression: $\langle 0| some\ operators \times \exp(iS_{(t)}) |0\rangle$ where, ...
2answers
145 views
### Is the step of analytic continuation unavoidable or can you model around it?
One sometimes considers the analytic continuation of certain quantities in physics and take them seriously. More so than the direct or actual values actually. For example if you use the procedure for ...
2answers
385 views
### How should a theoretical physicist study maths? [duplicate]
Possible Duplicate: How should a physics student study mathematics? If some-one wants to do research in string theory for example, Would the Nakahara Topology, geometry and physics book and ...
4answers
117 views
### Is the mathematical truth 1+1=2 analogous to the conservation of energy? [closed]
They seem to express the same concept in different fields.
2answers
136 views
### can we investigate physics through investigation of pure number? [closed]
If the consistency between the two is so absolute, why can we not investigate the physical nature of the universe through analysis of pure number? Particularly at the quantum scale?
2answers
262 views
### (Co)homology of the universe
In this post let $U$ be the universe considered as a manifold. From what I gather we don't really have any firm evidence whether the universe is closed or open. The evidence seems to point towards it ...
1answer
230 views
### Natural systems that test the primality of a number?
There might be none. But I was thinking of links between number theory and physics, and this would seem like an example that would definitely solidify that link. Are there any known natural systems, ...
2answers
654 views
### Calculate stainless steel pole necking limit
Background Trying to determine how much weight a post can support without necking when a monitor is attached to an articulated arm: a cantilever problem. Problem There are three objects involved in ...
1answer
117 views
### What is the definition of density as a function?
(Before I start, I don't know which tag is suitable for this post. Please retag my post if it bothers you.) Let's say there is a string on $[0,1]$ with a mass given by $m(x)$. ($m(x)$ means the mass ...
2answers
194 views
### Electromagnetism for Mathematician
I am trying to find a book on electromagnetism for mathematicians (so it has to be rigorous). Preferably a book that extensively uses Stokes' theorem for Maxwell's equations (unlike other books that on ...
0answers
65 views
### A doubt about fuchsian functions in physics?
I'm not sure if this is the right place (or math.stackexchange?) to ask the following: What is the difference between fuchsian, theta-fuchsian, and kleinian functions? Please suggest an introductory ...
1answer
184 views
### Killing vectors for SO(3) (rotational) symmetry
I am reading a paper$^1$ by Manton and Gibbons on the dynamics of BPS monopoles. In this, they write the Atiyah-Hitchin metric for a two-monopole system. The first part is for the one monopole moduli ...
http://mathforum.org/mathimages/index.php?title=Newton's_Basin&oldid=3298
# Newton's Basin
Newton's Basin is a visual representation of Newton's Method, which is a procedure for estimating the root of a function.
Fields: Fractals and Calculus
Created By: Nicholas Buroojy
# Basic Description
Animation Emphasizing Roots
Newton Basin with 3 Roots
This image is one of many examples of Newton's Basin or Newton's Fractal. Newton's Basin is based on a calculus concept called Newton's Method, a procedure Newton developed to estimate a root of an equation.
The colors in a Newton's Basin usually correspond to each individual root of the equation, and can be used to infer where each root is located. The region of each color reflects the set of coordinates (x,y) whose x-values, after undergoing iteration with the equation describing the fractal, will eventually get closer and closer to the value of the root.
The animation emphasizes the roots in a Newton's Basin, whose equation clearly has three roots. The image to the right is also a Newton's Basin with three roots, presented more artistically.
# A More Mathematical Explanation
Note: understanding of this explanation requires: Calculus
The featured image on this page is a visual representation of Newton's Method for calculus expanded into the complex plane. To read a brief explanation on this method, read the following section entitled Newton's Method.
### Newton's Method
Newton's Method for calculus is a procedure to find a root of a polynomial, using an estimated coordinate as a starting point. Usually, the roots of a linear equation: $y = mx + b$ can be simply found by setting y = 0 and solving for x. However, with higher degree polynomials, this method can be much more complicated.
Newton devised an iterated method (animated to the right) with the following steps:
• Estimate a starting coordinate on the graph near to the root
• Find the tangent line at that starting coordinate
• Find the root of the tangent line
• Using the root as the x-coordinate of the new starting coordinate, iterate the method to find a better estimate
The results of this method lead to very close estimates to the actual root. Newton's Method can also be expressed:
$f'(x_n) = \frac{\mathrm{\Delta y}}{\mathrm{\Delta x}} = \frac{f(x_n)}{x_n - x_{n+1}}$
$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
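Here is a minimal sketch of this iteration in code (the sample polynomial $f(x) = x^3 - 2x + 2$ matches the basin example further down the page; the starting point and tolerance are arbitrary choices):
```python
# Newton's Method: repeatedly apply x <- x - f(x) / f'(x) until the step is tiny.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Estimate the real root of f(x) = x^3 - 2x + 2, starting near x0 = -2.
root = newton(lambda x: x**3 - 2*x + 2, lambda x: 3*x**2 - 2, x0=-2.0)
print(root, root**3 - 2*root + 2)   # root is about -1.7693; the residual is ~0
```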
### Newton's Basin
#### Creating Newton's Basin
Newton Basin with 5 Roots
To produce an interesting fractal, the Newton Method needs to be extended to the complex plane and to imaginary numbers. Newton's Basin is created using a complex polynomial (that is, a polynomial whose coefficients may be complex), such as $p(z) = z^3 - 2z + 2$, with real and/or complex roots. In addition, each root in a Newton's Basin fractal is usually given a distinctive color. It is clear that the fractal on the left has a total of five roots colored magenta, yellow, red, green, and blue.
Each pixel in the image is assigned a complex number coordinate. The coordinates are applied to the equation and iterated continually with the output of the previous iteration becoming the input of the next iteration. If the iterations lead the x-values of the coordinates to converge towards a particular root, the pixel is colored accordingly. If the iterations lead to a loop and not a root, then the pixel is usually colored black because the x-values do not converge.
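A compact sketch of this pixel-coloring procedure (assuming the polynomial $p(z) = z^3 - 2z + 2$ used elsewhere on this page; the grid size, iteration count, and tolerance are arbitrary choices):
```python
# Render a Newton's Basin for p(z) = z^3 - 2z + 2: every pixel is a complex
# starting value; iterate Newton's method and label each pixel by the root
# it converges to (-1 marks pixels that never get close to a root).
import numpy as np

def newton_basin(width=400, height=400, extent=2.0, max_iter=40, tol=1e-6):
    xs = np.linspace(-extent, extent, width)
    ys = np.linspace(-extent, extent, height)
    z = xs[None, :] + 1j * ys[:, None]                # one complex number per pixel
    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(max_iter):
            z = z - (z**3 - 2*z + 2) / (3*z**2 - 2)   # Newton step for all pixels at once
        basin = np.full(z.shape, -1)                  # -1 = no convergence (drawn black)
        for i, r in enumerate(np.roots([1, 0, -2, 2])):
            basin[np.abs(z - r) < tol] = i            # label pixel by the root it reached
    return basin

print(np.unique(newton_basin()))   # e.g. [-1 0 1 2]: three basins plus black pixels
```
Pixels that land on an attracting cycle of this particular polynomial (rather than on a root) keep the label -1, which is exactly the black region described above.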
Therefore, each root has a set of initial (or pixel) coordinates $x_0$ that converge to the root. This set of complex-valued coordinates is called the root's basin of attraction, which is where the name of this fractal comes from.
Newton Basin with 3 Roots Close up of Newton Basin with 3 Roots
For example, the image above was created from the equation $p(z) = z^3 - 2z + 2$.
#### Self-Similarity
As with all other fractals, Newton's Basin exhibits self-similarity. The video to the left is an interactive representation of the continual self-similarity displayed by a Newton's Basin with a root degree of 5 (similar to the fractal shown in the previous section). Towards the end of the video, you will notice that the pixels are no longer adequate to continue magnifying the image...however, the fractal still goes on.
# Teaching Materials
There are currently no teaching materials for this page.
# About the Creator of this Image
Nicholas Buroojy has created many math images including Newton's Lab fractals, Julia and Mandelbrot Sets, Cantor Sets...
http://math.stackexchange.com/questions/253048/proving-or-disproving-regularity-of-a-language
# Proving or disproving regularity of a language
The question is as follows:
```` If L1 and L2 are not regular and L1 ⊆ L ⊆ L2, then L is regular
````
My intuition says that it's wrong, so I've been looking for a counterexample; so far I haven't succeeded.
Can I please get a direction? Might this claim be true?
Thanks in advance
What about $L_1 = L_2$? – Hendrik Jan Dec 7 '12 at 12:57
I don't think I may do that. Besides, even if L1=L2, I can find a language L that is contained in it and regular, so it doesn't disprove the claim. – user1067083 Dec 7 '12 at 12:59
If $L_1=L_2$ is not regular, then $L=L_1$ satisfies all conditions, and cannot be regular. If that is not what you need or want, please rephrase the question. You might want to add "for all" or "there exist". – Hendrik Jan Dec 7 '12 at 13:02
No, that's probably the counterexample I was looking for, thanks alot :) Can you leave an answer so I can accept it? – user1067083 Dec 7 '12 at 13:15
## 1 Answer
If you take $L_1=L_2$ not regular, then $L=L_1$ satisfies your assumptions, but cannot be regular. For instance, with $L_1=L_2=\{a^nb^n : n\ge 0\}$, the only language $L$ with $L_1\subseteq L\subseteq L_2$ is $L_1$ itself, which is not regular.
http://mathoverflow.net/revisions/83241/list
# Finite connected groups over a perfect field of characteristic p
In 14.4 of "Introduction to Affine Group Schemes" (by William C. Waterhouse) it is proved (!) that if $A$ represents a finite connected group scheme over a perfect field $k$ of characteristic $p$ then $A$ has the form $k[X_{1}, X_{2}, ..., X_{n}] / (X_{1}^{p^{e_{1}}}, ...., X_{n}^{p^{e_{n}}})$. But what about $\mu_{p} = k[X]/(X^{p}-1)$? It is connected but not isomorphic to $k[X]/(X^{p})$ as $k$-groups. They are isomorphic as $k$-schemes. Does this theorem mean " ...... $A$ has the form $k[X_{1}, X_{2}, ..., X_{n}] / (X_{1}^{p^{e_{1}}}, ...., X_{n}^{p^{e_{n}}})$ up to isomorphism of $k$-schemes"?
http://math.stackexchange.com/questions/245973/using-characters-in-finite-fields-to-find-number-of-solutions-to-polynomials/248762
# Using characters in finite fields to find number of solutions to polynomials.
I am trying to use the theorem below to show that if $d_i=(m_i,p-1)$ then $\sum_ia_ix_i^{m_i}=b$ and $\sum_ia_ix_i^{d_i}=b$ have the same number of solutions. So far, I have been able to prove that if $d=(m,p-1)$ that the number of solutions to $x^m=a$ is the same as the number of solutions to $x^d=a$, but am having trouble getting to this next step.
Theorem: if $d=\gcd(p-1,n)$ then the number of solutions to $x^n=a$ in $F_p$ is equal to $\sum_{\chi^d=1} \chi(a)$
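For reference, here is a standard sketch of why the character sum counts solutions when $a\neq 0$ (this is background, not part of the original question). There are exactly $d$ characters $\chi$ of $\mathbb{F}_p^*$ with $\chi^d=\varepsilon$ (the trivial character), since $d\mid p-1$. If $x^d=a$ has a solution $x_0$, then each such $\chi$ gives $\chi(a)=\chi(x_0)^d=(\chi^d)(x_0)=1$, so $\sum_{\chi^d=\varepsilon}\chi(a)=d$, matching the $d$ solutions $x_0u$ with $u^d=1$. If $x^d=a$ has no solution, then $a$ lies outside the subgroup of $d$-th powers, so some $\chi_0$ with $\chi_0^d=\varepsilon$ has $\chi_0(a)\neq 1$; multiplication by $\chi_0$ permutes $\{\chi:\chi^d=\varepsilon\}$, hence $S=\chi_0(a)S$ forces $S=0$, again matching the count.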
But, then $d|p-1$, and $d=bn+c(p-1)$, so $x^{bn}\equiv x^d \pmod p$. – Berci Nov 27 '12 at 22:11
## 1 Answer
The number of solutions to the left equation is the same as the number of solutions to the right equation, $\sum_i a_ix_i^{m_i}=b$ and $\sum_i a_ix_i^{d_i}=b$. Writing $N(\cdot)$ for the number of solutions of an equation, the number of solutions to the left equation is $\sum_{a_1y_1+\cdots+a_ny_n=b} \prod_i N(x_i^{m_i}=y_i)$ while the number of solutions to the right equation is $\sum_{a_1y_1+\cdots+a_ny_n=b} \prod_i N(x_i^{d_i}=y_i)$. But these are equal, since we showed $N(x^m=y)=N(x^d=y)$ whenever $d=(m,p-1)$; making this substitution factor by factor gives the same summands, and thus both equations have the same number of solutions.
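As a sanity check, the conclusion can be verified by brute force over a small field (a sketch; the choices $p=7$, the coefficients $a_i$, and the exponents $m_i$ below are arbitrary):
```python
# Brute-force check that sum a_i x_i^{m_i} = b and sum a_i x_i^{d_i} = b
# have the same number of solutions over F_p, with d_i = gcd(m_i, p-1).
from math import gcd
from itertools import product

def count_solutions(p, coeffs, exps, b):
    """Count tuples (x_1, ..., x_n) in F_p^n with sum a_i * x_i^{e_i} = b (mod p)."""
    n = len(coeffs)
    return sum(
        1
        for xs in product(range(p), repeat=n)
        if sum(a * pow(x, e, p) for a, x, e in zip(coeffs, xs, exps)) % p == b
    )

p = 7
coeffs = [1, 3]                      # the a_i
ms = [4, 9]                          # the m_i
ds = [gcd(m, p - 1) for m in ms]     # d_i = (m_i, p - 1) = [2, 3]

for b in range(p):
    assert count_solutions(p, coeffs, ms, b) == count_solutions(p, coeffs, ds, b)
print("counts agree for every b in F_7")
```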
http://physics.stackexchange.com/questions/tagged/simulation
# Tagged Questions
The simulation tag has no wiki summary.
1answer
54 views
### How do we simulate Nuclear explosion? [closed]
I am interested in PC simulation; I use physics equations to simulate rain, fire, wind, cloud and lightning. It is a kind of VR (virtual reality). What equation is able to simulate a virtual nuclear ...
1answer
60 views
### Earth and Moon computer simulation [closed]
So I want to simulate the solar system but want to start simple with one orbiting body. However, I never did anything like this before and was wondering if anyone here could give me some hints. ...
1answer
79 views
### Initial position and velocity of rocket to escape earth's gravity
I'm trying to numerically simulate a spacecraft trajectory between Earth and Mars. I already wrote the solar system model where the Sun is at the origin of the x,y,z plane; Earth and Mars are orbiting ...
0answers
24 views
### If we find a way to simulate a universe, are we likely to be living in a simulation? [closed]
If we eventually find a way to use quantum computers to simulate the creation of a consistent universe, does that prove that we are extremely likely to be living in a simulation?
0answers
52 views
### How can I simulate metric equations in relativity theories? [closed]
I want to simulate tensor equations of general relativity, like Einstein's field equations. What do I have to do? What PC program should I use?
2answers
656 views
### Why can't Humans run any faster?
If you wanted to at least semi-realistically model the key components of Human running, what are the factors that determine the top running speed of an individual? The primary things to consider would ...
1answer
54 views
### Deriving the change in the Helmholtz free energy in the context of the free energy perturbation method
I am reading Free Energy Calculations: Theory and Applications in Chemistry and Biology by Chipot and Pohorille. At the beginning of the text (page 19, for example), the authors define the Helmholtz ...
1answer
50 views
### Simulations of Planetary Motions
I wrote a spreadsheet that simulates the trajectory of 3 planets in 2D space. The method is simple: for each moment in space, calculate the force felt and velocity of each planet, then for the next ...
0answers
57 views
### Final-state baryons in $p \bar p$ collisions in Pythia
I'm trying to simulate inclusive $\chi_c$ production in $p \bar p$ collisions at very low energies (~ 5.5 GeV) using Pythia8 event generator. Leaving aside problems bounded with applicability of ...
1answer
77 views
### Local minima in Ising model in a Monte Carlo simulation
Is there any way to check whether a Monte Carlo simulation using the Ising model is stuck in a (false) local minimum of energy, particularly in a 3D system?
2answers
82 views
### Ergodicity in a Monte Carlo simulation
Q1: What are ergodicity and ergodicity breaking in a Monte Carlo simulation of a statistical physics problem? Q2: How does one ensure that ergodicity is maintained?
2answers
91 views
### Argument against computer consciousness [closed]
Imagine that we have a computer program that produces the conscious awareness of the present moment. Let us assume that every time the program is run a counter is incremented. Let us also assume ...
1answer
52 views
### Simulating an orbit, primary is not at focus
I've been toying around with some -very- simple orbital simulators, mostly using preexisting physics libraries (I took a layman's stab at doing it with vectors too). The thing that is confusing me is ...
0answers
129 views
### Simple thermodynamics simulation software [closed]
Just looking for some simulator that I can model a simple boiler with a heat source, overpressure valve and steam tap so I can play around with some parameters and learn the fundamentals of heat ...
0answers
44 views
### Should a 1D Gaussian wave oscillate?
I wrote a few lines that numerically solve Maxwell's equations. The result is a moving wave that looks like a single pulse. This looks strange to me because I expect waves to move in oscillator ...
1answer
41 views
### What creates the differences between the two channels of a stereophonic signal?
Given two identical microphones arranged in an ideal XY pattern, recording a single sound source at equal distance from both capsules, the two signals obtained are equal in amplitude, perfectly in ...
1answer
69 views
### How to get the new direction of 2 disks colliding?
I'm developing a 2D game including collisions between many disks. I would like to know how I can get the angle corresponding to the new direction of each disk. For every disk I have this information ...
0answers
45 views
### Animating the Bosonic String
I am interested in studying the classical solutions to the Bosonic string in flat 3+1 dim. spacetime by having them rendered a moving picture on a computer. This is partly for fun, and partly to ...
2answers
60 views
### How should I simulate the electric potential field from a wavefunction?
I was interested in making what I thought would be a simple simulation of an electron encountering a positron by numerically solving the Schrodinger equation over several time steps, but I've run ...
2answers
196 views
### How to calculate linear velocity of planet orbit?
I try to simulate a solar system with planets (with random mass) placed randomly around a sun with a mass $X \times \text{solar mass}$. The simulation is going well when I use real data ...
2answers
138 views
### Planet's Moon attracted by sun [closed]
I'm currently writing code to generate a solar system and $N$ number of planets / moons. I use real data to test (earth / sun / moon data). I succeeded in placing the earth and making it orbit around ...
0answers
91 views
### Requesting decent analysis on FEMM simulation data
I have been running simulations in FEMM for different magnetic configurations as an attempt to understand better how do they work and why it is impossible to come up with the holy grail of energy: a ...
1answer
149 views
### How to simulate temperature change of oven?
I am trying to write a software, which will model the oven temperature change when turning on/off. The data I can get is graph, by taking temperature reading each second from T0 time up to some ...
0answers
78 views
### neutron transport approximations for nuclear rocket modelling
I'm pretty ignorant regarding neutron and nuclear transport modelling, but I'm interested in trying to pursue it for a particular pet project. It regards modelling of nuclear reactions like those ...
1answer
139 views
### Why do air bubbles stick to the side of plastic tubing?
I'm watching water with air bubbles flow through transparent plastic tubing. The inner diameter is a few mm. Bubbles typically are the same diameter as the tubing, with length about the same or up ...
1answer
207 views
### Simulating the evolution of a wavepacket through a crystal lattice
I am interested simulating the evolution of an electronic wave packet through a crystal lattice which does not exhibit perfect translational symmetry. Specifically, in the Hamiltonian below, the ...
1answer
261 views
### Are We Living in a Simulated Universe? [closed]
If the universe is just a Matrix-like simulation, how could we ever know? Physicist Silas Beane of the University of Bonn, Germany, thinks he has the answer! His paper “Constraints on the Universe ...
0answers
23 views
### Rolling (without slipping) ball on a moving surface 2 [duplicate]
Possible Duplicate: Rolling (without slipping) ball on a moving surface Apparently I didn't log in properly when I asked a question this morning: Rolling (without slipping) ball on a moving ...
1answer
288 views
### Rolling (without slipping) ball on a moving surface
I've been looking at examples of a ball rolling without slipping down an inclined surface. What happens if the incline angle changes as the ball is rolling? More precisely I've been trying to find ...
1answer
83 views
### Convert latitude of lowest altitude to argument of perigee?
I am designing an orbit around Mercury. I know the values I want for the semi-major axis, eccentricity, inclination, and RAAN. I want the altitude of closest approach (periapse) to occur at ...
1answer
142 views
### Software to simulate dynamics of objects in a given gravitation field [closed]
I want to simulate and test set of 2D designs that basically have pulley/gear/chain-linked systems under Gravity (For e.g. to check how a pulley would rotate given particular weights, of course I'm ...
0answers
51 views
### Searching for a collaborator for a physics simulation of multi-party elections [closed]
I have completed a series of physics simulations on Matlab to find the equilibrium positions of parties in 2-D or 3-D voting space and compared them to the optimal positions that would provide the ...
1answer
81 views
### Mutual Interaction of $N$-Particles in a Cartesian Plane
I am making a simulation of $N$-Particles in a cartesian plane and need help with understanding the basics. At anytime, in my particle system, I will have $N$ number of particles. I am treating the ...
1answer
95 views
### boundary limit conditions in 3D water surface simulation
As is discussed in this post, taking some assumptions, the water surface can be simulated by a discrete approximation of a grid of heights using this formula, where HT is the new height grid, HT-1 ...
1answer
69 views
### Verify correctness of vintage BASIC drag race phyics simulation? [closed]
I wished to make a more user-friendly version of the vintage BASIC "Drag" program, as seen here, probably in the form of a light-hearted web game: ...
1answer
317 views
### What is the best tool for simulating Vacuum and Fluids together?
I require a software to simulate Fluid simulation with the capability of supporting vacuum simulation. My requirements are that all numbers must reflect their real counterparts almost exactly. For ...
1answer
168 views
### Help: 3D visualization of magnetic field around moving point charge
The diagrams I'm able to find online only show the concentric field lines in the particle's plane, perpendicular to the motion, which of course generalizes to the cylindrical shape around a conductor. ...
0answers
96 views
### Can critical slowing down be used to distinguish between first and second order phase transitions?
We are simulating a percolation dynamic system where we obtain first or second order phase transitions depending on certain parameters. For certain values it is clear that we are in the first-order ...
2answers
139 views
### Simulating a proton
How much computing power would it take to simulate a single proton from the bottom up, without taking any shortcuts whatsoever? My current understanding is that: A proton is basically a seething ...
1answer
599 views
### How realistic is the game “A slower speed of light”?
The game "A slower speed of light" from MIT claims to simulate effects of special relativity: Visual effects of special relativity gradually become apparent to the player, increasing the ...
1answer
152 views
### Kerr geodesics differential equations in equatorial plane
With friend, we are writing an interactive educational simulation of particle falling into a black hole. Currently we use Schwarzschild geodesics. However, we want to generalize it to the case of ...
3answers
95 views
### How much space to simulate a small Hilbert space?
I'm thinking about trying to do a numerical simulation of some very simple QM problems. How much space do I need? To simulate the Hilbert space? I'd like to eventually simulate the absorption or ...
1answer
141 views
### Parton showering in Pythia 6 Monte Carlo generator
I have Pythia Monte Carlo (MC) samples where I can't understand the parton showering model. If I print out full decay chains from the events, each event contains multiple string objects with pdgId 92. ...
0answers
87 views
### What is the correct way of integrating in astronomy simulations? [closed]
I'm creating a simple astronomy simulator that should use Newtonian physics to simulate movement of planets in a system (or any objects, for that matter). All the bodies are circles in a Euclidean ...
1answer
138 views
### Why doesn't my particle simulation end in a flat disc?
I've made a 3d particle simulator where particles are attracted to each other by the inverse of the square radius. The purpose of my experiment is to see if this alone would create a flat disk (like ...
0answers
58 views
### Toolbox for Complex Networks and Graphs
Is there a toolbox which helps in visual simulation and modeling of a network (say a mesh or ring) consisting of coupled synchronized system of nonlinear equations(ODE) which represent a system of ...
1answer
178 views
### How are these problems solved (modeling/simulation)? [closed]
Can somebody guide me in what to read and learn in order to be able to solve or understand how to solve the following types of problems: The modeling/simulation of the bullet, shot into the water ...
1answer
368 views
### Simple 2D Vehicle collision physics
I'm trying to create a simplified GTA 2 clone to learn. I'm onto vehicle collisions physics. The basic idea I would say is, To apply force F determined by vehicle A's position and velocity onto point ...
0answers
63 views
### Good Magnetic Simulation Software? [duplicate]
Possible Duplicate: Where can I find simulation software for electricity and magnets? I am looking for a good, FREE software that allows me to play with magnets, coils, generators, the ...
2answers
2k views
### Free Optics Simulation Programs
I'm having an extremely difficult time finding an optics program that is easy to use and offers accurate physics simulations. I'm not asking for much, I just want to be able to simulate a laser going ...
http://math.stackexchange.com/questions/228884/lagrange-multipliers-word-problem/228902
# Lagrange multipliers word problem
How do I approach this word problem?
Say that you are in a pirate ship that is traveling along a curved river (which roughly follows the equation $y_1=x_1(\sin(x_1)+1)$) as you travel from the south-west and head to the north-east (from $-\infty<x_1<\infty$). The cannons out of the side of your ship can only fire perpendicular to the direction that the ship is pointing. Ahead, you see a castle tower (the outer walls of the castle are given by the parametric equation $(x_2,y_2)=(\cos(t)-1,\sin(t)+3)$ or the implicit equation $(y_2-3)^2+(x_2+1)^2=1$). As you float down the river you know that you might only get a few chances to fire on the castle, and you want to do that from the closest possible points. The object of this problem is to find those points where you should fire on the tower.
Determine the points as you pass along the river where you will get the best shot on the castle tower. Do this by minimizing the equation of the distance between a point on the river and a point on the castle tower subject to the constraint that the slope of the line between the points must be perpendicular to the river. Make clear what function you are trying to minimize and what your constraint equation is.
Also, determine the points along the curve following the river and the points on the castle tower that you will hit. Determine if your points are maxima or minima and pick out the ones that minimize the distance.
I'd take everything that comes from an author or instructor who writes "minimizing the equation of the distance" with a grain of salt. It's the distance that's to be minimized, not the equation. – joriki Nov 4 '12 at 14:07
Also, there's no need for minimization here; the line of fire has to be perpendicular to both curves, and you can determine the points from that requirement alone. – joriki Nov 4 '12 at 14:09
Note that if you write out function names like $\cos$ and $\sin$ in letters as you've done here, they get interpreted as juxtaposed variable names and are italicized and spaced accordingly. To get the right formatting, you need to use the predefined commands `\cos`, `\sin` etc., or, if you need a name for which there's no predefined command, use `\operatorname{name}`. – joriki Nov 4 '12 at 14:11
Also it's not the slope of the line that's perpendicular to the river, but the line itself. – joriki Nov 4 '12 at 14:39
@joriki Thanks again. Also, how would i solve this using Lagrange multipliers? – user48146 Nov 4 '12 at 15:51
## 1 Answer
The tangent vectors of the river and the wall are $(1,x_1\cos x_1+\sin x_1+1)$ and $(-\sin t,\cos t)$, respectively, and the direction of fire is $(x_1-\cos t+1,x_1(\sin x_1+1)-\sin t-3)$. Thus the conditions that the direction is perpendicular to both tangents are
$$x_1-\cos t+1+(x_1(\sin x_1+1)-\sin t-3)(x_1\cos x_1+\sin x_1+1)=0$$
and
$$-(x_1-\cos t+1)\sin t+(x_1(\sin x_1+1)-\sin t-3)\cos t=0\;.$$
Eliminating $x_1-\cos t+1$ and dividing through by $x_1(\sin x_1+1)-\sin t-3$ yields
$$(x_1\cos x_1+\sin x_1+1)\sin t+\cos t=0\;,$$
which could also be derived as the condition that the tangents are collinear. Using this in the first equation simplifies it to
$$x_1+1+(x_1(\sin x_1+1)-3)(x_1\cos x_1+\sin x_1+1)=0\;.$$
This is a transcendental equation for $x_1$ that you can solve numerically. What a terrible, terrible problem to pose.
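For what it's worth, here is a minimal numeric sketch (my own, not part of the answer) that scans the final equation for sign changes and refines each root by bisection; each root $x_1$ then determines the firing moment via $\tan t = -1/(x_1\cos x_1+\sin x_1+1)$:

```python
# Roots of f(x1) = x1 + 1 + (x1(sin x1 + 1) - 3)(x1 cos x1 + sin x1 + 1) on [-10, 10].
import math

def f(x):
    return x + 1 + (x * (math.sin(x) + 1) - 3) * (x * math.cos(x) + math.sin(x) + 1)

def bisect(a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

xs = [-10 + 0.01 * i for i in range(2001)]             # coarse scan grid
roots = [bisect(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
print(roots)
```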
Thank you so much, but I have a question: how did you come up with the direction of fire? – user48146 Nov 4 '12 at 14:57
Strange, it seems you already deleted your account again. Anyway, the direction of fire is just the difference between the two positions. – joriki Nov 5 '12 at 5:59
http://www.physicsforums.com/showthread.php?p=2400978
Physics Forums
## block and spring on ramp
1. The problem statement, all variables and given/known data
A 5 kg block is placed near the top of a frictionless ramp, which makes an angle of 30 degrees to the horizontal. A distance d = 1.3 m away from the block is an unstretched spring with k = 3000 N/m. The block slides down the ramp and compresses the spring. Find the magnitude of the maximum compression of the spring.
2. Relevant equations
Gravitational Potential Energy = mgh
Now, I thought I had figured out that the gravitational force is mass × acceleration due to gravity, but I took 5 × 9.8 and got 49 (or -49), and both were not correct. So I couldn't move on to figure out the gravitational potential energy, and I was stuck there; I have to figure that out in order to solve the entire problem.
3. The attempt at a solution
stated up above...
Recognitions:
Homework Help
Quote by bricker9236: Now, I thought I had figured out that the gravitational force is mass × acceleration due to gravity, but I took 5 × 9.8 and got 49 (or -49), and both were not correct, so I couldn't move on to figure out the gravitational potential energy.
Why do you think that this is not correct? This is correct (in MKS).
Give us something more to go on, so that we can help you effectively. What is your plan to solve this problem? What physical principles will you use?
When the particle moves from the top to the spring, it loses potential energy. However, when it loses this energy, the energy must go somewhere. In this problem, they want to know the point of highest compression of the spring. What is the change in kinetic energy of the mass at this point? What is the change in potential energy of the spring? Same for gravity? And how would you relate all of these together to get an answer? (There is no friction, so you do not need to worry about energy loss through heat.) Hint: $$E_i = E_f$$
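As an illustration of that energy balance (a sketch of my own, assuming the block starts from rest and measuring the drop along the incline), set $mg(d+x)\sin\theta = \tfrac{1}{2}kx^2$ and solve the quadratic for the compression $x$:

```python
# Max spring compression from energy conservation on a frictionless incline.
import math

m, g = 5.0, 9.8                  # kg, m/s^2
theta = math.radians(30)         # incline angle
d, k = 1.3, 3000.0               # m, N/m

# (1/2) k x^2 - (m g sin(theta)) x - (m g sin(theta)) d = 0
a = 0.5 * k
b = -m * g * math.sin(theta)
c = -m * g * math.sin(theta) * d
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # take the positive root
print("maximum compression x = %.3f m" % x)          # about 0.154 m
```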
Tags
block, compressed spring, ramp
http://physics.stackexchange.com/questions/20954/degree-of-freedom-paradox-for-a-rigid-body?answertab=oldest
# Degree of freedom paradox for a rigid body
Suppose we consider a rigid body, which has $N$ particles. Then the number of degrees of freedom is $3N - (\mbox{# of constraints})$.
As the distance between any two points in a rigid body is fixed, we have $N\choose{2}$ constraints giving $$\mbox{d.o.f} = 3N - \frac{N(N-1)}{2}.$$ But as $N$ becomes large the second term being quadratic would dominate giving a negative number. How do we explain this negative degrees of freedom paradox?
## 6 Answers
Each particle that makes up a mechanical system can be located by three independent variables labelling a point in space. You can choose any particle in the rigid body to start with and move it anywhere you want, meaning three independent variables are needed to specify its location. Choosing a second particle, you choose another set of three independent variables to specify its location, the obvious choice being spherical coordinates with the origin at the first particle. The constraint is that the radius is a constant, leaving two remaining independent variables. Choosing a third particle, you have complete freedom to rotate it by any angle about the axis through the first and second particles, giving just one variable. For the remaining particles, their three coordinates, no matter what they are, are constants and so entirely constrained.
Therefore, the total number of degrees of freedom for a rigid body is 3+2+1 = 6, with 3(N-2) constraints.
This answer is completely wrong, accepted, and other correct answers are below it. It should be deleted. – Ron Maimon May 2 '12 at 14:57
The problem is that you are double counting a lot of your constraints. If the (vector) displacements between particles A and B, and between B and C is fixed, then the displacement between A and C is fixed. Therefore the constraint on distance between A and C is redundant, and you can't count it separately.
You've duplicated constraints, because if any one particle is constrained in all three dimensions relative to all the other particles, this constrains all the particles. The number of constraints is 3(N - 1).
To give an example, take three particles a, b and c. If a is fixed relative to b and is also fixed relative to c, then b and c are fixed relative to each other without having to introduce new constraints.
Edit: damn, beaten to the first answer by 49 seconds :-)
JR
Yeah, but your answer is superior in that it actually gives an expression for the number of constraints. +1 – Colin K Feb 13 '12 at 18:24
Thanks, though the number of degrees of freedom comes out to be 3. A rigid body is known to have 6 (3 translational + 3 rotational). – yayu Feb 13 '12 at 18:39
@yayu not necessarily. In the case of two point particles, there are only two rotational dof since the third axis has rotational symmetry. – user2963 Feb 13 '12 at 23:01
These constraints are not independent.
You're double counting here. Let's take three particles. You're counting $\binom{3}{2}=3$ constraints, right? But fixing the vector distance between particles 1 and 2, and then fixing it between 2 and 3, includes fixing it between 1 and 3. Mathematically, $\vec{d}_{1,3}=\vec{d}_{1,2}+\vec{d}_{2,3}$
The easier way to count DOFs is like this. For a molecule with $N$ particles, the number of DOFs is $3N$. Out of these, 3 will be translational. A point molecule (i.e., a single atom) has 0 rotational DOFs; a perfectly linear molecule has 2 (rotation about its own axis is irrelevant); otherwise there are 3. Now, we usually neglect vibrational DOFs (at normal temperatures); the vibrational DOFs are whatever DOFs remain. Thus, we always have a total of $3N$ DOFs, out of which we may count only the translational (3) and rotational (2 or 3) DOFs. See the table here.
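To make that bookkeeping explicit, here is a tiny helper (my own sketch of the counting rule above; whether the molecule is linear is the only input assumption):

```python
# Degree-of-freedom bookkeeping: 3N total, 3 translational,
# 3 rotational (2 if linear, 0 for a single atom), the rest vibrational.
def dof_breakdown(n_atoms, linear=False):
    total, trans = 3 * n_atoms, 3
    rot = 0 if n_atoms == 1 else (2 if linear else 3)
    return {"total": total, "translational": trans,
            "rotational": rot, "vibrational": total - trans - rot}

print(dof_breakdown(2, linear=True))    # diatomic: 6 = 3 + 2 + 1
print(dof_breakdown(3, linear=False))   # bent triatomic: 9 = 3 + 3 + 3
```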
One could do this by mathematical induction. Begin with four particles whose mutual distances do not change. Simple enumeration will show that there are only six degrees of freedom. Now add another particle whose distances relative to the others are fixed. There are no unconstrained degrees of freedom that this particle brings to the system. We can do the same for a system of N particles. This is not rigorously stated in mathematical parlance, but it contains the principle of the proof.
http://math.stackexchange.com/questions/32026/what-does-it-mean-to-say-a-language-is-context-free
# What does it mean to say a language is context-free?
What does it mean to say a language is context-free?
– Fixee Apr 10 '11 at 4:10
I did not quite understand the wikipedia, which was why I posted a question here, hoping someone could explain it in laymans terms. – petercpwong Apr 10 '11 at 4:15
## 4 Answers
A context-free grammar is defined as a grammar in which every production rule is of the form $A \rightarrow \alpha$, where $A$ is a variable and $\alpha$ is a sequence of variables and terminals.
Formally, a context-free grammar can be defined as a 4-tuple $(V, \Sigma, R, S)$, where $V$ is a finite set consisting of the variables, $\Sigma$ is a finite set consisting of the terminals, $R$ is a set of production rules (in the form mentioned above), and $S \in V$ is the starting variable.
The language of a context-free grammar is the set of strings that can be derived from its start variable. A context-free language is any language that is generated by a context-free grammar.
For example, $\{ 0^n1^n : n \ge 0 \}$ is context-free because it is generated by the context-free grammar $(\{S\}, \{0, 1\}, R, S)$, where the set of rules, $R$, is $$S \rightarrow 0S1 \mid \varepsilon.$$
(Note: I am using $\varepsilon$ to denote the empty or null string.)
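To see the grammar in action, here is a small membership check (a sketch of my own, not part of the formal definitions); each branch mirrors one production of $S \rightarrow 0S1 \mid \varepsilon$:

```python
# Decides membership in { 0^n 1^n : n >= 0 } by recursive descent on S -> 0S1 | eps.
def in_language(w):
    if w == "":                          # S -> epsilon
        return True
    if w[:1] == "0" and w[-1:] == "1":   # S -> 0 S 1
        return in_language(w[1:-1])
    return False

assert in_language("") and in_language("0011")
assert not in_language("0101") and not in_language("001")
```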
As seen in this example, the set of context-free languages contains languages that are not regular. Also, since it is easy to mimic a DFA with a context-free grammar, the set of regular languages is a proper subset of the set of context-free languages. Pushdown automata are the automata cousins of context-free grammars; they accept context-free languages and there exist algorithms to convert between the two models.
Note that the language $\{ a^nb^nc^n : n \ge 0 \}$ is not context-free (try to write a context-free grammar to generate this language and you will get a feeling for why). It can be proven that the language is not context-free with the pumping-lemma for context-free languages, which I will leave as an exercise for the reader.
There are more powerful grammars, such as context-sensitive grammars, which allow the production rules to have the form $\beta A \gamma \rightarrow \beta\alpha\gamma$, where $\alpha$, $\beta$, and $\gamma$ are sequences of variables and terminals. Context-sensitive grammars are as powerful as linear bounded automata (LBA).
You should give an example for a non-context free grammar/language. – Raphael Apr 10 '11 at 10:19
Good idea! Added. – Zach Langley Apr 10 '11 at 14:54
Almost! ;) $\alpha A \beta \rightarrow \gamma$ is the general rule format, equivalent to TMs (since the context can be manipulated). Context-sensitive grammars are restricted to $\alpha A \beta \rightarrow \alpha \gamma \beta$. – Raphael Apr 10 '11 at 15:36
Oops, thanks, fixed. – Zach Langley Apr 10 '11 at 15:40
Given the technical definition of what a language is and what context-free is, it means that in the processing of rules defining the language, no context is used. That is, any variable is rewritten by itself, with no context. Once a variable is produced in a derivation, none of the string around that variable will ever be involved in any further derivation...no context is used in rewriting a variable.
More complicated languages do not have this restriction (they may allow use of context/other adjacent variables and terminals in rewriting a variable).
Context-free languages are "easier" to parse (quicker/more efficiently) than context-sensitive ones.
Note that the term 'context' is very technical here; it is referring to the context of a substring when rewriting. Technical terms have a life of their own and don't necessarily relate well to the first layman's understanding of the word.
Another characterisation is this: The set of context-free languages CFL is the set of all languages that are accepted by (maybe nondeterministic) push-down automata (finite automata plus one stack).
More intuitively, context-free languages have the property that different parts of a word are independent in a certain sense, namely in the same way as dynamic programming optimises subproblems independently. Not coincidentally, context-free grammars can be parsed by dynamic programming (CYK algorithm); non-context-free ones cannot (in general).
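To make the dynamic-programming connection concrete, here is a compact CYK sketch (my own illustration; the Chomsky-normal-form grammar below for $\{0^n1^n : n \ge 1\}$ is chosen just for the example):

```python
# CYK membership test. Grammar in CNF: S -> ZO | ZT, T -> SO, Z -> '0', O -> '1'.
unit = {"0": {"Z"}, "1": {"O"}}
binary = {("Z", "O"): {"S"}, ("Z", "T"): {"S"}, ("S", "O"): {"T"}}

def cyk(w, start="S"):
    n = len(w)
    if n == 0:
        return False
    # table[i][j] holds the variables deriving the substring w[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(w):
        table[i][0] = set(unit.get(ch, set()))
    for length in range(2, n + 1):            # substring length
        for i in range(n - length + 1):       # start position
            for split in range(1, length):    # split point
                for A in table[i][split - 1]:
                    for B in table[i + split][length - split - 1]:
                        table[i][length - 1] |= binary.get((A, B), set())
    return start in table[0][n - 1]

assert cyk("01") and cyk("000111") and not cyk("0110") and not cyk("")
```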
It might be a good idea to explicitly state that you are referring to non-deterministic PDAs. – Zach Langley Apr 10 '11 at 15:10
Agreed, thanks. I "grew up" with PDA $=$ NPDA $\cup$ DPDA with NPDA $\cap$ DPDA $= \emptyset$. DPDA $\subset$ NPDA = PDA is also common. – Raphael Apr 10 '11 at 15:40
My understanding is that a language is context-free if all statements can be understood without requiring external context, which is true for no natural language and those aspects of programming languages that do not rely on data or APIs from external sources.
Most programming languages are not context-free. What PL people call "static semantics" or "context restrictions" (often even subsumed in the notion of "syntax") are certain computable criteria (type correctness, visibility, ...) that require more computational power. It is, however, true that pure syntax of programming languages is usually context-free, i.e. given by a CFG. Not all "valid" programs in that regard are accepted by the compiler, obviously. – Raphael Apr 10 '11 at 10:29
I think I misunderestimated the prevalence of data and APIs external to the environment. – James Edward Lewis Apr 10 '11 at 11:30
Maybe, but that is not the point. Even assuming you have access to all code while compiling, you have certain problems. Very simple example: forward references (e.g. methods in a Java class) can not be resolved/checked in one pass, i.e. not by a PDA. Name resolution is problematic in general, since a PDA has no way to store arbitrarily many (valid) identifiers with random access. – Raphael Apr 10 '11 at 12:21